Page Scan Mode in Hash Index

Started by Ashutosh Sharma about 9 years ago · 57 messages · pgsql-hackers
#1 Ashutosh Sharma
ashu.coek88@gmail.com

Hi All,

Currently, a hash index scan works tuple-at-a-time, i.e. for every
qualifying tuple in a page, it acquires and releases the lock, which
increases the lock/unlock traffic. For example, if an index page
contains 100 qualifying tuples, the current hash index scan has to
acquire and release the lock 100 times to read those tuples, which is
bad from a performance perspective and also hurts concurrency with
VACUUM.

Considering the above points, I would like to propose a patch that
allows the hash index scan to work in page-at-a-time mode. In
page-at-a-time mode, once a lock is acquired on a target bucket page,
the entire page is scanned and all the qualifying tuples are saved into
the backend's local memory. This reduces the lock/unlock calls for
retrieving tuples from a page. Moreover, it also eliminates the
problem of re-finding the position of the last returned index tuple,
and more importantly it allows VACUUM to release the lock on the
current page before moving to the next page, which improves its
concurrency with scans.
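The difference can be sketched with a toy model (everything here is hypothetical and simplified; it is not the actual PostgreSQL code): in page-at-a-time mode the 100 qualifying tuples from the example above cost one lock/unlock pair instead of 100.

```c
#include <assert.h>

/* Toy model of the proposed scan mode -- hypothetical types and
 * counters, not the actual PostgreSQL structures. */
#define MAX_ITEMS 128

typedef struct
{
    int tuples[MAX_ITEMS];      /* stand-in for index tuples on a page */
    int ntuples;
} Page;

static int lock_calls = 0;      /* how often the page lock was taken */

static void lock_page(void)   { lock_calls++; }
static void unlock_page(void) { }

/* Page-at-a-time scan: a single lock/unlock pair copies every
 * qualifying tuple into the backend-local items[] array; the caller
 * then returns tuples from that array without touching the page. */
static int
read_page(const Page *page, int key, int *items)
{
    int itemIndex = 0;

    lock_page();
    for (int off = 0; off < page->ntuples; off++)
        if (page->tuples[off] == key)
            items[itemIndex++] = page->tuples[off];
    unlock_page();

    return itemIndex;           /* number of qualifying tuples saved */
}
```

The tuple-at-a-time scheme would instead take and drop the lock once per returned tuple, which is the lock/unlock traffic the patch removes.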

The attached patch modifies the hash index scan code for
page-at-a-time mode. For better readability, I have split it into 3 parts,

1) 0001-Rewrite-hash-index-scans-to-work-a-page-at-a-time.patch: this
patch rewrites the hash index scan module to work in page-at-a-time
mode. It introduces two new functions, _hash_readpage() and
_hash_saveitem(). The former is used to load all the qualifying tuples
from a target bucket or overflow page into an items array; the latter
is used by _hash_readpage() to save each qualifying tuple found in a
page into that array. Apart from that, this patch basically cleans up
_hash_first(), _hash_next() and hashgettuple().

2) 0002-Remove-redundant-function-_hash_step-and-some-of-the.patch:
this patch basically removes the redundant function _hash_step() and
some of the unused members of HashScanOpaqueData structure.

3) 0003-Improve-locking-startegy-during-VACUUM-in-Hash-Index.patch:
this patch improves the locking strategy for VACUUM in hash index. As
the new hash index scan works page-at-a-time, vacuum can release the
lock on the previous page before acquiring a lock on the next page,
hence improving hash index concurrency.
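The locking change in 0003 can be sketched the same way (again a hypothetical, simplified model, not the patch itself): because scans no longer hold a lock across tuple fetches, VACUUM can hand the lock over page by page and never holds more than one page lock at a time.

```c
#include <assert.h>

/* Hypothetical model of the 0003 locking change -- not the actual
 * PostgreSQL buffer-manager code.  The point: release the lock on the
 * previous page before locking the next one, so VACUUM never holds
 * more than one page lock at a time and scans are blocked for at most
 * one page's worth of work. */
static int held = 0;            /* page locks currently held */
static int max_held = 0;        /* high-water mark of locks held */

static void lock_page(void)   { held++; if (held > max_held) max_held = held; }
static void unlock_page(void) { held--; }

static void
vacuum_bucket_chain(int npages)
{
    for (int page = 0; page < npages; page++)
    {
        lock_page();            /* lock the current page */
        /* ... remove dead tuples from this page ... */
        unlock_page();          /* drop it before moving to the next page */
    }
}
```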

Please note that the above patches have to be applied on top of the
following patches: 'WAL in hash index' [1] and 'Microvacuum support
for hash index' [2]. Note that in the current head, marking dead
tuples requires a lock on the page. So, even if the hash index scan is
done page-at-a-time, it would still require a lock on the page just to
mark dead tuples, losing the advantage of page-at-a-time mode.
Therefore, I developed this patch on top of the microvacuum support
for hash index [2].

I have also benchmarked this patch and would like to share the
results.

Firstly, I benchmarked with non-unique values and could see a
performance improvement of 4-7%. For the detailed results please see
the attached file 'results-non-unique-values-70ff'; ddl.sql and
test.sql are the test scripts used in this experiment. The details of
the non-default GUC params and the pgbench command are mentioned in
the result sheet. I also benchmarked with unique values at 300 and
1000 scale factors, and those results are provided in
'results-unique-values-default-ff'.

[1]: /messages/by-id/CAA4eK1LTyDHyCmj3pf5KxWgPb1DgNae9ivsB5jX0X_Kt7iLTUA@mail.gmail.com
[2]: /messages/by-id/CAA4eK1JfrJoa15XStmRKy6mGsVjKh_aa-EXZY+UZQOV6mGM0QQ@mail.gmail.com

Attachments:

0001-Rewrite-hash-index-scans-to-work-a-page-at-a-time.-T.patch (+379 -153)
0002-Remove-redundant-function-_hash_step-and-some-of-the.patch (+0 -222)
0003-Improve-locking-startegy-during-VACUUM-in-Hash-Index.patch (+7 -7)
results-non-unique-values-70ff.xlsx
ddl.sql
test.sql
results-unique-values-default-ff.xlsx
#2 Alexander Korotkov
aekorotkov@gmail.com
In reply to: Ashutosh Sharma (#1)
Re: Page Scan Mode in Hash Index

Hi, Ashutosh!

I've signed up to review this patch.
First, I'd like to note that I like the idea and the general design.
Secondly, the patch set doesn't apply cleanly to master. Please rebase it.

On Tue, Feb 14, 2017 at 8:27 AM, Ashutosh Sharma <ashu.coek88@gmail.com>
wrote:

1) 0001-Rewrite-hash-index-scans-to-work-a-page-at-a-time.patch: this
patch rewrites the hash index scan module to work in page-at-a-time
mode. It basically introduces two new functions-- _hash_readpage() and
_hash_saveitem(). The former is used to load all the qualifying tuples
from a target bucket or overflow page into an items array. The latter
one is used by _hash_readpage to save all the qualifying tuples found
in a page into an items array. Apart from that, this patch bascially
cleans _hash_first(), _hash_next and hashgettuple().

I see that the forward and backward scan cases of the function
_hash_readpage() contain a lot of code duplication.
Could you please refactor this function to have less code duplication?

Also, I wonder if you have a special reason for inserting data in test.sql
with 1002 separate SQL statements.
INSERT INTO con_hash_index_table (keycol) SELECT a FROM GENERATE_SERIES(1,
1000) a;

You can achieve the same result by executing a single SQL statement.
INSERT INTO con_hash_index_table (keycol) SELECT (a - 1) % 1000 + 1 FROM
GENERATE_SERIES(1, 1002000) a;
Unless you have some special reason for doing this in 1002 separate
transactions.

------
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#3 Ashutosh Sharma
ashu.coek88@gmail.com
In reply to: Alexander Korotkov (#2)
Re: Page Scan Mode in Hash Index

Hi,

I've assigned to review this patch.
At first, I'd like to notice that I like idea and general design.
Secondly, patch set don't apply cleanly to master. Please, rebase it.

Thanks for showing your interest in this patch. I would like to
inform you that this patch depends on the patches for 'Write Ahead
Logging in hash index' [1] and 'Microvacuum support in hash index'
[2]. Hence, until those two patches become stable I may have to keep
rebasing this patch. However, I will try to share the updated
patch asap.

On Tue, Feb 14, 2017 at 8:27 AM, Ashutosh Sharma <ashu.coek88@gmail.com> wrote:

1) 0001-Rewrite-hash-index-scans-to-work-a-page-at-a-time.patch: this
patch rewrites the hash index scan module to work in page-at-a-time
mode. It basically introduces two new functions-- _hash_readpage() and
_hash_saveitem(). The former is used to load all the qualifying tuples
from a target bucket or overflow page into an items array. The latter
one is used by _hash_readpage to save all the qualifying tuples found
in a page into an items array. Apart from that, this patch bascially
cleans _hash_first(), _hash_next and hashgettuple().

I see that forward and backward scan cases of function _hash_readpage contain a lot of code duplication
Could you please refactor this function to have less code duplication?

Sure, I will try to avoid the code duplication as much as possible.

Also, I wonder if you have a special idea behind inserting data in test.sql by 1002 separate SQL statements.
INSERT INTO con_hash_index_table (keycol) SELECT a FROM GENERATE_SERIES(1, 1000) a;

You can achieve the same result by execution of single SQL statement.
INSERT INTO con_hash_index_table (keycol) SELECT (a - 1) % 1000 + 1 FROM GENERATE_SERIES(1, 1002000) a;
Unless you have some special idea of doing this in 1002 separate transactions.

There is no reason for having so many INSERT statements in the
test.sql file. I think it would be better to replace them with a
single SQL statement. Thanks.

[1]: /messages/by-id/CAA4eK1KibVzgVETVay0+siVEgzaXnP5R21BdWiK9kg9wx2E40Q@mail.gmail.com
[2]: /messages/by-id/CAE9k0PkRSyzx8dOnokEpUi2A-RFZK72WN0h9DEMv_ut9q6bPRw@mail.gmail.com

--
With Regards,
Ashutosh Sharma
EnterpriseDB:http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#4 Ashutosh Sharma
ashu.coek88@gmail.com
In reply to: Ashutosh Sharma (#3)
Re: Page Scan Mode in Hash Index

Hi,

I've assigned to review this patch.
At first, I'd like to notice that I like idea and general design.
Secondly, patch set don't apply cleanly to master. Please, rebase it.

Thanks for showing your interest in this patch. I would like to
inform you that this patch depends on the patches for 'Write Ahead
Logging in hash index - [1]' and 'Microvacuum support in hash index
- [3]'. Hence, until the above patches become stable I may have to
keep rebasing this patch. However, I will try to share the updated
patch asap.

On Tue, Feb 14, 2017 at 8:27 AM, Ashutosh Sharma <ashu.coek88@gmail.com> wrote:

1) 0001-Rewrite-hash-index-scans-to-work-a-page-at-a-time.patch: this
patch rewrites the hash index scan module to work in page-at-a-time
mode. It basically introduces two new functions-- _hash_readpage() and
_hash_saveitem(). The former is used to load all the qualifying tuples
from a target bucket or overflow page into an items array. The latter
one is used by _hash_readpage to save all the qualifying tuples found
in a page into an items array. Apart from that, this patch bascially
cleans _hash_first(), _hash_next and hashgettuple().

I see that forward and backward scan cases of function _hash_readpage contain a lot of code duplication
Could you please refactor this function to have less code duplication?

Sure, I will try to avoid the code duplication as much as possible.

I had a close look at the hash_readpage() function and could see that
there are only a few if-else conditions which are similar for both the
forward and backward scan cases and can't be optimised further.
However, if you have a cursory look at this function definition it
looks like the code for the forward and backward scans is exactly the
same, but that's not the case. Attached is a diff report
(hash_readpage.html) for the forward and backward scan code used in
hash_readpage(), showing which lines of hash_readpage() are the same
and which differ.

Please note that before applying the patches for page scan mode in
hash index you may need to first apply the following patches on HEAD,

1) v10 patch for WAL in hash index - [1]
2) v1 patch for wal consistency check for hash index - [2]
3) v6 patch for microvacuum support in hash index - [3]

[1]: /messages/by-id/CAA4eK1+k5wR4-kAjPqLoKemuHayQd6RkQQT9gheTfpn+72o1UA@mail.gmail.com
[2]: /messages/by-id/CAGz5QCKPU2qX75B1bB_LuEC88xWZa5L55J0TLvYMVD8noSH3pA@mail.gmail.com
[3]: /messages/by-id/CAE9k0PkYpAPDJBfgia08o7XhO8nypH9WoO9M8=dqLrwwObXKcw@mail.gmail.com

--
With Regards,
Ashutosh Sharma
EnterpriseDB:http://www.enterprisedb.com

Attachments:

hash_readpage.html
#5 Jesper Pedersen
jesper.pedersen@redhat.com
In reply to: Ashutosh Sharma (#1)
Re: Page Scan Mode in Hash Index

Hi,

On 02/14/2017 12:27 AM, Ashutosh Sharma wrote:

Currently, Hash Index scan works tuple-at-a-time, i.e. for every
qualifying tuple in a page, it acquires and releases the lock which
eventually increases the lock/unlock traffic. For example, if an index
page contains 100 qualified tuples, the current hash index scan has to
acquire and release the lock 100 times to read those qualified tuples
which is not good from performance perspective and it also impacts
concurency with VACUUM.

Considering above points, I would like to propose a patch that allows
hash index scan to work in page-at-a-time mode. In page-at-a-time
mode, once lock is acquired on a target bucket page, the entire page
is scanned and all the qualified tuples are saved into backend's local
memory. This reduces the lock/unlock calls for retrieving tuples from
a page. Moreover, it also eliminates the problem of re-finding the
position of the last returned index tuple and more importanly it
allows VACUUM to release lock on current page before moving to the
next page which eventually improves it's concurrency with scan.

Attached patch modifies hash index scan code for page-at-a-time mode.
For better readability, I have splitted it into 3 parts,

Due to the commits on master, these patches apply with hunks.

The README should be updated to mention the use of page scan.

hash.h needs pg_indent.

1) 0001-Rewrite-hash-index-scans-to-work-a-page-at-a-time.patch: this
patch rewrites the hash index scan module to work in page-at-a-time
mode. It basically introduces two new functions-- _hash_readpage() and
_hash_saveitem(). The former is used to load all the qualifying tuples
from a target bucket or overflow page into an items array. The latter
one is used by _hash_readpage to save all the qualifying tuples found
in a page into an items array. Apart from that, this patch bascially
cleans _hash_first(), _hash_next and hashgettuple().

For _hash_next I don't see this - can you explain ?

+ *
+ *             On failure exit (no more tuples), we release pin and set
+ *             so->currPos.buf to InvalidBuffer.

+ * Returns true if any matching items are found else returns false.

s/Returns/Return/g

2) 0002-Remove-redundant-function-_hash_step-and-some-of-the.patch:
this patch basically removes the redundant function _hash_step() and
some of the unused members of HashScanOpaqueData structure.

Looks good.

3) 0003-Improve-locking-startegy-during-VACUUM-in-Hash-Index.patch:
this patch basically improves the locking strategy for VACUUM in hash
index. As the new hash index scan works in page-at-a-time, vacuum can
release the lock on previous page before acquiring a lock on the next
page, hence, improving hash index concurrency.

+ * As the new hash index scan work in page at a time mode,

Remove 'new'.

I have also done the benchmarking of this patch and would like to
share the results for the same,

Firstly, I have done the benchmarking with non-unique values and i
could see a performance improvement of 4-7%. For the detailed results
please find the attached file 'results-non-unique values-70ff', and
ddl.sql, test.sql are test scripts used in this experimentation. The
detail of non-default GUC params and pgbench command are mentioned in
the result sheet. I also did the benchmarking with unique values at
300 and 1000 scale factor and its results are provided in
'results-unique-values-default-ff'.

I'm seeing similar results, especially with write-heavy scenarios.

Best regards,
Jesper

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#6 Ashutosh Sharma
ashu.coek88@gmail.com
In reply to: Jesper Pedersen (#5)
Re: Page Scan Mode in Hash Index

Hi,

Attached patch modifies hash index scan code for page-at-a-time mode.
For better readability, I have splitted it into 3 parts,

Due to the commits on master these patches applies with hunks.

The README should be updated to mention the use of page scan.

Done. Please refer to the attached v2 version of patch.

hash.h needs pg_indent.

Fixed.

1) 0001-Rewrite-hash-index-scans-to-work-a-page-at-a-time.patch: this
patch rewrites the hash index scan module to work in page-at-a-time
mode. It basically introduces two new functions-- _hash_readpage() and
_hash_saveitem(). The former is used to load all the qualifying tuples
from a target bucket or overflow page into an items array. The latter
one is used by _hash_readpage to save all the qualifying tuples found
in a page into an items array. Apart from that, this patch bascially
cleans _hash_first(), _hash_next and hashgettuple().

For _hash_next I don't see this - can you explain ?

Sorry, it was wrongly copied from the btree code. I have corrected it
now. Please check the attached v2 version of the patch.

+ *
+ *             On failure exit (no more tuples), we release pin and set
+ *             so->currPos.buf to InvalidBuffer.

+ * Returns true if any matching items are found else returns false.

s/Returns/Return/g

Done.

2) 0002-Remove-redundant-function-_hash_step-and-some-of-the.patch:
this patch basically removes the redundant function _hash_step() and
some of the unused members of HashScanOpaqueData structure.

Looks good.

3) 0003-Improve-locking-startegy-during-VACUUM-in-Hash-Index.patch:
this patch basically improves the locking strategy for VACUUM in hash
index. As the new hash index scan works in page-at-a-time, vacuum can
release the lock on previous page before acquiring a lock on the next
page, hence, improving hash index concurrency.

+ * As the new hash index scan work in page at a time mode,

Remove 'new'.

Done.

I have also done the benchmarking of this patch and would like to
share the results for the same,

Firstly, I have done the benchmarking with non-unique values and i
could see a performance improvement of 4-7%. For the detailed results
please find the attached file 'results-non-unique values-70ff', and
ddl.sql, test.sql are test scripts used in this experimentation. The
detail of non-default GUC params and pgbench command are mentioned in
the result sheet. I also did the benchmarking with unique values at
300 and 1000 scale factor and its results are provided in
'results-unique-values-default-ff'.

I'm seeing similar results, and especially with write heavy scenarios.

Great..!!

--
With Regards,
Ashutosh Sharma
EnterpriseDB:http://www.enterprisedb.com

Attachments:

0001-Rewrite-hash-index-scans-to-work-a-page-at-a-timev2.patch (+384 -157)
0002-Remove-redundant-function-_hash_step-and-some-of-the.patch (+0 -222)
0003-Improve-locking-startegy-during-VACUUM-in-Hash-Index.patch (+7 -7)
#7 Jesper Pedersen
jesper.pedersen@redhat.com
In reply to: Ashutosh Sharma (#6)
Re: Page Scan Mode in Hash Index

Hi,

On 03/22/2017 09:32 AM, Ashutosh Sharma wrote:

Done. Please refer to the attached v2 version of patch.

Thanks.

1) 0001-Rewrite-hash-index-scans-to-work-a-page-at-a-time.patch: this
patch rewrites the hash index scan module to work in page-at-a-time
mode. It basically introduces two new functions-- _hash_readpage() and
_hash_saveitem(). The former is used to load all the qualifying tuples
from a target bucket or overflow page into an items array. The latter
one is used by _hash_readpage to save all the qualifying tuples found
in a page into an items array. Apart from that, this patch bascially
cleans _hash_first(), _hash_next and hashgettuple().

0001v2:

In hashgettuple() you can remove the 'currItem' and 'offnum' from the
'else' part, and do the assignment inside

if (so->numKilled < MaxIndexTuplesPerPage)

instead.

No new comments for 0002 and 0003.

Best regards,
Jesper

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#8 Ashutosh Sharma
ashu.coek88@gmail.com
In reply to: Jesper Pedersen (#7)
Re: Page Scan Mode in Hash Index

On Thu, Mar 23, 2017 at 8:29 PM, Jesper Pedersen
<jesper.pedersen@redhat.com> wrote:

Hi,

On 03/22/2017 09:32 AM, Ashutosh Sharma wrote:

Done. Please refer to the attached v2 version of patch.

Thanks.

1) 0001-Rewrite-hash-index-scans-to-work-a-page-at-a-time.patch: this
patch rewrites the hash index scan module to work in page-at-a-time
mode. It basically introduces two new functions-- _hash_readpage() and
_hash_saveitem(). The former is used to load all the qualifying tuples
from a target bucket or overflow page into an items array. The latter
one is used by _hash_readpage to save all the qualifying tuples found
in a page into an items array. Apart from that, this patch bascially
cleans _hash_first(), _hash_next and hashgettuple().

0001v2:

In hashgettuple() you can remove the 'currItem' and 'offnum' from the 'else'
part, and do the assignment inside

if (so->numKilled < MaxIndexTuplesPerPage)

instead.

Done. Please have a look into the attached v3 patch.

No new comments for 0002 and 0003.

okay. Thanks.

--
With Regards,
Ashutosh Sharma
EnterpriseDB:http://www.enterprisedb.com

Attachments:

0001-Rewrite-hash-index-scans-to-work-a-page-at-a-timev3.patch (+385 -157)
#9 Jesper Pedersen
jesper.pedersen@redhat.com
In reply to: Ashutosh Sharma (#8)
Re: Page Scan Mode in Hash Index

Hi,

On 03/23/2017 02:11 PM, Ashutosh Sharma wrote:

On Thu, Mar 23, 2017 at 8:29 PM, Jesper Pedersen
<jesper.pedersen@redhat.com> wrote:

0001v2:

In hashgettuple() you can remove the 'currItem' and 'offnum' from the 'else'
part, and do the assignment inside

if (so->numKilled < MaxIndexTuplesPerPage)

instead.

Done. Please have a look into the attached v3 patch.

No new comments for 0002 and 0003.

okay. Thanks.

I'll keep the entry in 'Needs Review' if Alexander, or others, want to
add their feedback.

(Best to post the entire patch series each time)

Best regards,
Jesper

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#10 Ashutosh Sharma
ashu.coek88@gmail.com
In reply to: Jesper Pedersen (#9)
Re: Page Scan Mode in Hash Index

Hi,

On 03/23/2017 02:11 PM, Ashutosh Sharma wrote:

On Thu, Mar 23, 2017 at 8:29 PM, Jesper Pedersen
<jesper.pedersen@redhat.com> wrote:

0001v2:

In hashgettuple() you can remove the 'currItem' and 'offnum' from the
'else'
part, and do the assignment inside

if (so->numKilled < MaxIndexTuplesPerPage)

instead.

Done. Please have a look into the attached v3 patch.

No new comments for 0002 and 0003.

okay. Thanks.

I'll keep the entry in 'Needs Review' if Alexander, or others, want to add
their feedback.

okay. Thanks.

(Best to post the entire patch series each time)

I have taken your suggestion. Please find the attachments.

--
With Regards,
Ashutosh Sharma
EnterpriseDB:http://www.enterprisedb.com

Attachments:

0001-Rewrite-hash-index-scans-to-work-a-page-at-a-timev3.patch (+385 -157)
0002-Remove-redundant-function-_hash_step-and-some-of-the.patch (+0 -222)
0003-Improve-locking-startegy-during-VACUUM-in-Hash-Index.patch (+7 -7)
#11 Robert Haas
robertmhaas@gmail.com
In reply to: Ashutosh Sharma (#10)
Re: Page Scan Mode in Hash Index

On Thu, Mar 23, 2017 at 2:35 PM, Ashutosh Sharma <ashu.coek88@gmail.com> wrote:

I take your suggestion. Please find the attachments.

I think you should consider refactoring this so that it doesn't need
to use goto. Maybe move the while (offnum <= maxoff) logic into a
helper function and have it return itemIndex. If itemIndex == 0, you
can call it again. Notice that the code in the "else" branch of the
if (itemIndex == 0) stuff could actually be removed from the else
block without changing anything, because the "if" block always either
returns or does a goto. That's worthy of a little more work to try to
make things simple and clear.
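A sketch of the shape suggested here (toy structures, not the real patch code): the per-page while loop becomes a helper returning itemIndex, and the caller simply calls it again on the next page when itemIndex == 0, with no backward goto.

```c
#include <assert.h>
#include <stddef.h>

#define MAX_ITEMS 64

/* Hypothetical, simplified page chain -- not the real hash index
 * structures. */
typedef struct Page
{
    int          tuples[MAX_ITEMS];
    int          ntuples;
    struct Page *next;          /* next overflow page, or NULL */
} Page;

/* Helper replacing the body of the goto loop: save every qualifying
 * tuple on one page and return how many were saved. */
static int
load_qualifying_items(const Page *page, int key, int *items)
{
    int itemIndex = 0;

    for (int off = 0; off < page->ntuples; off++)
        if (page->tuples[off] == key)
            items[itemIndex++] = page->tuples[off];
    return itemIndex;
}

/* Caller: if the helper saved nothing (itemIndex == 0), advance to
 * the next page and just call it again -- no backward goto needed. */
static int
read_first_matching_page(const Page *page, int key, int *items)
{
    while (page != NULL)
    {
        int itemIndex = load_qualifying_items(page, key, items);

        if (itemIndex > 0)
            return itemIndex;   /* found matches on this page */
        page = page->next;
    }
    return 0;                   /* no more pages: scan is done */
}
```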

+ * We find the first item(or, if backward scan, the last item) in

Missing space.

In _hash_dropscanbuf(), the handling of hashso_bucket_buf is now
inconsistent with the handling of hashso_split_bucket_buf, which seems
suspicious. Suppose we enter this function with so->hashso_bucket_buf
and so->currPos.buf both being valid buffers, but not the same one.
It looks to me as if we'll release the first pin but not the second
one. My guess (which could be wrong) is that so->hashso_bucket_buf =
InvalidBuffer should be moved back up higher in the function where it
was before, just after the first if statement, and that the new
condition so->hashso_bucket_buf == so->currPos.buf on the last
_hash_dropbuf() should be removed. If that's not right, then I think
I need somebody to explain why not.

I am suspicious that _hash_kill_items() is going to have problems if
the overflow page is freed before it reacquires the lock.
_bt_killitems() contains safeguards against similar cases.
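The safeguard being referred to can be sketched as follows (a hypothetical model of the _bt_killitems-style check; the names are illustrative): remember the page LSN when the matching tuples are copied out, and skip marking anything dead if the LSN has changed by the time the lock is reacquired.

```c
#include <assert.h>
#include <stdbool.h>

typedef unsigned long PageLSN;  /* illustrative stand-in for XLogRecPtr */

/* Hypothetical model of the safeguard: the scan records the page LSN
 * at the time the matching tuples were copied out.  Before marking any
 * of them dead, it reacquires the lock and compares LSNs; if the page
 * changed in between -- e.g. the overflow page was freed and reused --
 * the remembered offsets can no longer be trusted and the kill step
 * must be skipped. */
static bool
kill_items_allowed(PageLSN lsn_at_read, PageLSN lsn_now)
{
    return lsn_at_read == lsn_now;
}
```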

This is not a full review, but I'm out of time for the moment.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#12 Ashutosh Sharma
ashu.coek88@gmail.com
In reply to: Robert Haas (#11)
Re: Page Scan Mode in Hash Index

Hi,

I think you should consider refactoring this so that it doesn't need
to use goto. Maybe move the while (offnum <= maxoff) logic into a
helper function and have it return itemIndex. If itemIndex == 0, you
can call it again.

Okay, I added a helper function for _hash_readpage(). Please check the
v4 patch attached with this mail.

Notice that the code in the "else" branch of the
if (itemIndex == 0) stuff could actually be removed from the else
block without changing anything, because the "if" block always either
returns or does a goto. That's worthy of a little more work to try to
make things simple and clear.

Corrected.

+ * We find the first item(or, if backward scan, the last item) in

Missing space.

Corrected.

In _hash_dropscanbuf(), the handling of hashso_bucket_buf is now
inconsistent with the handling of hashso_split_bucket_buf, which seems
suspicious.

I have corrected it.

Suppose we enter this function with so->hashso_bucket_buf
and so->currPos.buf both being valid buffers, but not the same one.
It looks to me as if we'll release the first pin but not the second
one.

Yes, that is because we need not release the pin on currPos.buf if it
is an overflow page. In page scan mode, once we have saved all the
qualifying tuples from the current page (which could be an overflow
page) into the items array, we release both the pin and the lock on
that overflow page. This was not true earlier: consider a case where
we are supposed to fetch only a fixed number of tuples from a page
using a cursor. Once the requested number of tuples has been fetched,
we simply return without releasing the pin on the page being scanned.
If that page is an overflow page, then by the end of the scan we would
end up holding pins on two buffers, i.e. the bucket buf and the
current buf, which is basically the overflow buf.

My guess (which could be wrong) is that so->hashso_bucket_buf =
InvalidBuffer should be moved back up higher in the function where it
was before, just after the first if statement, and that the new
condition so->hashso_bucket_buf == so->currPos.buf on the last
_hash_dropbuf() should be removed. If that's not right, then I think
I need somebody to explain why not.

Okay, as I mentioned above, in page scan mode we only keep a pin on
the bucket buf. There won't be any case where we hold a pin on an
overflow buf at the end of the scan. So, basically, _hash_dropscanbuf()
should only attempt to release the pin on the bucket buf, and an
attempt to release the pin on so->currPos.buf should only happen when
'so->hashso_bucket_buf == so->currPos.buf' or when
'so->hashso_split_bucket_buf == so->currPos.buf'.

I am suspicious that _hash_kill_items() is going to have problems if
the overflow page is freed before it reacquires the lock.
_bt_killitems() contains safeguards against similar cases.

I have added a similar check in _hash_kill_items() as well.

This is not a full review, but I'm out of time for the moment.

No worries. I will be ready for your further review comments any time.
Thanks for the review.

--
With Regards,
Ashutosh Sharma
EnterpriseDB:http://www.enterprisedb.com

Attachments:

0001-Rewrite-hash-index-scans-to-work-a-page-at-a-timev4.patch (+515 -169)
0002-Remove-redundant-function-_hash_step-and-some-of-the.patch (+0 -222)
0003-Improve-locking-startegy-during-VACUUM-in-Hash-Index.patch (+7 -7)
#13 Jesper Pedersen
jesper.pedersen@redhat.com
In reply to: Ashutosh Sharma (#12)
Re: Page Scan Mode in Hash Index

Hi,

On 03/27/2017 09:34 AM, Ashutosh Sharma wrote:

Hi,

I think you should consider refactoring this so that it doesn't need
to use goto. Maybe move the while (offnum <= maxoff) logic into a
helper function and have it return itemIndex. If itemIndex == 0, you
can call it again.

okay, Added a helper function for _hash_readpage(). Please check v4
patch attached with this mail.

This is not a full review, but I'm out of time for the moment.

No worries. I will be ready for your further review comments any time.
Thanks for the review.

This patch needs a rebase.

Best regards,
Jesper

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#14 Ashutosh Sharma
ashu.coek88@gmail.com
In reply to: Jesper Pedersen (#13)
Re: Page Scan Mode in Hash Index

I think you should consider refactoring this so that it doesn't need
to use goto. Maybe move the while (offnum <= maxoff) logic into a
helper function and have it return itemIndex. If itemIndex == 0, you
can call it again.

okay, Added a helper function for _hash_readpage(). Please check v4
patch attached with this mail.

This is not a full review, but I'm out of time for the moment.

No worries. I will be ready for your further review comments any time.
Thanks for the review.

This patch needs a rebase.

Please try applying these patches on top of [1]. I think you should be able
to apply them cleanly. Sorry, I think I forgot to mention this point in my
earlier mail.

[1]: /messages/by-id/CAE9k0P=V2LhtyeMXd295fhisp=NWUhRVJ9EZQCDowWiY9rSohQ@mail.gmail.com

#15 Jesper Pedersen
jesper.pedersen@redhat.com
In reply to: Ashutosh Sharma (#14)
Re: Page Scan Mode in Hash Index

Hi Ashutosh,

On 03/29/2017 09:16 PM, Ashutosh Sharma wrote:

This patch needs a rebase.

Please try applying these patches on top of [1]. I think you should be able
to apply it cleanly. Sorry, I think I forgot to mention this point in my
earlier mail.

[1] -
/messages/by-id/CAE9k0P=V2LhtyeMXd295fhisp=NWUhRVJ9EZQCDowWiY9rSohQ@mail.gmail.com

Thanks, that works !

As you have provided a patch for Robert's comments, and no other reviews
have been posted, I'm moving this patch to "Ready for Committer" for
additional committer feedback.

Best regards,
Jesper


#16Ashutosh Sharma
ashu.coek88@gmail.com
In reply to: Jesper Pedersen (#15)
Re: Page Scan Mode in Hash Index

Hi,

On 03/29/2017 09:16 PM, Ashutosh Sharma wrote:

This patch needs a rebase.

Please try applying these patches on top of [1]. I think you should be
able
to apply it cleanly. Sorry, I think I forgot to mention this point in my
earlier mail.

[1] -

/messages/by-id/CAE9k0P=V2LhtyeMXd295fhisp=NWUhRVJ9EZQCDowWiY9rSohQ@mail.gmail.com

Thanks, that works !

As you have provided a patch for Robert's comments, and no other reviews have
been posted, I'm moving this patch to "Ready for Committer" for additional
committer feedback.

Please find the attached new version of patches for page scan mode in
hash index, rebased on top of [1].

[1] - /messages/by-id/CAE9k0P=3rtgUtxopG+XQVxgASFzAnGd9sNmYUaj_=KeVsKGUdA@mail.gmail.com

Attachments:

0001-Rewrite-hash-index-scans-to-work-a-page-at-a-timev5.patch (text/x-patch, +509 -172)
0002-Remove-redundant-function-_hash_step-and-some-of-the.patch (text/x-patch, +0 -222)
0003-Improve-locking-startegy-during-VACUUM-in-Hash-Index.patch (text/x-patch, +7 -7)
#17Amit Kapila
amit.kapila16@gmail.com
In reply to: Ashutosh Sharma (#12)
Re: Page Scan Mode in Hash Index

On Mon, Mar 27, 2017 at 7:04 PM, Ashutosh Sharma <ashu.coek88@gmail.com> wrote:

I am suspicious that _hash_kill_items() is going to have problems if
the overflow page is freed before it reacquires the lock.
_bt_killitems() contains safeguards against similar cases.

I have added similar check for hash_kill_items() as well.

I think _hash_kill_items has a much bigger problem, which is that we
can't kill the items if the page has been modified after re-reading
it. Consider a case where Vacuum starts before the Scan on the bucket,
and the Scan then goes ahead (which is possible after your 0003 patch)
and notes the killed items in the killed-item array; before it can kill
all the items, Vacuum removes those items. Now it is quite possible
that before the scan tries to kill those items, the corresponding itemids
have been reused by different tuples. I think what we need to do here
is to store the LSN of the page the first time we read it, and
then compare it with the current page LSN after re-reading the page in
_hash_kill_items.

*
+ HashScanPosItem items[MaxIndexTuplesPerPage]; /* MUST BE LAST */
+} HashScanPosData;
..
HashScanPosItem *killedItems; /* tids and offset numbers of killed items */
  int numKilled; /* number of currently stored items */
+
+ /*
+ * Identify all the matching items on a page and save them
+ * in HashScanPosData
+ */
+ HashScanPosData currPos; /* current position data */
 } HashScanOpaqueData;

After having the array of HashScanPosItems as currPos.items, I think you
don't need an array of HashScanPosItem for killedItems; an integer
array of indexes into currPos.items should be sufficient.

*
I think below para in README is not valid after this patch.

"To keep concurrency reasonably good, we require readers to cope with
concurrent insertions, which means that they have to be able to
re-find
their current scan position after re-acquiring the buffer content lock
on page. Since deletion is not possible while a reader holds the pin
on bucket, and we assume that heap tuple TIDs are unique, this can be
implemented by searching for the same heap tuple TID previously
returned. Insertion does not move index entries across pages, so the
previously-returned index entry should always be on the same page, at
the same or higher offset number,
as it was before."

*
- next_buf = _hash_getbuf_with_strategy(rel, blkno, HASH_WRITE,
- LH_OVERFLOW_PAGE,
- bstrategy);
-

  /*
- * release the lock on previous page after acquiring the lock on next
- * page
+ * As the hash index scan work in page at a time mode,
+ * vacuum can release the lock on previous page before
+ * acquiring lock on the next page.
  */
..
+ next_buf = _hash_getbuf_with_strategy(rel, blkno, HASH_WRITE,
+   LH_OVERFLOW_PAGE,
+   bstrategy);
+

After this change, you need to modify comments on top of function
hashbucketcleanup() and _hash_squeezebucket().

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


#18Ashutosh Sharma
ashu.coek88@gmail.com
In reply to: Amit Kapila (#17)
Re: Page Scan Mode in Hash Index

Hi,

Thanks for the review.

I am suspicious that _hash_kill_items() is going to have problems if
the overflow page is freed before it reacquires the lock.
_bt_killitems() contains safeguards against similar cases.

I have added similar check for hash_kill_items() as well.

I think _hash_kill_items has a much bigger problem, which is that we
can't kill the items if the page has been modified after re-reading
it. Consider a case where Vacuum starts before the Scan on the bucket,
and the Scan then goes ahead (which is possible after your 0003 patch)
and notes the killed items in the killed-item array; before it can kill
all the items, Vacuum removes those items. Now it is quite possible
that before the scan tries to kill those items, the corresponding itemids
have been reused by different tuples. I think what we need to do here
is to store the LSN of the page the first time we read it, and
then compare it with the current page LSN after re-reading the page in
_hash_kill_items.

Okay, understood. I have done the changes to handle this type of
scenario. Please refer to the attached patches. Thanks.

*
+ HashScanPosItem items[MaxIndexTuplesPerPage]; /* MUST BE LAST */
+} HashScanPosData;
..
HashScanPosItem *killedItems; /* tids and offset numbers of killed items */
int numKilled; /* number of currently stored items */
+
+ /*
+ * Identify all the matching items on a page and save them
+ * in HashScanPosData
+ */
+ HashScanPosData currPos; /* current position data */
} HashScanOpaqueData;

After having the array of HashScanPosItems as currPos.items, I think you
don't need an array of HashScanPosItem for killedItems; an integer
array of indexes into currPos.items should be sufficient.

Corrected.

*
I think below para in README is not valid after this patch.

"To keep concurrency reasonably good, we require readers to cope with
concurrent insertions, which means that they have to be able to
re-find
their current scan position after re-acquiring the buffer content lock
on page. Since deletion is not possible while a reader holds the pin
on bucket, and we assume that heap tuple TIDs are unique, this can be
implemented by searching for the same heap tuple TID previously
returned. Insertion does not move index entries across pages, so the
previously-returned index entry should always be on the same page, at
the same or higher offset number,
as it was before."

I have modified above paragraph in README file.

*
- next_buf = _hash_getbuf_with_strategy(rel, blkno, HASH_WRITE,
- LH_OVERFLOW_PAGE,
- bstrategy);
-

/*
- * release the lock on previous page after acquiring the lock on next
- * page
+ * As the hash index scan work in page at a time mode,
+ * vacuum can release the lock on previous page before
+ * acquiring lock on the next page.
*/
..
+ next_buf = _hash_getbuf_with_strategy(rel, blkno, HASH_WRITE,
+   LH_OVERFLOW_PAGE,
+   bstrategy);
+

After this change, you need to modify comments on top of function
hashbucketcleanup() and _hash_squeezebucket().

Done.

Please note that these patches need to be applied on top of [1].

[1] - /messages/by-id/CAE9k0P=3rtgUtxopG+XQVxgASFzAnGd9sNmYUaj_=KeVsKGUdA@mail.gmail.com

--
With Regards,
Ashutosh Sharma
EnterpriseDB:http://www.enterprisedb.com

Attachments:

0001-Rewrite-hash-index-scans-to-work-a-page-at-a-timev6.patch (text/x-patch, +552 -190)
0002-Remove-redundant-function-_hash_step-and-some-of-the.patch (text/x-patch, +0 -222)
0003-Improve-locking-startegy-during-VACUUM-in-Hash-Index.patch (text/x-patch, +12 -16)
#19Amit Kapila
amit.kapila16@gmail.com
In reply to: Ashutosh Sharma (#18)
Re: Page Scan Mode in Hash Index

On Sun, Apr 2, 2017 at 4:14 AM, Ashutosh Sharma <ashu.coek88@gmail.com> wrote:

Please note that these patches need to be applied on top of [1].

Few more review comments:

1.
- page = BufferGetPage(so->hashso_curbuf);
+ blkno = so->currPos.currPage;
+ if (so->hashso_bucket_buf == so->currPos.buf)
+ {
+ buf = so->currPos.buf;
+ LockBuffer(buf, BUFFER_LOCK_SHARE);
+ page = BufferGetPage(buf);
+ }

Here, you are assuming that only the bucket page can be pinned, but I
think that assumption is not right. When _hash_kill_items() is called
before moving to the next page, there could be a pin on the overflow page.
You need some logic to check whether the buffer is pinned, and then just
lock it. I think once you do that, it might not be convenient to release
the pin at the end of this function.

Add some comments on top of _hash_kill_items to explain the new
changes or say some thing like "See _bt_killitems for details"

2.
+ /*
+ * We save the LSN of the page as we read it, so that we know whether it
+ * safe to apply LP_DEAD hints to the page later.  This allows us to drop
+ * the pin for MVCC scans, which allows vacuum to avoid blocking.
+ */
+ so->currPos.lsn = PageGetLSN(page);
+

The second part of the above comment doesn't make sense, because vacuum
will still be blocked while we hold the pin on the primary bucket page.

3.
+ {
+ /*
+ * No more matching tuples were found. return FALSE
+ * indicating the same. Also, remember the prev and
+ * next block number so that if fetching tuples using
+ * cursor we remember the page from where to start the
+ * scan.
+ */
+ so->currPos.prevPage = (opaque)->hasho_prevblkno;
+ so->currPos.nextPage = (opaque)->hasho_nextblkno;

You can't read opaque without holding the lock, and by this time it has
already been released. Also, I think if you want to save the position for
cursor movement, then you need to save the position of the last bucket
page when the scan completes the overflow chain; however, as you have
written it, it will always be an invalid block number. I think there is
a similar problem with the prev block number.

4.
+_hash_load_qualified_items(IndexScanDesc scan, Page page, OffsetNumber offnum,
+   OffsetNumber maxoff, ScanDirection dir)
+{
+ HashScanOpaque so = (HashScanOpaque) scan->opaque;
+ IndexTuple      itup;
+ int itemIndex;
+
+ if (ScanDirectionIsForward(dir))
+ {
+ /* load items[] in ascending order */
+ itemIndex = 0;
+
+ /* new page, relocate starting position by binary search */
+ offnum = _hash_binsearch(page, so->hashso_sk_hash);

What is the need to find the offset number when this function already has
it as an input parameter?

5.
+_hash_load_qualified_items(IndexScanDesc scan, Page page, OffsetNumber offnum,
+   OffsetNumber maxoff, ScanDirection dir)

I think maxoff is not required to be passed as a parameter to this
function, as you need it only for the forward scan. You can compute it
locally.

6.
+_hash_load_qualified_items(IndexScanDesc scan, Page page, OffsetNumber offnum,
+   OffsetNumber maxoff, ScanDirection dir)
{
..
+ if (ScanDirectionIsForward(dir))
+ {
..
+ while (offnum <= maxoff)
{
..
+ if (so->hashso_sk_hash == _hash_get_indextuple_hashkey(itup) &&
+ _hash_checkqual(scan, itup))
+ {
+ /* tuple is qualified, so remember it */
+ _hash_saveitem(so, itemIndex, offnum, itup);
+ itemIndex++;
+ }
+
+ offnum = OffsetNumberNext(offnum);
..

Why are you traversing the whole page even when there is no match?
There is a similar problem in the backward scan. I think this is a blunder.

7.
+ if (so->currPos.buf == so->hashso_bucket_buf ||
+ so->currPos.buf == so->hashso_split_bucket_buf)
+ {
+ so->currPos.prevPage = InvalidBlockNumber;
+ LockBuffer(so->currPos.buf, BUFFER_LOCK_UNLOCK);
+ }
+ else
+ {
+ so->currPos.prevPage = (opaque)->hasho_prevblkno;
+ _hash_relbuf(rel, so->currPos.buf);
+ }
+
+ so->currPos.nextPage = (opaque)->hasho_nextblkno;

What makes you think it is safe to read opaque after releasing the lock?

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


#20Amit Kapila
amit.kapila16@gmail.com
In reply to: Ashutosh Sharma (#12)
Re: Page Scan Mode in Hash Index

On Mon, Mar 27, 2017 at 7:04 PM, Ashutosh Sharma <ashu.coek88@gmail.com> wrote:

My guess (which could be wrong) is that so->hashso_bucket_buf =

InvalidBuffer should be moved back up higher in the function where it
was before, just after the first if statement, and that the new
condition so->hashso_bucket_buf == so->currPos.buf on the last
_hash_dropbuf() should be removed. If that's not right, then I think
I need somebody to explain why not.

Okay, as I mentioned above, in page scan mode we only keep a pin
on the bucket buf. There won't be any case where we are holding a pin
on an overflow buf at the end of the scan.

What if we mark the buffer as invalid after releasing the pin? We are
already doing it that way in the btree code; refer to
_bt_drop_lock_and_maybe_pin(). I think if we do it that way, then we can
do what Robert is suggesting.

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


#21Robert Haas
robertmhaas@gmail.com
In reply to: Amit Kapila (#20)
#22Ashutosh Sharma
ashu.coek88@gmail.com
In reply to: Amit Kapila (#19)
#23Ashutosh Sharma
ashu.coek88@gmail.com
In reply to: Ashutosh Sharma (#22)
#24Ashutosh Sharma
ashu.coek88@gmail.com
In reply to: Ashutosh Sharma (#23)
#25Amit Kapila
amit.kapila16@gmail.com
In reply to: Ashutosh Sharma (#24)
#26Robert Haas
robertmhaas@gmail.com
In reply to: Amit Kapila (#25)
#27Ashutosh Sharma
ashu.coek88@gmail.com
In reply to: Amit Kapila (#25)
#28Amit Kapila
amit.kapila16@gmail.com
In reply to: Ashutosh Sharma (#27)
#29Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#28)
#30Ashutosh Sharma
ashu.coek88@gmail.com
In reply to: Amit Kapila (#28)
#31Amit Kapila
amit.kapila16@gmail.com
In reply to: Ashutosh Sharma (#30)
#32Ashutosh Sharma
ashu.coek88@gmail.com
In reply to: Amit Kapila (#31)
#33Amit Kapila
amit.kapila16@gmail.com
In reply to: Ashutosh Sharma (#32)
#34Ashutosh Sharma
ashu.coek88@gmail.com
In reply to: Amit Kapila (#33)
#35Amit Kapila
amit.kapila16@gmail.com
In reply to: Ashutosh Sharma (#34)
#36Jesper Pedersen
jesper.pedersen@redhat.com
In reply to: Amit Kapila (#35)
#37Ashutosh Sharma
ashu.coek88@gmail.com
In reply to: Amit Kapila (#35)
#38Ashutosh Sharma
ashu.coek88@gmail.com
In reply to: Jesper Pedersen (#36)
#39Jesper Pedersen
jesper.pedersen@redhat.com
In reply to: Ashutosh Sharma (#38)
#40Robert Haas
robertmhaas@gmail.com
In reply to: Jesper Pedersen (#39)
#41Amit Kapila
amit.kapila16@gmail.com
In reply to: Robert Haas (#40)
#42Ashutosh Sharma
ashu.coek88@gmail.com
In reply to: Robert Haas (#40)
#43Robert Haas
robertmhaas@gmail.com
In reply to: Amit Kapila (#41)
#44Amit Kapila
amit.kapila16@gmail.com
In reply to: Robert Haas (#43)
#45Robert Haas
robertmhaas@gmail.com
In reply to: Amit Kapila (#44)
#46Amit Kapila
amit.kapila16@gmail.com
In reply to: Robert Haas (#45)
#47Robert Haas
robertmhaas@gmail.com
In reply to: Amit Kapila (#46)
#48Amit Kapila
amit.kapila16@gmail.com
In reply to: Robert Haas (#47)
#49Robert Haas
robertmhaas@gmail.com
In reply to: Ashutosh Sharma (#42)
#50Ashutosh Sharma
ashu.coek88@gmail.com
In reply to: Robert Haas (#49)
#51Robert Haas
robertmhaas@gmail.com
In reply to: Ashutosh Sharma (#50)
#52Ashutosh Sharma
ashu.coek88@gmail.com
In reply to: Robert Haas (#51)
#53Ashutosh Sharma
ashu.coek88@gmail.com
In reply to: Robert Haas (#51)
#54Robert Haas
robertmhaas@gmail.com
In reply to: Ashutosh Sharma (#53)
#55Ashutosh Sharma
ashu.coek88@gmail.com
In reply to: Robert Haas (#54)
#56Amit Kapila
amit.kapila16@gmail.com
In reply to: Robert Haas (#54)
#57Robert Haas
robertmhaas@gmail.com
In reply to: Amit Kapila (#56)