Parallel Index Scans

Started by Amit Kapila · over 9 years ago · 89 messages · pgsql-hackers
#1Amit Kapila
amit.kapila16@gmail.com

As of now, the driving table for parallel query is accessed by
parallel sequential scan which limits its usage to a certain degree.
Parallelising index scans would further increase the usage of parallel
query in many more cases. This patch enables the parallelism for the
btree scans. Supporting parallel index scan for other index types
like hash, gist, spgist can be done as separate patches.

The basic idea is quite similar to parallel heap scans: each worker
(including the leader whenever possible) will scan a block and then
get the next block that is required to be scanned. The parallelism is
implemented at the leaf level of a btree. The first worker to start a
btree scan will descend to the leaf, and the others will wait till the
first worker has reached the leaf. After reading the leaf block, the
first worker will set the next block to be read, wake the first worker
waiting to scan the next block, and proceed with scanning tuples from
the block it has read. Similarly, each worker, after reading a block,
sets the next block to be read and wakes up the first waiting worker.
This is achieved by using the condition variable patch [1] proposed by
Robert. Parallelism is supported for both forward and backward scans.
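The hand-off described above can be sketched as a tiny state machine. Note this is a hypothetical, single-threaded model for illustration only: the names (PS_*, bt_seize, bt_release) and the sentinel values are invented here, and where the real patch would block on a condition variable, this sketch merely reports that a wait would occur.

```c
#include <assert.h>

/* Invented, simplified model of the leaf-level block hand-off; not the
 * patch's actual API. */
typedef enum { PS_NOT_STARTED, PS_IN_PROGRESS, PS_IDLE, PS_DONE } PageStatus;

typedef struct
{
    PageStatus status;
    int        next_block;  /* next leaf block to read; -1 = none */
} ParallelBtScan;

/* A worker calls this to claim the next leaf block.  In the real patch
 * the PS_IN_PROGRESS case would sleep on a condition variable; here,
 * single-threaded, we just return -1 to say "would have to wait". */
int
bt_seize(ParallelBtScan *s, int *blkno)
{
    if (s->status == PS_DONE)
    {
        *blkno = -1;                /* scan has ended for these scankeys */
        return 0;
    }
    if (s->status == PS_IN_PROGRESS)
        return -1;                  /* another worker is reading the leaf */
    if (s->status == PS_NOT_STARTED)
    {
        s->status = PS_IN_PROGRESS; /* first worker descends to the leaf */
        *blkno = -2;                /* sentinel: caller must do the descent */
        return 1;
    }
    /* PS_IDLE: the previous reader already published the next block */
    s->status = PS_IN_PROGRESS;
    *blkno = s->next_block;
    return 1;
}

/* After reading a leaf page, the worker publishes the next block and
 * (in the real patch) signals one waiter on the condition variable. */
void
bt_release(ParallelBtScan *s, int next_blkno)
{
    if (next_blkno < 0)
        s->status = PS_DONE;        /* no right sibling: scan is finished */
    else
    {
        s->next_block = next_blkno;
        s->status = PS_IDLE;
    }
}
```

The point of the protocol is that only the descent to the leaf is serialized; once a worker has seized a block, the others can be handed their blocks one at a time while it scans tuples.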

The optimizer chooses parallelism based on the number of pages in the
index relation, and the CPU cost of evaluating the rows is divided
equally among workers. The Index Scan node is made parallel-aware and
can be used beneath Gather as shown below:

Current Plan for Index Scans
----------------------------------------
Index Scan using idx2 on test (cost=0.42..7378.96 rows=2433 width=29)
Index Cond: (c < 10)

Parallel version of plan
----------------------------------
Gather (cost=1000.42..1243.40 rows=2433 width=29)
Workers Planned: 1
-> Parallel Index Scan using idx2 on test (cost=0.42..0.10 rows=1431 width=29)
Index Cond: (c < 10)

Parallel index scans can be used to parallelise aggregate queries as
well. For example, given a query like: select count(*) from t1 where
c1 > 1000 and c1 < 1100 and c2='aaa' Group By c2; the following forms
of parallel plans are possible:

Finalize HashAggregate
Group Key: c2
-> Gather
Workers Planned: 1
-> Partial HashAggregate
Group Key: c2
-> Parallel Index Scan using idx_t1_partial on t1
Index Cond: ((c1 > 1000) AND (c1 < 1100))
Filter: (c2 = 'aaa'::bpchar)

OR

Finalize GroupAggregate
Group Key: c2
-> Sort
-> Gather
Workers Planned: 1
-> Partial GroupAggregate
Group Key: c2
-> Parallel Index Scan using idx_t1_partial on t1
Index Cond: ((c1 > 1000) AND (c1 < 1100))
Filter: (c2 = 'aaa'::bpchar)

In the second plan (GroupAggregate), the Sort + Gather step would be
replaced with GatherMerge, once we have a GatherMerge node as proposed
by Rushabh [2]. Note that the above examples are just meant to explain
the usage of parallel index scan; actual plans will be selected based
on cost.

Performance tests
----------------------------
This test has been performed on a community machine (hydra, POWER-7).

Initialize pgbench with 3000 scale factor (./pgbench -i -s 3000 postgres)

Count the rows in pgbench_accounts based on values of aid and bid

Serial plan
------------------
set max_parallel_workers_per_gather=0;

postgres=# explain analyze select count(aid) from pgbench_accounts
where aid > 1000 and aid < 90000000 and bid > 800 and bid < 900;

QUERY PLAN

------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Aggregate (cost=4714590.52..4714590.53 rows=1 width=8) (actual time=35684.425..35684.425 rows=1 loops=1)
-> Index Scan using pgbench_accounts_pkey on pgbench_accounts (cost=0.57..4707458.12 rows=2852961 width=4) (actual time=29210.743..34385.271 rows=9900000 loops=1)
Index Cond: ((aid > 1000) AND (aid < 90000000))
Filter: ((bid > 800) AND (bid < 900))
Rows Removed by Filter: 80098999
Planning time: 0.183 ms
Execution time: 35684.459 ms
(7 rows)

Parallel Plan
-------------------
set max_parallel_workers_per_gather=2;

postgres=# explain analyze select count(aid) from pgbench_accounts
where aid > 1000 and aid < 90000000 and bid > 800 and bid < 900;

QUERY PLAN

---------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Finalize Aggregate (cost=3924773.13..3924773.14 rows=1 width=8) (actual time=15033.105..15033.105 rows=1 loops=1)
-> Gather (cost=3924772.92..3924773.12 rows=2 width=8) (actual time=15032.986..15033.093 rows=3 loops=1)
Workers Planned: 2
Workers Launched: 2
-> Partial Aggregate (cost=3923772.92..3923772.92 rows=1 width=8) (actual time=15030.354..15030.354 rows=1 loops=3)
-> Parallel Index Scan using pgbench_accounts_pkey on pgbench_accounts (cost=0.57..3920801.08 rows=1188734 width=4) (actual time=12476.068..14600.410 rows=3300000 loops=3)
Index Cond: ((aid > 1000) AND (aid < 90000000))
Filter: ((bid > 800) AND (bid < 900))
Rows Removed by Filter: 26699666
Planning time: 0.244 ms
Execution time: 15036.081 ms
(11 rows)

The above is a median of 3 runs; all the runs gave almost the same
execution time. Here we can notice that execution time is reduced by
more than half with two workers, and I have tested with four workers,
where time is reduced to one-fourth (9128.420 ms) of the serial plan.
I think these results are quite similar to what we got for parallel
sequential scans. Another thing to note is that parallelising index
scans is more beneficial if there is a Filter which removes many rows
fetched from the Index Scan, or if the Filter is costly (for example,
a filter that contains costly function execution). This observation is
also quite similar to what we have observed with Parallel Sequential
Scans.

I think we can parallelise Index Only Scans as well, but I have not
evaluated that, and it can certainly be done as a separate patch in
future.

Contributions
--------------------
First patch (parallel_index_scan_v1.patch) implements parallelism at
IndexAM level - Rahila Syed and Amit Kapila based on design inputs and
suggestions by Robert Haas
Second patch (parallel_index_opt_exec_support_v1.patch) provides
optimizer and executor support for parallel index scans - Amit Kapila

To use these patches, first apply the condition variable patch [1],
then parallel_index_scan_v1.patch, and finally
parallel_index_opt_exec_support_v1.patch.

Thoughts?

[1]: /messages/by-id/CAEepm=0zshYwB6wDeJCkrRJeoBM=jPYBe+-k_VtKRU_8zMLEfA@mail.gmail.com
[2]: /messages/by-id/CAGPqQf09oPX-cQRpBKS0Gq49Z+m6KBxgxd_p9gX8CKk_d75HoQ@mail.gmail.com

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

Attachments:

parallel_index_scan_v1.patch (+749/-91)
parallel_index_opt_exec_support_v1.patch (+340/-36)
#2Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#1)
Re: Parallel Index Scans

On Thu, Oct 13, 2016 at 8:48 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:

As of now, the driving table for parallel query is accessed by
parallel sequential scan which limits its usage to a certain degree.
Parallelising index scans would further increase the usage of parallel
query in many more cases. This patch enables the parallelism for the
btree scans. Supporting parallel index scan for other index types
like hash, gist, spgist can be done as separate patches.

I would like to have input on the method of selecting parallel
workers for scanning an index. Currently the patch selects the number
of workers based on the size of the index relation, with the upper
limit being max_parallel_workers_per_gather. This is quite similar to
what we do for parallel sequential scan, except that for parallel seq.
scan we use the parallel_workers option if provided by the user during
Create Table. The user can provide the parallel_workers option as
below:

Create Table .... With (parallel_workers = 4);

Is it desirable to have a similar option for parallel index scans, and
if yes, what should the interface be? One possible way could be to
allow the user to provide it during Create Index as below:

Create Index .... With (parallel_workers = 4);

If the above syntax looks sensible, then we might need to think about
what should be used for parallel index build. It seems to me that the
parallel tuple sort patch [1] proposed by Peter G. is using the above
syntax for getting the parallel workers input from the user for
parallel index builds.

Another point which needs some thought is whether it is a good idea to
use index relation size to calculate parallel workers for an index
scan. I think ideally for index scans it should be based on the number
of pages to be fetched/scanned from the index.

[1]: /messages/by-id/CAM3SWZTmkOFEiCDpUNaO4n9-1xcmWP-1NXmT7h0Pu3gM2YuHvg@mail.gmail.com

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#3Rahila Syed
rahilasyed90@gmail.com
In reply to: Amit Kapila (#2)
Re: Parallel Index Scans

Another point which needs some thoughts is whether it is good idea to
use index relation size to calculate parallel workers for index scan.
I think ideally for index scans it should be based on number of pages
to be fetched/scanned from index.

IIUC, it's not possible to know the exact number of pages scanned from
an index in advance. What we are essentially making parallel is the
scan of the leaf pages, so it will make sense to base the number of
workers on the number of leaf pages. Having said that, I think it will
not make much difference compared to the existing method, because
currently total index pages are used to calculate the number of
workers. As far as I understand, in large indexes the difference
between the number of leaf pages and total pages is not significant;
in other words, internal pages form a small fraction of total pages.
Also, the calculation is based on the log of the number of pages, so
it will make even less difference.
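The log-scaled growth mentioned above can be illustrated with a small sketch. This is not the committed planner code; the function name, threshold parameter, and tripling factor are illustrative of the general heuristic: one worker once the index crosses a size threshold, one more each time the page count triples, capped by the configured maximum.

```c
#include <assert.h>

/* Illustrative sketch (not the actual planner code) of a log-scaled
 * worker-count heuristic. */
int
guess_parallel_workers(long index_pages, long threshold_pages, int max_workers)
{
    int  workers = 0;
    long limit = threshold_pages;

    if (index_pages < threshold_pages)
        return 0;               /* too small to benefit from workers */

    while (index_pages >= limit && workers < max_workers)
    {
        workers++;
        limit *= 3;             /* tripling the size adds one worker */
    }
    return workers;
}
```

Because growth is logarithmic, basing the computation on leaf pages rather than total pages barely changes the result for a large index, which is the point made above.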

Thank you,
Rahila Syed

On Tue, Oct 18, 2016 at 8:38 AM, Amit Kapila <amit.kapila16@gmail.com>
wrote:


On Thu, Oct 13, 2016 at 8:48 AM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

As of now, the driving table for parallel query is accessed by
parallel sequential scan which limits its usage to a certain degree.
Parallelising index scans would further increase the usage of parallel
query in many more cases. This patch enables the parallelism for the
btree scans. Supporting parallel index scan for other index types
like hash, gist, spgist can be done as separate patches.

I would like to have an input on the method of selecting parallel
workers for scanning index. Currently the patch selects number of
workers based on size of index relation and the upper limit of
parallel workers is max_parallel_workers_per_gather. This is quite
similar to what we do for parallel sequential scan except for the fact
that in parallel seq. scan, we use the parallel_workers option if
provided by user during Create Table. User can provide
parallel_workers option as below:

Create Table .... With (parallel_workers = 4);

Is it desirable to have similar option for parallel index scans, if
yes then what should be the interface for same? One possible way
could be to allow user to provide it during Create Index as below:

Create Index .... With (parallel_workers = 4);

If above syntax looks sensible, then we might need to think what
should be used for parallel index build. It seems to me that parallel
tuple sort patch [1] proposed by Peter G. is using above syntax for
getting the parallel workers input from user for parallel index
builds.

Another point which needs some thoughts is whether it is good idea to
use index relation size to calculate parallel workers for index scan.
I think ideally for index scans it should be based on number of pages
to be fetched/scanned from index.

[1] - /messages/by-id/CAM3SWZTmkOFEiCDpUNaO4n9-1xcmWP-1NXmT7h0Pu3gM2YuHvg@mail.gmail.com

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#4Amit Kapila
amit.kapila16@gmail.com
In reply to: Rahila Syed (#3)
Re: Parallel Index Scans

On Tue, Oct 18, 2016 at 4:08 PM, Rahila Syed <rahilasyed90@gmail.com> wrote:

Another point which needs some thoughts is whether it is good idea to
use index relation size to calculate parallel workers for index scan.
I think ideally for index scans it should be based on number of pages
to be fetched/scanned from index.

IIUC, its not possible to know the exact number of pages scanned from an
index
in advance.

We can't find the exact number of index pages to be scanned, but I
think we can find the estimated number of pages to be fetched (refer
cost_index).

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#5Peter Geoghegan
pg@heroku.com
In reply to: Amit Kapila (#2)
Re: Parallel Index Scans

On Mon, Oct 17, 2016 at 8:08 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:

Create Index .... With (parallel_workers = 4);

If above syntax looks sensible, then we might need to think what
should be used for parallel index build. It seems to me that parallel
tuple sort patch [1] proposed by Peter G. is using above syntax for
getting the parallel workers input from user for parallel index
builds.

Apparently you see a similar issue with other major database systems,
where similar storage parameters are kind of "overloaded" like this
(they are used both by index creation and by the optimizer in
considering whether it should use a parallel index scan). That can be
a kind of gotcha for their users, but maybe it's still worth it. In
any case, the complaints I saw about that were from users who used
parallel CREATE INDEX with the equivalent of my parallel_workers index
storage parameter, and then unexpectedly found that this also forced
the use of parallel index scan. Not the other way around.

Ideally, the parallel_workers storage parameter will rarely be
necessary because the optimizer will generally do the right thing in
all cases.

--
Peter Geoghegan

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#6Amit Kapila
amit.kapila16@gmail.com
In reply to: Peter Geoghegan (#5)
Re: Parallel Index Scans

On Thu, Oct 20, 2016 at 7:39 AM, Peter Geoghegan <pg@heroku.com> wrote:

On Mon, Oct 17, 2016 at 8:08 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:

Create Index .... With (parallel_workers = 4);

If above syntax looks sensible, then we might need to think what
should be used for parallel index build. It seems to me that parallel
tuple sort patch [1] proposed by Peter G. is using above syntax for
getting the parallel workers input from user for parallel index
builds.

Apparently you see a similar issue with other major database systems,
where similar storage parameter things are kind of "overloaded" like
this (they are used by both index creation, and by the optimizer in
considering whether it should use a parallel index scan). That can be
a kind of a gotcha for their users, but maybe it's still worth it.

I have also checked and found that you are right. In SQL Server, they
are using the max degree of parallelism (MAXDOP) parameter, which I
think is common for all SQL statements.

In
any case, the complaints I saw about that were from users who used
parallel CREATE INDEX with the equivalent of my parallel_workers index
storage parameter, and then unexpectedly found this also forced the
use of parallel index scan. Not the other way around.

I can understand that it can be confusing to users, so another option
could be to provide separate parameters like parallel_workers_build
and parallel_workers, where the first can be used for index build and
the second for scan. My personal opinion is to have one parameter, so
that users have one less thing to learn about parallelism.

Ideally, the parallel_workers storage parameter will rarely be
necessary because the optimizer will generally do the right thing in
all case.

Yeah, we can choose not to provide any parameter for parallel index
scans, but some users might want to have a parameter similar to
parallel table scans, so it could be handy for them to use.

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#7Peter Geoghegan
pg@heroku.com
In reply to: Amit Kapila (#6)
Re: Parallel Index Scans

On Wed, Oct 19, 2016 at 8:07 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:

I have also checked and found that you are right. In SQL Server, they
are using max degree of parallelism (MAXDOP) parameter which is I
think is common for all the sql statements.

It's not just that one that does things this way, for what it's worth.

I can understand that it can be confusing to users, so other option
could be to provide separate parameters like parallel_workers_build
and parallel_workers where first can be used for index build and
second can be used for scan. My personal opinion is to have one
parameter, so that users have one less thing to learn about
parallelism.

That's my first instinct too, but I don't really have an opinion yet.

I think that this is the kind of thing where it could make sense to
take a "wait and see" approach, and then make a firm decision
immediately prior to beta. This is what we did in deciding the name of
and fine details around what ultimately became the
max_parallel_workers_per_gather GUC (plus related GUCs and storage
parameters).

--
Peter Geoghegan

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#8Robert Haas
robertmhaas@gmail.com
In reply to: Amit Kapila (#6)
Re: Parallel Index Scans

On Wed, Oct 19, 2016 at 11:07 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:

Ideally, the parallel_workers storage parameter will rarely be
necessary because the optimizer will generally do the right thing in
all case.

Yeah, we can choose not to provide any parameter for parallel index
scans, but some users might want to have a parameter similar to
parallel table scans, so it could be handy for them to use.

I think the parallel_workers reloption should override the degree of
parallelism for any sort of parallel scan on that table. Had I
intended it to apply only to sequential scans, I would have named it
differently.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#9Amit Kapila
amit.kapila16@gmail.com
In reply to: Robert Haas (#8)
Re: Parallel Index Scans

On Thu, Oct 20, 2016 at 10:33 PM, Robert Haas <robertmhaas@gmail.com> wrote:

On Wed, Oct 19, 2016 at 11:07 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:

Ideally, the parallel_workers storage parameter will rarely be
necessary because the optimizer will generally do the right thing in
all case.

Yeah, we can choose not to provide any parameter for parallel index
scans, but some users might want to have a parameter similar to
parallel table scans, so it could be handy for them to use.

I think the parallel_workers reloption should override the degree of
parallelism for any sort of parallel scan on that table. Had I
intended it to apply only to sequential scans, I would have named it
differently.

I think there is a big difference in the size of the relation to scan
between a parallel sequential scan and a parallel (range) index scan,
which could make it difficult for the user to choose the value of this
parameter. Why do you think that the parallel_workers reloption should
suffice for all types of scans on a table? I could only think of
providing it on the basis that fewer config knobs make life easier.

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#10Robert Haas
robertmhaas@gmail.com
In reply to: Amit Kapila (#9)
Re: Parallel Index Scans

On Fri, Oct 21, 2016 at 9:27 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:

I think the parallel_workers reloption should override the degree of
parallelism for any sort of parallel scan on that table. Had I
intended it to apply only to sequential scans, I would have named it
differently.

I think there is big difference of size of relation to scan between
parallel sequential scan and parallel (range) index scan which could
make it difficult for user to choose the value of this parameter. Why
do you think that the parallel_workers reloption should suffice all
type of scans for a table? I could only think of providing it based
on thinking that lesser config knobs makes life easier.

Well, we could do that, but it would be fairly complicated and it
doesn't seem to me to be the right place to focus our efforts. I'd
rather try to figure out some way to make the planner smarter, because
even if users can override the number of workers on a
per-table-per-scan-type basis, they're probably still going to find
using parallel query pretty frustrating unless we make the
number-of-workers formula smarter than it is today. Anyway, even if
we do decide to add more reloptions than just parallel_degree someday,
couldn't that be left for a separate patch?

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#11Amit Kapila
amit.kapila16@gmail.com
In reply to: Robert Haas (#10)
Re: Parallel Index Scans

On Fri, Oct 21, 2016 at 10:55 PM, Robert Haas <robertmhaas@gmail.com> wrote:

On Fri, Oct 21, 2016 at 9:27 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:

I think the parallel_workers reloption should override the degree of
parallelism for any sort of parallel scan on that table. Had I
intended it to apply only to sequential scans, I would have named it
differently.

I think there is big difference of size of relation to scan between
parallel sequential scan and parallel (range) index scan which could
make it difficult for user to choose the value of this parameter. Why
do you think that the parallel_workers reloption should suffice all
type of scans for a table? I could only think of providing it based
on thinking that lesser config knobs makes life easier.

Well, we could do that, but it would be fairly complicated and it
doesn't seem to me to be the right place to focus our efforts. I'd
rather try to figure out some way to make the planner smarter, because
even if users can override the number of workers on a
per-table-per-scan-type basis, they're probably still going to find
using parallel query pretty frustrating unless we make the
number-of-workers formula smarter than it is today. Anyway, even if
we do decide to add more reloptions than just parallel_degree someday,
couldn't that be left for a separate patch?

That makes sense to me. As of now, the patch doesn't consider
reloptions for parallel index scans. So I think we can leave it as it
is, and then later, as a separate patch, decide whether to use the
table's reloption or whether a separate reloption for the index would
be the better choice.

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#12Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#11)
Re: Parallel Index Scans

On Sat, Oct 22, 2016 at 9:07 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:

On Fri, Oct 21, 2016 at 10:55 PM, Robert Haas <robertmhaas@gmail.com> wrote:

I have rebased the patch (parallel_index_scan_v2) on top of the latest
commit e8ac886c (condition variables). I have removed the usage of
ConditionVariablePrepareToSleep as that is no longer mandatory. I have
also updated the docs for the wait event introduced by this patch
(thanks to Dilip for noticing it). There is no change in the
parallel_index_opt_exec_support patch, but I am attaching it here for
easier reference.

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

Attachments:

parallel_index_scan_v2.patch (+742/-92)
parallel_index_opt_exec_support_v2.patch (+340/-36)
#13Haribabu Kommi
kommi.haribabu@gmail.com
In reply to: Amit Kapila (#12)
Re: Parallel Index Scans

On Sat, Nov 26, 2016 at 10:35 PM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

On Sat, Oct 22, 2016 at 9:07 AM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

On Fri, Oct 21, 2016 at 10:55 PM, Robert Haas <robertmhaas@gmail.com>

wrote:

I have rebased the patch (parallel_index_scan_v2) based on latest
commit e8ac886c (condition variables). I have removed the usage of
ConditionVariablePrepareToSleep as that is is no longer mandatory. I
have also updated docs for wait event introduced by this patch (thanks
to Dilip for noticing it). There is no change in
parallel_index_opt_exec_support patch, but just attaching here for
easier reference.

Moved to next CF with "needs review" status.

Regards,
Hari Babu
Fujitsu Australia

#14Rafia Sabih
rafia.sabih@enterprisedb.com
In reply to: Haribabu Kommi (#13)
Re: Parallel Index Scans

Hello,
On evaluating parallel index scans on TPC-H benchmark queries, I came
across some interesting results.

For scale factor 20, queries 4, 6 and 14 are giving significant performance
improvements with parallel index:
Q | Head | PI
4 | 14 | 11
6 | 27 | 9
14 | 20 | 12

To confirm that the proposed patch is scalable, I tested it at scale
factor 300. There, some queries switched to bitmap index scan instead
of parallel index scan, but other queries still gave significant
improvements in performance:
Q | Head | PI
4 | 207 | 168
14 | 2662 | 1576
15 | 847 | 190

All the performance numbers given above are in seconds. The experimental
setup used in this exercise is as follows:
Server parameter settings:
work_mem = 64 MB,
max_parallel_workers_per_gather = 4,
random_page_cost = seq_page_cost = 0.1 = parallel_tuple_cost,
shared_buffers = 1 GB

Logical schema: Some additional indexes were created to ensure the use of
indexes,
on lineitem table -- l_shipdate, l_returnflag, l_shipmode,
on orders table -- o_comment, o_orderdate, and
on customer table -- c_mktsegment.

Machine used: IBM Power, 4 socket machine, 512 GB RAM

The main observations about the utility of this patch include the
availability of appropriate indexes and setting a suitable value of
random_page_cost based on the RAM and DB sizes. E.g., in this
experimentation I ensured a warm-cache environment; hence, giving a
higher value to random_page_cost than seq_page_cost does not make much
sense and would inhibit the use of indexes. Also, the value of this
parameter needs to be calibrated based on the underlying hardware;
there is recent work in this direction that gives a mechanism to do
this calibration offline, and they also experimented with PostgreSQL
parameters [1].

Please find the attached files to have a look at these results in
detail. The file PI_perf_tpch.ods gives the performance numbers and
the graphs for both scale factors. The attached zip folder gives the
explain analyze output for these queries, both on head and with the
parallel index patch.

[1]: http://pages.cs.wisc.edu/~wentaowu/papers/prediction-full.pdf

On Mon, Dec 5, 2016 at 10:36 AM, Haribabu Kommi <kommi.haribabu@gmail.com>
wrote:

On Sat, Nov 26, 2016 at 10:35 PM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

On Sat, Oct 22, 2016 at 9:07 AM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

On Fri, Oct 21, 2016 at 10:55 PM, Robert Haas <robertmhaas@gmail.com>

wrote:

I have rebased the patch (parallel_index_scan_v2) based on latest
commit e8ac886c (condition variables). I have removed the usage of
ConditionVariablePrepareToSleep as that is is no longer mandatory. I
have also updated docs for wait event introduced by this patch (thanks
to Dilip for noticing it). There is no change in
parallel_index_opt_exec_support patch, but just attaching here for
easier reference.

Moved to next CF with "needs review" status.

Regards,
Hari Babu
Fujitsu Australia

--
Regards,
Rafia Sabih
EnterpriseDB: http://www.enterprisedb.com/

Attachments:

PI_perf_tpch.ods
PI_plans.zip
#15Anastasia Lubennikova
a.lubennikova@postgrespro.ru
In reply to: Amit Kapila (#12)
Re: Parallel Index Scans

The following review has been posted through the commitfest application:
make installcheck-world: tested, passed
Implements feature: tested, passed
Spec compliant: tested, passed
Documentation: tested, passed

Hi, thank you for the patch.
Results are very promising. Do you see any drawbacks of this feature
or something that requires more testing? I'm willing to do a review. I
haven't done benchmarks yet, but I've read the patch, and here are
some notes and questions about it.

I saw the discussion about parameters in the thread above. And I agree that we'd better concentrate
on the patch itself and add them later if necessary.

1. Can't we simply use "if (scan->parallel_scan != NULL)" instead of xs_temp_snap flag?

+	if (scan->xs_temp_snap)
+		UnregisterSnapshot(scan->xs_snapshot);

I must say that I'm quite new with all this parallel stuff. If you give me a link,
where to read about snapshots for parallel workers, my review will be more helpful.
Anyway, it would be great to have more comments about it in the code.

2. Would you mind renaming 'amestimateparallelscan' to, let's say,
'amparallelscan_spacerequired' or something like this? As far as I
understand, there is nothing to estimate; we know this size for sure.
I guess that you've chosen this name because of
'heap_parallelscan_estimate', but now it looks similar to
'amestimate', which refers to index scan cost for the optimizer.
That leads to the next question.

3. Are there any changes in cost estimation? I didn't find related changes in the patch.
Parallel scan is expected to be faster and optimizer definitely should know that.

4. + uint8 ps_pageStatus; /* state of scan, see below */
There is no description below. I'd make the comment more helpful:
/* state of scan. See possible flags values in nbtree.h */
And why do you call it pageStatus? What does it have to do with page?

5. Comment for _bt_parallel_seize() says:
"False indicates that we have reached the end of scan for
current scankeys and for that we return block number as P_NONE."

What is the reason to check (blkno == P_NONE) after checking
(status == false) in _bt_first() (see code below)? If the comment is
correct, we'll never reach _bt_parallel_done().

+		blkno = _bt_parallel_seize(scan, &status);
+		if (status == false)
+		{
+			BTScanPosInvalidate(so->currPos);
+			return false;
+		}
+		else if (blkno == P_NONE)
+		{
+			_bt_parallel_done(scan);
+			BTScanPosInvalidate(so->currPos);
+			return false;
+		}

6. To avoid code duplication, I would wrap this into a function:

+	/* initialize moreLeft/moreRight appropriately for scan direction */
+	if (ScanDirectionIsForward(dir))
+	{
+		so->currPos.moreLeft = false;
+		so->currPos.moreRight = true;
+	}
+	else
+	{
+		so->currPos.moreLeft = true;
+		so->currPos.moreRight = false;
+	}
+	so->numKilled = 0;			/* just paranoia */
+	so->markItemIndex = -1;		/* ditto */

And after that we can also get rid of _bt_parallel_readpage(), which only
brings another level of indirection to the code.

7. Just a couple of typos I've noticed:

* Below flags are used indicate the state of parallel scan.
* Below flags are used TO indicate the state of parallel scan.

* On success, release lock and pin on buffer on success.
* On success release lock and pin on buffer.

8. I didn't find a description of the feature in documentation.
Probably we need to add a paragraph to the "Parallel Query" chapter.

I will send another performance review by the end of the week.

The new status of this patch is: Waiting on Author

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#16Robert Haas
robertmhaas@gmail.com
In reply to: Anastasia Lubennikova (#15)
Re: Parallel Index Scans

Thanks for reviewing! A few quick thoughts from me, since I wrote a
bunch of the design for this patch.

On Wed, Dec 21, 2016 at 10:16 AM, Anastasia Lubennikova
<lubennikovaav@gmail.com> wrote:

1. Can't we simply use "if (scan->parallel_scan != NULL)" instead of xs_temp_snap flag?

+       if (scan->xs_temp_snap)
+               UnregisterSnapshot(scan->xs_snapshot);

I must say that I'm quite new to all this parallel stuff. If you give me a link
where I can read about snapshots for parallel workers, my review will be more helpful.
Anyway, it would be great to have more comments about it in the code.

I suspect it would be better to keep those two things formally
separate, even though they may always be the same right now.

2. Would you mind renaming 'amestimateparallelscan' to, let's say, 'amparallelscan_spacerequired'
or something like this? As far as I understand, there is nothing to estimate; we know this size
for sure. I guess that you've chosen this name because of 'heap_parallelscan_estimate'.
But now it looks similar to 'amestimate', which refers to index scan cost for the optimizer.
That leads to the next question.

"estimate" is being used this way quite widely now, in places like
ExecParallelEstimate. So if we're going to change the terminology we
should do it broadly.

3. Are there any changes in cost estimation? I didn't find related changes in the patch.
Parallel scan is expected to be faster and optimizer definitely should know that.

Generally, the way that's reflected in the optimizer is by having the
parallel scan have a lower row count. See cost_seqscan() for an
example.

In general, you'll probably find a lot of parallels between this patch
and ee7ca559fcf404f9a3bd99da85c8f4ea9fbc2e92, which is probably a good
thing.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#17Amit Kapila
amit.kapila16@gmail.com
In reply to: Anastasia Lubennikova (#15)
Re: Parallel Index Scans

On Wed, Dec 21, 2016 at 8:46 PM, Anastasia Lubennikova
<lubennikovaav@gmail.com> wrote:

The following review has been posted through the commitfest application:
make installcheck-world: tested, passed
Implements feature: tested, passed
Spec compliant: tested, passed
Documentation: tested, passed

Hi, thank you for the patch.
Results are very promising. Do you see any drawbacks of this feature or something that requires more testing?

I think you can focus on the handling of array scan keys for testing.
In general, one of my colleagues has shown interest in testing this
patch, and I think he has tested it as well but never posted his findings.
I will request him to share his findings and what kind of tests he has
done, if any.

I'm willing to do a review.

Thanks, that will be helpful.

I saw the discussion about parameters in the thread above. And I agree that we'd better concentrate
on the patch itself and add them later if necessary.

1. Can't we simply use "if (scan->parallel_scan != NULL)" instead of xs_temp_snap flag?

+       if (scan->xs_temp_snap)
+               UnregisterSnapshot(scan->xs_snapshot);

I agree with what Rober has told in his reply. We do it the same way for
heap; refer to heap_endscan().

I must say that I'm quite new to all this parallel stuff. If you give me a link
where I can read about snapshots for parallel workers, my review will be more helpful.

You can read transam/README.parallel. Refer to the "State Sharing" portion
of that README to learn more about it.

Anyway, it would be great to have more comments about it in the code.

We share the snapshot to ensure that reads in both the master backend
and the worker backends use the same snapshot. There is no harm in
adding comments, but I think it is better to be consistent with
similar heapam code. After reading README.parallel, if you still feel
that we should add more comments in the code, then we can definitely
do that.

2. Would you mind renaming 'amestimateparallelscan' to, let's say, 'amparallelscan_spacerequired'
or something like this?

Sure, I am open to other names, but IMHO, let's keep "estimate" in the
name to keep it consistent with the other parallel stuff. Refer to
execParallel.c to see how widely this word is used.

As far as I understand, there is nothing to estimate; we know this size
for sure. I guess that you've chosen this name because of 'heap_parallelscan_estimate'.
But now it looks similar to 'amestimate', which refers to index scan cost for the optimizer.
That leads to the next question.

Do you mean 'amcostestimate'? If you want, we can rename it
amparallelscanestimate to be consistent with amcostestimate.

3. Are there any changes in cost estimation?

Yes.

I didn't find related changes in the patch.
Parallel scan is expected to be faster and optimizer definitely should know that.

You can find the relevant changes in
parallel_index_opt_exec_support_v2.patch; refer to cost_index().

4. + uint8 ps_pageStatus; /* state of scan, see below */
There is no description below. I'd make the comment more helpful:
/* state of scan. See possible flags values in nbtree.h */

Makes sense. Will change.

And why do you call it pageStatus? What does it have to do with page?

During the scan, this tells us whether the next page is available to scan.
Another option could be to name it scanStatus, but I am not sure that
is better. Do you think that if we add a comment like "indicates whether
the next page is available for scan" for this variable, then it will be
clear?

5. Comment for _bt_parallel_seize() says:
"False indicates that we have reached the end of scan for
current scankeys and for that we return block number as P_NONE."

What is the reason to check (blkno == P_NONE) after checking (status == false)
in _bt_first() (see code below)? If comment is correct
we'll never reach _bt_parallel_done()

+               blkno = _bt_parallel_seize(scan, &status);
+               if (status == false)
+               {
+                       BTScanPosInvalidate(so->currPos);
+                       return false;
+               }
+               else if (blkno == P_NONE)
+               {
+                       _bt_parallel_done(scan);
+                       BTScanPosInvalidate(so->currPos);
+                       return false;
+               }

The first time the master backend or a worker hits the last page (calls
this API), it will return P_NONE; after that, when any worker tries to
fetch the next page, it will return status as false. I think we can
expand the comment to explain it clearly. Let me know if you need more
clarification, and I can explain it in detail.

6. To avoid code duplication, I would wrap this into a function

+       /* initialize moreLeft/moreRight appropriately for scan direction */
+       if (ScanDirectionIsForward(dir))
+       {
+               so->currPos.moreLeft = false;
+               so->currPos.moreRight = true;
+       }
+       else
+       {
+               so->currPos.moreLeft = true;
+               so->currPos.moreRight = false;
+       }
+       so->numKilled = 0;                      /* just paranoia */
+       so->markItemIndex = -1;         /* ditto */

Okay, I think we can write a separate function (probably an inline
function) for the above.

And after that we can also get rid of _bt_parallel_readpage(), which only
brings another level of indirection to the code.

See, this function is responsible for multiple actions, like
initializing the moreLeft/moreRight positions, reading the next page,
and dropping the lock/pin. Replicating all these actions in the caller
would make the caller's code less readable than it is now. Consider
this point and let me know your view on it.

7. Just a couple of typos I've noticed:

* Below flags are used indicate the state of parallel scan.
* Below flags are used TO indicate the state of parallel scan.

* On success, release lock and pin on buffer on success.
* On success release lock and pin on buffer.

Will fix.

8. I didn't find a description of the feature in documentation.
Probably we need to add a paragraph to the "Parallel Query" chapter.

Yes, I am aware of that, and I think it makes sense to add it now
rather than waiting until the end.

I will send another performance review by the end of the week.

Okay, you can refer to Rafia's mail above for the non-default settings
she used in her performance tests with TPC-H.

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


#18Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#17)
Re: Parallel Index Scans

On Thu, Dec 22, 2016 at 9:49 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:

On Wed, Dec 21, 2016 at 8:46 PM, Anastasia Lubennikova
<lubennikovaav@gmail.com> wrote:

The following review has been posted through the commitfest application:
make installcheck-world: tested, passed
Implements feature: tested, passed
Spec compliant: tested, passed
Documentation: tested, passed

Hi, thank you for the patch.
Results are very promising. Do you see any drawbacks of this feature or something that requires more testing?

I think you can focus on the handling of array scan keys for testing.
In general, one of my colleagues has shown interest in testing this
patch and I think he has tested as well but never posted his findings.
I will request him to share his findings and what kind of tests he has
done, if any.

I'm willing to do a review.

Thanks, that will be helpful.

I saw the discussion about parameters in the thread above. And I agree that we'd better concentrate
on the patch itself and add them later if necessary.

1. Can't we simply use "if (scan->parallel_scan != NULL)" instead of xs_temp_snap flag?

+       if (scan->xs_temp_snap)
+               UnregisterSnapshot(scan->xs_snapshot);

I agree with what Rober has told in his reply.

Typo.
/Rober/Robert Haas

Thanks to Michael Paquier for noticing it and informing me offline.

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


#19tushar
tushar.ahuja@enterprisedb.com
In reply to: Amit Kapila (#17)
Re: Parallel Index Scans

On 12/22/2016 09:49 AM, Amit Kapila wrote:

I think you can focus on the handling of array scan keys for testing.
In general, one of my colleagues has shown interest in testing this
patch and I think he has tested as well but never posted his findings.
I will request him to share his findings and what kind of tests he has
done, if any.

Sure, we (Prabhat and I) have done some testing of this feature
internally but never published the test scripts on this forum. PFA the
SQL scripts (along with the expected .out files) we have used for
testing, for your ready reference.

In addition, we generated an LCOV (code coverage) report and
compared the files that are changed by the "Parallel index scan" patch.
You can see the numbers for "with patch" vs. "without patch" (.pdf
file is attached).

--
regards,tushar

Attachments:

lcov_report_compare.pdfapplication/pdf; name=lcov_report_compare.pdfDownload
pis_testcases.sqltext/x-sql; name=pis_testcases.sqlDownload
pis_testcases.outtext/plain; charset=UTF-8; name=pis_testcases.outDownload
#20tushar
tushar.ahuja@enterprisedb.com
In reply to: tushar (#19)
Re: Parallel Index Scans

On 12/22/2016 01:35 PM, tushar wrote:

On 12/22/2016 09:49 AM, Amit Kapila wrote:

I think you can focus on the handling of array scan keys for testing.
In general, one of my colleagues has shown interest in testing this
patch and I think he has tested as well but never posted his findings.
I will request him to share his findings and what kind of tests he has
done, if any.

Sure, we (Prabhat and I) have done some testing of this feature
internally but never published the test scripts on this forum. PFA the
SQL scripts (along with the expected .out files) we have used for
testing, for your ready reference.

In addition, we generated an LCOV (code coverage) report and
compared the files that are changed by the "Parallel index scan" patch.
You can see the numbers for "with patch" vs. "without patch" (.pdf
file is attached).

In addition to that, we ran sqlsmith against the PG v10 + PIS (parallel
index scan) patches and found a crash, but it occurs on plain PG
v10 (without applying any patches) as well:

postgres=# select
70 as c0,
pg_catalog.has_server_privilege(
cast(ref_0.indexdef as text),
cast(cast(coalesce((select name from pg_catalog.pg_settings
limit 1 offset 16)
,
null) as text) as text)) as c1,
pg_catalog.pg_export_snapshot() as c2,
ref_0.indexdef as c3,
ref_0.indexname as c4
from
pg_catalog.pg_indexes as ref_0
where (ref_0.tablespace = ref_0.tablespace)
or (46 = 22)
limit 103;
TRAP: FailedAssertion("!(keylen < 64)", File: "hashfunc.c", Line: 139)
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: 2016-12-23
11:19:50.627 IST [2314] LOG: server process (PID 2322) was terminated
by signal 6: Aborted
2016-12-23 11:19:50.627 IST [2314] DETAIL: Failed process was running:
select
70 as c0,
pg_catalog.has_server_privilege(
cast(ref_0.indexdef as text),
cast(cast(coalesce((select name from
pg_catalog.pg_settings limit 1 offset 16)
,
null) as text) as text)) as c1,
pg_catalog.pg_export_snapshot() as c2,
ref_0.indexdef as c3,
ref_0.indexname as c4
from
pg_catalog.pg_indexes as ref_0
where (ref_0.tablespace = ref_0.tablespace)
or (46 = 22)
limit 103;
2016-12-23 11:19:50.627 IST [2314] LOG: terminating any other active
server processes
2016-12-23 11:19:50.627 IST [2319] WARNING: terminating connection
because of crash of another server process
2016-12-23 11:19:50.627 IST [2319] DETAIL: The postmaster has commanded
this server process to roll back the current transaction and exit,
because another server process exited abnormally and possibly corrupted
shared memory.
2016-12-23 11:19:50.627 IST [2319] HINT: In a moment you should be able
to reconnect to the database and repeat your command.
2016-12-23 11:19:50.629 IST [2323] FATAL: the database system is in
recovery mode
Failed.
!> 2016-12-23 11:19:50.629 IST [2314] LOG: all server processes
terminated; reinitializing
2016-12-23 11:19:50.658 IST [2324] LOG: database system was
interrupted; last known up at 2016-12-23 11:19:47 IST
2016-12-23 11:19:50.810 IST [2324] LOG: database system was not
properly shut down; automatic recovery in progress
2016-12-23 11:19:50.812 IST [2324] LOG: invalid record length at
0/155E408: wanted 24, got 0
2016-12-23 11:19:50.812 IST [2324] LOG: redo is not required
2016-12-23 11:19:50.819 IST [2324] LOG: MultiXact member wraparound
protections are now enabled
2016-12-23 11:19:50.822 IST [2314] LOG: database system is ready to
accept connections
2016-12-23 11:19:50.822 IST [2328] LOG: autovacuum launcher started

--
regards,tushar


#21Robert Haas
robertmhaas@gmail.com
In reply to: tushar (#20)
#22tushar
tushar.ahuja@enterprisedb.com
In reply to: Robert Haas (#21)
#23Rahila Syed
rahilasyed90@gmail.com
In reply to: Amit Kapila (#17)
#24Anastasia Lubennikova
a.lubennikova@postgrespro.ru
In reply to: Amit Kapila (#17)
#25Amit Kapila
amit.kapila16@gmail.com
In reply to: Anastasia Lubennikova (#24)
#26Amit Kapila
amit.kapila16@gmail.com
In reply to: Rahila Syed (#23)
#27Anastasia Lubennikova
a.lubennikova@postgrespro.ru
In reply to: Amit Kapila (#25)
#28Robert Haas
robertmhaas@gmail.com
In reply to: Anastasia Lubennikova (#27)
#29Amit Kapila
amit.kapila16@gmail.com
In reply to: Anastasia Lubennikova (#27)
#30Amit Kapila
amit.kapila16@gmail.com
In reply to: Robert Haas (#28)
#31Robert Haas
robertmhaas@gmail.com
In reply to: Amit Kapila (#30)
#32Haribabu Kommi
kommi.haribabu@gmail.com
In reply to: Amit Kapila (#30)
#33Rahila Syed
rahilasyed90@gmail.com
In reply to: Haribabu Kommi (#32)
#34Amit Kapila
amit.kapila16@gmail.com
In reply to: Robert Haas (#31)
#35Amit Kapila
amit.kapila16@gmail.com
In reply to: Haribabu Kommi (#32)
#36Robert Haas
robertmhaas@gmail.com
In reply to: Amit Kapila (#34)
#37Robert Haas
robertmhaas@gmail.com
In reply to: Amit Kapila (#30)
#38Haribabu Kommi
kommi.haribabu@gmail.com
In reply to: Rahila Syed (#33)
#39Amit Kapila
amit.kapila16@gmail.com
In reply to: Robert Haas (#31)
#40Amit Kapila
amit.kapila16@gmail.com
In reply to: Haribabu Kommi (#32)
#41Amit Kapila
amit.kapila16@gmail.com
In reply to: Robert Haas (#37)
#42Robert Haas
robertmhaas@gmail.com
In reply to: Amit Kapila (#39)
#43Robert Haas
robertmhaas@gmail.com
In reply to: Amit Kapila (#34)
#44Haribabu Kommi
kommi.haribabu@gmail.com
In reply to: Amit Kapila (#35)
#45Amit Kapila
amit.kapila16@gmail.com
In reply to: Robert Haas (#42)
#46Amit Kapila
amit.kapila16@gmail.com
In reply to: Robert Haas (#43)
#47Robert Haas
robertmhaas@gmail.com
In reply to: Amit Kapila (#46)
#48Amit Kapila
amit.kapila16@gmail.com
In reply to: Robert Haas (#47)
#49Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#48)
#50Amit Kapila
amit.kapila16@gmail.com
In reply to: Haribabu Kommi (#44)
#51Robert Haas
robertmhaas@gmail.com
In reply to: Amit Kapila (#49)
#52Haribabu Kommi
kommi.haribabu@gmail.com
In reply to: Amit Kapila (#50)
#53Amit Kapila
amit.kapila16@gmail.com
In reply to: Haribabu Kommi (#52)
#54Robert Haas
robertmhaas@gmail.com
In reply to: Amit Kapila (#49)
#55Amit Kapila
amit.kapila16@gmail.com
In reply to: Robert Haas (#54)
#56Rahila Syed
rahilasyed90@gmail.com
In reply to: Amit Kapila (#55)
#57Robert Haas
robertmhaas@gmail.com
In reply to: Robert Haas (#51)
#58Amit Kapila
amit.kapila16@gmail.com
In reply to: Rahila Syed (#56)
#59Rahila Syed
rahilasyed90@gmail.com
In reply to: Robert Haas (#57)
#60Amit Kapila
amit.kapila16@gmail.com
In reply to: Robert Haas (#57)
#61tushar
tushar.ahuja@enterprisedb.com
In reply to: Amit Kapila (#60)
#62tushar
tushar.ahuja@enterprisedb.com
In reply to: Amit Kapila (#60)
#63Michael Paquier
michael@paquier.xyz
In reply to: Amit Kapila (#60)
#64Robert Haas
robertmhaas@gmail.com
In reply to: Amit Kapila (#58)
#65Amit Kapila
amit.kapila16@gmail.com
In reply to: Robert Haas (#64)
#66Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#65)
In reply to: Amit Kapila (#66)
#68Amit Kapila
amit.kapila16@gmail.com
In reply to: Peter Geoghegan (#67)
#69Robert Haas
robertmhaas@gmail.com
In reply to: Amit Kapila (#68)
#70Robert Haas
robertmhaas@gmail.com
In reply to: Amit Kapila (#60)
#71Robert Haas
robertmhaas@gmail.com
In reply to: Amit Kapila (#66)
#72Amit Kapila
amit.kapila16@gmail.com
In reply to: Robert Haas (#70)
#73Robert Haas
robertmhaas@gmail.com
In reply to: Amit Kapila (#72)
#74Amit Kapila
amit.kapila16@gmail.com
In reply to: Robert Haas (#73)
#75Amit Kapila
amit.kapila16@gmail.com
In reply to: Robert Haas (#70)
#76Robert Haas
robertmhaas@gmail.com
In reply to: Amit Kapila (#74)
#77Amit Kapila
amit.kapila16@gmail.com
In reply to: Robert Haas (#76)
#78Robert Haas
robertmhaas@gmail.com
In reply to: Amit Kapila (#77)
#79Robert Haas
robertmhaas@gmail.com
In reply to: Robert Haas (#78)
#80Amit Kapila
amit.kapila16@gmail.com
In reply to: Robert Haas (#79)
#81Robert Haas
robertmhaas@gmail.com
In reply to: Amit Kapila (#80)
#82Robert Haas
robertmhaas@gmail.com
In reply to: Amit Kapila (#80)
#83Robert Haas
robertmhaas@gmail.com
In reply to: Robert Haas (#82)
#84Amit Kapila
amit.kapila16@gmail.com
In reply to: Robert Haas (#83)
#85Michael Banck
michael.banck@credativ.de
In reply to: Amit Kapila (#84)
#86Amit Kapila
amit.kapila16@gmail.com
In reply to: Michael Banck (#85)
#87Robert Haas
robertmhaas@gmail.com
In reply to: Amit Kapila (#86)
#88Amit Kapila
amit.kapila16@gmail.com
In reply to: Robert Haas (#87)
#89Gavin Flower
GavinFlower@archidevsys.co.nz
In reply to: Amit Kapila (#88)