Speed up Clog Access by increasing CLOG buffers

Started by Amit Kapila · over 10 years ago · 248 messages · pgsql-hackers
#1 Amit Kapila
amit.kapila16@gmail.com

After the reduction in ProcArrayLock contention in commit
0e141c0fbb211bdd23783afa731e3eef95c9ad7a, the next lock that appears
contentious in read-write transactions is CLogControlLock. In my
investigation, I found that the contention is mainly due to two reasons.
First, while writing the transaction status in CLOG
(TransactionIdSetPageStatus()), we acquire CLogControlLock in Exclusive
mode, which contends with every other transaction that accesses the CLOG
to check transaction status; a patch [1] to reduce this has already been
proposed by Simon. Second, when a CLOG page is not found in the CLOG
buffers, we again need to acquire CLogControlLock in Exclusive mode,
which contends with shared lockers that are trying to read transaction
status.

Increasing the CLOG buffers to 64 helps in reducing the contention due to
the second reason. Experiments revealed that increasing CLOG buffers only
helps once the contention around ProcArrayLock is reduced.

Performance Data
-----------------------------
RAM - 500GB
8 sockets, 64 cores (hyperthreaded, 128 threads total)

Non-default parameters
------------------------------------
max_connections = 300
shared_buffers=8GB
min_wal_size=10GB
max_wal_size=15GB
checkpoint_timeout = 35min
maintenance_work_mem = 1GB
checkpoint_completion_target = 0.9
wal_buffers = 256MB

pgbench setup
------------------------
scale factor - 300
Data is on magnetic disk and WAL on ssd.
pgbench -M prepared tpc-b

HEAD - commit 0e141c0f
Patch-1 - increase_clog_bufs_v1

Client Count/Patch_ver    1     8     16    32     64     128    256
HEAD                      911   5695  9886  18028  27851  28654  25714
Patch-1                   954   5568  9898  18450  29313  31108  28213

This data shows an increase of ~5% at 64 clients and 8-10% at higher
client counts, without degradation at lower client counts. In the above
data there is some fluctuation at 8 clients, but I attribute that to
run-to-run variation; if anybody has doubts, I can re-verify the data at
lower client counts.

Now if we try to further increase the number of CLOG buffers to 128,
no improvement is seen.

I have also verified that this improvement can be seen only after the
contention around ProcArrayLock is reduced. Below is the data with the
commit before the ProcArrayLock reduction patch. The setup and test are
the same as for the previous test.

HEAD - commit 253de7e1
Patch-1 - increase_clog_bufs_v1

Client Count/Patch_ver    128    256
HEAD                      16657  10512
Patch-1                   16694  10477

I think the benefit of this patch would be more significant along with
the other patch to reduce CLogControlLock contention [1] (I have not
tested both patches together, as there are still a few issues left in
the other patch); however, it has its own independent value, so it can
be considered separately.

Thoughts?

[1]: /messages/by-id/CANP8+j+imQfHxkChFyfnXDyi6k-arAzRV+ZG-V_OFxEtJjOL2Q@mail.gmail.com

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

Attachments:

increase_clog_bufs_v1.patch (application/octet-stream, +13 -9)
#2 Andres Freund
andres@anarazel.de
In reply to: Amit Kapila (#1)
Re: Speed up Clog Access by increasing CLOG buffers

On 2015-09-01 10:19:19 +0530, Amit Kapila wrote:

pgbench setup
------------------------
scale factor - 300
Data is on magnetic disk and WAL on ssd.
pgbench -M prepared tpc-b

HEAD - commit 0e141c0f
Patch-1 - increase_clog_bufs_v1

Client Count/Patch_ver    1     8     16    32     64     128    256
HEAD                      911   5695  9886  18028  27851  28654  25714
Patch-1                   954   5568  9898  18450  29313  31108  28213

This data shows that there is an increase of ~5% at 64 client-count
and 8~10% at more higher clients without degradation at lower client-
count. In above data, there is some fluctuation seen at 8-client-count,
but I attribute that to run-to-run variation, however if anybody has doubts
I can again re-verify the data at lower client counts.

Now if we try to further increase the number of CLOG buffers to 128,
no improvement is seen.

I have also verified that this improvement can be seen only after the
contention around ProcArrayLock is reduced. Below is the data with
Commit before the ProcArrayLock reduction patch. Setup and test
is same as mentioned for previous test.

The buffer replacement algorithm for clog is rather stupid - I do wonder
where the cutoff is that it hurts.

Could you perhaps try to create a testcase where xids are accessed that
are so far apart on average that they're unlikely to be in memory? And
then test that across a number of client counts?

There's two reasons that I'd like to see that: first, I'd like to avoid
a regression; second, I'd like to avoid having to bump the maximum
number of buffers by small increments after every hardware generation...

/*
* Number of shared CLOG buffers.
*
- * Testing during the PostgreSQL 9.2 development cycle revealed that on a
+ * Testing during the PostgreSQL 9.6 development cycle revealed that on a
* large multi-processor system, it was possible to have more CLOG page
- * requests in flight at one time than the number of CLOG buffers which existed
- * at that time, which was hardcoded to 8.  Further testing revealed that
- * performance dropped off with more than 32 CLOG buffers, possibly because
- * the linear buffer search algorithm doesn't scale well.
+ * requests in flight at one time than the number of CLOG buffers which
+ * existed at that time, which was 32 assuming there are enough shared_buffers.
+ * Further testing revealed that either performance stayed same or dropped off
+ * with more than 64 CLOG buffers, possibly because the linear buffer search
+ * algorithm doesn't scale well or some other locking bottlenecks in the
+ * system mask the improvement.
*
- * Unconditionally increasing the number of CLOG buffers to 32 did not seem
+ * Unconditionally increasing the number of CLOG buffers to 64 did not seem
* like a good idea, because it would increase the minimum amount of shared
* memory required to start, which could be a problem for people running very
* small configurations.  The following formula seems to represent a reasonable
* compromise: people with very low values for shared_buffers will get fewer
- * CLOG buffers as well, and everyone else will get 32.
+ * CLOG buffers as well, and everyone else will get 64.
*
* It is likely that some further work will be needed here in future releases;
* for example, on a 64-core server, the maximum number of CLOG requests that
* can be simultaneously in flight will be even larger.  But that will
* apparently require more than just changing the formula, so for now we take
- * the easy way out.
+ * the easy way out.  It could also happen that after removing other locking
+ * bottlenecks, further increase in CLOG buffers can help, but that's not the
+ * case now.
*/

I think the comment should be more drastically rephrased to not
reference individual versions and numbers.

Greetings,

Andres Freund

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#3 Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Andres Freund (#2)
Re: Speed up Clog Access by increasing CLOG buffers

Andres Freund wrote:

The buffer replacement algorithm for clog is rather stupid - I do wonder
where the cutoff is that it hurts.

Could you perhaps try to create a testcase where xids are accessed that
are so far apart on average that they're unlikely to be in memory? And
then test that across a number of client counts?

There's two reasons that I'd like to see that: First I'd like to avoid
regression, second I'd like to avoid having to bump the maximum number
of buffers by small buffers after every hardware generation...

I wonder if it would make sense to explore an idea that has been floated
for years now -- to have pg_clog pages be allocated as part of shared
buffers rather than have their own separate pool. That way, no separate
hardcoded allocation limit is needed. It's probably pretty tricky to
implement, though :-(

--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


#4 Andres Freund
andres@anarazel.de
In reply to: Alvaro Herrera (#3)
Re: Speed up Clog Access by increasing CLOG buffers

Hi,

On 2015-09-07 10:34:10 -0300, Alvaro Herrera wrote:

I wonder if it would make sense to explore an idea that has been floated
for years now -- to have pg_clog pages be allocated as part of shared
buffers rather than have their own separate pool. That way, no separate
hardcoded allocation limit is needed. It's probably pretty tricky to
implement, though :-(

I still think that'd be a good plan, especially as it'd also let us use
a lot of related infrastructure. I doubt we could just use the standard
cache replacement mechanism though - it's not particularly efficient
either... I also have my doubts that a hash table lookup at every clog
lookup is going to be ok performancewise.

The biggest problem will probably be that the buffer manager is pretty
directly tied to relations and breaking up that bond won't be all that
easy. My guess is that the best bet here is that the easiest way to at
least explore this is to define pg_clog/... as their own tablespaces
(akin to pg_global) and treat the files therein as plain relations.

Greetings,

Andres Freund


#5 Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Andres Freund (#4)
Re: Speed up Clog Access by increasing CLOG buffers

Andres Freund wrote:

On 2015-09-07 10:34:10 -0300, Alvaro Herrera wrote:

I wonder if it would make sense to explore an idea that has been floated
for years now -- to have pg_clog pages be allocated as part of shared
buffers rather than have their own separate pool. That way, no separate
hardcoded allocation limit is needed. It's probably pretty tricky to
implement, though :-(

I still think that'd be a good plan, especially as it'd also let us use
a lot of related infrastructure. I doubt we could just use the standard
cache replacement mechanism though - it's not particularly efficient
either... I also have my doubts that a hash table lookup at every clog
lookup is going to be ok performancewise.

Yeah. I guess we'd have to mark buffers as unusable for regular pages
("somehow"), and have a separate lookup mechanism. As I said, it is
likely to be tricky.

--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


#6 Amit Kapila
amit.kapila16@gmail.com
In reply to: Alvaro Herrera (#3)
Re: Speed up Clog Access by increasing CLOG buffers

On Mon, Sep 7, 2015 at 7:04 PM, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:

Andres Freund wrote:

The buffer replacement algorithm for clog is rather stupid - I do wonder
where the cutoff is that it hurts.

Could you perhaps try to create a testcase where xids are accessed that
are so far apart on average that they're unlikely to be in memory?

Yes, I am working on it. What I have in mind is to create a table with a
large number of rows (say 50000000) and have each row with a different
transaction id. Each transaction should then try to update rows that are
at least 1048576 (the number of transactions whose status can be held in
32 CLOG buffers) apart, so that each update tries to access a CLOG page
that is not in memory. Let me know if you can think of any better or
simpler way.

There's two reasons that I'd like to see that: First I'd like to avoid
regression, second I'd like to avoid having to bump the maximum number
of buffers by small buffers after every hardware generation...

I wonder if it would make sense to explore an idea that has been floated
for years now -- to have pg_clog pages be allocated as part of shared
buffers rather than have their own separate pool.

There could be some benefits of it, but I think we would still have to
acquire the Exclusive lock while committing a transaction or while
extending the CLOG, which are also major sources of contention in this
area. The benefits of moving it to shared_buffers could be that the
upper limit on the number of pages that can be retained in memory would
increase, and even when a page has to be replaced, the responsibility
for flushing it could be delegated to checkpoint. So yes, there could be
benefits with this idea, but I am not sure they are worth investigating.
One thing we could try, if you think it is beneficial, is to simply skip
the fsync during writes of CLOG pages; if that helps, we can think of
pushing the flush to checkpoint (something similar to what Andres has
mentioned on a nearby thread).

Yet another way could be to have a configuration variable for clog
buffers (Clog_Buffers).

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

#7 Robert Haas
robertmhaas@gmail.com
In reply to: Alvaro Herrera (#3)
Re: Speed up Clog Access by increasing CLOG buffers

On Mon, Sep 7, 2015 at 9:34 AM, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:

Andres Freund wrote:

The buffer replacement algorithm for clog is rather stupid - I do wonder
where the cutoff is that it hurts.

Could you perhaps try to create a testcase where xids are accessed that
are so far apart on average that they're unlikely to be in memory? And
then test that across a number of client counts?

There's two reasons that I'd like to see that: First I'd like to avoid
regression, second I'd like to avoid having to bump the maximum number
of buffers by small buffers after every hardware generation...

I wonder if it would make sense to explore an idea that has been floated
for years now -- to have pg_clog pages be allocated as part of shared
buffers rather than have their own separate pool. That way, no separate
hardcoded allocation limit is needed. It's probably pretty tricky to
implement, though :-(

Yeah, I looked at that once and threw my hands up in despair pretty
quickly. I also considered another idea that looked simpler: instead
of giving every SLRU its own pool of pages, have one pool of pages for
all of them, separate from shared buffers but common to all SLRUs.
That looked easier, but still not easy.

I've also considered trying to replace the entire SLRU system with new
code and throwing away what exists today. The locking mode is just
really strange compared to what we do elsewhere. That, too, does not
look all that easy. :-(

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#8 Amit Kapila
amit.kapila16@gmail.com
In reply to: Andres Freund (#2)
Re: Speed up Clog Access by increasing CLOG buffers

On Thu, Sep 3, 2015 at 5:11 PM, Andres Freund <andres@anarazel.de> wrote:

On 2015-09-01 10:19:19 +0530, Amit Kapila wrote:

pgbench setup
------------------------
scale factor - 300
Data is on magnetic disk and WAL on ssd.
pgbench -M prepared tpc-b

HEAD - commit 0e141c0f
Patch-1 - increase_clog_bufs_v1

The buffer replacement algorithm for clog is rather stupid - I do wonder
where the cutoff is that it hurts.

Could you perhaps try to create a testcase where xids are accessed that
are so far apart on average that they're unlikely to be in memory? And
then test that across a number of client counts?

Okay, I have tried one such test; the best I could come up with is a case
where, on average, every 100th access is a disk access. I then tested it
with different numbers of CLOG buffers and client counts. Below are the
results:

Non-default parameters
------------------------------------
max_connections = 300
shared_buffers=32GB
min_wal_size=10GB
max_wal_size=15GB
checkpoint_timeout = 35min
maintenance_work_mem = 1GB
checkpoint_completion_target = 0.9
wal_buffers = 256MB
autovacuum=off

HEAD - commit 49124613
Patch-1 - Clog Buffers - 64
Patch-2 - Clog Buffers - 128

Client Count/Patch_ver    1     8     64     128
HEAD                      1395  8336  37866  34463
Patch-1                   1615  8180  37799  35315
Patch-2                   1409  8219  37068  34729

So there is not much difference in the test results with different
values for CLOG buffers, probably because I/O dominates the test. It
shows that increasing the CLOG buffers won't regress the current
behaviour even when many more transaction-status accesses fall outside
the CLOG buffers.

Now about the test: create a table with a large number of rows (say
11617457; I tried to create a larger one, but it was taking too much
time, more than a day) and have each row with a different transaction
id. Each transaction then updates rows that are at least 1048576 (the
number of transactions whose status can be held in 32 CLOG buffers)
apart, so that ideally each update accesses a CLOG page that is not in
memory. However, since the value to update is selected randomly, this
leads to every 100th access being a disk access.

Test
-------
1. Attached file clog_prep.sh should create and populate the required
table and create the function used to access the CLOG pages. You
might want to update the no_of_rows based on the rows you want to
create in table
2. Attached file access_clog_disk.sql is used to execute the function
with random values. You might want to update nrows variable based
on the rows created in previous step.
3. Use pgbench as follows with different client count
./pgbench -c 4 -j 4 -n -M prepared -f "access_clog_disk.sql" -T 300 postgres
4. To ensure that the clog access function always accesses the same data
during each run, copy the data_directory created by step 1 before each run.

I have checked, by adding some instrumentation, that approximately
every 100th access is a disk access; the attached patch
clog_info-v1.patch adds the necessary instrumentation in the code.

As an example, pgbench test yields below results:
./pgbench -c 4 -j 4 -n -M prepared -f "access_clog_disk.sql" -T 180 postgres

LOG: trans_status(3169396)
LOG: trans_status_disk(29546)
LOG: trans_status(3054952)
LOG: trans_status_disk(28291)
LOG: trans_status(3131242)
LOG: trans_status_disk(28989)
LOG: trans_status(3155449)
LOG: trans_status_disk(29347)

Here 'trans_status' is the number of times the process went for accessing
the CLOG status and 'trans_status_disk' is the number of times it went
to disk for accessing CLOG page.

/*
* Number of shared CLOG buffers.
*

I think the comment should be more drastically rephrased to not
reference individual versions and numbers.

Updated comments and the patch (increase_clog_bufs_v2.patch)
containing the same are attached.

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

Attachments:

increase_clog_bufs_v2.patch (application/octet-stream, +8 -15)
clog_info-v1.patch (application/octet-stream, +10 -1)
clog_prep.sh (application/x-sh)
access_clog_disk.sql (application/octet-stream)
#9 Robert Haas
robertmhaas@gmail.com
In reply to: Amit Kapila (#8)
Re: Speed up Clog Access by increasing CLOG buffers

On Fri, Sep 11, 2015 at 10:31 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:

Could you perhaps try to create a testcase where xids are accessed that
are so far apart on average that they're unlikely to be in memory? And
then test that across a number of client counts?

Now about the test, create a table with large number of rows (say 11617457,
I have tried to create larger, but it was taking too much time (more than a day))
and have each row with different transaction id. Now each transaction should
update rows that are at least 1048576 (number of transactions whose status can
be held in 32 CLog buffers) distance apart, that way ideally for each update it will
try to access Clog page that is not in-memory, however as the value to update
is getting selected randomly and that leads to every 100th access as disk access.

What about just running a regular pgbench test, but hacking the
XID-assignment code so that we increment the XID counter by 100 each
time instead of 1?

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#10 Amit Kapila
amit.kapila16@gmail.com
In reply to: Robert Haas (#9)
Re: Speed up Clog Access by increasing CLOG buffers

On Fri, Sep 11, 2015 at 9:21 PM, Robert Haas <robertmhaas@gmail.com> wrote:

On Fri, Sep 11, 2015 at 10:31 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:

Could you perhaps try to create a testcase where xids are accessed that
are so far apart on average that they're unlikely to be in memory? And
then test that across a number of client counts?

Now about the test, create a table with large number of rows (say 11617457,
I have tried to create larger, but it was taking too much time (more than
a day)) and have each row with different transaction id. Now each
transaction should update rows that are at least 1048576 (number of
transactions whose status can be held in 32 CLog buffers) distance apart,
that way ideally for each update it will try to access Clog page that is
not in-memory, however as the value to update is getting selected randomly
and that leads to every 100th access as disk access.

What about just running a regular pgbench test, but hacking the
XID-assignment code so that we increment the XID counter by 100 each
time instead of 1?

If I am not wrong, we need a difference of 1048576 transactions between
each record to make each CLOG access a disk access; so if we increment
the XID counter by 100, then probably every 10000th transaction (or a
multiple of 10000) would go for disk access.

The number 1048576 is derived by the calculation below:
#define CLOG_XACTS_PER_BYTE 4
#define CLOG_XACTS_PER_PAGE (BLCKSZ * CLOG_XACTS_PER_BYTE)

Transaction difference required for each transaction to go for disk access:
CLOG_XACTS_PER_PAGE * num_clog_buffers.

I think reducing it to every 100th transaction-status access being a
disk access is sufficient to prove that there is no regression with the
patch for the scenario Andres asked about, or do you think it is not?

Now another possibility here could be to try commenting out the fsync in
the CLOG path to see how much it impacts the performance of this test,
and then of the pgbench test. I am not sure there will be any impact,
because even if every 100th transaction goes for disk access, that is
still less than the WAL fsync which we have to perform for each
transaction.

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

#11 Robert Haas
robertmhaas@gmail.com
In reply to: Amit Kapila (#10)
Re: Speed up Clog Access by increasing CLOG buffers

On Fri, Sep 11, 2015 at 11:01 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:

If I am not wrong we need 1048576 number of transactions difference
for each record to make each CLOG access a disk access, so if we
increment XID counter by 100, then probably every 10000th (or multiplier
of 10000) transaction would go for disk access.

The number 1048576 is derived by below calc:
#define CLOG_XACTS_PER_BYTE 4
#define CLOG_XACTS_PER_PAGE (BLCKSZ * CLOG_XACTS_PER_BYTE)

Transaction difference required for each transaction to go for disk access:
CLOG_XACTS_PER_PAGE * num_clog_buffers.

I think reducing to every 100th access for transaction status as disk access
is sufficient to prove that there is no regression with the patch for the
screnario
asked by Andres or do you think it is not?

I have no idea. I was just suggesting that hacking the server somehow
might be an easier way of creating the scenario Andres was interested
in than the process you described. But feel free to ignore me, I
haven't taken much time to think about this.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#12 Jesper Pedersen
jesper.pedersen@redhat.com
In reply to: Amit Kapila (#8)
Re: Speed up Clog Access by increasing CLOG buffers

On 09/11/2015 10:31 AM, Amit Kapila wrote:

Updated comments and the patch (increate_clog_bufs_v2.patch)
containing the same is attached.

I have done various runs on an Intel Xeon 28C/56T w/ 256Gb mem and 2 x
RAID10 SSD (data + xlog) with Min(64,).

Kept the shared_buffers=64GB and effective_cache_size=160GB settings
across all runs, but did runs with both synchronous_commit on and off
and different scale factors for pgbench.

The results are in flux for all client numbers within -2 to +2%
depending on the latency average.

So no real conclusion from here other than the patch doesn't help/hurt
performance on this setup, likely depends on further CLogControlLock
related changes to see real benefit.

Best regards,
Jesper


#13 Amit Kapila
amit.kapila16@gmail.com
In reply to: Jesper Pedersen (#12)
Re: Speed up Clog Access by increasing CLOG buffers

On Fri, Sep 18, 2015 at 11:08 PM, Jesper Pedersen <jesper.pedersen@redhat.com> wrote:

On 09/11/2015 10:31 AM, Amit Kapila wrote:

Updated comments and the patch (increate_clog_bufs_v2.patch)
containing the same is attached.

I have done various runs on an Intel Xeon 28C/56T w/ 256Gb mem and 2 x
RAID10 SSD (data + xlog) with Min(64,).

The benefit with this patch can be seen at somewhat higher client
counts, as you can see in my initial mail; can you please try once with
client counts > 64?

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

#14 Peter Geoghegan
In reply to: Amit Kapila (#1)
Re: Speed up Clog Access by increasing CLOG buffers

On Mon, Aug 31, 2015 at 9:49 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:

Increasing CLOG buffers to 64 helps in reducing the contention due to second
reason. Experiments revealed that increasing CLOG buffers only helps
once the contention around ProcArrayLock is reduced.

There has been a lot of research on bitmap compression, more or less
for the benefit of bitmap index access methods.

Simple techniques like run length encoding are effective for some
things. If the need to map the bitmap into memory to access the status
of transactions is a concern, there has been work done on that, too.
Byte-aligned bitmap compression is a technique that might offer a good
trade-off between compressing clog and decompression overhead -- I
think there is basically no decompression overhead, because set
operations can be performed on the "compressed" representation
directly. There are other techniques, too.

Something to consider. There could be multiple benefits to compressing
clog, even beyond simply avoiding managing clog buffers.

--
Peter Geoghegan


#15 Jesper Pedersen
jesper.pedersen@redhat.com
In reply to: Amit Kapila (#13)
Re: Speed up Clog Access by increasing CLOG buffers

On 09/18/2015 11:11 PM, Amit Kapila wrote:

I have done various runs on an Intel Xeon 28C/56T w/ 256Gb mem and 2 x
RAID10 SSD (data + xlog) with Min(64,).

The benefit with this patch could be seen at somewhat higher
client-count as you can see in my initial mail, can you please
once try with client count > 64?

Client counts were from 1 to 80.

I did do one run with Min(128,), like you, but didn't see any difference
in the result compared to Min(64,), so I focused instead on the
sync_commit on/off testing case.

Best regards,
Jesper


#16 Jeff Janes
jeff.janes@gmail.com
In reply to: Amit Kapila (#10)
Re: Speed up Clog Access by increasing CLOG buffers

On Fri, Sep 11, 2015 at 8:01 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:

On Fri, Sep 11, 2015 at 9:21 PM, Robert Haas <robertmhaas@gmail.com> wrote:

On Fri, Sep 11, 2015 at 10:31 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:

Could you perhaps try to create a testcase where xids are accessed that
are so far apart on average that they're unlikely to be in memory? And
then test that across a number of client counts?

Now about the test, create a table with large number of rows (say 11617457,
I have tried to create larger, but it was taking too much time (more than
a day)) and have each row with different transaction id. Now each
transaction should update rows that are at least 1048576 (number of
transactions whose status can be held in 32 CLog buffers) distance apart,
that way ideally for each update it will try to access Clog page that is
not in-memory, however as the value to update is getting selected randomly
and that leads to every 100th access as disk access.

What about just running a regular pgbench test, but hacking the
XID-assignment code so that we increment the XID counter by 100 each
time instead of 1?

If I am not wrong we need 1048576 number of transactions difference
for each record to make each CLOG access a disk access, so if we
increment XID counter by 100, then probably every 10000th (or multiplier
of 10000) transaction would go for disk access.

The number 1048576 is derived by below calc:
#define CLOG_XACTS_PER_BYTE 4
#define CLOG_XACTS_PER_PAGE (BLCKSZ * CLOG_XACTS_PER_BYTE)

Transaction difference required for each transaction to go for disk access:
CLOG_XACTS_PER_PAGE * num_clog_buffers.

That guarantees that every xid occupies its own 32-contiguous-pages chunk
of clog.

But clog pages are not pulled in and out in 32-page chunks, but in
one-page chunks. So you would only need a difference of 32,768 to get
every real transaction to live on its own clog page, which means every
lookup of a different real transaction would have to do a page
replacement. (I think your references to disk access here are
misleading. Isn't the issue here the contention on the lock that
controls page replacement, not the actual IO?)

I've attached a patch that allows you to set the guc "JJ_xid", which
makes it burn the given number of xids every time a new one is asked
for. (The patch introduces lots of other stuff as well, but I didn't
feel like ripping the irrelevant parts out -- if you don't set any of
the other gucs it introduces from their defaults, they shouldn't cause
you trouble.) I think there are other tools around that do the same
thing, but this is the one I know about. It is easy to drive the system
into wraparound shutdown with this, so lowering
autovacuum_vacuum_cost_delay is a good idea.

Actually, I haven't attached it, because then the commitfest app would
list it as the patch needing review; instead I've put it here:
https://drive.google.com/file/d/0Bzqrh1SO9FcERV9EUThtT3pacmM/view?usp=sharing

I think reducing to every 100th access for transaction status as disk access
is sufficient to prove that there is no regression with the patch for the
screnario asked by Andres or do you think it is not?

Now another possibility here could be to try commenting out the fsync in
the CLOG path to see how much it impacts the performance of this test, and
then of the pgbench test. I am not sure there will be any impact, because
even if every 100th transaction goes for disk access, that is still less
than the WAL fsync we have to perform for each transaction.

You mentioned that your clog is not on SSD, but surely at this scale of
hardware, the HDD the clog is on has a BBU in front of it, no?

But I thought Andres' concern was not about fsync, but about the fact that
the SLRU does linear scans (repeatedly) of the buffers while holding the
control lock? At some point, scanning more and more buffers under the lock
is going to cause more contention than scanning fewer buffers and just
evicting a page will.
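The pattern being described can be sketched like this (illustrative only, not the actual slru.c code; the names are invented): both the page lookup and the victim selection walk every buffer slot while the control lock is held, so each walk's cost grows linearly with the buffer count.

```python
# Illustrative sketch: an SLRU-style linear scan done under the control lock.
def find_or_choose_victim(buffer_slots, wanted_page):
    """Return (index, hit): the slot caching wanted_page, or, on a miss,
    the least-recently-used slot to evict.  O(n) in the slot count."""
    lru_idx = 0
    for idx, slot in enumerate(buffer_slots):
        if slot["page"] == wanted_page:
            return idx, True                 # hit: no replacement needed
        if slot["lru_count"] < buffer_slots[lru_idx]["lru_count"]:
            lru_idx = idx                    # remember the coldest slot
    return lru_idx, False                    # miss: caller evicts this slot

slots = [{"page": 10, "lru_count": 5},
         {"page": 11, "lru_count": 2},
         {"page": 12, "lru_count": 7}]
```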

Cheers,

Jeff

#17Amit Kapila
amit.kapila16@gmail.com
In reply to: Jeff Janes (#16)
Re: Speed up Clog Access by increasing CLOG buffers

On Mon, Oct 5, 2015 at 6:34 AM, Jeff Janes <jeff.janes@gmail.com> wrote:

On Fri, Sep 11, 2015 at 8:01 PM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

If I am not wrong, we need a difference of 1048576 transactions between
each record to make each CLOG access a disk access, so if we increment
the XID counter by 100, then probably every 10000th (or a multiple of
10000) transaction would go for disk access.

The number 1048576 is derived by the calculation below:
#define CLOG_XACTS_PER_BYTE 4
#define CLOG_XACTS_PER_PAGE (BLCKSZ * CLOG_XACTS_PER_BYTE)

Transaction difference required for each transaction to go for disk
access:
CLOG_XACTS_PER_PAGE * num_clog_buffers.

That guarantees that every xid occupies its own 32-contiguous-pages chunk
of clog.

But clog pages are not pulled in and out in 32-page chunks, but in
one-page chunks. So you would only need a difference of 32,768 to get
every real transaction to live on its own clog page, which means every
lookup of a different real transaction would have to do a page replacement.

Agreed, but that doesn't affect the test result with the test done above.

(I think your references to disk access here are misleading. Isn't the
issue here the contention on the lock that controls the page replacement,
not the actual I/O?)

The point is that if no I/O is needed, then all the read accesses for
transaction status will just use shared locks; however, if there is an
I/O, then it needs an exclusive lock.
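In outline, that locking behaviour looks roughly like this (an illustrative sketch, not the actual clog.c/slru.c code; the lock class and helper names are invented, and a real LWLock distinguishes shared from exclusive holders, which this stand-in does not):

```python
import threading

class ControlLockish:
    """Minimal stand-in for CLogControlLock (no real shared mode)."""
    def __init__(self):
        self._lock = threading.Lock()
    def acquire_shared(self):    self._lock.acquire()
    def acquire_exclusive(self): self._lock.acquire()
    def release(self):           self._lock.release()

def read_page_from_disk(page_no):
    return {0: "committed"}                  # pretend-I/O for the sketch

def read_xact_status(cache, lock, page_no, offset):
    lock.acquire_shared()
    if page_no in cache:                     # common case: page is buffered,
        status = cache[page_no][offset]      # so a shared lock is enough
        lock.release()
        return status
    lock.release()
    lock.acquire_exclusive()                 # miss: the page replacement and
    if page_no not in cache:                 # I/O need the exclusive lock,
        cache[page_no] = read_page_from_disk(page_no)
    status = cache[page_no][offset]          # blocking all shared readers
    lock.release()
    return status
```

With more clog buffers, the exclusive (miss) path is taken less often, which is where the contention reduction comes from.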

I've attached a patch that allows you to set the guc "JJ_xid", which makes
it burn the given number of xids every time a new one is asked for. (The
patch introduces lots of other stuff as well, but I didn't feel like
ripping the irrelevant parts out--if you don't set any of the other gucs it
introduces from their defaults, they shouldn't cause you trouble.) I think
there are other tools around that do the same thing, but this is the one I
know about. It is easy to drive the system into wrap-around shutdown with
this, so lowering autovacuum_vacuum_cost_delay is a good idea.

Actually I haven't attached it, because then the commitfest app will list
it as the patch needing review, instead I've put it here
https://drive.google.com/file/d/0Bzqrh1SO9FcERV9EUThtT3pacmM/view?usp=sharing

Thanks, I think probably this could also be used for testing.

I think reducing it to every 100th access for transaction status being a
disk access is sufficient to prove that there is no regression with the
patch for the scenario asked about by Andres, or do you think it is not?

Now another possibility here could be to try commenting out the fsync in
the CLOG path to see how much it impacts the performance of this test, and
then of the pgbench test. I am not sure there will be any impact, because
even if every 100th transaction goes for disk access, that is still less
than the WAL fsync we have to perform for each transaction.

You mentioned that your clog is not on SSD, but surely at this scale of
hardware, the HDD the clog is on has a BBU in front of it, no?

Yes.

But I thought Andres' concern was not about fsync, but about the fact that
the SLRU does linear scans (repeatedly) of the buffers while holding the
control lock? At some point, scanning more and more buffers under the lock
is going to cause more contention than scanning fewer buffers and just
evicting a page will.

Yes, at some point that could matter, but I could not see the impact
at 64 or 128 clog buffers.

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

#18Amit Kapila
amit.kapila16@gmail.com
In reply to: Jesper Pedersen (#15)
Re: Speed up Clog Access by increasing CLOG buffers

On Mon, Sep 21, 2015 at 6:25 PM, Jesper Pedersen
<jesper.pedersen@redhat.com> wrote:

On 09/18/2015 11:11 PM, Amit Kapila wrote:

I have done various runs on an Intel Xeon 28C/56T w/ 256GB mem and 2 x
RAID10 SSD (data + xlog) with Min(64,).

The benefit with this patch can be seen at somewhat higher client
counts, as you can see in my initial mail. Can you please try with
a client count > 64?

Client count were from 1 to 80.

I did do one run with Min(128,) like you, but didn't see any difference in
the result compared to Min(64,), so I focused instead on the sync_commit
on/off testing case.

I think the main focus for tests in this area should be at higher client
counts. At what scale factors have you taken the data, and what other
non-default settings have you used? By the way, have you tried dropping
and recreating the database and restarting the server after each run?
Can you share the exact steps you used to perform the tests? I am not
sure why it is not showing the benefit in your testing; maybe the benefit
only shows up on somewhat higher-end machines, or some of the settings
used for the test are not the same as mine, or the way the read-write
pgbench workload is tested is different.

In any case, I went ahead and tried to further reduce the CLogControlLock
contention by grouping the transaction status updates. The basic idea
is the same as that used to reduce ProcArrayLock contention [1], which is
to allow one of the procs to become the leader and update the transaction
status for the other active transactions in the system. This has helped to
reduce the contention around CLogControlLock. The attached patch
group_update_clog_v1.patch implements this idea.
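The grouping idea can be sketched roughly as follows. This is a simplified illustration with invented names; the actual patch links requests through an atomic list head and the leader wakes waiting followers, both of which this sketch omits.

```python
import threading

pending = []                  # queued (xid, status) requests
list_lock = threading.Lock()  # stands in for the CAS on the pending-list head
clog = {}                     # xid -> status, guarded by clog_control_lock
clog_control_lock = threading.Lock()

def group_update(xid, status):
    with list_lock:
        pending.append((xid, status))
        is_leader = len(pending) == 1        # first to queue leads the group
    if not is_leader:
        return                               # follower: the leader applies it
    with clog_control_lock:                  # one exclusive acquisition
        with list_lock:                      # covers the whole batch
            batch, pending[:] = pending[:], []
        for x, s in batch:
            clog[x] = s
```

The effect is that n backends committing concurrently take the exclusive lock roughly once per group rather than n times, which is what shows up as the reduced "exacq" and "blk" counts below.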

I have taken performance data with this patch to see the impact at
various scale factors. All the data is for cases where the data fits in
shared buffers, and is taken against commit 5c90a2ff on a server with the
below configuration and non-default postgresql.conf settings.

Performance Data
-----------------------------
RAM - 500GB
8 sockets, 64 cores (hyperthreaded, 128 threads total)

Non-default parameters
------------------------------------
max_connections = 300
shared_buffers=8GB
min_wal_size=10GB
max_wal_size=15GB
checkpoint_timeout = 35min
maintenance_work_mem = 1GB
checkpoint_completion_target = 0.9
wal_buffers = 256MB

Refer attached files for performance data.

sc_300_perf.png - This data indicates that at scale_factor 300, there is a
gain of ~15% at higher client counts, without degradation at lower client
count.
different_sc_perf.png - At various scale factors, there is a gain from
~15% to 41% at higher client counts and in some cases we see gain
of ~5% at somewhat moderate client count (64) as well.
perf_write_clogcontrollock_data_v1.ods - Detailed performance data at
various client counts and scale factors.

Feel free to ask for more details if the data in attached files is not
clear.

Below is the LWLock_Stats information with and without patch:

Stats Data
---------
A. scale_factor = 300; shared_buffers=32GB; client_connections - 128

HEAD - 5c90a2ff
----------------
CLogControlLock Data
------------------------
PID 94100 lwlock main 11: shacq 678672 exacq 326477 blk 204427 spindelay
8532 dequeue self 93192
PID 94129 lwlock main 11: shacq 757047 exacq 363176 blk 207840 spindelay
8866 dequeue self 96601
PID 94115 lwlock main 11: shacq 721632 exacq 345967 blk 207665 spindelay
8595 dequeue self 96185
PID 94011 lwlock main 11: shacq 501900 exacq 241346 blk 173295 spindelay
7882 dequeue self 78134
PID 94087 lwlock main 11: shacq 653701 exacq 314311 blk 201733 spindelay
8419 dequeue self 92190

After Patch group_update_clog_v1
----------------
CLogControlLock Data
------------------------
PID 100205 lwlock main 11: shacq 836897 exacq 176007 blk 116328 spindelay
1206 dequeue self 54485
PID 100034 lwlock main 11: shacq 437610 exacq 91419 blk 77523 spindelay 994
dequeue self 35419
PID 100175 lwlock main 11: shacq 748948 exacq 158970 blk 114027 spindelay
1277 dequeue self 53486
PID 100162 lwlock main 11: shacq 717262 exacq 152807 blk 115268 spindelay
1227 dequeue self 51643
PID 100214 lwlock main 11: shacq 856044 exacq 180422 blk 113695 spindelay
1202 dequeue self 54435

The above data indicates that contention due to CLogControlLock is
reduced by around 50% with this patch.

The reasons for remaining contention could be:

1. Readers of clog data (checking transaction status) can take the
CLogControlLock in Exclusive mode when reading a page from disk, which can
contend with other readers (shared lockers of CLogControlLock) and with
the exclusive locker that updates transaction status. One way to mitigate
this contention is to increase the number of CLOG buffers, for which a
patch has already been posted on this thread.

2. Readers of clog data (checking transaction status) take the
CLogControlLock in Shared mode, which can contend with the exclusive
locker (the group leader) that updates transaction status. I tried to
reduce the amount of work done by the group leader by allowing it to read
the clog page just once for all the transactions in the group that updated
the same CLOG page (an idea similar to what we currently use for updating
the status of transactions having a sub-transaction tree), but that hasn't
given any further performance boost, so I left it out.

I think we could reduce the contention around CLogControlLock in other
ways as well, by doing somewhat major surgery on SLRU, such as using
buffer pools similar to shared buffers, but this idea gives us a moderate
improvement without much impact on the existing mechanism.

Thoughts?

[1]: /messages/by-id/CAA4eK1JbX4FzPHigNt0JSaz30a85BPJV+ewhk+wg_o-T6xufEA@mail.gmail.com

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

Attachments:

group_update_clog_v1.patch (application/octet-stream) Download +242-25
sc_300_perf.png (image/png) Download
different_sc_perf.png (image/png) Download
perf_write_clogcontrollock_data_v1.ods (application/vnd.oasis.opendocument.spreadsheet) Download +1-0
#19Amit Kapila
amit.kapila16@gmail.com
In reply to: Peter Geoghegan (#14)
Re: Speed up Clog Access by increasing CLOG buffers

On Mon, Sep 21, 2015 at 6:34 AM, Peter Geoghegan <pg@heroku.com> wrote:

On Mon, Aug 31, 2015 at 9:49 PM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

Increasing CLOG buffers to 64 helps in reducing the contention due to the
second reason. Experiments revealed that increasing CLOG buffers only helps
once the contention around ProcArrayLock is reduced.

There has been a lot of research on bitmap compression, more or less
for the benefit of bitmap index access methods.

Simple techniques like run length encoding are effective for some
things. If the need to map the bitmap into memory to access the status
of transactions is a concern, there has been work done on that, too.
Byte-aligned bitmap compression is a technique that might offer a good
trade-off between compression clog, and decompression overhead -- I
think that there basically is no decompression overhead, because set
operations can be performed on the "compressed" representation
directly. There are other techniques, too.

I can see the benefits of doing compression for CLOG, but I think it won't
be straightforward. Apart from handling compression and decompression, the
code currently relies on the transaction id to find the clog page, which
will not work after compression, or we would need to change that mapping
to make it work. Also, I think it could avoid the increase in clog
buffers, which can help readers, but it won't help much with the
contention around clog updates for transaction status.
Overall this idea sounds promising, but I think the work involved is more
than the benefit I am expecting for the current optimization we are
discussing.

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

#20Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#19)
Re: Speed up Clog Access by increasing CLOG buffers

On Tue, Nov 17, 2015 at 1:32 PM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

On Mon, Sep 21, 2015 at 6:34 AM, Peter Geoghegan <pg@heroku.com> wrote:

On Mon, Aug 31, 2015 at 9:49 PM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

Increasing CLOG buffers to 64 helps in reducing the contention due to the
second reason. Experiments revealed that increasing CLOG buffers only helps
once the contention around ProcArrayLock is reduced.

Overall this idea sounds promising, but I think the work involved is more
than the benefit I am expecting for the current optimization we are
discussing.

Sorry, I think the last line is slightly confusing; let me try to write
it again:

Overall this idea sounds promising, but I think the work involved is more
than the benefit expected from the current optimization we are
discussing.

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

#21Simon Riggs
simon@2ndQuadrant.com
In reply to: Amit Kapila (#18)
#22Amit Kapila
amit.kapila16@gmail.com
In reply to: Simon Riggs (#21)
#23Simon Riggs
simon@2ndQuadrant.com
In reply to: Amit Kapila (#22)
#24Amit Kapila
amit.kapila16@gmail.com
In reply to: Simon Riggs (#23)
#25Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#24)
#26Simon Riggs
simon@2ndQuadrant.com
In reply to: Amit Kapila (#24)
#27Amit Kapila
amit.kapila16@gmail.com
In reply to: Simon Riggs (#26)
#28Amit Kapila
amit.kapila16@gmail.com
In reply to: Simon Riggs (#26)
#29Jeff Janes
jeff.janes@gmail.com
In reply to: Amit Kapila (#28)
#30Amit Kapila
amit.kapila16@gmail.com
In reply to: Jeff Janes (#29)
#31Robert Haas
robertmhaas@gmail.com
In reply to: Amit Kapila (#28)
#32Amit Kapila
amit.kapila16@gmail.com
In reply to: Robert Haas (#31)
#33Robert Haas
robertmhaas@gmail.com
In reply to: Amit Kapila (#32)
#34Amit Kapila
amit.kapila16@gmail.com
In reply to: Robert Haas (#33)
#35Robert Haas
robertmhaas@gmail.com
In reply to: Amit Kapila (#34)
#36Amit Kapila
amit.kapila16@gmail.com
In reply to: Robert Haas (#35)
#37Robert Haas
robertmhaas@gmail.com
In reply to: Amit Kapila (#36)
#38Amit Kapila
amit.kapila16@gmail.com
In reply to: Robert Haas (#37)
#39Robert Haas
robertmhaas@gmail.com
In reply to: Amit Kapila (#38)
#40Amit Kapila
amit.kapila16@gmail.com
In reply to: Robert Haas (#39)
#41Michael Paquier
michael@paquier.xyz
In reply to: Amit Kapila (#40)
#42Robert Haas
robertmhaas@gmail.com
In reply to: Amit Kapila (#40)
#43Amit Kapila
amit.kapila16@gmail.com
In reply to: Robert Haas (#42)
#44Thom Brown
thom@linux.com
In reply to: Amit Kapila (#43)
#45Amit Kapila
amit.kapila16@gmail.com
In reply to: Thom Brown (#44)
#46Robert Haas
robertmhaas@gmail.com
In reply to: Amit Kapila (#45)
#47Robert Haas
robertmhaas@gmail.com
In reply to: Robert Haas (#46)
#48Amit Kapila
amit.kapila16@gmail.com
In reply to: Robert Haas (#47)
#49Robert Haas
robertmhaas@gmail.com
In reply to: Amit Kapila (#48)
#50Amit Kapila
amit.kapila16@gmail.com
In reply to: Robert Haas (#49)
#51Robert Haas
robertmhaas@gmail.com
In reply to: Amit Kapila (#50)
#52Amit Kapila
amit.kapila16@gmail.com
In reply to: Robert Haas (#51)
#53Robert Haas
robertmhaas@gmail.com
In reply to: Amit Kapila (#52)
#54Amit Kapila
amit.kapila16@gmail.com
In reply to: Robert Haas (#53)
#55Robert Haas
robertmhaas@gmail.com
In reply to: Amit Kapila (#54)
#56Amit Kapila
amit.kapila16@gmail.com
In reply to: Robert Haas (#55)
#57Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#56)
#58Robert Haas
robertmhaas@gmail.com
In reply to: Amit Kapila (#57)
#59Amit Kapila
amit.kapila16@gmail.com
In reply to: Robert Haas (#58)
#60Amit Kapila
amit.kapila16@gmail.com
In reply to: Robert Haas (#58)
#61David Steele
david@pgmasters.net
In reply to: Amit Kapila (#57)
#62Amit Kapila
amit.kapila16@gmail.com
In reply to: David Steele (#61)
#63David Steele
david@pgmasters.net
In reply to: Amit Kapila (#62)
#64Amit Kapila
amit.kapila16@gmail.com
In reply to: David Steele (#63)
#65Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: David Steele (#61)
#66Jesper Pedersen
jesper.pedersen@redhat.com
In reply to: Amit Kapila (#62)
#67Amit Kapila
amit.kapila16@gmail.com
In reply to: Jesper Pedersen (#66)
#68Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#67)
#69Andres Freund
andres@anarazel.de
In reply to: Amit Kapila (#62)
#70Amit Kapila
amit.kapila16@gmail.com
In reply to: Andres Freund (#69)
#71Andres Freund
andres@anarazel.de
In reply to: Amit Kapila (#70)
#72Amit Kapila
amit.kapila16@gmail.com
In reply to: Andres Freund (#71)
#73Robert Haas
robertmhaas@gmail.com
In reply to: Andres Freund (#69)
#74Andres Freund
andres@anarazel.de
In reply to: Robert Haas (#73)
#75Jim Nasby
Jim.Nasby@BlueTreble.com
In reply to: Amit Kapila (#72)
#76Amit Kapila
amit.kapila16@gmail.com
In reply to: Andres Freund (#69)
#77Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#76)
#78Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#68)
#79Andres Freund
andres@anarazel.de
In reply to: Amit Kapila (#77)
#80Andres Freund
andres@anarazel.de
In reply to: Andres Freund (#79)
#81Andres Freund
andres@anarazel.de
In reply to: Andres Freund (#80)
#82Amit Kapila
amit.kapila16@gmail.com
In reply to: Andres Freund (#80)
#83Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#82)
#84Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#82)
#85Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#82)
#86Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#8)
#87Andres Freund
andres@anarazel.de
In reply to: Amit Kapila (#86)
#88Amit Kapila
amit.kapila16@gmail.com
In reply to: Andres Freund (#87)
#89Andres Freund
andres@anarazel.de
In reply to: Amit Kapila (#88)
#90Amit Kapila
amit.kapila16@gmail.com
In reply to: Andres Freund (#89)
#91Andres Freund
andres@anarazel.de
In reply to: Amit Kapila (#90)
#92Jesper Pedersen
jesper.pedersen@redhat.com
In reply to: Andres Freund (#87)
#93Andres Freund
andres@anarazel.de
In reply to: Jesper Pedersen (#92)
#94Jesper Pedersen
jesper.pedersen@redhat.com
In reply to: Andres Freund (#93)
#95Andres Freund
andres@anarazel.de
In reply to: Jesper Pedersen (#94)
#96Amit Kapila
amit.kapila16@gmail.com
In reply to: Andres Freund (#89)
#97Jesper Pedersen
jesper.pedersen@redhat.com
In reply to: Andres Freund (#95)
#98Amit Kapila
amit.kapila16@gmail.com
In reply to: Jesper Pedersen (#97)
#99Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#96)
#100Andres Freund
andres@anarazel.de
In reply to: Amit Kapila (#99)
#101Amit Kapila
amit.kapila16@gmail.com
In reply to: Andres Freund (#100)
#102Andres Freund
andres@anarazel.de
In reply to: Amit Kapila (#101)
#103Amit Kapila
amit.kapila16@gmail.com
In reply to: Andres Freund (#102)
#104Amit Kapila
amit.kapila16@gmail.com
In reply to: Andres Freund (#102)
#105Andres Freund
andres@anarazel.de
In reply to: Amit Kapila (#88)
#106Andres Freund
andres@anarazel.de
In reply to: Amit Kapila (#104)
#107Amit Kapila
amit.kapila16@gmail.com
In reply to: Andres Freund (#105)
#108Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Amit Kapila (#107)
#109Amit Kapila
amit.kapila16@gmail.com
In reply to: Tomas Vondra (#108)
#110Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Tomas Vondra (#108)
#111Amit Kapila
amit.kapila16@gmail.com
In reply to: Pavan Deolasee (#110)
#112Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Amit Kapila (#109)
#113Amit Kapila
amit.kapila16@gmail.com
In reply to: Tomas Vondra (#112)
#114Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Amit Kapila (#113)
#115Amit Kapila
amit.kapila16@gmail.com
In reply to: Tomas Vondra (#114)
#116Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Amit Kapila (#115)
#117Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#109)
#118Dilip Kumar
dilipbalaut@gmail.com
In reply to: Dilip Kumar (#117)
#119Robert Haas
robertmhaas@gmail.com
In reply to: Dilip Kumar (#117)
#120Dilip Kumar
dilipbalaut@gmail.com
In reply to: Robert Haas (#119)
#121Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Dilip Kumar (#120)
#122Amit Kapila
amit.kapila16@gmail.com
In reply to: Tomas Vondra (#121)
#123Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Amit Kapila (#122)
#124Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Robert Haas (#119)
#125Amit Kapila
amit.kapila16@gmail.com
In reply to: Tomas Vondra (#123)
#126Amit Kapila
amit.kapila16@gmail.com
In reply to: Tomas Vondra (#124)
#127Dilip Kumar
dilipbalaut@gmail.com
In reply to: Tomas Vondra (#121)
#128Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Amit Kapila (#126)
#129Amit Kapila
amit.kapila16@gmail.com
In reply to: Tomas Vondra (#128)
#130Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Amit Kapila (#129)
#131Dilip Kumar
dilipbalaut@gmail.com
In reply to: Tomas Vondra (#130)
#132Robert Haas
robertmhaas@gmail.com
In reply to: Tomas Vondra (#130)
#133Andres Freund
andres@anarazel.de
In reply to: Robert Haas (#132)
#134Amit Kapila
amit.kapila16@gmail.com
In reply to: Robert Haas (#132)
#135Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#134)
#136Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Robert Haas (#132)
#137Dilip Kumar
dilipbalaut@gmail.com
In reply to: Dilip Kumar (#135)
#138Amit Kapila
amit.kapila16@gmail.com
In reply to: Tomas Vondra (#136)
#139Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Amit Kapila (#138)
#140Robert Haas
robertmhaas@gmail.com
In reply to: Tomas Vondra (#139)
#141Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Robert Haas (#140)
#142Amit Kapila
amit.kapila16@gmail.com
In reply to: Tomas Vondra (#141)
#143Amit Kapila
amit.kapila16@gmail.com
In reply to: Tomas Vondra (#139)
#144Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Amit Kapila (#143)
#145Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Tomas Vondra (#139)
#146Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Tomas Vondra (#144)
#147Amit Kapila
amit.kapila16@gmail.com
In reply to: Tomas Vondra (#145)
#148Amit Kapila
amit.kapila16@gmail.com
In reply to: Pavan Deolasee (#146)
#149Amit Kapila
amit.kapila16@gmail.com
In reply to: Robert Haas (#140)
#150Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Amit Kapila (#147)
#151Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Pavan Deolasee (#146)
#152Amit Kapila
amit.kapila16@gmail.com
In reply to: Tomas Vondra (#150)
#153Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Amit Kapila (#152)
#154Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Tomas Vondra (#153)
#155Robert Haas
robertmhaas@gmail.com
In reply to: Amit Kapila (#149)
#156Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Tomas Vondra (#154)
#157Dilip Kumar
dilipbalaut@gmail.com
In reply to: Dilip Kumar (#137)
#158Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Tomas Vondra (#156)
#159Robert Haas
robertmhaas@gmail.com
In reply to: Tomas Vondra (#158)
#160Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Robert Haas (#159)
#161Robert Haas
robertmhaas@gmail.com
In reply to: Tomas Vondra (#160)
#162Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Robert Haas (#161)
#163Dilip Kumar
dilipbalaut@gmail.com
In reply to: Tomas Vondra (#162)
#164Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#163)
#165Robert Haas
robertmhaas@gmail.com
In reply to: Tomas Vondra (#162)
#166Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Robert Haas (#165)
#167Robert Haas
robertmhaas@gmail.com
In reply to: Tomas Vondra (#166)
#168Dilip Kumar
dilipbalaut@gmail.com
In reply to: Robert Haas (#167)
#169Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Dilip Kumar (#168)
#170Amit Kapila
amit.kapila16@gmail.com
In reply to: Tomas Vondra (#169)
#171Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Amit Kapila (#170)
#172Amit Kapila
amit.kapila16@gmail.com
In reply to: Tomas Vondra (#171)
#173Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Amit Kapila (#172)
#174Dilip Kumar
dilipbalaut@gmail.com
In reply to: Tomas Vondra (#173)
#175Robert Haas
robertmhaas@gmail.com
In reply to: Dilip Kumar (#174)
#176Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Robert Haas (#175)
#177Amit Kapila
amit.kapila16@gmail.com
In reply to: Tomas Vondra (#176)
#178Dilip Kumar
dilipbalaut@gmail.com
In reply to: Robert Haas (#175)
#179Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Dilip Kumar (#178)
#180Robert Haas
robertmhaas@gmail.com
In reply to: Dilip Kumar (#178)
#181Robert Haas
robertmhaas@gmail.com
In reply to: Robert Haas (#180)
#182Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Robert Haas (#181)
#183Robert Haas
robertmhaas@gmail.com
In reply to: Tomas Vondra (#182)
#184Dilip Kumar
dilipbalaut@gmail.com
In reply to: Tomas Vondra (#179)
#185Dilip Kumar
dilipbalaut@gmail.com
In reply to: Robert Haas (#180)
#186Amit Kapila
amit.kapila16@gmail.com
In reply to: Robert Haas (#183)
#187Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Amit Kapila (#186)
#188Amit Kapila
amit.kapila16@gmail.com
In reply to: Tomas Vondra (#187)
#189Dilip Kumar
dilipbalaut@gmail.com
In reply to: Dilip Kumar (#184)
#190Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#189)
#191Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Amit Kapila (#190)
#192Amit Kapila
amit.kapila16@gmail.com
In reply to: Tomas Vondra (#191)
#193Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#192)
#194Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Amit Kapila (#192)
#195Jim Nasby
Jim.Nasby@BlueTreble.com
In reply to: Tomas Vondra (#194)
#196Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Jim Nasby (#195)
#197Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Tomas Vondra (#194)
#198Amit Kapila
amit.kapila16@gmail.com
In reply to: Tomas Vondra (#194)
#199Amit Kapila
amit.kapila16@gmail.com
In reply to: Tomas Vondra (#197)
#200Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Amit Kapila (#198)
#201Amit Kapila
amit.kapila16@gmail.com
In reply to: Tomas Vondra (#200)
#202Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Amit Kapila (#201)
#203Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Tomas Vondra (#196)
#204Robert Haas
robertmhaas@gmail.com
In reply to: Tomas Vondra (#203)
#205Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Robert Haas (#204)
#206Amit Kapila
amit.kapila16@gmail.com
In reply to: Tomas Vondra (#205)
#207Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Amit Kapila (#206)
#208Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Amit Kapila (#206)
#209Robert Haas
robertmhaas@gmail.com
In reply to: Tomas Vondra (#205)
#210Amit Kapila
amit.kapila16@gmail.com
In reply to: Robert Haas (#209)
#211Haribabu Kommi
kommi.haribabu@gmail.com
In reply to: Amit Kapila (#210)
#212Amit Kapila
amit.kapila16@gmail.com
In reply to: Haribabu Kommi (#211)
#213Haribabu Kommi
kommi.haribabu@gmail.com
In reply to: Amit Kapila (#212)
#214Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Tomas Vondra (#208)
#215Amit Kapila
amit.kapila16@gmail.com
In reply to: Tomas Vondra (#214)
#216Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Amit Kapila (#215)
#217Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#215)
#218Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#217)
#219Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#218)
#220Dilip Kumar
dilipbalaut@gmail.com
In reply to: Dilip Kumar (#219)
#221Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#220)
#222Michael Paquier
michael@paquier.xyz
In reply to: Amit Kapila (#221)
#223Robert Haas
robertmhaas@gmail.com
In reply to: Michael Paquier (#222)
#224Tom Lane
tgl@sss.pgh.pa.us
In reply to: Robert Haas (#223)
#225Amit Kapila
amit.kapila16@gmail.com
In reply to: Tom Lane (#224)
#226Robert Haas
robertmhaas@gmail.com
In reply to: Tom Lane (#224)
#227Tom Lane
tgl@sss.pgh.pa.us
In reply to: Robert Haas (#226)
#228Amit Kapila
amit.kapila16@gmail.com
In reply to: Tom Lane (#227)
#229Tom Lane
tgl@sss.pgh.pa.us
In reply to: Amit Kapila (#228)
#230Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#228)
#231Amit Kapila
amit.kapila16@gmail.com
In reply to: Tom Lane (#229)
#232Robert Haas
robertmhaas@gmail.com
In reply to: Amit Kapila (#231)
#233Robert Haas
robertmhaas@gmail.com
In reply to: Robert Haas (#232)
#234Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Robert Haas (#233)
#235Amit Kapila
amit.kapila16@gmail.com
In reply to: Robert Haas (#232)
#236Robert Haas
robertmhaas@gmail.com
In reply to: Amit Kapila (#235)
#237Amit Kapila
amit.kapila16@gmail.com
In reply to: Robert Haas (#236)
#238Robert Haas
robertmhaas@gmail.com
In reply to: Amit Kapila (#237)
#239Amit Kapila
amit.kapila16@gmail.com
In reply to: Robert Haas (#238)
#240Ashutosh Sharma
ashu.coek88@gmail.com
In reply to: Amit Kapila (#239)
#241Robert Haas
robertmhaas@gmail.com
In reply to: Robert Haas (#223)
#242Amit Kapila
amit.kapila16@gmail.com
In reply to: Ashutosh Sharma (#240)
#243Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#242)
#244Robert Haas
robertmhaas@gmail.com
In reply to: Amit Kapila (#243)
#245Amit Kapila
amit.kapila16@gmail.com
In reply to: Robert Haas (#244)
#246Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#245)
#247Robert Haas
robertmhaas@gmail.com
In reply to: Dilip Kumar (#246)
#248Amit Kapila
amit.kapila16@gmail.com
In reply to: Robert Haas (#247)