PATCH: logical_work_mem and logical streaming of large in-progress transactions

Started by Tomas Vondra, over 8 years ago, 563 messages, pgsql-hackers
#1 Tomas Vondra
tomas.vondra@2ndquadrant.com

Hi all,

Attached is a patch series that adds two features to logical
replication - the ability to define a memory limit for the reorderbuffer
(responsible for building the decoded transactions), and the ability to
stream large in-progress transactions (those exceeding the memory limit).

I'm submitting those two changes together, because one builds on the
other, and it's beneficial to discuss them together.

PART 1: adding logical_work_mem memory limit (0001)
---------------------------------------------------

Currently, limiting the amount of memory consumed by logical decoding is
tricky (or you might say impossible) for several reasons:

* The value is hard-coded, so it's not quite possible to customize it.

* The amount of decoded changes to keep in memory is restricted by the
number of changes. It's not very clear how this relates to memory
consumption, as the change size depends on table structure, etc.

* The number is "per (sub)transaction", so a transaction with many
subtransactions may easily consume a significant amount of memory without
actually hitting the limit.

So the patch does two things. Firstly, it introduces logical_work_mem, a
GUC restricting memory consumed by all transactions currently kept in
the reorder buffer.

Secondly, it adds simple memory accounting, tracking the amount of
memory used in total (for the whole reorder buffer, to compare against
logical_work_mem) and per transaction (so that we can quickly pick a
transaction to spill to disk).
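Roughly, the accounting works like this (a simplified sketch in Python,
just to illustrate the idea - the actual implementation is C code in
reorderbuffer.c, and all names here are illustrative):

```python
class ReorderBuffer:
    """Toy model of per-transaction and total memory accounting."""

    def __init__(self, logical_work_mem):
        self.limit = logical_work_mem  # total budget, in bytes
        self.total = 0                 # memory used by all in-memory txns
        self.txn_size = {}             # xid -> bytes currently in memory
        self.spilled = []              # xids evicted to disk, in order

    def queue_change(self, xid, change_bytes):
        # Update both counters on every decoded change.
        self.txn_size[xid] = self.txn_size.get(xid, 0) + change_bytes
        self.total += change_bytes
        # When over budget, spill the largest transaction to disk.
        while self.total > self.limit and self.txn_size:
            victim = max(self.txn_size, key=self.txn_size.get)
            self.spill_to_disk(victim)

    def spill_to_disk(self, xid):
        # Changes leave memory; the transaction itself stays tracked.
        self.total -= self.txn_size.pop(xid)
        self.spilled.append(xid)
```

Picking the single largest transaction is the simplest policy; the
per-transaction counter is what makes that pick cheap.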

The one wrinkle in the patch is that the memory limit can't be enforced
when reading changes spilled to disk - with multiple subtransactions, we
can't easily predict how many changes to pre-read for each of them. At
that point we still use the existing max_changes_in_memory limit.

Luckily, changes introduced in the other parts of the patch should allow
addressing this deficiency.

PART 2: streaming of large in-progress transactions (0002-0006)
---------------------------------------------------------------

Note: This part is split into multiple smaller chunks, addressing
different parts of the logical decoding infrastructure. That's mostly to
allow easier reviews, though. Ultimately, it's just one patch.

Processing large transactions often results in significant apply lag,
for a couple of reasons. One reason is network bandwidth - while we do
decode the changes incrementally (as we read the WAL), we keep them
locally, either in memory, or spilled to files. Then at commit time, all
the changes get sent to the downstream (and applied) at the same time.
For large transactions the time to do the network transfer may be
significant, causing apply lag.

This patch extends the logical replication infrastructure (output plugin
API, reorder buffer, pgoutput, replication protocol etc.) to allow
streaming of in-progress transactions instead of spilling them to local
files.

The extensions to the API are pretty straightforward. Aside from adding
methods to stream changes/messages and commit a streamed transaction,
the API needs a function to abort a streamed (sub)transaction, and
functions to demarcate a block of streamed changes.

To decode a transaction, we need to know all its subtransactions and
invalidations. Currently, those are only known at commit time - some
assignments may be known earlier, but invalidations are only ever
written in the commit record.

So far that was fine, because we only decode/replay transactions at
commit time, when all of this is known (because it's either in commit
record, or written before it).

But for in-progress transactions (i.e. the subject of interest here),
that is not the case. So the patch modifies WAL-logging to ensure those
two bits of information are written immediately (for wal_level=logical).

For assignments that was fairly simple, thanks to existing caching. For
invalidations, it requires a new WAL record type and a couple of changes
in inval.c.

On the apply side, we simply receive the streamed changes and write them
into a file (one file per toplevel transaction, which is possible thanks
to the assignments being known immediately). Then at commit time the
changes are replayed locally, without having to copy a large chunk of
data over the network.
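The apply-side flow could be modeled like this (a toy in-memory Python
sketch of the "buffer per toplevel xact, replay at commit, discard at
abort" logic - the real worker writes to temporary files, and these
callback names are illustrative, not the patch's API):

```python
class StreamApply:
    """Toy model of the apply worker's handling of streamed changes."""

    def __init__(self):
        self.files = {}    # toplevel xid -> buffered changes ("the file")
        self.applied = []

    def on_stream_change(self, toplevel_xid, change):
        # Immediate assignments let us route each subxact change to its
        # toplevel transaction as soon as it arrives.
        self.files.setdefault(toplevel_xid, []).append(change)

    def on_stream_commit(self, toplevel_xid):
        # Replay locally; nothing large crosses the network at commit time.
        self.applied.extend(self.files.pop(toplevel_xid, []))

    def on_stream_abort(self, toplevel_xid):
        # Throw the buffered changes away.
        self.files.pop(toplevel_xid, None)
```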

WAL overhead
------------

Of course, these changes to WAL logging are not for free - logging
assignments individually (instead of multiple subtransactions at once)
means higher xlog record overhead. Similarly, (sub)transactions doing a
lot of DDL may result in a lot of invalidations written to WAL (again,
with full xlog record overhead per invalidation).

I've done a number of tests to measure the impact, and for extreme
corner cases the additional amount of WAL is about 40% in both cases.

By an "extreme corner case" I mean a workload intentionally triggering
many assignments/invalidations, without doing a lot of meaningful work.

For assignments, imagine a single-row table (no indexes), and a
transaction like this one:

BEGIN;
UPDATE t SET v = v + 1;
SAVEPOINT s1;
UPDATE t SET v = v + 1;
SAVEPOINT s2;
UPDATE t SET v = v + 1;
SAVEPOINT s3;
...
UPDATE t SET v = v + 1;
SAVEPOINT s10;
UPDATE t SET v = v + 1;
COMMIT;

For invalidations, add a CREATE TEMPORARY TABLE to each subtransaction.

For more realistic workloads (large table with indexes, runs long enough
to generate FPIs, etc.) the overhead drops below 5%. Which is much more
acceptable, of course, although not perfect.

In both cases, there was pretty much no measurable impact on performance
(as measured by tps).

I do not think there's a way around this requirement (having assignments
and invalidations), if we want to decode in-progress transactions. But
perhaps it would be possible to do some sort of caching (say, at command
level), to reduce the xlog record overhead? Not sure.

All ideas are welcome, of course. In the worst case, I think we can add
a GUC enabling this additional logging - when disabled, streaming of
in-progress transactions would not be possible.

Simplifying ReorderBuffer
-------------------------

One interesting consequence of having assignments is that we could get
rid of the ReorderBuffer iterator, used to merge changes from subxacts.
The assignments allow us to keep changes for each toplevel transaction
in a single list, in LSN order, and just walk it. Abort can be performed
by remembering position of the first change in each subxact, and just
discarding the tail.

This is what the apply worker does with the streamed changes and aborts.
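That "single list + truncate on abort" idea can be sketched like this
(Python, purely illustrative - the real structures live in C):

```python
changes = []          # all changes of one toplevel xact, in LSN order
subxact_start = {}    # subxact xid -> index of its first change

def record_change(xid, change):
    # Remember where each subxact begins in the shared list.
    subxact_start.setdefault(xid, len(changes))
    changes.append(change)

def abort_subxact(xid):
    # Everything after the subxact's first change has a higher LSN and
    # belongs to the aborted subxact (or subxacts nested inside it), so
    # aborting is just discarding the tail of the list.
    pos = subxact_start.pop(xid)
    del changes[pos:]
    # Drop markers of any nested subxacts that started later.
    for sx in [s for s, p in subxact_start.items() if p >= pos]:
        del subxact_start[sx]
```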

It would also allow us to enforce the memory limit while restoring
transactions spilled to disk, because we would not have the problem with
restoring changes for many subtransactions.

regards

--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Attachments:

0001-Introduce-logical_work_mem-to-limit-ReorderBuffer-me.patch.gz (application/gzip)
0002-Issue-XLOG_XACT_ASSIGNMENT-with-wal_level-logical.patch.gz (application/gzip)
0003-Issue-individual-invalidations-with-wal_level-logica.patch.gz (application/gzip)
0004-Extend-the-output-plugin-API-with-stream-methods.patch.gz (application/gzip)
0005-Implement-streaming-mode-in-ReorderBuffer.patch.gz (application/gzip)
0006-Add-support-for-streaming-to-built-in-replication.patch.gz (application/gzip)
#2 Erik Rijkers
er@xs4all.nl
In reply to: Tomas Vondra (#1)
Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

On 2017-12-23 05:57, Tomas Vondra wrote:

Hi all,

Attached is a patch series that implements two features to the logical
replication - ability to define a memory limit for the reorderbuffer
(responsible for building the decoded transactions), and ability to
stream large in-progress transactions (exceeding the memory limit).

logical replication of 2 instances is OK but 3 and up fail with:

TRAP: FailedAssertion("!(last_lsn < change->lsn)", File:
"reorderbuffer.c", Line: 1773)

I can cobble up a script but I hope you have enough from the assertion
to see what's going wrong...

#3 Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Erik Rijkers (#2)
Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

On 12/23/2017 03:03 PM, Erikjan Rijkers wrote:

On 2017-12-23 05:57, Tomas Vondra wrote:

Hi all,

Attached is a patch series that implements two features to the logical
replication - ability to define a memory limit for the reorderbuffer
(responsible for building the decoded transactions), and ability to
stream large in-progress transactions (exceeding the memory limit).

logical replication of 2 instances is OK but 3 and up fail with:

TRAP: FailedAssertion("!(last_lsn < change->lsn)", File:
"reorderbuffer.c", Line: 1773)

I can cobble up a script but I hope you have enough from the assertion
to see what's going wrong...

The assertion says that the iterator produces changes in an order that
does not correlate with LSN. But I have a hard time understanding how
that could happen, particularly because according to the line number
this happens in ReorderBufferCommit(), i.e. the current (non-streaming)
case.

So instructions to reproduce the issue would be very helpful.

Attached is v2 of the patch series, fixing two bugs I discovered today.
I don't think any of these is related to your issue, though.

regards

--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Attachments:

0001-Introduce-logical_work_mem-to-limit-ReorderBuffer-v2.patch.gz (application/gzip)
0002-Issue-XLOG_XACT_ASSIGNMENT-with-wal_level-logical-v2.patch.gz (application/gzip)
0003-Issue-individual-invalidations-with-wal_level-log-v2.patch.gz (application/gzip)
0004-Extend-the-output-plugin-API-with-stream-methods-v2.patch.gz (application/gzip)
0005-Implement-streaming-mode-in-ReorderBuffer-v2.patch.gz (application/gzip)
0006-Add-support-for-streaming-to-built-in-replication-v2.patch.gz (application/gzip)
#4 Erik Rijkers
er@xs4all.nl
In reply to: Tomas Vondra (#3)
Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

On 2017-12-23 21:06, Tomas Vondra wrote:

On 12/23/2017 03:03 PM, Erikjan Rijkers wrote:

On 2017-12-23 05:57, Tomas Vondra wrote:

Hi all,

Attached is a patch series that implements two features to the
logical
replication - ability to define a memory limit for the reorderbuffer
(responsible for building the decoded transactions), and ability to
stream large in-progress transactions (exceeding the memory limit).

logical replication of 2 instances is OK but 3 and up fail with:

TRAP: FailedAssertion("!(last_lsn < change->lsn)", File:
"reorderbuffer.c", Line: 1773)

I can cobble up a script but I hope you have enough from the assertion
to see what's going wrong...

The assertion says that the iterator produces changes in order that
does
not correlate with LSN. But I have a hard time understanding how that
could happen, particularly because according to the line number this
happens in ReorderBufferCommit(), i.e. the current (non-streaming)
case.

So instructions to reproduce the issue would be very helpful.

Using:

0001-Introduce-logical_work_mem-to-limit-ReorderBuffer-v2.patch
0002-Issue-XLOG_XACT_ASSIGNMENT-with-wal_level-logical-v2.patch
0003-Issue-individual-invalidations-with-wal_level-log-v2.patch
0004-Extend-the-output-plugin-API-with-stream-methods-v2.patch
0005-Implement-streaming-mode-in-ReorderBuffer-v2.patch
0006-Add-support-for-streaming-to-built-in-replication-v2.patch

As you expected the problem is the same with these new patches.

I have now tested more, and seen that it does not always fail. I guess
that it fails here 3 times out of 4. But the laptop I'm using at the
moment is old and slow -- it may well be a factor as we've seen before [1].

Attached is the bash that I put together. I tested with
NUM_INSTANCES=2, which yields success, and NUM_INSTANCES=3, which fails
often. This same program run with HEAD never seems to fail (I tried a
few dozen times).

thanks,

Erik Rijkers

[1]: /messages/by-id/3897361c7010c4ac03f358173adbcd60@xs4all.nl

Attachments:

test.sh (text/x-shellscript)
#5 Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Erik Rijkers (#4)
Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

On 12/23/2017 11:23 PM, Erik Rijkers wrote:

On 2017-12-23 21:06, Tomas Vondra wrote:

On 12/23/2017 03:03 PM, Erikjan Rijkers wrote:

On 2017-12-23 05:57, Tomas Vondra wrote:

Hi all,

Attached is a patch series that implements two features to the logical
replication - ability to define a memory limit for the reorderbuffer
(responsible for building the decoded transactions), and ability to
stream large in-progress transactions (exceeding the memory limit).

logical replication of 2 instances is OK but 3 and up fail with:

TRAP: FailedAssertion("!(last_lsn < change->lsn)", File:
"reorderbuffer.c", Line: 1773)

I can cobble up a script but I hope you have enough from the assertion
to see what's going wrong...

The assertion says that the iterator produces changes in order that does
not correlate with LSN. But I have a hard time understanding how that
could happen, particularly because according to the line number this
happens in ReorderBufferCommit(), i.e. the current (non-streaming) case.

So instructions to reproduce the issue would be very helpful.

Using:

0001-Introduce-logical_work_mem-to-limit-ReorderBuffer-v2.patch
0002-Issue-XLOG_XACT_ASSIGNMENT-with-wal_level-logical-v2.patch
0003-Issue-individual-invalidations-with-wal_level-log-v2.patch
0004-Extend-the-output-plugin-API-with-stream-methods-v2.patch
0005-Implement-streaming-mode-in-ReorderBuffer-v2.patch
0006-Add-support-for-streaming-to-built-in-replication-v2.patch

As you expected the problem is the same with these new patches.

I have now tested more, and seen that it not always fails.  I guess that
it here fails 3 times out of 4.  But the laptop I'm using at the moment
is old and slow -- it may well be a factor as we've seen before [1].

Attached is the bash that I put together.  I tested with
NUM_INSTANCES=2, which yields success, and NUM_INSTANCES=3, which fails
often.  This same program run with HEAD never seems to fail (I tried a
few dozen times).

Thanks. Unfortunately I still can't reproduce the issue. I even tried
running it in valgrind, to see if there are some memory access issues
(which should also slow it down significantly).

regards

--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

#6 Craig Ringer
craig@2ndquadrant.com
In reply to: Tomas Vondra (#1)
Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

On 23 December 2017 at 12:57, Tomas Vondra <tomas.vondra@2ndquadrant.com>
wrote:

Hi all,

Attached is a patch series that implements two features to the logical
replication - ability to define a memory limit for the reorderbuffer
(responsible for building the decoded transactions), and ability to
stream large in-progress transactions (exceeding the memory limit).

I'm submitting those two changes together, because one builds on the
other, and it's beneficial to discuss them together.

PART 1: adding logical_work_mem memory limit (0001)
---------------------------------------------------

Currently, limiting the amount of memory consumed by logical decoding is
tricky (or you might say impossible) for several reasons:

* The value is hard-coded, so it's not quite possible to customize it.

* The amount of decoded changes to keep in memory is restricted by the
number of changes. It's not very clear how this relates to memory
consumption, as the change size depends on table structure, etc.

* The number is "per (sub)transaction", so a transaction with many
subtransactions may easily consume significant amount of memory without
actually hitting the limit.

Also, even without subtransactions, we assemble a ReorderBufferTXN per
transaction. Since transactions usually occur concurrently, systems with
many concurrent txns can face lots of memory use.

We can't exclude tables that won't actually be replicated at the reorder
buffering phase either. So txns use memory whether or not they do anything
interesting as far as a given logical decoding session is concerned. Even
if we'll throw all the data away we must buffer and assemble it first so we
can make that decision.

Because logical decoding considers snapshots and cid increments even from
other DBs (at least when the txn makes catalog changes) the memory use can
get BIG too. I was recently working with a system that had accumulated 2GB
of snapshots ... on each slot. With 7 slots, one for each DB.

So there's lots of room for difficulty with unpredictable memory use.

So the patch does two things. Firstly, it introduces logical_work_mem, a

GUC restricting memory consumed by all transactions currently kept in
the reorder buffer

Does this consider the (currently high, IIRC) overhead of tracking
serialized changes?

--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#7 Erik Rijkers
er@xs4all.nl
In reply to: Tomas Vondra (#5)
Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

logical replication of 2 instances is OK but 3 and up fail with:

TRAP: FailedAssertion("!(last_lsn < change->lsn)", File:
"reorderbuffer.c", Line: 1773)

I can cobble up a script but I hope you have enough from the
assertion
to see what's going wrong...

The assertion says that the iterator produces changes in order that
does
not correlate with LSN. But I have a hard time understanding how that
could happen, particularly because according to the line number this
happens in ReorderBufferCommit(), i.e. the current (non-streaming)
case.

So instructions to reproduce the issue would be very helpful.

Using:

0001-Introduce-logical_work_mem-to-limit-ReorderBuffer-v2.patch
0002-Issue-XLOG_XACT_ASSIGNMENT-with-wal_level-logical-v2.patch
0003-Issue-individual-invalidations-with-wal_level-log-v2.patch
0004-Extend-the-output-plugin-API-with-stream-methods-v2.patch
0005-Implement-streaming-mode-in-ReorderBuffer-v2.patch
0006-Add-support-for-streaming-to-built-in-replication-v2.patch

As you expected the problem is the same with these new patches.

I have now tested more, and seen that it not always fails.  I guess
that
it here fails 3 times out of 4.  But the laptop I'm using at the
moment
is old and slow -- it may well be a factor as we've seen before [1].

Attached is the bash that I put together.  I tested with
NUM_INSTANCES=2, which yields success, and NUM_INSTANCES=3, which
fails
often.  This same program run with HEAD never seems to fail (I tried a
few dozen times).

Thanks. Unfortunately I still can't reproduce the issue. I even tried
running it in valgrind, to see if there are some memory access issues
(which should also slow it down significantly).

One wonders again if 2ndquadrant shouldn't invest in some old hardware
;)

Another Good Thing would be if there was a provision in the buildfarm to
test patches like these.

But I'm probably not the first one to suggest that; no doubt it'll be
possible someday. In the meantime I'll try to repeat this crash on
other machines (but that will be after the holidays).

Erik Rijkers

#8 Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Craig Ringer (#6)
Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

On 12/24/2017 05:51 AM, Craig Ringer wrote:

On 23 December 2017 at 12:57, Tomas Vondra <tomas.vondra@2ndquadrant.com
<mailto:tomas.vondra@2ndquadrant.com>> wrote:

Hi all,

Attached is a patch series that implements two features to the logical
replication - ability to define a memory limit for the reorderbuffer
(responsible for building the decoded transactions), and ability to
stream large in-progress transactions (exceeding the memory limit).

I'm submitting those two changes together, because one builds on the
other, and it's beneficial to discuss them together.

PART 1: adding logical_work_mem memory limit (0001)
---------------------------------------------------

Currently, limiting the amount of memory consumed by logical decoding is
tricky (or you might say impossible) for several reasons:

* The value is hard-coded, so it's not quite possible to customize it.

* The amount of decoded changes to keep in memory is restricted by the
number of changes. It's not very clear how this relates to memory
consumption, as the change size depends on table structure, etc.

* The number is "per (sub)transaction", so a transaction with many
subtransactions may easily consume significant amount of memory without
actually hitting the limit.

Also, even without subtransactions, we assemble a ReorderBufferTXN
per transaction. Since transactions usually occur concurrently,
systems with many concurrent txns can face lots of memory use.

I don't see how that could be a problem, considering the number of
toplevel transactions is rather limited (to max_connections or so).

We can't exclude tables that won't actually be replicated at the reorder
buffering phase either. So txns use memory whether or not they do
anything interesting as far as a given logical decoding session is
concerned. Even if we'll throw all the data away we must buffer and
assemble it first so we can make that decision.

Yep.

Because logical decoding considers snapshots and cid increments even
from other DBs (at least when the txn makes catalog changes) the memory
use can get BIG too. I was recently working with a system that had
accumulated 2GB of snapshots ... on each slot. With 7 slots, one for
each DB.

So there's lots of room for difficulty with unpredictable memory use.

Yep.

So the patch does two things. Firstly, it introduces logical_work_mem, a
GUC restricting memory consumed by all transactions currently kept in
the reorder buffer

Does this consider the (currently high, IIRC) overhead of tracking
serialized changes?

Consider in what sense?

regards

--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

#9 Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Erik Rijkers (#7)
Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

On 12/24/2017 10:00 AM, Erik Rijkers wrote:

logical replication of 2 instances is OK but 3 and up fail with:

TRAP: FailedAssertion("!(last_lsn < change->lsn)", File:
"reorderbuffer.c", Line: 1773)

I can cobble up a script but I hope you have enough from the assertion
to see what's going wrong...

The assertion says that the iterator produces changes in order that
does
not correlate with LSN. But I have a hard time understanding how that
could happen, particularly because according to the line number this
happens in ReorderBufferCommit(), i.e. the current (non-streaming)
case.

So instructions to reproduce the issue would be very helpful.

Using:

0001-Introduce-logical_work_mem-to-limit-ReorderBuffer-v2.patch
0002-Issue-XLOG_XACT_ASSIGNMENT-with-wal_level-logical-v2.patch
0003-Issue-individual-invalidations-with-wal_level-log-v2.patch
0004-Extend-the-output-plugin-API-with-stream-methods-v2.patch
0005-Implement-streaming-mode-in-ReorderBuffer-v2.patch
0006-Add-support-for-streaming-to-built-in-replication-v2.patch

As you expected the problem is the same with these new patches.

I have now tested more, and seen that it not always fails.  I guess that
it here fails 3 times out of 4.  But the laptop I'm using at the moment
is old and slow -- it may well be a factor as we've seen before [1].

Attached is the bash that I put together.  I tested with
NUM_INSTANCES=2, which yields success, and NUM_INSTANCES=3, which fails
often.  This same program run with HEAD never seems to fail (I tried a
few dozen times).

Thanks. Unfortunately I still can't reproduce the issue. I even tried
running it in valgrind, to see if there are some memory access issues
(which should also slow it down significantly).

One wonders again if 2ndquadrant shouldn't invest in some old hardware ;)

Well, I've done tests on various machines, including some really slow
ones, and I still haven't managed to reproduce the failures using your
script. So I don't think that would really help. But I have reproduced
it by using a custom stress test script.

Turns out the asserts are overly strict - instead of

Assert(prev_lsn < current_lsn);

it should have been

Assert(prev_lsn <= current_lsn);

because some XLOG records may contain multiple rows (e.g. MULTI_INSERT).
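The relaxed invariant - non-decreasing rather than strictly increasing
LSNs - can be illustrated with a tiny check (Python, purely for
illustration):

```python
def check_lsn_order(lsns):
    # MULTI_INSERT emits several changes carrying the same LSN, so the
    # correct invariant is prev <= cur, not the stricter prev < cur.
    return all(prev <= cur for prev, cur in zip(lsns, lsns[1:]))
```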

The attached v3 fixes this issue, and also a couple of other thinkos:

1) The AssertChangeLsnOrder assert check was somewhat broken.

2) We've been sending aborts for all subtransactions, even those not yet
streamed. So downstream got confused and fell over because of an assert.

3) The streamed transactions were written to /tmp, with filenames based
on the subscription OID and the XID of the toplevel transaction. That's
fine as long as there's just a single replica running - if there are
more, the filenames will clash, causing really strange failures. So the
files now go to base/pgsql_tmp, where regular temporary files are
written. I'm not claiming this is perfect; perhaps we need to invent
another location.

FWIW I believe the relation sync cache is somewhat broken by the
streaming. I thought resetting it would be good enough, but it's more
complicated (and trickier) than that. I'm aware of it, and I'll look
into that next - but probably not before 2018.

regards

--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Attachments:

0001-Introduce-logical_work_mem-to-limit-ReorderBuffer-v3.patch.gz (application/gzip)
0002-Issue-XLOG_XACT_ASSIGNMENT-with-wal_level-logical-v3.patch.gz (application/gzip)
0003-Issue-individual-invalidations-with-wal_level-log-v3.patch.gz (application/gzip)
0004-Extend-the-output-plugin-API-with-stream-methods-v3.patch.gz (application/gzip)
0005-Implement-streaming-mode-in-ReorderBuffer-v3.patch.gz (application/gzip)
0006-Add-support-for-streaming-to-built-in-replication-v3.patch.gz (application/gzip)
#10 Erik Rijkers
er@xs4all.nl
In reply to: Tomas Vondra (#9)
Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

That indeed fixed the problem: running that same pgbench test, I see no
crashes anymore (on any of 3 different machines, and with several
pgbench parameters).

Thank you,

Erik Rijkers

#11 Dmitry Dolgov
9erthalion6@gmail.com
In reply to: Erik Rijkers (#10)
Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

On 25 December 2017 at 18:40, Tomas Vondra <tomas.vondra@2ndquadrant.com>
wrote:

The attached v3 fixes this issue, and also a couple of other thinkos

Thank you for the patch, it looks quite interesting. After a quick look
at it (mostly the first one so far, but I'm going to continue) I have a
few questions:

+ * XXX With many subtransactions this might be quite slow, because we'll have
+ * to walk through all of them. There are some options how we could improve
+ * that: (a) maintain some secondary structure with transactions sorted by
+ * amount of changes, (b) not looking for the entirely largest transaction,
+ * but e.g. for transaction using at least some fraction of the memory limit,
+ * and (c) evicting multiple transactions at once, e.g. to free a given portion
+ * of the memory limit (e.g. 50%).

Do you want to address these possible alternatives somehow in this
patch, or leave them outside its scope? Maybe it makes sense to apply
some combination of them, e.g. maintain a secondary structure with
relatively large transactions, and then start evicting them. If that's
somehow not enough, then start to evict multiple transactions at once
(option "c").

+ /*
+  * We clamp manually-set values to at least 64kB. The maintenance_work_mem
+  * uses a higher minimum value (1MB), so this is OK.
+  */
+ if (*newval < 64)
+     *newval = 64;
+

I'm not sure what's recommended practice here, but maybe it makes sense to
have a warning here about changing this value to 64kB? Otherwise it can be
unexpected.

#12 Peter Eisentraut
peter_e@gmx.net
In reply to: Tomas Vondra (#1)
Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

On 12/22/17 23:57, Tomas Vondra wrote:

PART 1: adding logical_work_mem memory limit (0001)
---------------------------------------------------

The documentation in this patch contains some references to later
features (streaming). Perhaps that could be separated so that the
patches can be applied independently.

I don't see the need to tie this setting to maintenance_work_mem.
maintenance_work_mem is often set to very large values, which could then
have undesirable side effects on this use.

Moreover, the name logical_work_mem makes it sound like it's a logical
version of work_mem. Maybe we could think of another name.

I think we need a way to report on how much memory is actually used, so
the setting can be tuned. Something analogous to log_temp_files perhaps.

--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

#13 Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Peter Eisentraut (#12)
Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

On 01/02/2018 04:07 PM, Peter Eisentraut wrote:

On 12/22/17 23:57, Tomas Vondra wrote:

PART 1: adding logical_work_mem memory limit (0001)
---------------------------------------------------

The documentation in this patch contains some references to later
features (streaming). Perhaps that could be separated so that the
patches can be applied independently.

Yeah, that's probably a good idea. But now that you mention it, I wonder
if "streaming" is really a good term. We already use it for "streaming
replication" and it may be quite confusing to use it for another feature
(particularly when it's streaming within logical streaming replication).

But I can't really think of a better name ...

I don't see the need to tie this setting to maintenance_work_mem.
maintenance_work_mem is often set to very large values, which could
then have undesirable side effects on this use.

Well, we need to pick some default value, and we can either use a fixed
value (not sure what would be a good default) or tie it to an existing
GUC. We only really have work_mem and maintenance_work_mem, and the
walsender process will never use more than one such buffer. Which seems
to be closer to maintenance_work_mem.

Pretty much any default value can have undesirable side effects.

Moreover, the name logical_work_mem makes it sound like it's a logical
version of work_mem. Maybe we could think of another name.

I won't object to a better name, of course. Any proposals?

I think we need a way to report on how much memory is actually used,
so the setting can be tuned. Something analogous to log_temp_files
perhaps.

Yes, I agree. I'm just about to submit an updated version of the patch
series, which also introduces a few columns to pg_stat_replication,
tracking this type of stats (amount of data spilled to disk or streamed, etc.).

regards

--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

#14Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Tomas Vondra (#1)
Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

Hi,

attached is v4 of the patch series, with a couple of changes:

1) Fixes a bunch of bugs I discovered during stress testing.

I'm not going to go into details, but the main fixes are related to
properly updating progress from the worker, and not streaming when
creating the logical replication slot.

2) Introduces columns into pg_stat_replication.

The new columns track various kinds of statistics (number of xacts,
bytes, ...) about spill-to-disk/streaming. This will be useful when
tuning the GUC memory limit.

3) Two temporary bugfixes that make the patch series work.

The first one (0008) makes sure is_known_subxact is set properly for all
subtransactions, and there's a separate fix in the CF. So this will
eventually go away.

The second one (0009) fixes an issue that is specific to streaming. It
does fix the issue, but I need a bit more time to think about it before
merging it into 0005.

This does pass extensive stress testing with a workload mixing DML, DDL,
subtransactions, aborts, etc. under valgrind. I'm working on extending
the test coverage, and introducing various error conditions (e.g.
walsender/walreceiver timeouts, failures on both ends, etc.).

regards

--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Attachments:

0006-Add-support-for-streaming-to-built-in-replication-v4.patch.gz
0007-Track-statistics-for-streaming-spilling-v4.patch.gz
0001-Introduce-logical_work_mem-to-limit-ReorderBuffer-v4.patch.gz
0002-Issue-XLOG_XACT_ASSIGNMENT-with-wal_level-logical-v4.patch.gz
0003-Issue-individual-invalidations-with-wal_level-log-v4.patch.gz
0004-Extend-the-output-plugin-API-with-stream-methods-v4.patch.gz
0005-Implement-streaming-mode-in-ReorderBuffer-v4.patch.gz
0008-BUGFIX-make-sure-subxact-is-marked-as-is_known_as-v4.patch.gz
0009-BUGFIX-set-final_lsn-for-subxacts-before-cleanup-v4.patch.gz
#15Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Tomas Vondra (#14)
Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

On 01/03/2018 09:06 PM, Tomas Vondra wrote:

Hi,

attached is v4 of the patch series, with a couple of changes:

1) Fixes a bunch of bugs I discovered during stress testing.

I'm not going to go into details, but the main fixes are related to
properly updating progress from the worker, and not streaming when
creating the logical replication slot.

2) Introduces columns into pg_stat_replication.

The new columns track various kinds of statistics (number of xacts,
bytes, ...) about spill-to-disk/streaming. This will be useful when
tuning the GUC memory limit.

3) Two temporary bugfixes that make the patch series work.

Forgot to mention that v4 also extends the CREATE SUBSCRIPTION command to
allow customizing the streaming and memory limit. So you can do

CREATE SUBSCRIPTION ... WITH (streaming=on, work_mem=1024)

and this subscription will allow streaming, and logical_work_mem (on the
provider) will be set to 1MB.

--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

#16Peter Eisentraut
peter_e@gmx.net
In reply to: Tomas Vondra (#13)
Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

On 1/3/18 14:53, Tomas Vondra wrote:

I don't see the need to tie this setting to maintenance_work_mem.
maintenance_work_mem is often set to very large values, which could
then have undesirable side effects on this use.

Well, we need to pick some default value, and we can either use a fixed
value (not sure what would be a good default) or tie it to an existing
GUC. We only really have work_mem and maintenance_work_mem, and the
walsender process will never use more than one such buffer. Which seems
to be closer to maintenance_work_mem.

Pretty much any default value can have undesirable side effects.

Let's just make it an independent setting unless we know any better. We
don't have a lot of settings that depend on other settings, and the ones
we do have encode a very specific relationship.

Moreover, the name logical_work_mem makes it sound like it's a logical
version of work_mem. Maybe we could think of another name.

I won't object to a better name, of course. Any proposals?

logical_decoding_[work_]mem?

I think we need a way to report on how much memory is actually used,
so the setting can be tuned. Something analogous to log_temp_files
perhaps.

Yes, I agree. I'm just about to submit an updated version of the patch
series, which also introduces a few columns to pg_stat_replication,
tracking this type of stats (amount of data spilled to disk or streamed, etc.).

That seems OK. Perhaps we could bring forward the part of that patch
that applies to this feature.

That would also help testing *this* feature and determine what
appropriate settings are.

--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

#17Peter Eisentraut
peter_e@gmx.net
In reply to: Tomas Vondra (#15)
Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

On 1/3/18 15:13, Tomas Vondra wrote:

Forgot to mention that v4 also extends the CREATE SUBSCRIPTION command to
allow customizing the streaming and memory limit. So you can do

CREATE SUBSCRIPTION ... WITH (streaming=on, work_mem=1024)

and this subscription will allow streaming, and logical_work_mem (on the
provider) will be set to 1MB.

I was wondering already during PG10 development whether we should give
subscriptions a generic configuration array, like databases and roles
have, so we don't have to hardcode a bunch of similar stuff every time
we add an option like this. At the time we only had synchronous_commit,
but now we're adding more.

Also, instead of sticking this into the START_REPLICATION command, could
we just run a SET command? That should work over replication
connections as well.

--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

#18Peter Eisentraut
peter_e@gmx.net
In reply to: Tomas Vondra (#1)
Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

On 12/22/17 23:57, Tomas Vondra wrote:

PART 1: adding logical_work_mem memory limit (0001)
---------------------------------------------------

Currently, limiting the amount of memory consumed by logical decoding is
tricky (or you might say impossible) for several reasons:

I would like to see some more discussion on this, but I think not a lot
of people understand the details, so I'll try to write up an explanation
here. This code is also somewhat new to me, so please correct me if
there are inaccuracies, while keeping in mind that I'm trying to simplify.

The data in the WAL is written as it happens, so the changes belonging
to different transactions are all mixed together. One of the jobs of
logical decoding is to reassemble the changes belonging to each
transaction. The top-level data structure for that is the infamous
ReorderBuffer. So as it reads the WAL and sees something about a
transaction, it keeps a copy of that change in memory, indexed by
transaction ID (ReorderBufferChange). When the transaction commits, the
accumulated changes are passed to the output plugin and then freed. If
the transaction aborts, then changes are just thrown away.

So when logical decoding is active, a copy of the changes for each
active transaction is kept in memory (once per walsender).

More precisely, the above happens for each subtransaction. When the
top-level transaction commits, it finds all its subtransactions in the
ReorderBuffer, reassembles everything in the right order, then invokes
the output plugin.

All this could end up using an unbounded amount of memory, so there is a
mechanism to spill changes to disk. The way this currently works is
hardcoded, and this patch proposes to change that.

Currently, when a transaction or subtransaction has accumulated 4096
changes, it is spilled to disk. When the top-level transaction commits,
things are read back from disk to do the final processing mentioned above.

This all works mostly fine, but you can construct some more extreme
cases where this can blow up.

Here is a mundane example. Let's say a change entry takes 100 bytes (it
might contain a new row, or an update key and some new column values,
for example). If you have 100 concurrent active sessions and no
subtransactions, then logical decoding memory is bounded by 4096 * 100 *
100 = 40 MB (per walsender) before things spill to disk.

Now let's say you are using a lot of subtransactions, because you are
using PL functions, exception handling, triggers, doing batch updates.
If you have 200 subtransactions on average per concurrent session, the
memory usage bound in that case would be 4096 * 100 * 100 * 200 = 8 GB
(per walsender). And so on. If you have more concurrent sessions or
larger changes or more subtransactions, you'll use much more than those
8 GB. And if you don't have those 8 GB, then you're stuck at this point.

That is the consideration when we record changes, but we also need
memory when we do the final processing at commit time. That is slightly
less problematic because we only process one top-level transaction at a
time, so the formula is only 4096 * avg_size_of_changes * nr_subxacts
(without the concurrent sessions factor).
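The arithmetic in the paragraphs above can be reproduced in a short sketch
(Python used purely for illustration; the 100-byte change size, 100 sessions,
and 200 subtransactions are the assumed averages from the example, and 4096
is the hardcoded per-(sub)transaction spill threshold):

```python
CHANGES_PER_XACT = 4096   # hardcoded max_changes_in_memory limit
CHANGE_SIZE = 100         # assumed average bytes per ReorderBufferChange
SESSIONS = 100            # concurrent active sessions
SUBXACTS = 200            # assumed subtransactions per session

# No subtransactions: bound while recording changes, per walsender.
simple_bound = CHANGES_PER_XACT * CHANGE_SIZE * SESSIONS
print(simple_bound / 1024**2)   # ~39 MB, i.e. the "40 MB" above

# With subtransactions, each subxact gets its own 4096-change budget.
subxact_bound = simple_bound * SUBXACTS
print(subxact_bound / 1024**3)  # ~7.6 GB, i.e. the "8 GB" above

# Commit-time bound: only one top-level transaction is processed at a
# time, so the concurrent-sessions factor drops out.
commit_bound = CHANGES_PER_XACT * CHANGE_SIZE * SUBXACTS
```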

So, this patch proposes to improve this as follows:

- We compute the actual size of each ReorderBufferChange and keep a
running tally for each transaction, instead of just counting the number
of changes.

- We have a configuration setting that allows us to change the limit
instead of the hardcoded 4096. The configuration setting is also in
terms of memory, not in number of changes.

- The configuration setting is for the total memory usage per decoding
session, not per subtransaction. (So we also keep a running tally for
the entire ReorderBuffer.)
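A minimal sketch of that accounting scheme (this is not the actual
reorderbuffer.c code, just an illustration of the two running tallies): each
change's size is added both to its transaction's tally and to a buffer-wide
total that is compared against the configured limit.

```python
class ReorderBuffer:
    """Toy model of the proposed memory accounting."""

    def __init__(self, limit_bytes):
        self.limit_bytes = limit_bytes
        self.total_bytes = 0   # running tally for the entire buffer
        self.txn_bytes = {}    # running tally per (sub)transaction

    def add_change(self, xid, change_size):
        self.txn_bytes[xid] = self.txn_bytes.get(xid, 0) + change_size
        self.total_bytes += change_size
        if self.total_bytes > self.limit_bytes:
            self.evict_largest()

    def evict_largest(self):
        # Spill the transaction with the largest tally to disk
        # (modeled here as simply dropping it from memory).
        xid = max(self.txn_bytes, key=self.txn_bytes.get)
        self.total_bytes -= self.txn_bytes.pop(xid)
```

Note that the `max()` scan is exactly the linear search over all transactions
discussed as an open issue further down.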

There are two open issues with this patch:

One, this mechanism only applies when recording changes. The processing
at commit time still uses the previous hardcoded mechanism. The reason
for this is, AFAIU, that as things currently work, you have to have all
subtransactions in memory to do the final processing. There are some
proposals to change this as well, but they are more involved. Arguably,
per my explanation above, memory use at commit time is less likely to be
a problem.

Two, what to do when the memory limit is reached. With the old
accounting, this was easy, because we'd decide for each subtransaction
independently whether to spill it to disk, when it has reached its 4096
limit. Now, we are looking at a global limit, so we have to find a
transaction to spill in some other way. The proposed patch searches
through the entire list of transactions to find the largest one. But as
the patch says:

"XXX With many subtransactions this might be quite slow, because we'll
have to walk through all of them. There are some options how we could
improve that: (a) maintain some secondary structure with transactions
sorted by amount of changes, (b) not looking for the entirely largest
transaction, but e.g. for transaction using at least some fraction of
the memory limit, and (c) evicting multiple transactions at once, e.g.
to free a given portion of the memory limit (e.g. 50%)."

(a) would create more overhead for the case where everything fits into
memory, so it seems unattractive. Some combination of (b) and (c) seems
useful, but we'd have to come up with something concrete.
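One concrete shape such a (b)+(c) combination could take (a hedged sketch
with made-up parameter names, not anything from the patch): instead of a full
scan for the single largest transaction, accept any transaction using at
least some fraction of the limit, and keep evicting until a target fraction
of the limit has been released.

```python
def pick_victims(txn_bytes, limit, min_frac=0.05, free_frac=0.5):
    """Choose transactions to spill: any txn using >= min_frac of the
    limit qualifies (option b); stop once free_frac of the limit has
    been freed (option c). txn_bytes maps xid -> accumulated size."""
    to_free = limit * free_frac
    victims, freed = [], 0
    # Iterate in arbitrary order; (b) lets us stop without a full scan.
    for xid, size in txn_bytes.items():
        if size >= limit * min_frac:
            victims.append(xid)
            freed += size
            if freed >= to_free:
                break
    return victims, freed
```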

Thoughts?

--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

#19Greg Stark
stark@mit.edu
In reply to: Peter Eisentraut (#18)
Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

On 11 January 2018 at 19:41, Peter Eisentraut
<peter.eisentraut@2ndquadrant.com> wrote:

Two, what to do when the memory limit is reached. With the old
accounting, this was easy, because we'd decide for each subtransaction
independently whether to spill it to disk, when it has reached its 4096
limit. Now, we are looking at a global limit, so we have to find a
transaction to spill in some other way. The proposed patch searches
through the entire list of transactions to find the largest one. But as
the patch says:

"XXX With many subtransactions this might be quite slow, because we'll
have to walk through all of them. There are some options how we could
improve that: (a) maintain some secondary structure with transactions
sorted by amount of changes, (b) not looking for the entirely largest
transaction, but e.g. for transaction using at least some fraction of
the memory limit, and (c) evicting multiple transactions at once, e.g.
to free a given portion of the memory limit (e.g. 50%)."

AIUI, spilling to disk doesn't affect absorbing future updates; we
would just keep accumulating them in memory, right? We won't need to
unspill until it comes time to commit.

Is there any actual advantage to picking the largest transaction? It
means fewer spills and fewer unspills at commit time, but that's just a
bigger spike of I/O and more of a chance of spilling more than
necessary to get by. In the end it'll be more or less the same amount
of data read back, just all in one big spike when spilling and one big
spike when committing. If you spilled smaller transactions you would
have a small amount of I/O more frequently and have to read back small
amounts for many commits. But it would add up to the same amount of
I/O (or less, if you avoid spilling more than necessary).

The real aim should be to try to pick the transaction that will be
committed furthest in the future. That gives you the most memory to
use for live transactions for the longest time and could let you
process the maximum amount of transactions without spilling them. So
either the oldest transaction (in the expectation that it's been open
a while and appears to be a long-lived batch job that will stay open
for a long time) or the youngest transaction (in the expectation that
all transactions are more or less equally long-lived) might make
sense.

--
greg

#20Peter Eisentraut
peter_e@gmx.net
In reply to: Greg Stark (#19)
Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

On 1/11/18 18:23, Greg Stark wrote:

AIUI spilling to disk doesn't affect absorbing future updates, we
would just keep accumulating them in memory right? We won't need to
unspill until it comes time to commit.

Once a transaction has been serialized, future updates keep accumulating
in memory, until perhaps it gets serialized again. But then at commit
time, if a transaction has been partially serialized at all, all the
remaining changes are also serialized before the whole thing is read
back in (see reorderbuffer.c line 855).

So one optimization would be to specially keep track of all transactions
that have been serialized already and pick those first for further
serialization, because it will be done eventually anyway.

But this is only a secondary optimization, because it doesn't help in
the extreme cases that either no (or few) transactions have been
serialized or all (or most) transactions have been serialized.
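That secondary optimization might look something like this sketch (the
`serialized` flag and the function name are hypothetical, purely for
illustration): prefer transactions that have already been spilled once,
since their remaining changes will be serialized at commit anyway.

```python
def pick_spill_candidate(txns):
    """txns: list of (xid, bytes, serialized) tuples. Prefer already-
    serialized transactions; among the candidates, take the largest."""
    already = [t for t in txns if t[2]]
    pool = already if already else txns   # fall back to the full set
    return max(pool, key=lambda t: t[1])[0]
```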

The real aim should be to try to pick the transaction that will be
committed furthest in the future. That gives you the most memory to
use for live transactions for the longest time and could let you
process the maximum amount of transactions without spilling them. So
either the oldest transaction (in the expectation that it's been open
a while and appears to be a long-lived batch job that will stay open
for a long time) or the youngest transaction (in the expectation that
all transactions are more or less equally long-lived) might make
sense.

Yes, that makes sense. We'd still need to keep a separate ordered list
of transactions somewhere, but that might be easier if we just order
them in the order we see them.
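A sketch of that "order we see them" idea (illustrative names only, assuming
an evict-the-oldest policy): keeping transactions in first-seen order makes
eviction O(1) instead of a scan for the largest one.

```python
from collections import OrderedDict

class EvictOldest:
    """Toy model: transactions kept in first-seen order; the oldest one
    (likely a long-lived batch job, per the reasoning above) is evicted
    first, freeing memory for the longest time."""

    def __init__(self):
        self.txns = OrderedDict()   # xid -> bytes, in first-seen order

    def record(self, xid, size):
        # Updating an existing key does not change its position.
        self.txns[xid] = self.txns.get(xid, 0) + size

    def evict(self):
        xid, size = next(iter(self.txns.items()))
        del self.txns[xid]
        return xid, size
```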

--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

#238Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#237)
#239Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#238)
#240Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Amit Kapila (#239)
#241Amit Kapila
amit.kapila16@gmail.com
In reply to: Tomas Vondra (#240)
#242Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Amit Kapila (#241)
#243Amit Kapila
amit.kapila16@gmail.com
In reply to: Tomas Vondra (#242)
#244Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Amit Kapila (#243)
#245Amit Kapila
amit.kapila16@gmail.com
In reply to: Tomas Vondra (#244)
#246Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Amit Kapila (#245)
#247Dilip Kumar
dilipbalaut@gmail.com
In reply to: Tomas Vondra (#246)
#248Kuntal Ghosh
kuntalghosh.2007@gmail.com
In reply to: Dilip Kumar (#247)
#249Dilip Kumar
dilipbalaut@gmail.com
In reply to: Kuntal Ghosh (#248)
#250Kuntal Ghosh
kuntalghosh.2007@gmail.com
In reply to: Dilip Kumar (#249)
#251Dilip Kumar
dilipbalaut@gmail.com
In reply to: Kuntal Ghosh (#250)
#252Kuntal Ghosh
kuntalghosh.2007@gmail.com
In reply to: Dilip Kumar (#251)
#253Amit Kapila
amit.kapila16@gmail.com
In reply to: Tomas Vondra (#246)
#254Amit Kapila
amit.kapila16@gmail.com
In reply to: Kuntal Ghosh (#250)
#255Dilip Kumar
dilipbalaut@gmail.com
In reply to: Kuntal Ghosh (#252)
#256Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#254)
#257Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#256)
#258Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#257)
#259Erik Rijkers
er@xs4all.nl
In reply to: Dilip Kumar (#255)
#260Dilip Kumar
dilipbalaut@gmail.com
In reply to: Erik Rijkers (#259)
#261Dilip Kumar
dilipbalaut@gmail.com
In reply to: Erik Rijkers (#259)
#262Erik Rijkers
er@xs4all.nl
In reply to: Dilip Kumar (#261)
#263Kuntal Ghosh
kuntalghosh.2007@gmail.com
In reply to: Dilip Kumar (#255)
#264Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Dilip Kumar (#249)
#265Erik Rijkers
er@xs4all.nl
In reply to: Erik Rijkers (#262)
#266Erik Rijkers
er@xs4all.nl
In reply to: Erik Rijkers (#265)
#267Erik Rijkers
er@xs4all.nl
In reply to: Erik Rijkers (#266)
#268Dilip Kumar
dilipbalaut@gmail.com
In reply to: Erik Rijkers (#267)
#269Dilip Kumar
dilipbalaut@gmail.com
In reply to: Erik Rijkers (#267)
#270Dilip Kumar
dilipbalaut@gmail.com
In reply to: Dilip Kumar (#269)
#271Erik Rijkers
er@xs4all.nl
In reply to: Dilip Kumar (#270)
#272Dilip Kumar
dilipbalaut@gmail.com
In reply to: Erik Rijkers (#271)
#273Erik Rijkers
er@xs4all.nl
In reply to: Dilip Kumar (#272)
#274Dilip Kumar
dilipbalaut@gmail.com
In reply to: Erik Rijkers (#273)
#275Dilip Kumar
dilipbalaut@gmail.com
In reply to: Kuntal Ghosh (#263)
#276Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#275)
#277Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#276)
#278Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#275)
#279Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#278)
#280Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#279)
#281Mahendra Singh Thalor
mahi6run@gmail.com
In reply to: Dilip Kumar (#274)
#282Mahendra Singh Thalor
mahi6run@gmail.com
In reply to: Mahendra Singh Thalor (#281)
#283Dilip Kumar
dilipbalaut@gmail.com
In reply to: Mahendra Singh Thalor (#282)
#284Dilip Kumar
dilipbalaut@gmail.com
In reply to: Dilip Kumar (#279)
#285Dilip Kumar
dilipbalaut@gmail.com
In reply to: Dilip Kumar (#284)
#286Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#285)
#287Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#286)
#288Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#287)
#289Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#288)
#290Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#289)
#291Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#290)
#292Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#291)
#293Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#288)
#294Dilip Kumar
dilipbalaut@gmail.com
In reply to: Dilip Kumar (#293)
#295Dilip Kumar
dilipbalaut@gmail.com
In reply to: Dilip Kumar (#294)
#296Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#295)
#297Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#296)
#298Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#297)
#299Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#298)
#300Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#299)
#301Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#296)
#302Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#298)
#303Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#301)
#304Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#303)
#305Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#304)
#306Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#305)
#307Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#303)
#308Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#307)
#309Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#308)
#310Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#309)
#311Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#310)
#312Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#311)
#313Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#301)
#314Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#302)
#315Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#314)
#316Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#315)
#317Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#309)
#318Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#308)
#319Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#312)
#320Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#313)
#321Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#314)
#322Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#315)
#323Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#316)
#324Erik Rijkers
er@xs4all.nl
In reply to: Dilip Kumar (#322)
#325Dilip Kumar
dilipbalaut@gmail.com
In reply to: Erik Rijkers (#324)
#326Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#317)
#327Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#318)
#328Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#326)
#329Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#322)
#330Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#328)
#331Mahendra Singh Thalor
mahi6run@gmail.com
In reply to: Amit Kapila (#330)
#332Dilip Kumar
dilipbalaut@gmail.com
In reply to: Dilip Kumar (#325)
#333Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#327)
#334Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#329)
#335Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#333)
#336Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#334)
#337Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#336)
#338Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#335)
#339Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#332)
#340Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#332)
#341Amit Kapila
amit.kapila16@gmail.com
In reply to: Mahendra Singh Thalor (#331)
#342Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#340)
#343Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#335)
#344Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#330)
#345Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#329)
#346Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#339)
#347Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#346)
#348Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#347)
#349Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#348)
#350Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#342)
#351Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#349)
#352Mahendra Singh Thalor
mahi6run@gmail.com
In reply to: Amit Kapila (#341)
#353Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#342)
#354Dilip Kumar
dilipbalaut@gmail.com
In reply to: Dilip Kumar (#351)
#355Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#353)
#356Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#355)
#357Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#356)
#358Amit Kapila
amit.kapila16@gmail.com
In reply to: Mahendra Singh Thalor (#352)
#359Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#358)
#360Dilip Kumar
dilipbalaut@gmail.com
In reply to: Dilip Kumar (#354)
#361Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#360)
#362Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#361)
#363Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#362)
#364Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#363)
#365Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#364)
#366Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#365)
#367Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#366)
#368Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#367)
#369Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#357)
#370Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#369)
#371Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#370)
#372Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#368)
#373Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#356)
#374Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#357)
#375Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#367)
#376Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#368)
#377Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#373)
#378Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#377)
#379Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#377)
#380Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#379)
#381Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#380)
#382Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#381)
#383Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#382)
#384Dilip Kumar
dilipbalaut@gmail.com
In reply to: Dilip Kumar (#383)
#385Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#384)
#386Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#376)
#387Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#385)
#388Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#387)
#389Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#385)
#390Dilip Kumar
dilipbalaut@gmail.com
In reply to: Dilip Kumar (#389)
#391Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#390)
#392Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#389)
#393Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#385)
#394Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#392)
#395Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#378)
#396Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#395)
#397Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#396)
#398Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#397)
#399Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#398)
#400Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#398)
#401Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#399)
#402Dilip Kumar
dilipbalaut@gmail.com
In reply to: Dilip Kumar (#397)
#403Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#401)
#404Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#403)
#405Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#404)
#406Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#405)
#407Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#402)
#408Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#407)
#409Ajin Cherian
itsajin@gmail.com
In reply to: Amit Kapila (#408)
#410Amit Kapila
amit.kapila16@gmail.com
In reply to: Ajin Cherian (#409)
#411Ajin Cherian
itsajin@gmail.com
In reply to: Amit Kapila (#410)
#412Amit Kapila
amit.kapila16@gmail.com
In reply to: Ajin Cherian (#411)
#413Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#412)
#414Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#413)
#415Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#408)
#416Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#415)
#417Ajin Cherian
itsajin@gmail.com
In reply to: Amit Kapila (#414)
#418Dilip Kumar
dilipbalaut@gmail.com
In reply to: Ajin Cherian (#417)
#419Ajin Cherian
itsajin@gmail.com
In reply to: Dilip Kumar (#418)
#420Dilip Kumar
dilipbalaut@gmail.com
In reply to: Ajin Cherian (#419)
#421Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#399)
#422Dilip Kumar
dilipbalaut@gmail.com
In reply to: Dilip Kumar (#404)
#423Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#421)
#424Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#423)
#425Dilip Kumar
dilipbalaut@gmail.com
In reply to: Dilip Kumar (#422)
#426Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#424)
#427Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#425)
#428Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#426)
#429Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#428)
#430Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#429)
#431Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#430)
#432Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#431)
#433Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#432)
#434Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#433)
#435Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#431)
#436Dilip Kumar
dilipbalaut@gmail.com
In reply to: Dilip Kumar (#425)
#437Ajin Cherian
itsajin@gmail.com
In reply to: Dilip Kumar (#436)
#438Amit Kapila
amit.kapila16@gmail.com
In reply to: Ajin Cherian (#437)
#439Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#434)
#440Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#439)
#441Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#440)
#442Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#441)
#443Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#442)
#444Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#443)
#445Dilip Kumar
dilipbalaut@gmail.com
In reply to: Dilip Kumar (#444)
#446Ajin Cherian
itsajin@gmail.com
In reply to: Dilip Kumar (#445)
#447Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#445)
#448Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#447)
#449Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#448)
#450Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#449)
#451Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#450)
#452Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#451)
#453Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#452)
#454Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#453)
#455Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#454)
#456Dilip Kumar
dilipbalaut@gmail.com
In reply to: Dilip Kumar (#455)
#457Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#455)
#458Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#457)
#459Ajin Cherian
itsajin@gmail.com
In reply to: Dilip Kumar (#458)
#460Amit Kapila
amit.kapila16@gmail.com
In reply to: Ajin Cherian (#459)
#461Ajin Cherian
itsajin@gmail.com
In reply to: Amit Kapila (#460)
#462Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#458)
#463Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#462)
#464Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#463)
#465Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#464)
#466Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#465)
#467Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#466)
#468Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#467)
#469Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#468)
#470Thomas Munro
thomas.munro@gmail.com
In reply to: Amit Kapila (#468)
#471Amit Kapila
amit.kapila16@gmail.com
In reply to: Thomas Munro (#470)
#472Thomas Munro
thomas.munro@gmail.com
In reply to: Amit Kapila (#471)
#473Amit Kapila
amit.kapila16@gmail.com
In reply to: Thomas Munro (#472)
#474Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#468)
#475Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#469)
#476Dilip Kumar
dilipbalaut@gmail.com
In reply to: Dilip Kumar (#475)
#477Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#476)
#478Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#477)
#479Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#477)
#480Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#478)
#481Dilip Kumar
dilipbalaut@gmail.com
In reply to: Dilip Kumar (#480)
#482Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#481)
#483Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#482)
#484Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#483)
#485Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#484)
#486Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#484)
#487Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#484)
#488Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#486)
#489Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#487)
#490Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#489)
#491Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#490)
#492Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#491)
#493Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#492)
#494Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#493)
#495Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#494)
#496Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#495)
#497Jeff Janes
jeff.janes@gmail.com
In reply to: Amit Kapila (#495)
#498Amit Kapila
amit.kapila16@gmail.com
In reply to: Jeff Janes (#497)
#499Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#498)
#500Neha Sharma
neha.sharma@enterprisedb.com
In reply to: Amit Kapila (#498)
#501Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#499)
#502Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#501)
#503Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#502)
#504Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#503)
#505Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#504)
#506Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#504)
#507Neha Sharma
neha.sharma@enterprisedb.com
In reply to: Amit Kapila (#504)
#508Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#506)
#509Amit Kapila
amit.kapila16@gmail.com
In reply to: Neha Sharma (#507)
#510Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#508)
#511Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#510)
#512Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#511)
#513Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#510)
#514Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#512)
#515Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#514)
#516Nathan Bossart
nathandbossart@gmail.com
In reply to: Dilip Kumar (#515)
#517Amit Kapila
amit.kapila16@gmail.com
In reply to: Nathan Bossart (#516)
#518Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#510)
#519Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#518)
#520Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#519)
#521Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#520)
#522Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#521)
#523Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Amit Kapila (#522)
#524Dilip Kumar
dilipbalaut@gmail.com
In reply to: Tomas Vondra (#523)
#525Amit Kapila
amit.kapila16@gmail.com
In reply to: Tomas Vondra (#523)
#526Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#525)
#527Tom Lane
tgl@sss.pgh.pa.us
In reply to: Amit Kapila (#526)
#528Tom Lane
tgl@sss.pgh.pa.us
In reply to: Tom Lane (#527)
#529Amit Kapila
amit.kapila16@gmail.com
In reply to: Tom Lane (#527)
#530Tom Lane
tgl@sss.pgh.pa.us
In reply to: Tom Lane (#528)
#531Amit Kapila
amit.kapila16@gmail.com
In reply to: Tom Lane (#527)
#532Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#531)
#533Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#532)
#534Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#533)
#535Tom Lane
tgl@sss.pgh.pa.us
In reply to: Amit Kapila (#533)
#536Amit Kapila
amit.kapila16@gmail.com
In reply to: Tom Lane (#535)
#537Tom Lane
tgl@sss.pgh.pa.us
In reply to: Amit Kapila (#536)
#538Amit Kapila
amit.kapila16@gmail.com
In reply to: Tom Lane (#537)
#539Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#538)
#540Noah Misch
noah@leadboat.com
In reply to: Amit Kapila (#520)
#541Amit Kapila
amit.kapila16@gmail.com
In reply to: Noah Misch (#540)
#542Amit Kapila
amit.kapila16@gmail.com
In reply to: Noah Misch (#540)
#543Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#542)
#544Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#543)
#545Dilip Kumar
dilipbalaut@gmail.com
In reply to: Dilip Kumar (#543)
#546Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#545)
#547Noah Misch
noah@leadboat.com
In reply to: Amit Kapila (#546)
#548Amit Kapila
amit.kapila16@gmail.com
In reply to: Noah Misch (#547)
#549Tom Lane
tgl@sss.pgh.pa.us
In reply to: Amit Kapila (#548)
#550Noah Misch
noah@leadboat.com
In reply to: Tom Lane (#549)
#551Peter Eisentraut
peter_e@gmx.net
In reply to: Tom Lane (#549)
#552Amit Kapila
amit.kapila16@gmail.com
In reply to: Noah Misch (#550)
#553Tom Lane
tgl@sss.pgh.pa.us
In reply to: Noah Misch (#550)
#554Tom Lane
tgl@sss.pgh.pa.us
In reply to: Tom Lane (#553)
#555Tom Lane
tgl@sss.pgh.pa.us
In reply to: Tom Lane (#554)
#556Noah Misch
noah@leadboat.com
In reply to: Tom Lane (#555)
#557Tom Lane
tgl@sss.pgh.pa.us
In reply to: Noah Misch (#556)
#558Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#548)
#559Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#558)
#560Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#559)
#561Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#560)
#562Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#561)
#563Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#562)