VM corruption on standby

Started by Andrey Borodin · 7 months ago · 79 messages
#1Andrey Borodin
amborodin@acm.org

Hi hackers!

I was reviewing the patch about removing xl_heap_visible and found the VM/WAL machinery very interesting.
At Yandex we had several incidents with corrupted VMs, and at pgconf.dev colleagues from AWS confirmed that they saw something similar too.
So I toyed around and accidentally wrote a test that reproduces $subj.

I think the corruption happens as follows:
0. we create a table with one frozen tuple
1. the next heap_insert() clears the VM bit and hangs immediately; nothing was logged yet
2. the VM buffer is flushed to disk by the checkpointer or bgwriter
3. the primary is killed with -9
now we have a page that is ALL_VISIBLE/ALL_FROZEN on the standby, but cleared VM bits on the primary
4. a subsequent insert does not set XLH_LOCK_ALL_FROZEN_CLEARED in its WAL record
5. pg_visibility detects corruption
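The sequence above can be sketched as a toy model (hypothetical Python, not PostgreSQL code; the VM bits become booleans and the WAL a list of records shipped to the standby):

```python
def reproduce():
    # 0. one frozen tuple: both VM bits set on the primary and the standby
    primary_vm = {"all_visible": True, "all_frozen": True}
    standby_vm = dict(primary_vm)
    wal = []                       # records that reach the standby

    # 1. heap_insert() clears the VM bits in shared buffers and hangs
    #    before logging anything (the injection point)
    primary_vm["all_visible"] = primary_vm["all_frozen"] = False

    # 2. the checkpointer flushes the VM buffer to disk; 3. kill -9.
    #    Nothing was WAL-logged, so the standby never replays the clearing.

    # 4. a later operation consults the (already cleared) VM on the
    #    primary, so it does not emit XLH_LOCK_ALL_FROZEN_CLEARED
    if primary_vm["all_frozen"]:
        wal.append("XLH_LOCK_ALL_FROZEN_CLEARED")

    # 5. the standby still believes the page is all-visible/all-frozen
    return primary_vm, standby_vm, wal
```

Replaying `wal` on the standby leaves its bits set, which is exactly the divergence that pg_check_frozen()/pg_check_visible() then report.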

Interestingly, in an off-list conversation Melanie explained to me how ALL_VISIBLE is protected from this: WAL-logging depends on the PD_ALL_VISIBLE heap page bit, not the state of the VM. But for ALL_FROZEN this is not the case:

/* Clear only the all-frozen bit on visibility map if needed */
if (PageIsAllVisible(page) &&
    visibilitymap_clear(relation, block, vmbuffer,
                        VISIBILITYMAP_ALL_FROZEN))
    cleared_all_frozen = true; /* this won't happen due to the VM buffer flushed before the crash */

Anyway, the test reproduces corruption of both bits, and also reproduces selecting deleted data on the standby.

The test is not intended to be committed when we fix the problem, so some waits are simulated with sleep(1) and the test is placed in modules/test_slru, where it was easier to write. But if we ever want something like this, I can design a less hacky and probably more generic version.

Thanks!

Best regards, Andrey Borodin.

Attachments:

v1-0001-Corrupt-VM-on-standby.patch (application/octet-stream, +128 -91)
#2Aleksander Alekseev
aleksander@timescale.com
In reply to: Andrey Borodin (#1)
Re: VM corruption on standby

Hi Andrey,

> I was reviewing the patch about removing xl_heap_visible and found the VM/WAL machinery very interesting.
> At Yandex we had several incidents with corrupted VMs, and at pgconf.dev colleagues from AWS confirmed that they saw something similar too.
> So I toyed around and accidentally wrote a test that reproduces $subj.
>
> I think the corruption happens as follows:
> 0. we create a table with one frozen tuple
> 1. the next heap_insert() clears the VM bit and hangs immediately; nothing was logged yet
> 2. the VM buffer is flushed to disk by the checkpointer or bgwriter
> 3. the primary is killed with -9
> now we have a page that is ALL_VISIBLE/ALL_FROZEN on the standby, but cleared VM bits on the primary
> 4. a subsequent insert does not set XLH_LOCK_ALL_FROZEN_CLEARED in its WAL record
> 5. pg_visibility detects corruption
>
> Interestingly, in an off-list conversation Melanie explained to me how ALL_VISIBLE is protected from this: WAL-logging depends on the PD_ALL_VISIBLE heap page bit, not the state of the VM. But for ALL_FROZEN this is not the case:
>
> /* Clear only the all-frozen bit on visibility map if needed */
> if (PageIsAllVisible(page) &&
>     visibilitymap_clear(relation, block, vmbuffer,
>                         VISIBILITYMAP_ALL_FROZEN))
>     cleared_all_frozen = true; /* this won't happen due to the VM buffer flushed before the crash */
>
> Anyway, the test reproduces corruption of both bits, and also reproduces selecting deleted data on the standby.

Great find. I executed your test on a pretty ordinary Linux x64
machine, and indeed it failed:

```
not ok 1 - pg_check_frozen() observes corruption
not ok 2 - pg_check_visible() observes corruption
not ok 3 - deleted data returned by select
1..3
# test failed
----------------------------------- stderr -----------------------------------
# Failed test 'pg_check_frozen() observes corruption'
# at /home/eax/projects/c/postgresql/src/test/modules/test_slru/t/001_multixact.pl
line 110.
# got: '(0,2)
# (0,3)
# (0,4)'
# expected: ''
# Failed test 'pg_check_visible() observes corruption'
# at /home/eax/projects/c/postgresql/src/test/modules/test_slru/t/001_multixact.pl
line 111.
# got: '(0,2)
# (0,4)'
# expected: ''
# Failed test 'deleted data returned by select'
# at /home/eax/projects/c/postgresql/src/test/modules/test_slru/t/001_multixact.pl
line 112.
# got: '2'
# expected: ''
# Looks like you failed 3 tests of 3.
```

This is a tricky bug. Do you also have a proposal of a particular fix?

> The test is not intended to be committed when we fix the problem, so some waits are simulated with sleep(1) and the test is placed in modules/test_slru, where it was easier to write. But if we ever want something like this, I can design a less hacky and probably more generic version.

IMO - yes, we do need this regression test.

#3Aleksander Alekseev
aleksander@timescale.com
In reply to: Aleksander Alekseev (#2)
Re: VM corruption on standby

Hi,

> This is a tricky bug. Do you also have a proposal of a particular fix?

If my understanding is correct, we should make a WAL record with the
XLH_LOCK_ALL_FROZEN_CLEARED flag *before* we modify the VM but within
the same critical section (in order to avoid race conditions within
the same backend).
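As a sketch of this ordering (a toy Python model with a one-record WAL; the function names are hypothetical, not the actual heapam code): the record carrying XLH_LOCK_ALL_FROZEN_CLEARED is emitted before the VM is touched, so a crash between the two steps can only leave the standby having replayed the clearing, never missing it.

```python
def do_clear(wal, primary_vm, crash_after_step=None):
    # Step 1: WAL first; if we crash right after this, replay on the
    # standby still clears the bit, which is always safe.
    wal.append("clear_all_frozen")
    if crash_after_step == 1:
        return
    # Step 2: only then modify the VM page in shared buffers.
    primary_vm["all_frozen"] = False

def standby_replay(wal, standby_vm):
    # the standby applies whatever records made it into the WAL
    for rec in wal:
        if rec == "clear_all_frozen":
            standby_vm["all_frozen"] = False
```

With the old ordering a flushed VM page could exist with no record at all; with this ordering the worst case is a standby that cleared a bit the primary never persisted clearing, which recovery tolerates.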

A draft patch is attached. It makes the test pass and doesn't seem to
break any other tests.

Thoughts?

Attachments:

v2-0001-Corrupt-VM-on-standby.patch (text/x-patch, +128 -91)
v2-0002-Bugfix-TODO-FIXME-write-a-better-message.patch (text/x-patch, +15 -5)
#4Aleksander Alekseev
aleksander@timescale.com
In reply to: Aleksander Alekseev (#3)
Re: VM corruption on standby

>> This is a tricky bug. Do you also have a proposal of a particular fix?
>
> If my understanding is correct, we should make a WAL record with the
> XLH_LOCK_ALL_FROZEN_CLEARED flag *before* we modify the VM but within
> the same critical section (in order to avoid race conditions within
> the same backend).

I meant instance, not backend. Sorry for the confusion.


> A draft patch is attached. It makes the test pass and doesn't seem to
> break any other tests.

#5Aleksander Alekseev
aleksander@timescale.com
In reply to: Aleksander Alekseev (#4)
Re: VM corruption on standby

Hi again,

> I meant instance, not backend. Sorry for confusion.

It looks like I completely misunderstood what START_CRIT_SECTION() /
END_CRIT_SECTION() are for here. Simply ignore this part :) Apologies
for the noise.

#6Aleksander Alekseev
aleksander@timescale.com
In reply to: Aleksander Alekseev (#3)
Re: VM corruption on standby

Hi,

> If my understanding is correct, we should make a WAL record with the
> XLH_LOCK_ALL_FROZEN_CLEARED flag *before* we modify the VM but within
> the same critical section [...]
>
> A draft patch is attached. It makes the test pass and doesn't seem to
> break any other tests.
>
> Thoughts?

In order not to forget - assuming I'm not wrong about the cause of the
issue, we might want to recheck the order of visibilitymap_* and XLog*
calls in the following functions too:

- heap_multi_insert
- heap_delete
- heap_update
- heap_lock_tuple
- heap_lock_updated_tuple_rec

At a quick look, all the named functions modify the VM before making a
corresponding WAL record. This can cause a similar issue:

1. VM modified
2. evicted asynchronously before logging
3. kill -9
4. different state of the VM on the primary and the standby

#7Andrey Borodin
amborodin@acm.org
In reply to: Aleksander Alekseev (#3)
Re: VM corruption on standby

On 7 Aug 2025, at 17:09, Aleksander Alekseev <aleksander@tigerdata.com> wrote:

> If my understanding is correct, we should make a WAL record with the
> XLH_LOCK_ALL_FROZEN_CLEARED flag *before* we modify the VM but within
> the same critical section (in order to avoid race conditions within
> the same backend).

Well, the test passes because you moved the injection point to a very safe position. I can't comment on other aspects of moving visibilitymap_clear() around.
The approach seems viable to me, but I'd like to understand why PD_ALL_VISIBLE in the heap page header did not save the day before fixing anything.

Best regards, Andrey Borodin.

#8Andrey Borodin
amborodin@acm.org
In reply to: Andrey Borodin (#7)
Re: VM corruption on standby

On 7 Aug 2025, at 18:54, Andrey Borodin <x4mmm@yandex-team.ru> wrote:

> moved the injection point to a very safe position.

BTW, your fix also fixes the ALL_FROZEN stuff, simply because the WAL for the heap insert is already emitted by the time of the -9.

I want to emphasize that it seems to me that the position of the injection point is not a hint, but rather coincidental.

I concur that all other users of visibilitymap_clear() will likely need to be fixed. But only when we have a good picture of what exactly is broken.

Best regards, Andrey Borodin.

#9Aleksander Alekseev
aleksander@timescale.com
In reply to: Andrey Borodin (#8)
Re: VM corruption on standby

Hi Andrey,

> the test passes because you moved the injection point to a very safe position
> [...]
> I want to emphasize that it seems to me that the position of the injection point is not a hint, but rather coincidental.

Well, I wouldn't say that the test passes merely because the location
of the injection point was moved.

For sure it was moved, because the visibilitymap_clear() call was
moved. Maybe I misunderstood the intent of the test. Wasn't it to call
the injection point right after updating the VM? I tried to place it
between updating the WAL and updating the VM, and the effect was the
same - the test still passes.

In any case we can place it anywhere we want if we agree to include
the test in the final version of the patch.

> I concur that all other users of visibilitymap_clear() likely will need to be fixed.

Right, I realized there are a few places besides heapam.c that might
need a change.

> The approach seems viable to me, but I'd like to have understanding why PD_ALL_VISIBLE in a heap page header did not save the day before fixing anything
> ... But only when we have a good picture what exactly is broken.

Agreed. I would especially like to hear the opinion of somebody who's
been hacking Postgres longer than I have. Perhaps there was a good
reason to update the VM *before* creating WAL records that I'm unaware of.

#10Andrey Borodin
amborodin@acm.org
In reply to: Aleksander Alekseev (#9)
Re: VM corruption on standby

On 7 Aug 2025, at 19:36, Aleksander Alekseev <aleksander@tigerdata.com> wrote:

> Maybe I misunderstood the intent of the test.

You understood my intent in writing the test precisely. But it fails not due to the bug I anticipated!

So far I noticed that if I move the injection point before

PageClearAllVisible(BufferGetPage(buffer));

or after writing the WAL, the test passes.

Also, I found that at the moment of kill -9 the checkpointer flushes the heap page to disk despite the content lock. I haven't found who released the content lock though.

Best regards, Andrey Borodin.

#11Andrey Borodin
amborodin@acm.org
In reply to: Andrey Borodin (#10)
Re: VM corruption on standby

On 9 Aug 2025, at 18:28, Andrey Borodin <x4mmm@yandex-team.ru> wrote:

> Also I investigated that in a moment of kill -9 checkpointer flushes heap page to disk despite content lock. I haven't found who released content lock though.

I've written this message and understood: it's LWLockReleaseAll().

0. the checkpointer is going to flush a heap buffer but waits on the content lock
1. the client resets PD_ALL_VISIBLE on the page
2. the postmaster is killed and commands the client to go down
3. the client calls LWLockReleaseAll() in ProcKill() (?)
4. the checkpointer flushes the buffer with the reset PD_ALL_VISIBLE, which is not WAL-logged to the standby
5. subsequent deletes do not log resetting this bit
6. deleted data is observable on the standby with an IndexOnlyScan
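A minimal sketch of this interleaving (hypothetical Python, with the lock table reduced to a dict; not the real bufmgr/LWLock code):

```python
def crash_sequence():
    locks = {"content_lock": "client"}     # 0. checkpointer waits on this
    page = {"pd_all_visible": True, "on_disk": None}
    wal_on_disk = []                       # nothing ever gets logged

    page["pd_all_visible"] = False         # 1. client clears the bit
    # 2./3. postmaster dies; the exiting client runs LWLockReleaseAll()
    locks.pop("content_lock")
    # 4. the checkpointer can now take the lock and flush the dirty page,
    #    although the change was never WAL-logged
    if "content_lock" not in locks:
        page["on_disk"] = page["pd_all_visible"]
    return page, wal_on_disk
```

The flushed page has PD_ALL_VISIBLE cleared while the standby saw no record of it, so later deletes skip logging the bit (step 5) and an IndexOnlyScan on the standby can return the deleted rows (step 6).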

Any idea how to fix this?

Best regards, Andrey Borodin.

#12Kirill Reshke
reshkekirill@gmail.com
In reply to: Aleksander Alekseev (#9)
Re: VM corruption on standby

On Thu, 7 Aug 2025 at 21:36, Aleksander Alekseev
<aleksander@tigerdata.com> wrote:

> Perhaps there was a good
> reason to update the VM *before* creating WAL records I'm unaware of.

Looks like 503c730 intentionally does it this way; however, I have not
yet fully understood the reasoning behind it.

--
Best regards,
Kirill Reshke

#13Aleksander Alekseev
aleksander@timescale.com
In reply to: Andrey Borodin (#11)
Re: VM corruption on standby

Hi Andrey,

> 0. the checkpointer is going to flush a heap buffer but waits on the content lock
> 1. the client resets PD_ALL_VISIBLE on the page
> 2. the postmaster is killed and commands the client to go down
> 3. the client calls LWLockReleaseAll() in ProcKill() (?)
> 4. the checkpointer flushes the buffer with the reset PD_ALL_VISIBLE, which is not WAL-logged to the standby
> 5. subsequent deletes do not log resetting this bit
> 6. deleted data is observable on the standby with an IndexOnlyScan

Thanks for investigating this in more detail. If this is indeed what
happens, it is a violation of the "log before changing" approach. This
is why we have PageHeaderData.pd_lsn, for instance - to make sure
pages are evicted only *after* the record that changed them is written
to disk (because WAL records can't be applied to pages from the
future).
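For reference, the rule that pd_lsn enforces at eviction time can be sketched like this (a simplified Python model of the WAL-before-data check, not the actual FlushBuffer code):

```python
def flush_buffer(page, wal):
    # WAL must be durable at least up to the LSN stamped on the page
    # before the page itself may be written out.
    if wal["flushed_upto"] < page["lsn"]:
        wal["flushed_upto"] = page["lsn"]   # i.e. XLogFlush(page LSN)
    assert wal["flushed_upto"] >= page["lsn"]
    page["on_disk"] = True                  # only now is the write safe
```

The VM-bit case slips past this check precisely because, as noted later in the thread, the heap page's LSN is not bumped when VM bits are set.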

I guess the intent here could be an optimization of some sort, but the
facts that 1. the instance can be killed at any time and 2. there
might be replicas were not considered.

> Any idea how to fix this?

IMHO: log the changes first, then allow the page to be evicted.

#14Kirill Reshke
reshkekirill@gmail.com
In reply to: Aleksander Alekseev (#13)
Re: VM corruption on standby

On Sun, 10 Aug 2025 at 01:55, Aleksander Alekseev
<aleksander@tigerdata.com> wrote:

> For this reason we have PageHeaderData.pd_lsn for instance - to make sure
> pages are evicted only *after* the record that changed it is written
> to disk (because WAL records can't be applied to pages from the
> future).

We don't bump the LSN of the heap page when setting the visibility
map bit.

> I guess the intent here could be to do an optimization of some sort
> but the facts that 1. the instance can be killed at any time and 2.
> there might be replicas - were not considered.
>
> IMHO: logging the changes first, then allowing to evict the page.

Clearing the VM before logging the changes was intentional [0].
So I assume we should not change the approach, but rather just tweak
things a bit to make the whole thing work.

[0]: /messages/by-id/BANLkTimuLk4RHXSQHEEiYGbxiXp2mh5KCA@mail.gmail.com

--
Best regards,
Kirill Reshke

#15Andrey Borodin
amborodin@acm.org
In reply to: Aleksander Alekseev (#13)
Re: VM corruption on standby

On 9 Aug 2025, at 23:54, Aleksander Alekseev <aleksander@tigerdata.com> wrote:

> IMHO: logging the changes first, then allowing to evict the page.

The VM and BufferManager code does not allow a buffer to be flushed until the changes are logged.
The problem is that our crash-exit path destroys the locks that protect the buffer from being flushed.

Best regards, Andrey Borodin.

#16Kirill Reshke
reshkekirill@gmail.com
In reply to: Andrey Borodin (#1)
Re: VM corruption on standby

On Wed, 6 Aug 2025 at 20:00, Andrey Borodin <x4mmm@yandex-team.ru> wrote:

> Hi hackers!
>
> I was reviewing the patch about removing xl_heap_visible and found the VM/WAL machinery very interesting.
> At Yandex we had several incidents with corrupted VMs, and at pgconf.dev colleagues from AWS confirmed that they saw something similar too.
> So I toyed around and accidentally wrote a test that reproduces $subj.
>
> I think the corruption happens as follows:
> 0. we create a table with one frozen tuple
> 1. the next heap_insert() clears the VM bit and hangs immediately; nothing was logged yet
> 2. the VM buffer is flushed to disk by the checkpointer or bgwriter
> 3. the primary is killed with -9
> now we have a page that is ALL_VISIBLE/ALL_FROZEN on the standby, but cleared VM bits on the primary
> 4. a subsequent insert does not set XLH_LOCK_ALL_FROZEN_CLEARED in its WAL record
> 5. pg_visibility detects corruption
>
> Interestingly, in an off-list conversation Melanie explained to me how ALL_VISIBLE is protected from this: WAL-logging depends on the PD_ALL_VISIBLE heap page bit, not the state of the VM. But for ALL_FROZEN this is not the case:
>
> /* Clear only the all-frozen bit on visibility map if needed */
> if (PageIsAllVisible(page) &&
>     visibilitymap_clear(relation, block, vmbuffer,
>                         VISIBILITYMAP_ALL_FROZEN))
>     cleared_all_frozen = true; /* this won't happen due to the VM buffer flushed before the crash */
>
> Anyway, the test reproduces corruption of both bits, and also reproduces selecting deleted data on the standby.
>
> The test is not intended to be committed when we fix the problem, so some waits are simulated with sleep(1) and the test is placed in modules/test_slru, where it was easier to write. But if we ever want something like this, I can design a less hacky and probably more generic version.
>
> Thanks!
>
> Best regards, Andrey Borodin.

The attached reproduces the same but without any standby node. CHECKPOINT
somehow manages to flush the heap page when the instance is kill-9-ed.
As a result, we have an inconsistency between the heap and VM pages:

```
reshke=# select * from pg_visibility('x');
blkno | all_visible | all_frozen | pd_all_visible
-------+-------------+------------+----------------
0 | t | t | f
(1 row)
```

Notice that I moved the INJECTION point one line above visibilitymap_clear.
Without this change, the behaviour also reproduces, but much less
frequently.

--
Best regards,
Kirill Reshke

Attachments:

v2-0001-Corrupt-VM-on-standby.patch (application/octet-stream, +120 -92)
#17Kirill Reshke
reshkekirill@gmail.com
In reply to: Kirill Reshke (#16)
Re: VM corruption on standby

On Tue, 12 Aug 2025 at 10:38, I wrote:

> CHECKPOINT
> somehow manages to flush the heap page when instance kill-9-ed.

This corruption does not reproduce without the CHECKPOINT call; however,
I do not see any suspicious syscall made by the CHECKPOINT process.
It does not write anything to disk here, does it? PFA strace.

--
Best regards,
Kirill Reshke

Attachments:

checkpointer.strace.txt (text/plain)
#18Kirill Reshke
reshkekirill@gmail.com
In reply to: Andrey Borodin (#1)
Re: VM corruption on standby

On Wed, 6 Aug 2025 at 20:00, Andrey Borodin <x4mmm@yandex-team.ru> wrote:

> Hi hackers!
>
> I was reviewing the patch about removing xl_heap_visible and found the VM/WAL machinery very interesting.
> At Yandex we had several incidents with corrupted VMs, and at pgconf.dev colleagues from AWS confirmed that they saw something similar too.

While this aims to find existing VM corruption (I mean, in PG <= 17),
this reproducer does not seem to work on pg17. At least, I did not
manage to reproduce this scenario on pg17.

This makes me think this exact corruption may be pg18-only. Is it
possible that AIO is somehow involved here?

--
Best regards,
Kirill Reshke

#19Kirill Reshke
reshkekirill@gmail.com
In reply to: Kirill Reshke (#18)
Re: VM corruption on standby

On Tue, 12 Aug 2025 at 13:00, I wrote:

> While this aims to find existing VM corruption (I mean, in PG <= 17),
> this reproducer does not seem to work on pg17. At least, I did not
> manage to reproduce this scenario on pg17.
>
> This makes me think this exact corruption may be pg18-only. Is it
> possible that AIO is somehow involved here?

First of all, the "corruption" is reproducible with io_method = sync,
so AIO is not under suspicion.
Then, I did a gdb session many times and ended up with the conclusion
that this test is NOT a valid corruption reproducer.
So, the thing is, when you involve injection point logic, due to how
injection points are implemented, you allow postgres to enter the
WaitLatch function, which has its own logic for postmaster death
handling [0].

So, when we add an injection point here, we allow this sequence of events
to happen:

1) INSERT enters `heap_insert`, modifies the HEAP page, reaches the injection point and hangs.
2) the CHECKPOINT process tries to FLUSH this page and waits
3) kill -9 to postmaster
4) INSERT wakes up on postmaster death, goes to [0] and releases all locks.
5) the CHECKPOINT-er flushes the HEAP page to disk, causing corruption.

The thing is, this execution will NOT happen without injection points.

So, overall, injection points are not suitable for testing this
critical section (I think).

==
Off-list, Andrey sent me this patch:

```
diff --git a/src/backend/storage/ipc/waiteventset.c
b/src/backend/storage/ipc/waiteventset.c
index 7c0e66900f9..e89e1d115cb 100644
--- a/src/backend/storage/ipc/waiteventset.c
+++ b/src/backend/storage/ipc/waiteventset.c
@@ -1044,6 +1044,7 @@ WaitEventSetWait(WaitEventSet *set, long timeout,
 	long		cur_timeout = -1;
 
 	Assert(nevents > 0);
+	Assert(CritSectionCount == 0);
 
 	/*
 	 * Initialize timeout if requested. We must record the current time so
```

The objective is to confirm our assumption that the WaitEventSetWait
call ought not to occur during critical sections. This patch causes
`make check` to fail, indicating that this assumption is incorrect.
The assertion breaks due to the AdvanceXLInsertBuffer call (which uses
condvar logic) inside XLogInsertRecord.

I did not find any doc or other piece of information indicating
whether WaitEventSetWait is allowed inside critical sections. But I do
think this is bad, because we do not process interrupts during
critical sections, so it is unclear to me why we should handle
postmaster death any differently.

[0]: https://github.com/postgres/postgres/blob/393e0d2314050576c9c039853fdabe7f685a4f47/src/backend/storage/ipc/waiteventset.c#L1260-L1261

--
Best regards,
Kirill Reshke

#20Kirill Reshke
reshkekirill@gmail.com
In reply to: Kirill Reshke (#19)
Re: VM corruption on standby

On Wed, 13 Aug 2025 at 16:15, I wrote:

> I did not find any doc or other piece of information indicating
> whether WaitEventSetWait is allowed inside critical sections. But I do
> think this is bad, because we do not process interrupts during
> critical sections, so it is unclear to me why we should handle
> postmaster death any differently.

Maybe I'm very wrong about this, but I currently suspect there is
corruption involving CHECKPOINT, a process in a CRIT section, and kill -9.

The scenario I am trying to reproduce is the following:

1) Some process p1 locks some buffer (name it buf1), enters a CRIT
section, calls MarkBufferDirty and hangs inside XLogInsert on a CondVar
(GetXLogBuffer -> AdvanceXLInsertBuffer).
2) CHECKPOINT (p2) starts and tries to FLUSH dirty buffers, awaiting the lock on buf1
3) The postmaster is kill-9-ed
4) The signal of postmaster death is delivered to p1; it wakes up in
the WaitLatch/WaitEventSetWaitBlock functions, checks postmaster
aliveness, and exits, releasing all locks.
5) p2 acquires the lock on buf1 and flushes it to disk.
6) The signal of postmaster death is delivered to p2; p2 exits.

And we now have a case where the buffer is flushed to disk while the
xlog record that describes this change never makes it to disk. This is
very bad.
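The schedule above, as a deterministic toy model (hypothetical Python; the real interaction involves LWLocks, condition variables and signal handling):

```python
def schedule():
    wal_on_disk = []                                  # durable WAL records
    buf1 = {"data": "old", "dirty": False, "locked_by": None}

    # 1) p1: lock buf1, enter a CRIT section, modify + MarkBufferDirty,
    #    then hang inside XLogInsert before the record reaches the WAL
    buf1.update(data="new", dirty=True, locked_by="p1")

    # 2) p2 (checkpointer) wants to flush buf1 and waits for the lock
    # 3)/4) postmaster is killed; p1 wakes in WaitEventSetWait and exits,
    #    releasing all of its LWLocks despite the critical section
    buf1["locked_by"] = None

    # 5) p2 takes the lock and writes the dirty page; the record
    #    describing the change was never made durable
    disk_page = buf1["data"] if buf1["dirty"] else "old"
    return disk_page, wal_on_disk
```

The on-disk page ends up carrying a change ("new") that no durable WAL record describes, which is the WAL-before-data violation in a nutshell.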

To be clear, I am trying to avoid the use of injection points to
reproduce the corruption. I have not yet been successful in this though.

--
Best regards,
Kirill Reshke

#21Kirill Reshke
reshkekirill@gmail.com
In reply to: Kirill Reshke (#20)
#22Tom Lane
tgl@sss.pgh.pa.us
In reply to: Kirill Reshke (#21)
#23Andrey Borodin
amborodin@acm.org
In reply to: Tom Lane (#22)
#24Kirill Reshke
reshkekirill@gmail.com
In reply to: Tom Lane (#22)
#25Kirill Reshke
reshkekirill@gmail.com
In reply to: Kirill Reshke (#24)
#26Tom Lane
tgl@sss.pgh.pa.us
In reply to: Kirill Reshke (#24)
#27Thomas Munro
thomas.munro@gmail.com
In reply to: Tom Lane (#26)
#28Kirill Reshke
reshkekirill@gmail.com
In reply to: Thomas Munro (#27)
#29Tom Lane
tgl@sss.pgh.pa.us
In reply to: Thomas Munro (#27)
#30Kirill Reshke
reshkekirill@gmail.com
In reply to: Tom Lane (#29)
#31Kirill Reshke
reshkekirill@gmail.com
In reply to: Thomas Munro (#27)
#32Kirill Reshke
reshkekirill@gmail.com
In reply to: Kirill Reshke (#31)
#33Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Kirill Reshke (#32)
#34Yura Sokolov
y.sokolov@postgrespro.ru
In reply to: Kirill Reshke (#14)
#35Yura Sokolov
y.sokolov@postgrespro.ru
In reply to: Kirill Reshke (#12)
#36Andres Freund
andres@anarazel.de
In reply to: Yura Sokolov (#35)
#37Kirill Reshke
reshkekirill@gmail.com
In reply to: Kirill Reshke (#32)
#38Yura Sokolov
y.sokolov@postgrespro.ru
In reply to: Andres Freund (#36)
#39Yura Sokolov
y.sokolov@postgrespro.ru
In reply to: Kirill Reshke (#37)
#40Kirill Reshke
reshkekirill@gmail.com
In reply to: Yura Sokolov (#39)
#41Andres Freund
andres@anarazel.de
In reply to: Tom Lane (#29)
#42Thomas Munro
thomas.munro@gmail.com
In reply to: Andres Freund (#41)
#43Andres Freund
andres@anarazel.de
In reply to: Thomas Munro (#42)
#44Thomas Munro
thomas.munro@gmail.com
In reply to: Andres Freund (#43)
#45Andres Freund
andres@anarazel.de
In reply to: Thomas Munro (#44)
#46Yura Sokolov
y.sokolov@postgrespro.ru
In reply to: Kirill Reshke (#40)
#47Kirill Reshke
reshkekirill@gmail.com
In reply to: Yura Sokolov (#46)
#48Kirill Reshke
reshkekirill@gmail.com
In reply to: Yura Sokolov (#46)
#49Kirill Reshke
reshkekirill@gmail.com
In reply to: Andres Freund (#45)
#50Tom Lane
tgl@sss.pgh.pa.us
In reply to: Kirill Reshke (#48)
#51Kirill Reshke
reshkekirill@gmail.com
In reply to: Tom Lane (#50)
#52Andrey Borodin
amborodin@acm.org
In reply to: Kirill Reshke (#51)
#53Tom Lane
tgl@sss.pgh.pa.us
In reply to: Kirill Reshke (#31)
#54Tom Lane
tgl@sss.pgh.pa.us
In reply to: Andrey Borodin (#52)
#55Michael Paquier
michael@paquier.xyz
In reply to: Tom Lane (#54)
#56Thomas Munro
thomas.munro@gmail.com
In reply to: Tom Lane (#53)
#57Thomas Munro
thomas.munro@gmail.com
In reply to: Thomas Munro (#56)
#58Tom Lane
tgl@sss.pgh.pa.us
In reply to: Thomas Munro (#56)
#59Kirill Reshke
reshkekirill@gmail.com
In reply to: Tom Lane (#53)
#60Andres Freund
andres@anarazel.de
In reply to: Tom Lane (#58)
#61Michael Paquier
michael@paquier.xyz
In reply to: Andres Freund (#60)
#62Andrey Borodin
amborodin@acm.org
In reply to: Tom Lane (#54)
#63Thomas Munro
thomas.munro@gmail.com
In reply to: Tom Lane (#58)
#64Alexander Korotkov
aekorotkov@gmail.com
In reply to: Tom Lane (#53)
#65Michael Paquier
michael@paquier.xyz
In reply to: Alexander Korotkov (#64)
#66Tom Lane
tgl@sss.pgh.pa.us
In reply to: Alexander Korotkov (#64)
#67Thomas Munro
thomas.munro@gmail.com
In reply to: Alexander Korotkov (#64)
#68Nathan Bossart
nathandbossart@gmail.com
In reply to: Tom Lane (#66)
#69Nathan Bossart
nathandbossart@gmail.com
In reply to: Nathan Bossart (#68)
#70Alexander Korotkov
aekorotkov@gmail.com
In reply to: Kirill Reshke (#16)
#71Andrey Borodin
amborodin@acm.org
In reply to: Alexander Korotkov (#70)
#72Alexander Korotkov
aekorotkov@gmail.com
In reply to: Andrey Borodin (#71)
#73Alexander Korotkov
aekorotkov@gmail.com
In reply to: Alexander Korotkov (#72)
#74Andrey Borodin
amborodin@acm.org
In reply to: Alexander Korotkov (#73)
#75Thomas Munro
thomas.munro@gmail.com
In reply to: Andrey Borodin (#74)
#76Michael Paquier
michael@paquier.xyz
In reply to: Thomas Munro (#75)
#77Alexander Korotkov
aekorotkov@gmail.com
In reply to: Thomas Munro (#75)
#78Alexander Korotkov
aekorotkov@gmail.com
In reply to: Thomas Munro (#75)
#79Thomas Munro
thomas.munro@gmail.com
In reply to: Alexander Korotkov (#78)