persist logical slots to disk during shutdown checkpoint

Started by Amit Kapila over 2 years ago · 70 messages · pgsql-hackers
#1 Amit Kapila
amit.kapila16@gmail.com

It's entirely possible for a logical slot to have a confirmed_flush
LSN higher than the last value saved on disk while not being marked as
dirty. It's currently not a problem to lose that value during a clean
shutdown / restart cycle but to support the upgrade of logical slots
[1] (see latest patch at [2]), we seem to rely on that value being
properly persisted to disk. During the upgrade, we need to verify that
all the data prior to shutdown_checkpoint for the logical slots has
been consumed, otherwise, the downstream may miss some data. Now, to
ensure the same, we are planning to compare the confirm_flush LSN
location with the latest shutdown_checkpoint location which means that
the confirm_flush LSN should be updated after restart.

I think this is inefficient even without an upgrade because, after the
restart, this may lead to decoding some data again. Say, we process
some transactions for which we didn't send anything downstream (the
changes got filtered) but the confirm_flush LSN is updated due to
keepalives. As we don't flush the latest value of confirm_flush LSN,
it may lead to processing the same changes again.

The idea discussed in the thread [1] is to always persist logical
slots to disk during the shutdown checkpoint. I have extracted the
patch to achieve the same from that thread and attached it here. This
could lead to some overhead during shutdown (checkpoint) if there are
many slots but it is probably a one-time work.

I couldn't think of better ideas but another possibility is to mark
the slot as dirty when we update the confirm_flush LSN (see
LogicalConfirmReceivedLocation()). However, that would be a bigger
overhead in the running server as it could be a frequent operation and
could lead to more writes.
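To make the tradeoff concrete, here is a toy model of the two options (purely illustrative; the struct and function names below are invented and not from the PostgreSQL tree): marking the slot dirty on every confirmed_flush advance pays a write per advance, while deferring to the shutdown checkpoint pays at most one write per slot.

```c
#include <stdint.h>

typedef uint64_t XLogRecPtr;

/* Toy slot: only the two fields needed for the comparison. */
typedef struct SlotModel
{
	XLogRecPtr	confirmed_flush;			/* in-memory value */
	XLogRecPtr	last_saved_confirmed_flush; /* value on disk */
} SlotModel;

/*
 * Option A: mark the slot dirty in LogicalConfirmReceivedLocation() on
 * every confirmed_flush advance, so each advance can trigger a slot save.
 */
static int
writes_if_always_dirty(int n_advances)
{
	return n_advances;			/* worst case: one save per advance */
}

/*
 * Option B (the attached patch's approach): stay clean while running and,
 * during the shutdown checkpoint, save the slot once if the in-memory
 * confirmed_flush moved past what was last written to disk.
 */
static int
writes_if_persist_on_shutdown(int n_advances)
{
	SlotModel	slot = {100, 100};	/* both start in sync */

	slot.confirmed_flush += n_advances; /* e.g. keepalive-driven advances */
	return (slot.confirmed_flush != slot.last_saved_confirmed_flush) ? 1 : 0;
}
```

Under this model a busy slot costs n writes with option A but a single write at shutdown with option B, which is why the patch takes the latter route.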

Thoughts?

[1]: /messages/by-id/TYAPR01MB58664C81887B3AF2EB6B16E3F5939@TYAPR01MB5866.jpnprd01.prod.outlook.com
[2]: /messages/by-id/TYAPR01MB5866562EF047F2C9DDD1F9DEF51BA@TYAPR01MB5866.jpnprd01.prod.outlook.com

--
With Regards,
Amit Kapila.

Attachments:

v1-0001-Always-persist-to-disk-logical-slots-during-a-sh.patch (application/octet-stream, +17/-13)
#2 Julien Rouhaud
rjuju123@gmail.com
In reply to: Amit Kapila (#1)
Re: persist logical slots to disk during shutdown checkpoint

On Sat, 19 Aug 2023, 14:16 Amit Kapila, <amit.kapila16@gmail.com> wrote:

It's entirely possible for a logical slot to have a confirmed_flush
LSN higher than the last value saved on disk while not being marked as
dirty. It's currently not a problem to lose that value during a clean
shutdown / restart cycle but to support the upgrade of logical slots
[1] (see latest patch at [2]), we seem to rely on that value being
properly persisted to disk. During the upgrade, we need to verify that
all the data prior to shutdown_checkpoint for the logical slots has
been consumed, otherwise, the downstream may miss some data. Now, to
ensure the same, we are planning to compare the confirm_flush LSN
location with the latest shutdown_checkpoint location which means that
the confirm_flush LSN should be updated after restart.

I think this is inefficient even without an upgrade because, after the
restart, this may lead to decoding some data again. Say, we process
some transactions for which we didn't send anything downstream (the
changes got filtered) but the confirm_flush LSN is updated due to
keepalives. As we don't flush the latest value of confirm_flush LSN,
it may lead to processing the same changes again.

In most cases there shouldn't be a lot of records to decode after restart,
but I agree it's better to avoid decoding those again.

The idea discussed in the thread [1] is to always persist logical
slots to disk during the shutdown checkpoint. I have extracted the
patch to achieve the same from that thread and attached it here. This
could lead to some overhead during shutdown (checkpoint) if there are
many slots but it is probably a one-time work.

I couldn't think of better ideas but another possibility is to mark
the slot as dirty when we update the confirm_flush LSN (see
LogicalConfirmReceivedLocation()). However, that would be a bigger
overhead in the running server as it could be a frequent operation and
could lead to more writes.

Yeah I didn't find any better option either at that time. I still think
that forcing persistence on shutdown is the best compromise. If we tried to
always mark the slot as dirty, we would be sure to add regular overhead but
we would probably end up persisting the slot on disk on shutdown anyway
most of the time, so I don't think it would be a good compromise.

My biggest concern was that some switchover scenario might be a bit slower
in some cases, but if that really is a problem it's hard to imagine what
workload would be possible without having to persist them anyway due to
continuous activity needing to be sent just before the shutdown.

#3 Amit Kapila
amit.kapila16@gmail.com
In reply to: Julien Rouhaud (#2)
Re: persist logical slots to disk during shutdown checkpoint

On Sat, Aug 19, 2023 at 12:46 PM Julien Rouhaud <rjuju123@gmail.com> wrote:

On Sat, 19 Aug 2023, 14:16 Amit Kapila, <amit.kapila16@gmail.com> wrote:

The idea discussed in the thread [1] is to always persist logical
slots to disk during the shutdown checkpoint. I have extracted the
patch to achieve the same from that thread and attached it here. This
could lead to some overhead during shutdown (checkpoint) if there are
many slots but it is probably a one-time work.

I couldn't think of better ideas but another possibility is to mark
the slot as dirty when we update the confirm_flush LSN (see
LogicalConfirmReceivedLocation()). However, that would be a bigger
overhead in the running server as it could be a frequent operation and
could lead to more writes.

Yeah I didn't find any better option either at that time. I still think that forcing persistence on shutdown is the best compromise. If we tried to always mark the slot as dirty, we would be sure to add regular overhead but we would probably end up persisting the slot on disk on shutdown anyway most of the time, so I don't think it would be a good compromise.

The other possibility is that we introduce yet another dirty flag for
slots, say dirty_for_shutdown_checkpoint which will be set when we
update confirmed_flush LSN. The flag will be cleared each time we
persist the slot but we won't persist if only this flag is set. We can
then use it during the shutdown checkpoint to decide whether to
persist the slot.
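A rough sketch of that flag scheme, with invented names (dirty_for_shutdown_checkpoint is only the proposal above, not an existing field): ordinary checkpoints keep ignoring the new flag, and only the shutdown checkpoint acts on it.

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t XLogRecPtr;

typedef struct FlagSlot
{
	XLogRecPtr	confirmed_flush;
	bool		dirty;			/* existing notion of dirty */
	bool		dirty_for_shutdown_checkpoint;	/* proposed extra flag */
} FlagSlot;

/* Updating confirmed_flush sets only the new flag, not 'dirty'. */
static void
flag_confirm_received(FlagSlot *slot, XLogRecPtr lsn)
{
	slot->confirmed_flush = lsn;
	slot->dirty_for_shutdown_checkpoint = true;
}

/* Ordinary checkpoints save only truly dirty slots... */
static bool
flag_save_at_ordinary_checkpoint(FlagSlot *slot)
{
	return slot->dirty;
}

/* ...while the shutdown checkpoint saves on either flag, clearing both. */
static bool
flag_save_at_shutdown_checkpoint(FlagSlot *slot)
{
	bool		save = slot->dirty || slot->dirty_for_shutdown_checkpoint;

	slot->dirty = false;
	slot->dirty_for_shutdown_checkpoint = false;
	return save;
}

/* Walk one confirm/checkpoint/shutdown cycle; 0 means all checks pass. */
static int
flag_scenario(void)
{
	FlagSlot	slot = {0, false, false};

	flag_confirm_received(&slot, 123);
	if (flag_save_at_ordinary_checkpoint(&slot))
		return 1;				/* must not save at a normal checkpoint */
	if (!flag_save_at_shutdown_checkpoint(&slot))
		return 2;				/* must save at the shutdown checkpoint */
	if (flag_save_at_shutdown_checkpoint(&slot))
		return 3;				/* already persisted, nothing left to do */
	return 0;
}
```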

--
With Regards,
Amit Kapila.

#4 Ashutosh Bapat
ashutosh.bapat@enterprisedb.com
In reply to: Amit Kapila (#3)
Re: persist logical slots to disk during shutdown checkpoint

On Sun, Aug 20, 2023 at 8:40 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Sat, Aug 19, 2023 at 12:46 PM Julien Rouhaud <rjuju123@gmail.com> wrote:

On Sat, 19 Aug 2023, 14:16 Amit Kapila, <amit.kapila16@gmail.com> wrote:

The idea discussed in the thread [1] is to always persist logical
slots to disk during the shutdown checkpoint. I have extracted the
patch to achieve the same from that thread and attached it here. This
could lead to some overhead during shutdown (checkpoint) if there are
many slots but it is probably a one-time work.

I couldn't think of better ideas but another possibility is to mark
the slot as dirty when we update the confirm_flush LSN (see
LogicalConfirmReceivedLocation()). However, that would be a bigger
overhead in the running server as it could be a frequent operation and
could lead to more writes.

Yeah I didn't find any better option either at that time. I still think that forcing persistence on shutdown is the best compromise. If we tried to always mark the slot as dirty, we would be sure to add regular overhead but we would probably end up persisting the slot on disk on shutdown anyway most of the time, so I don't think it would be a good compromise.

The other possibility is that we introduce yet another dirty flag for
slots, say dirty_for_shutdown_checkpoint which will be set when we
update confirmed_flush LSN. The flag will be cleared each time we
persist the slot but we won't persist if only this flag is set. We can
then use it during the shutdown checkpoint to decide whether to
persist the slot.

There are already two booleans controlling the dirtiness of a slot, dirty
and just_dirty. Adding a third will create more confusion.

Another idea is to record the confirm_flush_lsn at the time of
persisting the slot. We can use it in two different ways 1. to mark a
slot dirty and persist if the last confirm_flush_lsn when slot was
persisted was too far from the current confirm_flush_lsn of the slot.
2. at shutdown checkpoint, persist all the slots which have these two
confirm_flush_lsns different.
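The two uses can be sketched as follows (a toy model; the field and function names are invented for illustration, and the drift threshold is an arbitrary parameter):

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t XLogRecPtr;

typedef struct DriftSlot
{
	XLogRecPtr	confirmed_flush;			/* current in-memory value */
	XLogRecPtr	persisted_confirmed_flush;	/* recorded at the last save */
} DriftSlot;

/*
 * Use (1): while the server runs, mark the slot dirty (and so persist it)
 * once the in-memory value drifts at least 'threshold' bytes past the
 * last persisted one.
 */
static bool
drift_needs_save(XLogRecPtr current, XLogRecPtr persisted, XLogRecPtr threshold)
{
	DriftSlot	slot = {current, persisted};

	return slot.confirmed_flush - slot.persisted_confirmed_flush >= threshold;
}

/*
 * Use (2): at the shutdown checkpoint, persist whenever the two values
 * differ at all, so confirmed_flush survives a clean restart.
 */
static bool
shutdown_needs_save(XLogRecPtr current, XLogRecPtr persisted)
{
	DriftSlot	slot = {current, persisted};

	return slot.confirmed_flush != slot.persisted_confirmed_flush;
}
```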

--
Best Wishes,
Ashutosh Bapat

#5 Amit Kapila
amit.kapila16@gmail.com
In reply to: Ashutosh Bapat (#4)
Re: persist logical slots to disk during shutdown checkpoint

On Mon, Aug 21, 2023 at 6:36 PM Ashutosh Bapat
<ashutosh.bapat.oss@gmail.com> wrote:

On Sun, Aug 20, 2023 at 8:40 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

The other possibility is that we introduce yet another dirty flag for
slots, say dirty_for_shutdown_checkpoint which will be set when we
update confirmed_flush LSN. The flag will be cleared each time we
persist the slot but we won't persist if only this flag is set. We can
then use it during the shutdown checkpoint to decide whether to
persist the slot.

There are already two booleans controlling the dirtiness of a slot, dirty
and just_dirty. Adding a third will create more confusion.

Another idea is to record the confirm_flush_lsn at the time of
persisting the slot. We can use it in two different ways 1. to mark a
slot dirty and persist if the last confirm_flush_lsn when slot was
persisted was too far from the current confirm_flush_lsn of the slot.
2. at shutdown checkpoint, persist all the slots which have these two
confirm_flush_lsns different.

I think using it in the second (2) way sounds advantageous as compared
to storing another dirty flag because this requires us to update
last_persisted_confirm_flush_lsn only while writing the slot info.
OTOH, having a flag dirty_for_shutdown_checkpoint will require us to
update it each time we update confirm_flush_lsn under spinlock at
multiple places. But, I don't see the need of doing what you proposed
in (1) as the use case for it is very minor, basically this may
sometimes help us to avoid decoding after crash recovery.

--
With Regards,
Amit Kapila.

#6 Ashutosh Bapat
ashutosh.bapat@enterprisedb.com
In reply to: Amit Kapila (#5)
Re: persist logical slots to disk during shutdown checkpoint

On Tue, Aug 22, 2023 at 9:48 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

Another idea is to record the confirm_flush_lsn at the time of
persisting the slot. We can use it in two different ways 1. to mark a
slot dirty and persist if the last confirm_flush_lsn when slot was
persisted was too far from the current confirm_flush_lsn of the slot.
2. at shutdown checkpoint, persist all the slots which have these two
confirm_flush_lsns different.

I think using it in the second (2) way sounds advantageous as compared
to storing another dirty flag because this requires us to update
last_persisted_confirm_flush_lsn only while writing the slot info.
OTOH, having a flag dirty_for_shutdown_checkpoint will require us to
update it each time we update confirm_flush_lsn under spinlock at
multiple places. But, I don't see the need of doing what you proposed
in (1) as the use case for it is very minor, basically this may
sometimes help us to avoid decoding after crash recovery.

Once we have last_persisted_confirm_flush_lsn, (1) is just an
optimization on top of that. With that we take the opportunity to
persist confirmed_flush_lsn which is much farther than the current
persisted value and thus improving chances of updating restart_lsn and
catalog_xmin faster after a WAL sender restart. We need to keep that
in mind when implementing (2). The problem is if we don't implement
(1) right now, we might just forget to do that small incremental
change in the future. My preference is: 1. do both (1) and (2) together;
2. do (2) first and then (1) as a separate commit; 3. just implement (2)
if we don't have time at all for the first two options.

--
Best Wishes,
Ashutosh Bapat

#7 Amit Kapila
amit.kapila16@gmail.com
In reply to: Ashutosh Bapat (#6)
Re: persist logical slots to disk during shutdown checkpoint

On Tue, Aug 22, 2023 at 2:56 PM Ashutosh Bapat
<ashutosh.bapat.oss@gmail.com> wrote:

On Tue, Aug 22, 2023 at 9:48 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

Another idea is to record the confirm_flush_lsn at the time of
persisting the slot. We can use it in two different ways 1. to mark a
slot dirty and persist if the last confirm_flush_lsn when slot was
persisted was too far from the current confirm_flush_lsn of the slot.
2. at shutdown checkpoint, persist all the slots which have these two
confirm_flush_lsns different.

I think using it in the second (2) way sounds advantageous as compared
to storing another dirty flag because this requires us to update
last_persisted_confirm_flush_lsn only while writing the slot info.
OTOH, having a flag dirty_for_shutdown_checkpoint will require us to
update it each time we update confirm_flush_lsn under spinlock at
multiple places. But, I don't see the need of doing what you proposed
in (1) as the use case for it is very minor, basically this may
sometimes help us to avoid decoding after crash recovery.

Once we have last_persisted_confirm_flush_lsn, (1) is just an
optimization on top of that. With that we take the opportunity to
persist confirmed_flush_lsn which is much farther than the current
persisted value and thus improving chances of updating restart_lsn and
catalog_xmin faster after a WAL sender restart. We need to keep that
in mind when implementing (2). The problem is if we don't implement
(1) right now, we might just forget to do that small incremental
change in the future. My preference is: 1. do both (1) and (2) together;
2. do (2) first and then (1) as a separate commit; 3. just implement (2)
if we don't have time at all for the first two options.

I prefer one of (2) or (3). Anyway, it is better to do that
optimization (persist confirm_flush_lsn at a regular interval) as a
separate patch as we need to test and prove its value separately.

--
With Regards,
Amit Kapila.

#8 Ashutosh Bapat
ashutosh.bapat@enterprisedb.com
In reply to: Amit Kapila (#7)
Re: persist logical slots to disk during shutdown checkpoint

On Tue, Aug 22, 2023 at 3:42 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

Once we have last_persisted_confirm_flush_lsn, (1) is just an
optimization on top of that. With that we take the opportunity to
persist confirmed_flush_lsn which is much farther than the current
persisted value and thus improving chances of updating restart_lsn and
catalog_xmin faster after a WAL sender restart. We need to keep that
in mind when implementing (2). The problem is if we don't implement
(1) right now, we might just forget to do that small incremental
change in the future. My preference is: 1. do both (1) and (2) together;
2. do (2) first and then (1) as a separate commit; 3. just implement (2)
if we don't have time at all for the first two options.

I prefer one of (2) or (3). Anyway, it is better to do that
optimization (persist confirm_flush_lsn at a regular interval) as a
separate patch as we need to test and prove its value separately.

Fine with me.

--
Best Wishes,
Ashutosh Bapat

#9 Hayato Kuroda (Fujitsu)
kuroda.hayato@fujitsu.com
In reply to: Amit Kapila (#1)
RE: persist logical slots to disk during shutdown checkpoint

Dear hackers,

Thanks for forking the thread! I think we would choose another design, but I wanted
to post the updated version once with the current approach. All comments came
from the parent thread [1].

1. GENERAL -- git apply

The patch fails to apply cleanly. There are whitespace warnings.

[postgres(at)CentOS7-x64 oss_postgres_misc]$ git apply
../patches_misc/v23-0001-Always-persist-to-disk-logical-slots-during-a-sh.patch
../patches_misc/v23-0001-Always-persist-to-disk-logical-slots-during-a-sh.patch:102:
trailing whitespace.
# SHUTDOWN_CHECKPOINT record.
warning: 1 line adds whitespace errors.

There was an extra blank line; removed.

2. GENERAL -- which patch is the real one and which is the copy?

IMO this patch has become muddled.

Amit recently created a new thread [1] "persist logical slots to disk
during shutdown checkpoint", which I thought was dedicated to the
discussion/implementation of this 0001 patch. Therefore, I expected any
0001 patch changes would be made only in that new thread from now on
(and maybe you would mirror them here in this thread).

But now I see there are v23-0001 patch changes here again. So, now the same
patch is in 2 places and they are different. It is no longer clear to me
which 0001 ("Always persist...") patch is the definitive one, and which one
is the copy.

The one attached in the other thread is just a copy to make cfbot happy;
it can be ignored.

contrib/test_decoding/t/002_always_persist.pl

3.
+
+# Copyright (c) 2023, PostgreSQL Global Development Group
+
+# Test logical replication slots are always persist to disk during a shutdown
+# checkpoint.
+
+use strict;
+use warnings;
+
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;

/always persist/always persisted/

Fixed.

4.
+
+# Test set-up
my $node = PostgreSQL::Test::Cluster->new('test');
$node->init(allows_streaming => 'logical');
$node->append_conf('postgresql.conf', q{
autovacuum = off
checkpoint_timeout = 1h
});

$node->start;

# Create table
$node->safe_psql('postgres', "CREATE TABLE test (id int)");

Maybe it is better to call the table something different instead of the
same name as the cluster. e.g. 'test_tbl' would be better.

Changed to 'test_tbl'.

5.
+# Shutdown the node once to do shutdown checkpoint
$node->stop();

SUGGESTION
# Stop the node to cause a shutdown checkpoint

Fixed.

6.
+# Fetch checkPoint from the control file itself
my ($stdout, $stderr) = run_command([ 'pg_controldata', $node->data_dir ]);
my @control_data = split("\n", $stdout);
my $latest_checkpoint = undef;
foreach (@control_data)
{
if ($_ =~ /^Latest checkpoint location:\s*(.*)$/mg)
{
$latest_checkpoint = $1;
last;
}
}
die "No checkPoint in control file found\n"
unless defined($latest_checkpoint);

6a.
/checkPoint/checkpoint/ (2x)

6b.
+die "No checkPoint in control file found\n"

SUGGESTION
"No checkpoint found in control file\n"

Hmm, these notations followed the test recovery/t/016_min_consistency.pl,
which uses the word "minRecoveryPoint". So I preferred the current one.

[1]: /messages/by-id/CAHut+Ptb=ZYTM_awoLy3sJ5m9Oxe=JYn6Gve5rSW9cUdThpsVA@mail.gmail.com

Best Regards,
Hayato Kuroda
FUJITSU LIMITED

Attachments:

v2-0001-Always-persist-to-disk-logical-slots-during-a-shu.patch (application/octet-stream, +92/-13)
#10 vignesh C
vignesh21@gmail.com
In reply to: Amit Kapila (#7)
Re: persist logical slots to disk during shutdown checkpoint

On Tue, 22 Aug 2023 at 15:42, Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Aug 22, 2023 at 2:56 PM Ashutosh Bapat
<ashutosh.bapat.oss@gmail.com> wrote:

On Tue, Aug 22, 2023 at 9:48 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

Another idea is to record the confirm_flush_lsn at the time of
persisting the slot. We can use it in two different ways 1. to mark a
slot dirty and persist if the last confirm_flush_lsn when slot was
persisted was too far from the current confirm_flush_lsn of the slot.
2. at shutdown checkpoint, persist all the slots which have these two
confirm_flush_lsns different.

I think using it in the second (2) way sounds advantageous as compared
to storing another dirty flag because this requires us to update
last_persisted_confirm_flush_lsn only while writing the slot info.
OTOH, having a flag dirty_for_shutdown_checkpoint will require us to
update it each time we update confirm_flush_lsn under spinlock at
multiple places. But, I don't see the need of doing what you proposed
in (1) as the use case for it is very minor, basically this may
sometimes help us to avoid decoding after crash recovery.

Once we have last_persisted_confirm_flush_lsn, (1) is just an
optimization on top of that. With that we take the opportunity to
persist confirmed_flush_lsn which is much farther than the current
persisted value and thus improving chances of updating restart_lsn and
catalog_xmin faster after a WAL sender restart. We need to keep that
in mind when implementing (2). The problem is if we don't implement
(1) right now, we might just forget to do that small incremental
change in the future. My preference is: 1. do both (1) and (2) together;
2. do (2) first and then (1) as a separate commit; 3. just implement (2)
if we don't have time at all for the first two options.

I prefer one of (2) or (3). Anyway, it is better to do that
optimization (persist confirm_flush_lsn at a regular interval) as a
separate patch as we need to test and prove its value separately.

Here is a patch to persist to disk logical slots during a shutdown
checkpoint if the updated confirmed_flush_lsn has not yet been
persisted.

Regards,
Vignesh

Attachments:

v3-0001-Persist-to-disk-logical-slots-during-a-shutdown-c.patch (text/x-patch, +103/-13)
#11 Hayato Kuroda (Fujitsu)
kuroda.hayato@fujitsu.com
In reply to: vignesh C (#10)
RE: persist logical slots to disk during shutdown checkpoint

Dear Vignesh,

Here is a patch to persist to disk logical slots during a shutdown
checkpoint if the updated confirmed_flush_lsn has not yet been
persisted.

Thanks for making the patch with a different approach! Here are comments.

01. RestoreSlotFromDisk

```
slot->candidate_xmin_lsn = InvalidXLogRecPtr;
slot->candidate_restart_lsn = InvalidXLogRecPtr;
slot->candidate_restart_valid = InvalidXLogRecPtr;
+ slot->last_persisted_confirmed_flush = InvalidXLogRecPtr;
```

last_persisted_confirmed_flush was set to InvalidXLogRecPtr, but isn't it better
to use cp.slotdata.confirmed_flush? Assuming that the server is shut down immediately,
your patch forces a save.

02. t/002_always_persist.pl

The original author of the patch is me, but I found that the test could pass
without your patch. This is because pg_logical_slot_get_changes()->
pg_logical_slot_get_changes_guts(confirm = true) always marks the slot as dirty.
IIUC we must use the logical replication system to verify the persistence.
The attached test can pass only when the patch is applied.

Best Regards,
Hayato Kuroda
FUJITSU LIMITED

Attachments:

another_test.patch (application/octet-stream, +91/-0)
#12 vignesh C
vignesh21@gmail.com
In reply to: Hayato Kuroda (Fujitsu) (#11)
Re: persist logical slots to disk during shutdown checkpoint

On Wed, 23 Aug 2023 at 14:21, Hayato Kuroda (Fujitsu)
<kuroda.hayato@fujitsu.com> wrote:

Dear Vignesh,

Here is a patch to persist to disk logical slots during a shutdown
checkpoint if the updated confirmed_flush_lsn has not yet been
persisted.

Thanks for making the patch with a different approach! Here are comments.

01. RestoreSlotFromDisk

```
slot->candidate_xmin_lsn = InvalidXLogRecPtr;
slot->candidate_restart_lsn = InvalidXLogRecPtr;
slot->candidate_restart_valid = InvalidXLogRecPtr;
+ slot->last_persisted_confirmed_flush = InvalidXLogRecPtr;
```

last_persisted_confirmed_flush was set to InvalidXLogRecPtr, but isn't it better
to use cp.slotdata.confirmed_flush? Assuming that the server is shut down immediately,
your patch forces a save.

02. t/002_always_persist.pl

The original author of the patch is me, but I found that the test could pass
without your patch. This is because pg_logical_slot_get_changes()->
pg_logical_slot_get_changes_guts(confirm = true) always marks the slot as dirty.
IIUC we must use the logical replication system to verify the persistence.
The attached test can pass only when the patch is applied.

Here are a few other comments that I noticed:

1) I too noticed that the test passes both with and without patch:
diff --git a/contrib/test_decoding/meson.build
b/contrib/test_decoding/meson.build
index 7b05cc25a3..12afb9ea8c 100644
--- a/contrib/test_decoding/meson.build
+++ b/contrib/test_decoding/meson.build
@@ -72,6 +72,7 @@ tests += {
   'tap': {
     'tests': [
       't/001_repl_stats.pl',
+      't/002_always_persist.pl',
2) change checkPoint to checkpoint:
2.a) checkPoint should be checkpoint to maintain consistency across the file:
+# Shutdown the node once to do shutdown checkpoint
+$node->stop();
+
+# Fetch checkPoint from the control file itself
+my ($stdout, $stderr) = run_command([ 'pg_controldata', $node->data_dir ]);
+my @control_data = split("\n", $stdout);
+my $latest_checkpoint = undef;
2.b) similarly here:
+die "No checkPoint in control file found\n"
+  unless defined($latest_checkpoint);
2.c) similarly here too:
+# Compare confirmed_flush_lsn and checkPoint
+ok($confirmed_flush eq $latest_checkpoint,
+       "Check confirmed_flush is same as latest checkpoint location");

3) change checkpoint to "Latest checkpoint location":
3.a) We should change "No checkPoint in control file found\n" to:
"Latest checkpoint location not found in control file\n" as there are
many checkpoint entries in control data

+foreach (@control_data)
+{
+       if ($_ =~ /^Latest checkpoint location:\s*(.*)$/mg)
+       {
+               $latest_checkpoint = $1;
+               last;
+       }
+}
+die "No checkPoint in control file found\n"
+  unless defined($latest_checkpoint);

3.b) We should change "Fetch checkPoint from the control file itself" to:
"Fetch Latest checkpoint location from the control file"

+# Fetch checkPoint from the control file itself
+my ($stdout, $stderr) = run_command([ 'pg_controldata', $node->data_dir ]);
+my @control_data = split("\n", $stdout);
+my $latest_checkpoint = undef;
+foreach (@control_data)
+{

Regards,
Vignesh

#13 vignesh C
vignesh21@gmail.com
In reply to: Hayato Kuroda (Fujitsu) (#11)
Re: persist logical slots to disk during shutdown checkpoint

On Wed, 23 Aug 2023 at 14:21, Hayato Kuroda (Fujitsu)
<kuroda.hayato@fujitsu.com> wrote:

Dear Vignesh,

Here is a patch to persist to disk logical slots during a shutdown
checkpoint if the updated confirmed_flush_lsn has not yet been
persisted.

Thanks for making the patch with a different approach! Here are comments.

01. RestoreSlotFromDisk

```
slot->candidate_xmin_lsn = InvalidXLogRecPtr;
slot->candidate_restart_lsn = InvalidXLogRecPtr;
slot->candidate_restart_valid = InvalidXLogRecPtr;
+ slot->last_persisted_confirmed_flush = InvalidXLogRecPtr;
```

last_persisted_confirmed_flush was set to InvalidXLogRecPtr, but isn't it better
to use cp.slotdata.confirmed_flush? Assuming that the server is shut down immediately,
your patch forces a save.

Modified

02. t/002_always_persist.pl

The original author of the patch is me, but I found that the test could pass
without your patch. This is because pg_logical_slot_get_changes()->
pg_logical_slot_get_changes_guts(confirm = true) always marks the slot as dirty.
IIUC we must use the logical replication system to verify the persistence.
The attached test can pass only when the patch is applied.

Updated the test based on your another_test, with slight modifications.

Attached v4 version patch has the changes for the same.

Regards,
Vignesh

Attachments:

v4-0001-Persist-to-disk-logical-slots-during-a-shutdown-c.patch (text/x-patch, +135/-13)
#14 vignesh C
vignesh21@gmail.com
In reply to: Amit Kapila (#1)
Re: persist logical slots to disk during shutdown checkpoint

On Sat, 19 Aug 2023 at 11:53, Amit Kapila <amit.kapila16@gmail.com> wrote:

It's entirely possible for a logical slot to have a confirmed_flush
LSN higher than the last value saved on disk while not being marked as
dirty. It's currently not a problem to lose that value during a clean
shutdown / restart cycle but to support the upgrade of logical slots
[1] (see latest patch at [2]), we seem to rely on that value being
properly persisted to disk. During the upgrade, we need to verify that
all the data prior to shutdown_checkpoint for the logical slots has
been consumed, otherwise, the downstream may miss some data. Now, to
ensure the same, we are planning to compare the confirm_flush LSN
location with the latest shutdown_checkpoint location which means that
the confirm_flush LSN should be updated after restart.

I think this is inefficient even without an upgrade because, after the
restart, this may lead to decoding some data again. Say, we process
some transactions for which we didn't send anything downstream (the
changes got filtered) but the confirm_flush LSN is updated due to
keepalives. As we don't flush the latest value of confirm_flush LSN,
it may lead to processing the same changes again.

I was able to test and verify that we were not processing the same
changes again.
Note: The 0001-Add-logs-to-skip-transaction-filter-insert-operation.patch
has logs to print if a decode transaction is skipped and also a log to
mention if any operation is filtered.
The test.sh script has the steps for: a) setting up logical replication
for a table; b) performing an insert on a table that needs to be published
(this will be replicated to the subscriber); c) performing an insert on a
table that will not be published (this insert will be filtered, it will
not be replicated); d) sleeping for 5 seconds; e) stopping the server;
f) starting the server.
I used the following steps in HEAD:
a) Apply 0001-Add-logs-to-skip-transaction-filter-insert-operation.patch
patch in Head and build the binaries b) execute test.sh c) view N1.log
file to see that the insert operations were filtered again by seeing
the following logs:
LOG: Filter insert for table tbl2
...
===restart===
...
LOG: Skipping transaction 0/156AD10 as start decode at is greater 0/156AE40
...
LOG: Filter insert for table tbl2

We can see that the insert operations on tbl2, which were filtered
before the server was stopped, are filtered again after restart in HEAD.

Let's verify that the same changes are not processed again with the patch:
a) Apply v4-0001-Persist-to-disk-logical-slots-during-a-shutdown-c.patch
from [1] and also apply
0001-Add-logs-to-skip-transaction-filter-insert-operation.patch patch
and build the binaries b) execute test.sh c) view N1.log file to see
that the insert operations were skipped after restart of server by
seeing the following logs:
LOG: Filter insert for table tbl2
...
===restart===
...
Skipping transaction 0/156AD10 as start decode at is greater 0/156AFB0
...
Skipping transaction 0/156AE80 as start decode at is greater 0/156AFB0

We can see that the insert operations on tbl2 are not processed again
after restart with the patch.

[1]: /messages/by-id/CALDaNm0VrAt24e2FxbOX6eJQ-G_tZ0gVpsFBjzQM99NxG0hZfg@mail.gmail.com

Regards,
Vignesh

Attachments:

test.sh (text/x-sh)
0001-Add-logs-to-skip-transaction-filter-insert-operation.patch (application/octet-stream, +6/-1)
#15 vignesh C
vignesh21@gmail.com
In reply to: vignesh C (#14)
Re: persist logical slots to disk during shutdown checkpoint

On Fri, 25 Aug 2023 at 17:40, vignesh C <vignesh21@gmail.com> wrote:

On Sat, 19 Aug 2023 at 11:53, Amit Kapila <amit.kapila16@gmail.com> wrote:

It's entirely possible for a logical slot to have a confirmed_flush
LSN higher than the last value saved on disk while not being marked as
dirty. It's currently not a problem to lose that value during a clean
shutdown / restart cycle but to support the upgrade of logical slots
[1] (see latest patch at [2]), we seem to rely on that value being
properly persisted to disk. During the upgrade, we need to verify that
all the data prior to shudown_checkpoint for the logical slots has
been consumed, otherwise, the downstream may miss some data. Now, to
ensure the same, we are planning to compare the confirm_flush LSN
location with the latest shudown_checkpoint location which means that
the confirm_flush LSN should be updated after restart.

I think this is inefficient even without an upgrade because, after the
restart, this may lead to decoding some data again. Say, we process
some transactions for which we didn't send anything downstream (the
changes got filtered) but the confirm_flush LSN is updated due to
keepalives. As we don't flush the latest value of confirm_flush LSN,
it may lead to processing the same changes again.

I was able to test and verify that we were not processing the same
changes again.
Note: The 0001-Add-logs-to-skip-transaction-filter-insert-operation.patch
has logs to print if a decode transaction is skipped and also a log to
mention if any operation is filtered.
The test.sh script has the steps for a) setting up logical replication
for a table b) perform insert on table that need to be published (this
will be replicated to the subscriber) c) perform insert on a table
that will not be published (this insert will be filtered, it will not
be replicated) d) sleep for 5 seconds e) stop the server f) start the
server
I used the following steps, do the following in HEAD:
a) Apply 0001-Add-logs-to-skip-transaction-filter-insert-operation.patch
patch in Head and build the binaries b) execute test.sh c) view N1.log
file to see that the insert operations were filtered again by seeing
the following logs:
LOG: Filter insert for table tbl2
...
===restart===
...
LOG: Skipping transaction 0/156AD10 as start decode at is greater 0/156AE40
...
LOG: Filter insert for table tbl2

We can see that the insert operations on tbl2 which was filtered
before server was stopped is again filtered after restart too in HEAD.

Lets see that the same changes were not processed again with patch:
a) Apply v4-0001-Persist-to-disk-logical-slots-during-a-shutdown-c.patch
from [1] also apply
0001-Add-logs-to-skip-transaction-filter-insert-operation.patch patch
and build the binaries b) execute test.sh c) view N1.log file to see
that the insert operations were skipped after restart of server by
seeing the following logs:
LOG: Filter insert for table tbl2
...
===restart===
...
Skipping transaction 0/156AD10 as start decode at is greater 0/156AFB0
...
Skipping transaction 0/156AE80 as start decode at is greater 0/156AFB0

We can see that the insert operations on tbl2 are not processed again
after restart with the patch.

Here is another way to test this, using the pg_replslotdata approach
that was proposed earlier at [1].
I have rebased this on top of HEAD and the v5 version for the same is attached.

We can use the same test as the test.sh shared at [2].
When executed with HEAD, it was noticed that confirmed_flush points to
a WAL location before both transactions:
 slot_name | slot_type | datoid | persistency | xmin | catalog_xmin | restart_lsn | confirmed_flush | two_phase_at | two_phase |  plugin
-----------+-----------+--------+-------------+------+--------------+-------------+-----------------+--------------+-----------+----------
 sub       | logical   |      5 | persistent  |    0 |          735 | 0/1531E28   | 0/1531E60       | 0/0          |         0 | pgoutput
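
On a running server, the same slot state can be inspected through the
pg_replication_slots view; a minimal sketch of such a query (the slot
name 'sub' is from this test setup):

```sql
-- Show the persistence-relevant positions for logical slots.
SELECT slot_name, slot_type, restart_lsn, confirmed_flush_lsn
FROM pg_replication_slots
WHERE slot_type = 'logical';
```

pg_replslotdata reads the equivalent information directly from the slot
files on disk, which is what makes it usable after the server has been
shut down.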

WAL record information generated using pg_walinspect for various
records at and after confirmed_flush WAL 0/1531E60:
 row | start_lsn | end_lsn   | prev_lsn  | xid | rmgr        | record_type         | len | description
-----+-----------+-----------+-----------+-----+-------------+---------------------+-----+-------------------------------------------------------------------
   1 | 0/1531E60 | 0/1531EA0 | 0/1531E28 |   0 | Heap2       | PRUNE               |  57 | snapshotConflictHorizon: 0, nredirected: 0, ndead: 1, nunused: 0,
     |           |           |           |     |             |                     |     | redirected: [], dead: [1], unused: []; blkref #0: rel 1663/5/1255 blk 58
   2 | 0/1531EA0 | 0/1531EE0 | 0/1531E60 | 735 | Heap        | INSERT+INIT         |  59 | off: 1, flags: 0x08; blkref #0: rel 1663/5/16384 blk 0
   3 | 0/1531EE0 | 0/1531F20 | 0/1531EA0 | 735 | Heap        | INSERT              |  59 | off: 2, flags: 0x08; blkref #0: rel 1663/5/16384 blk 0
   4 | 0/1531F20 | 0/1531F60 | 0/1531EE0 | 735 | Heap        | INSERT              |  59 | off: 3, flags: 0x08; blkref #0: rel 1663/5/16384 blk 0
   5 | 0/1531F60 | 0/1531FA0 | 0/1531F20 | 735 | Heap        | INSERT              |  59 | off: 4, flags: 0x08; blkref #0: rel 1663/5/16384 blk 0
   6 | 0/1531FA0 | 0/1531FE0 | 0/1531F60 | 735 | Heap        | INSERT              |  59 | off: 5, flags: 0x08; blkref #0: rel 1663/5/16384 blk 0
   7 | 0/1531FE0 | 0/1532028 | 0/1531FA0 | 735 | Transaction | COMMIT              |  46 | 2023-08-27 23:22:17.161215+05:30
   8 | 0/1532028 | 0/1532068 | 0/1531FE0 | 736 | Heap        | INSERT+INIT         |  59 | off: 1, flags: 0x08; blkref #0: rel 1663/5/16387 blk 0
   9 | 0/1532068 | 0/15320A8 | 0/1532028 | 736 | Heap        | INSERT              |  59 | off: 2, flags: 0x08; blkref #0: rel 1663/5/16387 blk 0
  10 | 0/15320A8 | 0/15320E8 | 0/1532068 | 736 | Heap        | INSERT              |  59 | off: 3, flags: 0x08; blkref #0: rel 1663/5/16387 blk 0
  11 | 0/15320E8 | 0/1532128 | 0/15320A8 | 736 | Heap        | INSERT              |  59 | off: 4, flags: 0x08; blkref #0: rel 1663/5/16387 blk 0
  12 | 0/1532128 | 0/1532168 | 0/15320E8 | 736 | Heap        | INSERT              |  59 | off: 5, flags: 0x08; blkref #0: rel 1663/5/16387 blk 0
  13 | 0/1532168 | 0/1532198 | 0/1532128 | 736 | Transaction | COMMIT              |  46 | 2023-08-27 23:22:17.174756+05:30
  14 | 0/1532198 | 0/1532210 | 0/1532168 |   0 | XLOG        | CHECKPOINT_SHUTDOWN | 114 | redo 0/1532198; tli 1; prev tli 1; fpw true; xid 0:737; oid 16399;
     |           |           |           |     |             |                     |     | multi 1; offset 0; oldest xid 723 in DB 1; oldest multi 1 in DB 1;
     |           |           |           |     |             |                     |     | oldest/newest commit timestamp xid: 0/0; oldest running xid 0; shutdown
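
For reference, a listing like the above can be produced with a
pg_walinspect query along these lines (a sketch; the extension must be
installed, and the starting LSN is specific to this particular run):

```sql
CREATE EXTENSION IF NOT EXISTS pg_walinspect;

-- Records at and after the slot's confirmed_flush LSN,
-- up to the current end of WAL.
SELECT start_lsn, end_lsn, xid, resource_manager, record_type, description
FROM pg_get_wal_records_info('0/1531E60', pg_current_wal_lsn());
```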

Whereas the same test executed with the patch applied shows that
confirmed_flush points to the CHECKPOINT_SHUTDOWN record:
 slot_name | slot_type | datoid | persistency | xmin | catalog_xmin | restart_lsn | confirmed_flush | two_phase_at | two_phase |  plugin
-----------+-----------+--------+-------------+------+--------------+-------------+-----------------+--------------+-----------+----------
 sub       | logical   |      5 | persistent  |    0 |          735 | 0/1531E28   | 0/1532198       | 0/0          |         0 | pgoutput

WAL record information generated using pg_walinspect for various
records at and after confirmed_flush WAL 0/1532198:
 row | start_lsn | end_lsn   | prev_lsn  | xid | rmgr | record_type         | len | description
-----+-----------+-----------+-----------+-----+------+---------------------+-----+-------------------------------------------------------------------
   1 | 0/1532198 | 0/1532210 | 0/1532168 |   0 | XLOG | CHECKPOINT_SHUTDOWN | 114 | redo 0/1532198; tli 1; prev tli 1; fpw true; xid 0:737; oid 16399;
     |           |           |           |     |      |                     |     | multi 1; offset 0; oldest xid 723 in DB 1; oldest multi 1 in DB 1;
     |           |           |           |     |      |                     |     | oldest/newest commit timestamp xid: 0/0; oldest running xid 0; shutdown
(1 row)

[1]: /messages/by-id/CALj2ACW0rV5gWK8A3m6_X62qH+Vfaq5hznC=i0R5Wojt5+yhyw@mail.gmail.com
[2]: /messages/by-id/CALDaNm2BboFuFVYxyzP4wkv7=8+_TwsD+ugyGhtibTSF4m4XRg@mail.gmail.com

Regards,
Vignesh

Attachments:

v5-0001-pg_replslotdata.patch (+574 -136)
#16Hayato Kuroda (Fujitsu)
kuroda.hayato@fujitsu.com
In reply to: vignesh C (#13)
RE: persist logical slots to disk during shutdown checkpoint

Dear hackers,

I also tested logical slots on a physical standby. PSA the script.
The confirmed_flush_lsn for such slots was successfully persisted.

# Topology

In this test, the nodes are connected to each other as follows:

node1 --(physical replication)-->node2--(logical replication)-->node3

# Test method

The attached script performs the following steps:

1. Construct the above configuration
2. Insert data on node1
3. Read confirmed_flush_lsn on node2 (a)
4. Restart node2
5. Read confirmed_flush_lsn again on node2 (b)
6. Compare (a) and (b)
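
Steps 3 and 5 amount to running the same query on node2 before and
after the restart; matching values for (a) and (b) indicate the slot
was persisted at shutdown. A sketch (the slot name 'sub' matches this
test setup):

```sql
-- Run once before and once after restarting node2, then compare.
SELECT slot_name, confirmed_flush_lsn
FROM pg_replication_slots
WHERE slot_name = 'sub';
```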

# Result

Before patching, (a) and (b) were different values, which means that
logical slots on the physical standby were not saved at shutdown.

```
slot_name | confirmed_flush_lsn
-----------+---------------------
sub | 0/30003E8
(1 row)

waiting for server to shut down.... done
server stopped
waiting for server to start.... done
server started
slot_name | confirmed_flush_lsn
-----------+---------------------
sub | 0/30000D8
(1 row)
```

After patching, (a) and (b) have the same value. The v4 patch works
well even when the node is a physical standby.

```
slot_name | confirmed_flush_lsn
-----------+---------------------
sub | 0/30003E8
(1 row)

waiting for server to shut down.... done
server stopped
waiting for server to start.... done
server started
slot_name | confirmed_flush_lsn
-----------+---------------------
sub | 0/30003E8
(1 row)
```

Best Regards,
Hayato Kuroda
FUJITSU LIMITED

Attachments:

test_for_physical_standby.sh
#17Amit Kapila
amit.kapila16@gmail.com
In reply to: vignesh C (#13)
Re: persist logical slots to disk during shutdown checkpoint

On Thu, Aug 24, 2023 at 11:44 AM vignesh C <vignesh21@gmail.com> wrote:

The patch looks mostly good to me. I have made minor changes, which are
as follows: (a) removed the autovacuum = off and
wal_receiver_status_interval = 0 settings, as those don't seem to be
required for the test; (b) changed a few comments and variable names
in the code and test.

Shall we change the test file name from always_persist to
save_logical_slots_shutdown and move to recovery/t/ as this test is
about verification after the restart of the server?

--
With Regards,
Amit Kapila.

Attachments:

v5-0001-Persist-to-disk-logical-slots-during-a-shutdown-c.patch (+132 -14)
#18vignesh C
vignesh21@gmail.com
In reply to: Amit Kapila (#17)
Re: persist logical slots to disk during shutdown checkpoint

On Mon, 28 Aug 2023 at 18:56, Amit Kapila <amit.kapila16@gmail.com> wrote:

On Thu, Aug 24, 2023 at 11:44 AM vignesh C <vignesh21@gmail.com> wrote:

The patch looks mostly good to me. I have made minor changes which are
as follows: (a) removed the autovacuum =off and
wal_receiver_status_interval = 0 setting as those doesn't seem to be
required for the test; (b) changed a few comments and variable names
in the code and test;

Shall we change the test file name from always_persist to
save_logical_slots_shutdown and move to recovery/t/ as this test is
about verification after the restart of the server?

That makes sense. The attached v6 version has the changes for the same.
Apart from this, I have also fixed a) pgindent issues, b) perltidy
issues, c) one variable rename (flush_lsn_changed to
confirmed_flush_has_changed), and d) a few comments in the test
file. Thanks to Peter for providing a few offline comments.

Regards,
Vignesh

Attachments:

v6-0001-Persist-logical-slots-to-disk-during-a-shutdown-c.patch (+133 -14)
#19Amit Kapila
amit.kapila16@gmail.com
In reply to: vignesh C (#18)
Re: persist logical slots to disk during shutdown checkpoint

On Tue, Aug 29, 2023 at 10:16 AM vignesh C <vignesh21@gmail.com> wrote:

That makes sense. The attached v6 version has the changes for the
same, apart from this I have also fixed a) pgindent issues b) perltidy
issues c) one variable change (flush_lsn_changed to
confirmed_flush_has_changed) d) corrected few comments in the test
file. Thanks to Peter for providing few offline comments.

The latest version looks good to me. Julien, Ashutosh, and others,
unless you have more comments or suggestions, I would like to push
this in a day or two.

--
With Regards,
Amit Kapila.

#20Ashutosh Bapat
ashutosh.bapat@enterprisedb.com
In reply to: Amit Kapila (#19)
Re: persist logical slots to disk during shutdown checkpoint

On Tue, Aug 29, 2023 at 2:21 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Aug 29, 2023 at 10:16 AM vignesh C <vignesh21@gmail.com> wrote:

That makes sense. The attached v6 version has the changes for the
same, apart from this I have also fixed a) pgindent issues b) perltidy
issues c) one variable change (flush_lsn_changed to
confirmed_flush_has_changed) d) corrected few comments in the test
file. Thanks to Peter for providing few offline comments.

The latest version looks good to me. Julien, Ashutosh, and others,
unless you have more comments or suggestions, I would like to push
this in a day or two.

I am looking at it. If you can wait till the end of the week, that
will be great.

--
Best Wishes,
Ashutosh Bapat

#21Julien Rouhaud
rjuju123@gmail.com
In reply to: Amit Kapila (#19)
#22Amit Kapila
amit.kapila16@gmail.com
In reply to: Julien Rouhaud (#21)
#23Ashutosh Bapat
ashutosh.bapat@enterprisedb.com
In reply to: Ashutosh Bapat (#20)
#24Amit Kapila
amit.kapila16@gmail.com
In reply to: Ashutosh Bapat (#23)
#25Ashutosh Bapat
ashutosh.bapat@enterprisedb.com
In reply to: Amit Kapila (#24)
#26Amit Kapila
amit.kapila16@gmail.com
In reply to: Ashutosh Bapat (#25)
#27Ashutosh Bapat
ashutosh.bapat@enterprisedb.com
In reply to: Amit Kapila (#26)
#28Amit Kapila
amit.kapila16@gmail.com
In reply to: Ashutosh Bapat (#27)
#29Amit Kapila
amit.kapila16@gmail.com
In reply to: Ashutosh Bapat (#27)
#30vignesh C
vignesh21@gmail.com
In reply to: Amit Kapila (#29)
#31Ashutosh Bapat
ashutosh.bapat@enterprisedb.com
In reply to: Amit Kapila (#28)
#32Amit Kapila
amit.kapila16@gmail.com
In reply to: Ashutosh Bapat (#31)
#33Amit Kapila
amit.kapila16@gmail.com
In reply to: vignesh C (#30)
#34vignesh C
vignesh21@gmail.com
In reply to: Amit Kapila (#33)
#35Zhijie Hou (Fujitsu)
houzj.fnst@fujitsu.com
In reply to: vignesh C (#34)
#36Amit Kapila
amit.kapila16@gmail.com
In reply to: Zhijie Hou (Fujitsu) (#35)
#37Dilip Kumar
dilipbalaut@gmail.com
In reply to: vignesh C (#30)
#38Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#37)
#39Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#38)
#40vignesh C
vignesh21@gmail.com
In reply to: Dilip Kumar (#37)
#41Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#36)
#42Zhijie Hou (Fujitsu)
houzj.fnst@fujitsu.com
In reply to: Dilip Kumar (#41)
#43Dilip Kumar
dilipbalaut@gmail.com
In reply to: Zhijie Hou (Fujitsu) (#42)
#44Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#43)
#45Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#44)
#46Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#45)
#47Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#46)
#48vignesh C
vignesh21@gmail.com
In reply to: Dilip Kumar (#47)
#49Amit Kapila
amit.kapila16@gmail.com
In reply to: vignesh C (#48)
#50Michael Paquier
michael@paquier.xyz
In reply to: Amit Kapila (#49)
#51Amit Kapila
amit.kapila16@gmail.com
In reply to: Michael Paquier (#50)
#52Ashutosh Bapat
ashutosh.bapat@enterprisedb.com
In reply to: Amit Kapila (#51)
#53Amit Kapila
amit.kapila16@gmail.com
In reply to: Ashutosh Bapat (#52)
#54Ashutosh Bapat
ashutosh.bapat@enterprisedb.com
In reply to: Amit Kapila (#53)
#55Amit Kapila
amit.kapila16@gmail.com
In reply to: Ashutosh Bapat (#54)
#56Amit Kapila
amit.kapila16@gmail.com
In reply to: Ashutosh Bapat (#52)
#57Michael Paquier
michael@paquier.xyz
In reply to: Amit Kapila (#56)
#58Amit Kapila
amit.kapila16@gmail.com
In reply to: Michael Paquier (#57)
#59Michael Paquier
michael@paquier.xyz
In reply to: Amit Kapila (#58)
#60Amit Kapila
amit.kapila16@gmail.com
In reply to: Michael Paquier (#59)
#61Michael Paquier
michael@paquier.xyz
In reply to: Amit Kapila (#60)
#62Amit Kapila
amit.kapila16@gmail.com
In reply to: Michael Paquier (#61)
#63Michael Paquier
michael@paquier.xyz
In reply to: Amit Kapila (#62)
#64Amit Kapila
amit.kapila16@gmail.com
In reply to: Michael Paquier (#63)
#65Michael Paquier
michael@paquier.xyz
In reply to: Amit Kapila (#64)
#66Amit Kapila
amit.kapila16@gmail.com
In reply to: Michael Paquier (#65)
#67Michael Paquier
michael@paquier.xyz
In reply to: Amit Kapila (#66)
#68Amit Kapila
amit.kapila16@gmail.com
In reply to: Michael Paquier (#67)
#69Ashutosh Bapat
ashutosh.bapat@enterprisedb.com
In reply to: Amit Kapila (#68)
#70Michael Paquier
michael@paquier.xyz
In reply to: Ashutosh Bapat (#69)