Conflict detection for update_deleted in logical replication

Started by Zhijie Hou (Fujitsu) over 1 year ago · 432 messages
#1 Zhijie Hou (Fujitsu)
houzj.fnst@fujitsu.com

Hi hackers,

I am starting a new thread to discuss and propose the conflict detection for
update_deleted scenarios during logical replication. This conflict occurs when
the apply worker cannot find the target tuple to be updated, as the tuple might
have been removed by another origin.

---
BACKGROUND
---

Currently, when the apply worker cannot find the target tuple during an update,
an update_missing conflict is logged. However, to facilitate future automatic
conflict resolution, it has been agreed[1][2] that we need to detect both
update_missing and update_deleted conflicts. Specifically, we will detect an
update_deleted conflict if any dead tuple matching the old key value of the
update operation is found; otherwise, it will be classified as update_missing.

Detecting both update_deleted and update_missing conflicts is important for
achieving eventual consistency in a bidirectional cluster, because the
resolution for each conflict type can differ. For example, for an
update_missing conflict, a feasible solution might be converting the update to
an insert and applying it, while for an update_deleted conflict, the preferred
approach could be to skip the update, or to compare the timestamp of the delete
transaction with that of the remote update transaction and choose the more
recent one. For additional context, please refer to [3], which gives examples of
how these differences could lead to data divergence.
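To make the difference concrete, here is a minimal Python sketch of one plausible resolution policy; the function and field names are invented for illustration and are not from the patch set:

```python
from dataclasses import dataclass

@dataclass
class RemoteUpdate:
    key: int
    new_value: int
    commit_ts: float  # commit timestamp of the remote update transaction

def resolve_update_conflict(live_tuple, dead_tuple, update):
    """Illustrative only: one plausible resolution policy.

    - No live and no dead tuple -> update_missing: convert to insert.
    - No live but a dead tuple  -> update_deleted: last-write-wins on
      the delete vs. remote-update commit timestamps.
    """
    if live_tuple is not None:
        return ("apply_update", update.new_value)
    if dead_tuple is None:
        # update_missing with apply_or_skip: turn the update into an insert
        return ("convert_to_insert", update.new_value)
    # update_deleted: keep whichever change committed later
    if update.commit_ts > dead_tuple["delete_ts"]:
        return ("convert_to_insert", update.new_value)
    return ("skip", None)
```

Note how the same incoming update produces different outcomes depending on whether a matching dead tuple is still visible, which is exactly why the two conflict types must be distinguished.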

---
ISSUES and SOLUTION
---

To detect update_deleted conflicts, we need to search for dead tuples in the
table. However, dead tuples can be removed by VACUUM at any time. Therefore, to
ensure consistent and accurate conflict detection, tuples deleted by other
origins must not be removed by VACUUM before the conflict detection process. If
the tuples are removed prematurely, it might lead to incorrect conflict
identification and resolution, causing data divergence between nodes.

Here is an example of how VACUUM could affect conflict detection and how to
prevent this issue. Assume we have a bidirectional cluster with two nodes, A
and B.

Node A:
T1: INSERT INTO t (id, value) VALUES (1,1);
T2: DELETE FROM t WHERE id = 1;

Node B:
T3: UPDATE t SET value = 2 WHERE id = 1;

To retain the deleted tuples, the initial idea was that once transaction T2 had
been applied to both nodes, there was no longer a need to preserve the dead
tuple on Node A. However, a scenario arises where transactions T3 and T2 occur
concurrently, with T3 committing slightly earlier than T2. In this case, if
Node B applies T2 and Node A removes the dead tuple (1,1) via VACUUM, and then
Node A applies T3 after the VACUUM operation, it can only result in an
update_missing conflict. Given that the default resolution for update_missing
conflicts is apply_or_skip (e.g. convert update to insert if possible and apply
the insert), Node A will eventually hold a row (1,2) while Node B becomes
empty, causing data inconsistency.
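The race above can be replayed with a toy Python simulation (a model of the narrative, not the patch's code); whether VACUUM removes the dead tuple before T3 arrives decides whether the nodes diverge:

```python
def final_states(vacuumed_before_t3_arrives):
    """Toy replay of the Node A / Node B example; returns (node_a, node_b)."""
    node_a = {1: 1}                  # T1: INSERT (1,1) on A, replicated to B
    node_b = {1: 1}
    node_b[1] = 2                    # T3 on B: UPDATE ... SET value = 2 (commits first)
    dead_on_a = (1, node_a.pop(1))   # T2 on A: DELETE; tuple (1,1) becomes dead
    node_b.pop(1)                    # T2 replicated to B: delete applies there
    if vacuumed_before_t3_arrives:
        dead_on_a = None             # VACUUM removed the dead tuple prematurely
    # T3 finally arrives on Node A:
    if dead_on_a is None:
        node_a[1] = 2                # update_missing + apply_or_skip -> insert (1,2)
    # else: update_deleted is detected; assume the resolution skips the update
    return node_a, node_b
```

Running both branches shows the divergence: with a premature VACUUM, Node A ends with (1,2) while Node B is empty; with the dead tuple retained, both nodes end up empty.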

Therefore, the strategy needs to be expanded as follows: Node A cannot remove
the dead tuple until:
(a) The DELETE operation is replayed on all remote nodes, *AND*
(b) The transactions on logical standbys occurring before the replay of Node
A's DELETE are replayed on Node A as well.
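As a sketch, the expanded retention rule amounts to a per-remote-node predicate; the flag names here are hypothetical bookkeeping, not actual patch state:

```python
def dead_tuple_removable(remote_nodes):
    """A dead tuple deleted locally may be vacuumed only when, for every
    remote node, both conditions hold:
      (a) the DELETE has been replayed there, and
      (b) that node's transactions committed before it replayed the DELETE
          have been replayed back to us.
    Each entry in remote_nodes is a dict of the two (hypothetical) flags.
    """
    return all(n["delete_replayed"] and n["pre_delete_txns_replayed_back"]
               for n in remote_nodes)
```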

---
THE DESIGN
---

To achieve the above, we plan to allow the logical walsender to maintain and
advance the slot.xmin to protect the data in the user table and introduce a new
logical standby feedback message. This message reports the WAL position that
has been replayed on the logical standby *AND* confirms that the changes
occurring on the logical standby before that WAL position have also been
replayed back to the walsender's node (where the walsender is running). After
receiving the new feedback message, the walsender will advance the slot.xmin
based on the flush info, similar to the advancement of catalog_xmin. Currently,
the effective_xmin/xmin of a logical slot is unused during logical replication,
so I think it is safe and will not cause side effects to reuse the xmin for
this feature.
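A rough model of the intended advancement, assuming the walsender has already translated the feedback flush info into a candidate xmin (all names here are invented): like catalog_xmin, the horizon only ever moves forward.

```python
class LogicalSlot:
    """Minimal stand-in for a replication slot; only the xmin field matters."""
    def __init__(self, xmin=None):
        self.xmin = xmin  # oldest xid whose dead tuples must be preserved

def advance_slot_xmin(slot, candidate_xmin):
    """Advance slot.xmin from a feedback message. A None candidate stands in
    for the 'forget the xmin' message, which stops holding back VACUUM."""
    if candidate_xmin is None:
        slot.xmin = None
    elif slot.xmin is None or candidate_xmin > slot.xmin:
        slot.xmin = candidate_xmin  # the horizon never moves backward
```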

We have introduced a new subscription option (feedback_slots='slot1,...'),
where these slots will be used to check condition (b): the transactions on
logical standbys occurring before the replay of Node A's DELETE are replayed on
Node A as well. Therefore, on Node B, users should specify the slots
corresponding to Node A in this option. The apply worker will get the oldest
confirmed flush LSN among the specified slots and send the LSN as a feedback
message to the walsender. -- I also considered making this automatic, e.g.
letting the apply worker select the slots acquired by the walsenders that
connect to the same remote server (e.g. if the apply worker's connection info
or some other flag matches the walsender's connection info). But that seems
tricky, because if some slots are inactive, meaning their walsenders are not
running, the apply worker could not find the correct slots to check unless we
save the host along with the slot's persistent data.
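The apply worker's computation can be sketched as follows; slot_catalog is a hypothetical dict view of pg_replication_slots, and returning None stands in for sending InvalidXLogRecPtr when a configured slot no longer exists:

```python
def oldest_confirmed_flush_lsn(feedback_slots, slot_catalog):
    """Compute the LSN the apply worker would feed back: the oldest
    confirmed_flush_lsn among the configured feedback slots.

    feedback_slots: list of slot names from the subscription option.
    slot_catalog:   dict mapping slot name -> confirmed_flush_lsn (as int).
    """
    lsns = []
    for name in feedback_slots:
        lsn = slot_catalog.get(name)
        if lsn is None:
            return None  # a slot was dropped: tell the walsender to forget xmin
        lsns.append(lsn)
    return min(lsns) if lsns else None
```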

The new feedback message is sent only if feedback_slots is not NULL. If the
slots in feedback_slots are removed, a final message containing
InvalidXLogRecPtr will be sent to inform the walsender to forget about the
slot.xmin.

To detect update_deleted conflicts during update operations, if the target row
cannot be found, we perform an additional scan of the table using SnapshotAny.
This scan aims to locate the most recently deleted row that matches the old
column values from the remote update operation and has not yet been removed by
VACUUM. If any such tuples are found, we report the update_deleted conflict
along with the origin and transaction information that deleted the tuple.
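A hedged sketch of that fallback scan, with the tuple layout invented for illustration:

```python
def find_update_deleted(dead_tuples, old_key):
    """Among dead tuples not yet removed by VACUUM, find those matching the
    remote update's old key and return the most recently deleted one.
    Returning None corresponds to reporting a plain update_missing conflict;
    a match would be reported as update_deleted along with its origin/xact."""
    matches = [t for t in dead_tuples if t["key"] == old_key]
    if not matches:
        return None
    return max(matches, key=lambda t: t["delete_commit_ts"])
```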

Please refer to the attached POC patch set, which implements the above design.
The patch set is split into several parts to make the initial review easier.
Please note that the patches are interdependent and cannot work independently.

Thanks a lot to Kuroda-San and Amit for the off-list discussion.

Suggestions and comments are highly appreciated !

[1]: /messages/by-id/CAJpy0uCov4JfZJeOvY0O21_gk9bcgNUDp4jf8+BbMp+EAv8cVQ@mail.gmail.com
[2]: /messages/by-id/CAA4eK1Lj-PWrP789KnKxZydisHajd38rSihWXO8MVBLDwxG1Kg@mail.gmail.com
[3]: /messages/by-id/CAJpy0uC6Zs5WwwiyuvG_kEB6Q3wyDWpya7PXm3SMT_YG=XJJ1w@mail.gmail.com

Best Regards,
Hou Zhijie

Attachments:

v21-0001-Maintain-and-Advance-slot.xmin-in-logical-walsen.patch (+129/-8)
v21-0002-Add-a-subscription-option-feedback_slots.patch (+377/-73)
v21-0003-Send-the-slot-flush-feedback-message-via-apply-w.patch (+260/-3)
v21-0004-Support-the-conflict-detection-for-update_delete.patch (+227/-34)
v21-0005-Support-copying-xmin-value-of-slots-during-slots.patch (+45/-12)
v21-0006-Add-a-tap-test-to-verify-the-new-slot-xmin-mecha.patch (+209/-1)
#2 shveta malik
shveta.malik@gmail.com
In reply to: Zhijie Hou (Fujitsu) (#1)
Re: Conflict detection for update_deleted in logical replication

On Thu, Sep 5, 2024 at 5:07 PM Zhijie Hou (Fujitsu)
<houzj.fnst@fujitsu.com> wrote:

> Hi hackers,
>
> I am starting a new thread to discuss and propose the conflict detection for
> update_deleted scenarios during logical replication. This conflict occurs when
> the apply worker cannot find the target tuple to be updated, as the tuple might
> have been removed by another origin.
>
> [...]
>
> Suggestions and comments are highly appreciated !

Thank You Hou-San for explaining the design. But to make it easier to
understand, would you be able to explain the sequence/timeline of the
*new* actions performed by the walsender and the apply processes for
the given example, along with the new feedback_slot config needed?

Node A: (Procs: walsenderA, applyA)
T1: INSERT INTO t (id, value) VALUES (1,1); ts=10.00 AM
T2: DELETE FROM t WHERE id = 1; ts=10.02 AM

Node B: (Procs: walsenderB, applyB)
T3: UPDATE t SET value = 2 WHERE id = 1; ts=10.01 AM

thanks
Shveta

#3 Zhijie Hou (Fujitsu)
houzj.fnst@fujitsu.com
In reply to: shveta malik (#2)
RE: Conflict detection for update_deleted in logical replication

On Tuesday, September 10, 2024 2:45 PM shveta malik <shveta.malik@gmail.com> wrote:

> > ---
> > THE DESIGN
> > ---
> >
> > [...]
>
> Thank You Hou-San for explaining the design. But to make it easier to
> understand, would you be able to explain the sequence/timeline of the
> *new* actions performed by the walsender and the apply processes for the
> given example along with new feedback_slot config needed
>
> Node A: (Procs: walsenderA, applyA)
> T1: INSERT INTO t (id, value) VALUES (1,1); ts=10.00 AM
> T2: DELETE FROM t WHERE id = 1; ts=10.02 AM
>
> Node B: (Procs: walsenderB, applyB)
> T3: UPDATE t SET value = 2 WHERE id = 1; ts=10.01 AM

Thanks for reviewing! Let me elaborate further on the example:

On Node A, feedback_slots should include the logical slot that is used to
replicate changes from Node A to Node B. On Node B, feedback_slots should
include the logical slot that replicates changes from Node B to Node A.

Assume the slot.xmin on Node A has been initialized to a valid number (740) before the
following flow:

Node A executed T1 - 10.00 AM
T1 replicated and applied on Node B - 10.0001 AM
Node B executed T3 - 10.01 AM
Node A executed T2 (741) - 10.02 AM
T2 replicated and applied on Node B (delete_missing) - 10.03 AM
T3 replicated and applied on Node A (new action, detect update_deleted) - 10.04 AM

(new action) The apply worker on Node B has confirmed that T2 has been applied
locally and that the transactions before T2 (e.g., T3) have been replicated and
applied to Node A (e.g., feedback_slot.confirmed_flush_lsn >= LSN of the locally
replayed T2), and thus sends the new feedback message to Node A. - 10.05 AM

(new action) The walsender on Node A receives the message and advances the
slot.xmin. - 10.06 AM

Then, after the slot.xmin is advanced to a number greater than 741, the VACUUM would be able to
remove the dead tuple on Node A.
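The two new actions above can be condensed into a small hedged sketch (names invented; LSNs and xids are plain integers here):

```python
def should_send_feedback(local_replay_lsn_of_t2, feedback_slot_confirmed_flush):
    """Node B's apply worker sends the new feedback once T2 is applied locally
    AND the feedback slot confirms everything before that point (e.g. T3) is
    already applied on Node A."""
    return feedback_slot_confirmed_flush >= local_replay_lsn_of_t2

def xmin_after_feedback(slot_xmin, t2_xid, feedback_received):
    """Once the feedback arrives, Node A's walsender may advance slot.xmin
    past T2's xid (741 in the example), letting VACUUM reclaim the dead
    tuple; until then the old horizon pins it."""
    return max(slot_xmin, t2_xid + 1) if feedback_received else slot_xmin
```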

Best Regards,
Hou zj

#4 shveta malik
shveta.malik@gmail.com
In reply to: Zhijie Hou (Fujitsu) (#3)
Re: Conflict detection for update_deleted in logical replication

On Tue, Sep 10, 2024 at 1:40 PM Zhijie Hou (Fujitsu)
<houzj.fnst@fujitsu.com> wrote:

On Tuesday, September 10, 2024 2:45 PM shveta malik <shveta.malik@gmail.com> wrote:

> > > ---
> > > THE DESIGN
> > > ---
> > >
> > > [...]
> >
> > Thank You Hou-San for explaining the design. But to make it easier to
> > understand, would you be able to explain the sequence/timeline of the
> > *new* actions performed by the walsender and the apply processes for the
> > given example along with new feedback_slot config needed
> >
> > Node A: (Procs: walsenderA, applyA)
> > T1: INSERT INTO t (id, value) VALUES (1,1); ts=10.00 AM
> > T2: DELETE FROM t WHERE id = 1; ts=10.02 AM
> >
> > Node B: (Procs: walsenderB, applyB)
> > T3: UPDATE t SET value = 2 WHERE id = 1; ts=10.01 AM
>
> Thanks for reviewing! Let me elaborate further on the example:
>
> On node A, feedback_slots should include the logical slot that used to
> replicate changes from Node A to Node B. On node B, feedback_slots should
> include the logical slot that replicate changes from Node B to Node A.
>
> Assume the slot.xmin on Node A has been initialized to a valid number (740)
> before the following flow:
>
> Node A executed T1 - 10.00 AM
> T1 replicated and applied on Node B - 10.0001 AM
> Node B executed T3 - 10.01 AM
> Node A executed T2 (741) - 10.02 AM
> T2 replicated and applied on Node B (delete_missing) - 10.03 AM

Not related to this feature, but do you mean delete_origin_differ here?

> T3 replicated and applied on Node A (new action, detect update_deleted) - 10.04 AM
>
> (new action) Apply worker on Node B has confirmed that T2 has been applied
> locally and the transactions before T2 (e.g., T3) has been replicated and
> applied to Node A (e.g. feedback_slot.confirmed_flush_lsn >= lsn of the local
> replayed T2), thus send the new feedback message to Node A. - 10.05 AM
>
> (new action) Walsender on Node A received the message and would advance the
> slot.xmin. - 10.06 AM
>
> Then, after the slot.xmin is advanced to a number greater than 741, the VACUUM
> would be able to remove the dead tuple on Node A.

Thanks for the example. Can you please review below and let me know if
my understanding is correct.

1)
In a bidirectional replication setup, the user has to create slots in
a way that NodeA's sub's slot is Node B's feedback_slot and Node B's
sub's slot is Node A's feedback slot. And then only this feature will
work well, is it correct to say?

2)
Now coming back to multiple feedback_slots in a subscription, is the
below correct:

Say Node A has publications and subscriptions as follows:
------------------
A_pub1

A_sub1 (subscribing to B_pub1 with the default slot_name of A_sub1)
A_sub2 (subscribing to B_pub2 with the default slot_name of A_sub2)
A_sub3 (subscribing to B_pub3 with the default slot_name of A_sub3)

Say Node B has publications and subscriptions as follows:
------------------
B_sub1 (subscribing to A_pub1 with the default slot_name of B_sub1)

B_pub1
B_pub2
B_pub3

Then what will be the feedback_slot configuration for all
subscriptions of A and B? Is below correct:
------------------
A_sub1, A_sub2, A_sub3: feedback_slots=B_sub1
B_sub1: feedback_slots=A_sub1,A_sub2, A_sub3

3)
If the above is true, then do we have a way to make sure that the user
has given this configuration exactly the above way? If users end up
giving feedback_slots as some random slot (say A_slot4 or incomplete
list), do we validate that? (I have not looked at code yet, just
trying to understand design first).

4)
Now coming to this:

The apply worker will get the oldest
confirmed flush LSN among the specified slots and send the LSN as a feedback
message to the walsender.

There will be one apply worker on B which will be due to B_sub1, so
will it check confirmed_lsn of all slots A_sub1, A_sub2, A_sub3? Won't
it be sufficient to check confirmed_lsn of, say, slot A_sub1 alone, which
has subscribed to table 't' on which the delete has been performed? The
rest of the slots (A_sub2, A_sub3) might have subscribed to different
tables?

thanks
Shveta

#5 Zhijie Hou (Fujitsu)
houzj.fnst@fujitsu.com
In reply to: shveta malik (#4)
RE: Conflict detection for update_deleted in logical replication

On Tuesday, September 10, 2024 5:56 PM shveta malik <shveta.malik@gmail.com> wrote:

> On Tue, Sep 10, 2024 at 1:40 PM Zhijie Hou (Fujitsu) <houzj.fnst@fujitsu.com>
> wrote:
>
> > On Tuesday, September 10, 2024 2:45 PM shveta malik
> > <shveta.malik@gmail.com> wrote:
> >
> > > Thank You Hou-San for explaining the design. But to make it easier
> > > to understand, would you be able to explain the sequence/timeline of
> > > the *new* actions performed by the walsender and the apply processes
> > > for the given example along with new feedback_slot config needed
> > >
> > > Node A: (Procs: walsenderA, applyA)
> > > T1: INSERT INTO t (id, value) VALUES (1,1); ts=10.00 AM
> > > T2: DELETE FROM t WHERE id = 1; ts=10.02 AM
> > >
> > > Node B: (Procs: walsenderB, applyB)
> > > T3: UPDATE t SET value = 2 WHERE id = 1; ts=10.01 AM
> >
> > Thanks for reviewing! Let me elaborate further on the example:
> >
> > On node A, feedback_slots should include the logical slot that used to
> > replicate changes from Node A to Node B. On node B, feedback_slots
> > should include the logical slot that replicate changes from Node B to Node A.
> >
> > Assume the slot.xmin on Node A has been initialized to a valid
> > number (740) before the following flow:
> >
> > Node A executed T1 - 10.00 AM
> > T1 replicated and applied on Node B - 10.0001 AM
> > Node B executed T3 - 10.01 AM
> > Node A executed T2 (741) - 10.02 AM
> > T2 replicated and applied on Node B (delete_missing) - 10.03 AM
>
> Not related to this feature, but do you mean delete_origin_differ here?

Oh sorry, that was a mistake. I meant delete_origin_differ.

> > T3 replicated and applied on Node A (new action, detect
> > update_deleted) - 10.04 AM
> >
> > (new action) Apply worker on Node B has confirmed that T2 has been
> > applied locally and the transactions before T2 (e.g., T3) has been
> > replicated and applied to Node A (e.g. feedback_slot.confirmed_flush_lsn
> > >= lsn of the local replayed T2), thus send the new feedback message
> > to Node A. - 10.05 AM
> >
> > (new action) Walsender on Node A received the message and would
> > advance the slot.xmin. - 10.06 AM
> >
> > Then, after the slot.xmin is advanced to a number greater than 741,
> > the VACUUM would be able to remove the dead tuple on Node A.
>
> Thanks for the example. Can you please review below and let me know if my
> understanding is correct.
>
> 1)
> In a bidirectional replication setup, the user has to create slots in a way that
> NodeA's sub's slot is Node B's feedback_slot and Node B's sub's slot is Node
> A's feedback slot. And then only this feature will work well, is it correct to say?

Yes, your understanding is correct.

> 2)
> Now coming back to multiple feedback_slots in a subscription, is the below
> correct:
>
> Say Node A has publications and subscriptions as follows:
> ------------------
> A_pub1
>
> A_sub1 (subscribing to B_pub1 with the default slot_name of A_sub1)
> A_sub2 (subscribing to B_pub2 with the default slot_name of A_sub2)
> A_sub3 (subscribing to B_pub3 with the default slot_name of A_sub3)
>
> Say Node B has publications and subscriptions as follows:
> ------------------
> B_sub1 (subscribing to A_pub1 with the default slot_name of B_sub1)
>
> B_pub1
> B_pub2
> B_pub3
>
> Then what will be the feedback_slot configuration for all subscriptions of A and
> B? Is below correct:
> ------------------
> A_sub1, A_sub2, A_sub3: feedback_slots=B_sub1
> B_sub1: feedback_slots=A_sub1,A_sub2, A_sub3

Right. The above configurations are correct.

> 3)
> If the above is true, then do we have a way to make sure that the user has
> given this configuration exactly the above way? If users end up giving
> feedback_slots as some random slot (say A_slot4 or incomplete list), do we
> validate that? (I have not looked at code yet, just trying to understand design
> first).

The patch doesn't validate whether the feedback slots belong to the correct
subscriptions on the remote server. It only validates that each slot is an
existing, valid, logical slot. I think there are a few challenges to validating
it further. E.g., we need a way to identify which server a slot is replicating
changes to, which could be tricky as the slot currently doesn't carry any info
to identify the remote server. Besides, the slot could be temporarily inactive
due to some subscriber-side error, in which case we cannot verify the
subscription that uses it.

> 4)
> Now coming to this:
>
> > The apply worker will get the oldest
> > confirmed flush LSN among the specified slots and send the LSN as a
> > feedback message to the walsender.
>
> There will be one apply worker on B which will be due to B_sub1, so will it
> check confirmed_lsn of all slots A_sub1, A_sub2, A_sub3? Won't it be
> sufficient to check confirmed_lsn of, say, slot A_sub1 alone, which has
> subscribed to table 't' on which the delete has been performed? The rest of
> the slots (A_sub2, A_sub3) might have subscribed to different tables?

I think it's theoretically correct to only check A_sub1. We could document
that users can do this by identifying the tables that each subscription
replicates, but it may not be user friendly.

Best Regards,
Hou zj

#6 Amit Kapila
amit.kapila16@gmail.com
In reply to: Zhijie Hou (Fujitsu) (#1)
Re: Conflict detection for update_deleted in logical replication

On Thu, Sep 5, 2024 at 5:07 PM Zhijie Hou (Fujitsu)
<houzj.fnst@fujitsu.com> wrote:

---
ISSUES and SOLUTION
---

To detect update_deleted conflicts, we need to search for dead tuples in the
table. However, dead tuples can be removed by VACUUM at any time. Therefore, to
ensure consistent and accurate conflict detection, tuples deleted by other
origins must not be removed by VACUUM before the conflict detection process. If
the tuples are removed prematurely, it might lead to incorrect conflict
identification and resolution, causing data divergence between nodes.

Here is an example of how VACUUM could affect conflict detection and how to
prevent this issue. Assume we have a bidirectional cluster with two nodes, A
and B.

Node A:
T1: INSERT INTO t (id, value) VALUES (1,1);
T2: DELETE FROM t WHERE id = 1;

Node B:
T3: UPDATE t SET value = 2 WHERE id = 1;

To retain the deleted tuples, the initial idea was that once transaction T2 had
been applied to both nodes, there was no longer a need to preserve the dead
tuple on Node A. However, a scenario arises where transactions T3 and T2 occur
concurrently, with T3 committing slightly earlier than T2. In this case, if
Node B applies T2 and Node A removes the dead tuple (1,1) via VACUUM, and then
Node A applies T3 after the VACUUM operation, it can only result in an
update_missing conflict. Given that the default resolution for update_missing
conflicts is apply_or_skip (e.g. convert update to insert if possible and apply
the insert), Node A will eventually hold a row (1,2) while Node B becomes
empty, causing data inconsistency.

Therefore, the strategy needs to be expanded as follows: Node A cannot remove
the dead tuple until:
(a) The DELETE operation is replayed on all remote nodes, *AND*
(b) The transactions on logical standbys occurring before the replay of Node
A's DELETE are replayed on Node A as well.

---
THE DESIGN
---

To achieve the above, we plan to allow the logical walsender to maintain and
advance the slot.xmin to protect the data in the user table and introduce a new
logical standby feedback message. This message reports the WAL position that
has been replayed on the logical standby *AND* the changes occurring on the
logical standby before the WAL position are also replayed to the walsender's
node (where the walsender is running). After receiving the new feedback
message, the walsender will advance the slot.xmin based on the flush info,
similar to the advancement of catalog_xmin. Currently, the effective_xmin/xmin
of a logical slot is unused during logical replication, so I think it's safe and
won't cause side effects to reuse the xmin for this feature.

We have introduced a new subscription option (feedback_slots='slot1,...'),
where these slots will be used to check condition (b): the transactions on
logical standbys occurring before the replay of Node A's DELETE are replayed on
Node A as well. Therefore, on Node B, users should specify the slots
corresponding to Node A in this option. The apply worker will get the oldest
confirmed flush LSN among the specified slots and send the LSN as a feedback
message to the walsender. -- I also thought of making it automatic, e.g.
letting the apply worker select the slots acquired by the walsenders that
connect to the same remote server (e.g. if the apply worker's connection info
or some other flag is the same as the walsender's connection info). But it
seems tricky because if some slots are inactive, which means the walsenders
are not there, the apply worker could not find the correct slots to check
unless we save the host along with the slot's persistent data.

The new feedback message is sent only if feedback_slots is not NULL.

Don't you need to deal with versioning stuff for sending this new
message? I mean what if the receiver side of this message is old and
doesn't support this new message.

One minor comment on 0003
=======================
1.
get_slot_confirmed_flush()
{
...
+ /*
+ * To prevent concurrent slot dropping and creation while filtering the
+ * slots, take the ReplicationSlotControlLock outside of the loop.
+ */
+ LWLockAcquire(ReplicationSlotControlLock, LW_SHARED);
+
+ foreach_ptr(String, name, MySubscription->feedback_slots)
+ {
+ XLogRecPtr confirmed_flush;
+ ReplicationSlot *slot;
+
+ slot = ValidateAndGetFeedbackSlot(strVal(name));

Why do we need to validate slots each time here? Isn't it better to do it once?

--
With Regards,
Amit Kapila.

#7Zhijie Hou (Fujitsu)
houzj.fnst@fujitsu.com
In reply to: Amit Kapila (#6)
RE: Conflict detection for update_deleted in logical replication

On Tuesday, September 10, 2024 7:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Thu, Sep 5, 2024 at 5:07 PM Zhijie Hou (Fujitsu) <houzj.fnst@fujitsu.com>
wrote:

---
ISSUES and SOLUTION
---

To detect update_deleted conflicts, we need to search for dead tuples
in the table. However, dead tuples can be removed by VACUUM at any
time. Therefore, to ensure consistent and accurate conflict detection,
tuples deleted by other origins must not be removed by VACUUM before
the conflict detection process. If the tuples are removed prematurely,
it might lead to incorrect conflict identification and resolution, causing data
divergence between nodes.

Here is an example of how VACUUM could affect conflict detection and
how to prevent this issue. Assume we have a bidirectional cluster with
two nodes, A and B.

Node A:
T1: INSERT INTO t (id, value) VALUES (1,1);
T2: DELETE FROM t WHERE id = 1;

Node B:
T3: UPDATE t SET value = 2 WHERE id = 1;

To retain the deleted tuples, the initial idea was that once
transaction T2 had been applied to both nodes, there was no longer a
need to preserve the dead tuple on Node A. However, a scenario arises
where transactions T3 and T2 occur concurrently, with T3 committing
slightly earlier than T2. In this case, if Node B applies T2 and Node
A removes the dead tuple (1,1) via VACUUM, and then Node A applies T3
after the VACUUM operation, it can only result in an update_missing
conflict. Given that the default resolution for update_missing
conflicts is apply_or_skip (e.g. convert update to insert if possible
and apply the insert), Node A will eventually hold a row (1,2) while Node B
becomes empty, causing data inconsistency.

Therefore, the strategy needs to be expanded as follows: Node A cannot
remove the dead tuple until:
(a) The DELETE operation is replayed on all remote nodes, *AND*
(b) The transactions on logical standbys occurring before the replay
of Node A's DELETE are replayed on Node A as well.

---
THE DESIGN
---

To achieve the above, we plan to allow the logical walsender to
maintain and advance the slot.xmin to protect the data in the user
table and introduce a new logical standby feedback message. This
message reports the WAL position that has been replayed on the logical
standby *AND* the changes occurring on the logical standby before the
WAL position are also replayed to the walsender's node (where the
walsender is running). After receiving the new feedback message, the
walsender will advance the slot.xmin based on the flush info, similar
to the advancement of catalog_xmin. Currently, the effective_xmin/xmin
of a logical slot is unused during logical replication, so I think it's safe and
won't cause side effects to reuse the xmin for this feature.

We have introduced a new subscription option
(feedback_slots='slot1,...'), where these slots will be used to check
condition (b): the transactions on logical standbys occurring before
the replay of Node A's DELETE are replayed on Node A as well.
Therefore, on Node B, users should specify the slots corresponding to
Node A in this option. The apply worker will get the oldest confirmed
flush LSN among the specified slots and send the LSN as a feedback
message to the walsender. -- I also thought of making it automatic, e.g.
letting the apply worker select the slots acquired by the walsenders
which connect to the same remote server (e.g. if the apply worker's
connection info or some other flag is the same as the walsender's
connection info). But it seems tricky because if some slots are
inactive, which means the walsenders are not there, the apply worker
could not find the correct slots to check unless we save the host along with
the slot's persistent data.

The new feedback message is sent only if feedback_slots is not NULL.

Don't you need to deal with versioning stuff for sending this new message? I
mean what if the receiver side of this message is old and doesn't support this
new message.

Yes, I think we can avoid sending the new message if the remote server version
doesn't support handling this message (e.g. server_version < 18). Will address
this in the next version.
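As a rough sketch of that gating (the function name and the version cutoff of 180000 are assumptions for illustration, not the actual patch code):

```python
# Hypothetical sketch of the version gate discussed above. The apply
# worker would send the new feedback message only when feedback_slots
# is configured and the publisher's server version is new enough to
# understand the message (assumed cutoff: PG 18, i.e. 180000).

FEEDBACK_MIN_SERVER_VERSION = 180000  # assumed cutoff


def should_send_feedback_slots_message(server_version_num, feedback_slots):
    """Decide whether the new feedback message may be sent."""
    if not feedback_slots:
        return False  # option not set: never send the new message
    return server_version_num >= FEEDBACK_MIN_SERVER_VERSION
```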

One minor comment on 0003
=======================
1.
get_slot_confirmed_flush()
{
...
+ /*
+ * To prevent concurrent slot dropping and creation while filtering the
+ * slots, take the ReplicationSlotControlLock outside of the loop.
+ */
+ LWLockAcquire(ReplicationSlotControlLock, LW_SHARED);
+
+ foreach_ptr(String, name, MySubscription->feedback_slots) { XLogRecPtr
+ confirmed_flush; ReplicationSlot *slot;
+
+ slot = ValidateAndGetFeedbackSlot(strVal(name));

Why do we need to validate slots each time here? Isn't it better to do it once?

I think it's possible that the slot was correct but was changed or dropped later,
so it could be useful to give a warning in this case to hint the user to adjust
the slots; otherwise, the xmin of the publisher's slot won't be advanced and might
cause dead tuple accumulation. This is similar to the checks we perform for
the slots in "synchronized_standby_slots" (e.g. StandbySlotsHaveCaughtup).

Best Regards,
Hou zj

#8shveta malik
shveta.malik@gmail.com
In reply to: Zhijie Hou (Fujitsu) (#5)
Re: Conflict detection for update_deleted in logical replication

On Tue, Sep 10, 2024 at 4:30 PM Zhijie Hou (Fujitsu)
<houzj.fnst@fujitsu.com> wrote:

On Tuesday, September 10, 2024 5:56 PM shveta malik <shveta.malik@gmail.com> wrote:

On Tue, Sep 10, 2024 at 1:40 PM Zhijie Hou (Fujitsu) <houzj.fnst@fujitsu.com>
wrote:

On Tuesday, September 10, 2024 2:45 PM shveta malik

<shveta.malik@gmail.com> wrote:

Thank You Hou-San for explaining the design. But to make it easier
to understand, would you be able to explain the sequence/timeline of
the
*new* actions performed by the walsender and the apply processes for
the given example along with new feedback_slot config needed

Node A: (Procs: walsenderA, applyA)
T1: INSERT INTO t (id, value) VALUES (1,1); ts=10.00 AM
T2: DELETE FROM t WHERE id = 1; ts=10.02 AM

Node B: (Procs: walsenderB, applyB)
T3: UPDATE t SET value = 2 WHERE id = 1; ts=10.01 AM

Thanks for reviewing! Let me elaborate further on the example:

On node A, feedback_slots should include the logical slot that is used to
replicate changes from Node A to Node B. On node B, feedback_slots
should include the logical slot that replicates changes from Node B to Node A.

Assume the slot.xmin on Node A has been initialized to a valid
number (740) before the following flow:

Node A executed T1 - 10.00 AM
T1 replicated and applied on Node B - 10.0001 AM
Node B executed T3 - 10.01 AM
Node A executed T2 (741) - 10.02 AM
T2 replicated and applied on Node B (delete_missing) - 10.03 AM

Not related to this feature, but do you mean delete_origin_differ here?

Oh sorry, that was a miss. I meant delete_origin_differ.

T3 replicated and applied on Node A (new action, detect update_deleted) - 10.04 AM

(new action) Apply worker on Node B has confirmed that T2 has been
applied locally and the transactions before T2 (e.g., T3) have been
replicated and applied to Node A (e.g. feedback_slot.confirmed_flush_lsn
= lsn of the locally replayed T2), thus sends the new feedback message
to Node A. - 10.05 AM

(new action) Walsender on Node A receives the message and advances
the slot.xmin. - 10.06 AM

Then, after the slot.xmin is advanced to a number greater than 741,
the VACUUM would be able to remove the dead tuple on Node A.
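The flow above can be modeled with a small toy sketch (not PostgreSQL code; xids and the feedback payload are simplified here, since the real message carries an LSN rather than an xid):

```python
# Toy model of the timeline above: Node A's slot.xmin stays at 740
# until feedback confirms that the DELETE (xid 741) has been applied
# on Node B and everything before it has been replayed back to Node A;
# only then may VACUUM remove the dead tuple.

class NodeASlot:
    def __init__(self, xmin):
        self.xmin = xmin

    def on_feedback(self, confirmed_xid):
        # Advance xmin past the confirmed transaction, never backwards.
        self.xmin = max(self.xmin, confirmed_xid + 1)

    def vacuum_can_remove(self, dead_tuple_xid):
        # VACUUM may only remove tuples deleted by xids below xmin.
        return dead_tuple_xid < self.xmin


slot = NodeASlot(xmin=740)
assert not slot.vacuum_can_remove(741)  # before feedback: tuple retained
slot.on_feedback(confirmed_xid=741)     # feedback for T2 arrives
assert slot.vacuum_can_remove(741)      # now VACUUM may remove (1,1)
```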

Thanks for the example. Can you please review below and let me know if my
understanding is correct.

1)
In a bidirectional replication setup, the user has to create slots in a way that
NodeA's sub's slot is Node B's feedback_slot and Node B's sub's slot is Node
A's feedback slot. And then only this feature will work well, is it correct to say?

Yes, your understanding is correct.

2)
Now coming back to multiple feedback_slots in a subscription, is the below
correct:

Say Node A has publications and subscriptions as follow:
------------------
A_pub1

A_sub1 (subscribing to B_pub1 with the default slot_name of A_sub1)
A_sub2 (subscribing to B_pub2 with the default slot_name of A_sub2)
A_sub3 (subscribing to B_pub3 with the default slot_name of A_sub3)

Say Node B has publications and subscriptions as follow:
------------------
B_sub1 (subscribing to A_pub1 with the default slot_name of B_sub1)

B_pub1
B_pub2
B_pub3

Then what will be the feedback_slot configuration for all subscriptions of A and
B? Is below correct:
------------------
A_sub1, A_sub2, A_sub3: feedback_slots=B_sub1
B_sub1: feedback_slots=A_sub1,A_sub2, A_sub3

Right. The above configurations are correct.

Okay. It seems difficult to understand configuration from user's perspective.

3)
If the above is true, then do we have a way to make sure that the user has
given this configuration exactly the above way? If users end up giving
feedback_slots as some random slot (say A_slot4 or incomplete list), do we
validate that? (I have not looked at code yet, just trying to understand design
first).

The patch doesn't validate if the feedback slots belong to the correct
subscriptions on the remote server. It only validates if the slot is an existing,
valid, logical slot. I think there are a few challenges to validating it further.
E.g. we need a way to identify which server the slot is replicating
changes to, which could be tricky as the slot currently doesn't have any info
to identify the remote server. Besides, the slot could be temporarily inactive
due to some subscriber-side error, in which case we cannot verify the
subscription that used it.

Okay, I understand the challenges here.

4)
Now coming to this:

The apply worker will get the oldest
confirmed flush LSN among the specified slots and send the LSN as a
feedback message to the walsender.

There will be one apply worker on B which will be due to B_sub1, so will it
check confirmed_lsn of all slots A_sub1, A_sub2, A_sub3? Won't it be
sufficient to check confirmed_lsn of, say, slot A_sub1 alone, which has
subscribed to table 't' on which the delete has been performed? The rest of the
slots (A_sub2, A_sub3) might have subscribed to different tables?

I think it's theoretically correct to only check A_sub1. We could document
that users can do this by identifying the tables that each subscription
replicates, but it may not be user friendly.

Sorry, I fail to understand how users can identify the tables and give
feedback_slots accordingly? I thought feedback_slots is a one-time
configuration when replication is set up (or, say, when the setup changes in
future); it cannot keep changing with each query. Or am I missing
something?

IMO, it is something which should be identified internally. Since the
query is on table 't1', the feedback slot which is for 't1' shall be used
to check the lsn. But on rethinking, this optimization may not be worth the
effort; the identification part could be tricky, so it might be okay
to check all the slots.
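For reference, the "oldest confirmed flush LSN" computation under discussion can be sketched as follows (slot names and integer LSNs are illustrative; this is not the patch's C code):

```python
# Sketch of the apply worker's computation over feedback_slots: take
# the minimum confirmed_flush LSN across all configured slots, and
# bail out if any named slot is missing (feedback cannot be sent
# safely in that case).

def oldest_confirmed_flush(slots, feedback_slot_names):
    """slots maps slot name -> confirmed_flush LSN."""
    lsns = []
    for name in feedback_slot_names:
        if name not in slots:
            return None  # slot dropped or renamed: warn, skip feedback
        lsns.append(slots[name])
    return min(lsns) if lsns else None


slots = {"A_sub1": 1000, "A_sub2": 900, "A_sub3": 1100}
assert oldest_confirmed_flush(slots, ["A_sub1", "A_sub2", "A_sub3"]) == 900
assert oldest_confirmed_flush(slots, ["A_sub1", "A_sub4"]) is None
```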

~~

Another query is about 3 node setup. I couldn't figure out what would
be feedback_slots setting when it is not bidirectional, as in consider
the case where there are three nodes A,B,C. Node C is subscribing to
both Node A and Node B. Node A and Node B are the ones doing
concurrent "update" and "delete" which will both be replicated to Node
C. In this case what will be the feedback_slots setting on Node C? We
don't have any slots here which will be replicating changes from Node
C to Node A and Node C to Node B. This is given in [3] in your first
email ([1])

[1]: /messages/by-id/OS0PR01MB5716BE80DAEB0EE2A6A5D1F5949D2@OS0PR01MB5716.jpnprd01.prod.outlook.com

thanks
Shveta

#9Zhijie Hou (Fujitsu)
houzj.fnst@fujitsu.com
In reply to: shveta malik (#8)
RE: Conflict detection for update_deleted in logical replication

On Wednesday, September 11, 2024 12:18 PM shveta malik <shveta.malik@gmail.com> wrote:

On Tue, Sep 10, 2024 at 4:30 PM Zhijie Hou (Fujitsu) <houzj.fnst@fujitsu.com>
wrote:

On Tuesday, September 10, 2024 5:56 PM shveta malik

<shveta.malik@gmail.com> wrote:

Thanks for the example. Can you please review below and let me know
if my understanding is correct.

1)
In a bidirectional replication setup, the user has to create slots
in a way that NodeA's sub's slot is Node B's feedback_slot and Node
B's sub's slot is Node A's feedback slot. And then only this feature will

work well, is it correct to say?

Yes, your understanding is correct.

2)
Now coming back to multiple feedback_slots in a subscription, is the
below
correct:

Say Node A has publications and subscriptions as follow:
------------------
A_pub1

A_sub1 (subscribing to B_pub1 with the default slot_name of A_sub1)
A_sub2 (subscribing to B_pub2 with the default slot_name of A_sub2)
A_sub3 (subscribing to B_pub3 with the default slot_name of A_sub3)

Say Node B has publications and subscriptions as follow:
------------------
B_sub1 (subscribing to A_pub1 with the default slot_name of B_sub1)

B_pub1
B_pub2
B_pub3

Then what will be the feedback_slot configuration for all
subscriptions of A and B? Is below correct:
------------------
A_sub1, A_sub2, A_sub3: feedback_slots=B_sub1
B_sub1: feedback_slots=A_sub1,A_sub2, A_sub3

Right. The above configurations are correct.

Okay. It seems difficult to understand configuration from user's perspective.

Right. I think we could give an example in the document to make it clear.

3)
If the above is true, then do we have a way to make sure that the
user has given this configuration exactly the above way? If users
end up giving feedback_slots as some random slot (say A_slot4 or
incomplete list), do we validate that? (I have not looked at code
yet, just trying to understand design first).

The patch doesn't validate if the feedback slots belong to the correct
subscriptions on the remote server. It only validates if the slot is an
existing, valid, logical slot. I think there are a few challenges to
validating it further. E.g. we need a way to identify which server the slot is
replicating changes to, which could be tricky as the slot currently
doesn't have any info to identify the remote server. Besides, the slot
could be temporarily inactive due to some subscriber-side error, in
which case we cannot verify the subscription that used it.

Okay, I understand the challenges here.

4)
Now coming to this:

The apply worker will get the oldest confirmed flush LSN among the
specified slots and send the LSN as a feedback message to the
walsender.

There will be one apply worker on B which will be due to B_sub1, so
will it check confirmed_lsn of all slots A_sub1, A_sub2, A_sub3?
Won't it be sufficient to check confirmed_lsn of, say, slot A_sub1
alone, which has subscribed to table 't' on which the delete has been
performed? The rest of the slots (A_sub2, A_sub3) might have subscribed to
different tables?

I think it's theoretically correct to only check the A_sub1. We could
document that user can do this by identifying the tables that each
subscription replicates, but it may not be user friendly.

Sorry, I fail to understand how users can identify the tables and give
feedback_slots accordingly? I thought feedback_slots is a one-time
configuration when replication is set up (or, say, when the setup changes in
future); it cannot keep changing with each query. Or am I missing something?

I meant that users have all the publication information (including the tables
added in a publication) that the subscription subscribes to, and could also
have the slot_name, so I think it's possible to identify the tables that each
subscription includes and add the feedback_slots correspondingly before
starting the replication. It would be pretty complicated, although possible, so I
prefer not to mention it in the first place if it would not bring much
benefit.

IMO, it is something which should be identified internally. Since the query is on
table 't1', the feedback slot which is for 't1' shall be used to check the lsn. But on
rethinking, this optimization may not be worth the effort; the identification part
could be tricky, so it might be okay to check all the slots.

I agree that identifying these internally would add complexity.

~~

Another query is about 3 node setup. I couldn't figure out what would be
feedback_slots setting when it is not bidirectional, as in consider the case
where there are three nodes A,B,C. Node C is subscribing to both Node A and
Node B. Node A and Node B are the ones doing concurrent "update" and
"delete" which will both be replicated to Node C. In this case what will be the
feedback_slots setting on Node C? We don't have any slots here which will be
replicating changes from Node C to Node A and Node C to Node B. This is given
in [3] in your first email ([1])

Thanks for pointing this out; the link was a bit misleading. I think the solution
proposed in this thread is only used to allow detecting update_deleted reliably
in a bidirectional cluster. For non-bidirectional cases, it would be more
tricky to predict the timing till when we should retain the dead tuples.

[1]: /messages/by-id/OS0PR01MB5716BE80DAEB0EE2A6A5D1F5949D2%40OS0PR01MB5716.jpnprd01.prod.outlook.com

Best Regards,
Hou zj

#10shveta malik
shveta.malik@gmail.com
In reply to: Zhijie Hou (Fujitsu) (#9)
Re: Conflict detection for update_deleted in logical replication

On Wed, Sep 11, 2024 at 10:15 AM Zhijie Hou (Fujitsu)
<houzj.fnst@fujitsu.com> wrote:

On Wednesday, September 11, 2024 12:18 PM shveta malik <shveta.malik@gmail.com> wrote:

On Tue, Sep 10, 2024 at 4:30 PM Zhijie Hou (Fujitsu) <houzj.fnst@fujitsu.com>
wrote:

On Tuesday, September 10, 2024 5:56 PM shveta malik

<shveta.malik@gmail.com> wrote:

Thanks for the example. Can you please review below and let me know
if my understanding is correct.

1)
In a bidirectional replication setup, the user has to create slots
in a way that NodeA's sub's slot is Node B's feedback_slot and Node
B's sub's slot is Node A's feedback slot. And then only this feature will

work well, is it correct to say?

Yes, your understanding is correct.

2)
Now coming back to multiple feedback_slots in a subscription, is the
below
correct:

Say Node A has publications and subscriptions as follow:
------------------
A_pub1

A_sub1 (subscribing to B_pub1 with the default slot_name of A_sub1)
A_sub2 (subscribing to B_pub2 with the default slot_name of A_sub2)
A_sub3 (subscribing to B_pub3 with the default slot_name of A_sub3)

Say Node B has publications and subscriptions as follow:
------------------
B_sub1 (subscribing to A_pub1 with the default slot_name of B_sub1)

B_pub1
B_pub2
B_pub3

Then what will be the feedback_slot configuration for all
subscriptions of A and B? Is below correct:
------------------
A_sub1, A_sub2, A_sub3: feedback_slots=B_sub1
B_sub1: feedback_slots=A_sub1,A_sub2, A_sub3

Right. The above configurations are correct.

Okay. It seems difficult to understand configuration from user's perspective.

Right. I think we could give an example in the document to make it clear.

3)
If the above is true, then do we have a way to make sure that the
user has given this configuration exactly the above way? If users
end up giving feedback_slots as some random slot (say A_slot4 or
incomplete list), do we validate that? (I have not looked at code
yet, just trying to understand design first).

The patch doesn't validate if the feedback slots belong to the correct
subscriptions on the remote server. It only validates if the slot is an
existing, valid, logical slot. I think there are a few challenges to
validating it further. E.g. we need a way to identify which server the slot is
replicating changes to, which could be tricky as the slot currently
doesn't have any info to identify the remote server. Besides, the slot
could be temporarily inactive due to some subscriber-side error, in
which case we cannot verify the subscription that used it.

Okay, I understand the challenges here.

4)
Now coming to this:

The apply worker will get the oldest confirmed flush LSN among the
specified slots and send the LSN as a feedback message to the
walsender.

There will be one apply worker on B which will be due to B_sub1, so
will it check confirmed_lsn of all slots A_sub1, A_sub2, A_sub3?
Won't it be sufficient to check confirmed_lsn of, say, slot A_sub1
alone, which has subscribed to table 't' on which the delete has been
performed? The rest of the slots (A_sub2, A_sub3) might have subscribed to
different tables?

I think it's theoretically correct to only check the A_sub1. We could
document that user can do this by identifying the tables that each
subscription replicates, but it may not be user friendly.

Sorry, I fail to understand how users can identify the tables and give
feedback_slots accordingly? I thought feedback_slots is a one-time
configuration when replication is set up (or, say, when the setup changes in
future); it cannot keep changing with each query. Or am I missing something?

I meant that users have all the publication information (including the tables
added in a publication) that the subscription subscribes to, and could also
have the slot_name, so I think it's possible to identify the tables that each
subscription includes and add the feedback_slots correspondingly before
starting the replication. It would be pretty complicated, although possible, so I
prefer not to mention it in the first place if it would not bring much
benefit.

IMO, it is something which should be identified internally. Since the query is on
table 't1', the feedback slot which is for 't1' shall be used to check the lsn. But on
rethinking, this optimization may not be worth the effort; the identification part
could be tricky, so it might be okay to check all the slots.

I agree that identifying these internally would add complexity.

~~

Another query is about 3 node setup. I couldn't figure out what would be
feedback_slots setting when it is not bidirectional, as in consider the case
where there are three nodes A,B,C. Node C is subscribing to both Node A and
Node B. Node A and Node B are the ones doing concurrent "update" and
"delete" which will both be replicated to Node C. In this case what will be the
feedback_slots setting on Node C? We don't have any slots here which will be
replicating changes from Node C to Node A and Node C to Node B. This is given
in [3] in your first email ([1])

Thanks for pointing this out; the link was a bit misleading. I think the solution
proposed in this thread is only used to allow detecting update_deleted reliably
in a bidirectional cluster. For non-bidirectional cases, it would be more
tricky to predict the timing till when we should retain the dead tuples.

So in brief, this solution is only for a bidirectional setup? For
non-bidirectional setups, feedback_slots is non-configurable and thus
irrelevant.

Irrespective of the above, if a user ends up setting feedback_slots to some
random but existing slot which is not consuming changes at all, then
it may so happen that the node will never send a feedback message to the other
node, resulting in accumulation of dead tuples on that node. Is that
a possibility?

thanks
Shveta

#11Zhijie Hou (Fujitsu)
houzj.fnst@fujitsu.com
In reply to: shveta malik (#10)
RE: Conflict detection for update_deleted in logical replication

On Wednesday, September 11, 2024 1:03 PM shveta malik <shveta.malik@gmail.com> wrote:

On Wed, Sep 11, 2024 at 10:15 AM Zhijie Hou (Fujitsu)
<houzj.fnst@fujitsu.com> wrote:

On Wednesday, September 11, 2024 12:18 PM shveta malik

<shveta.malik@gmail.com> wrote:

~~

Another query is about 3 node setup. I couldn't figure out what
would be feedback_slots setting when it is not bidirectional, as in
consider the case where there are three nodes A,B,C. Node C is
subscribing to both Node A and Node B. Node A and Node B are the
ones doing concurrent "update" and "delete" which will both be
replicated to Node C. In this case what will be the feedback_slots
setting on Node C? We don't have any slots here which will be
replicating changes from Node C to Node A and Node C to Node B. This
is given in [3] in your first email ([1])

Thanks for pointing this out; the link was a bit misleading. I think the
solution proposed in this thread is only used to allow detecting
update_deleted reliably in a bidirectional cluster. For non-bidirectional
cases, it would be more tricky to predict the timing till when we should
retain the dead tuples.

So in brief, this solution is only for a bidirectional setup? For non-bidirectional
setups, feedback_slots is non-configurable and thus irrelevant.

Right.

Irrespective of the above, if a user ends up setting feedback_slots to some random but
existing slot which is not consuming changes at all, then it may so happen that
the node will never send a feedback message to the other node, resulting in
accumulation of dead tuples on that node. Is that a possibility?

Yes, it's possible. I think this is a common situation for this kind of
user-specified option. For example, user DML will be blocked if any inactive
standby names are added to synchronous_standby_names.

Best Regards,
Hou zj

#12Amit Kapila
amit.kapila16@gmail.com
In reply to: Zhijie Hou (Fujitsu) (#7)
Re: Conflict detection for update_deleted in logical replication

On Wed, Sep 11, 2024 at 8:32 AM Zhijie Hou (Fujitsu)
<houzj.fnst@fujitsu.com> wrote:

On Tuesday, September 10, 2024 7:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

One minor comment on 0003
=======================
1.
get_slot_confirmed_flush()
{
...
+ /*
+ * To prevent concurrent slot dropping and creation while filtering the
+ * slots, take the ReplicationSlotControlLock outside of the loop.
+ */
+ LWLockAcquire(ReplicationSlotControlLock, LW_SHARED);
+
+ foreach_ptr(String, name, MySubscription->feedback_slots) { XLogRecPtr
+ confirmed_flush; ReplicationSlot *slot;
+
+ slot = ValidateAndGetFeedbackSlot(strVal(name));

Why do we need to validate slots each time here? Isn't it better to do it once?

I think it's possible that the slot was correct but changed or dropped later,
so it could be useful to give a warning in this case to hint user to adjust the
slots, otherwise, the xmin of the publisher's slot won't be advanced and might
cause dead tuples accumulation. This is similar to the checks we performed for
the slots in "synchronized_standby_slots". (E.g. StandbySlotsHaveCaughtup)

In the case of "synchronized_standby_slots", we seem to be invoking
such checks via StandbySlotsHaveCaughtup() when we need to wait for
WAL, and we also have some optimizations that avoid frequent
validation checks. OTOH, this patch doesn't have any such
optimizations. We can optimize it by maintaining a local copy of the
feedback slots to avoid looping over all the slots each time (if this is
required, we can make it a top-up patch so that it can be reviewed
separately). I have also thought of maintaining the updated value of
confirmed_flush_lsn for the feedback slots corresponding to a subscription
in shared memory, but that seems tricky because then we have to
maintain a slot->subscription mapping. Can you think of any other ways?
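The "local copy" idea above could look roughly like this (illustrative Python, not the patch; a real version must still notice slots dropped after being cached):

```python
# Sketch of caching validated feedback slots so the expensive
# validation runs once per slot name instead of on every cycle.

class FeedbackSlotCache:
    def __init__(self, validate_fn):
        self._validate = validate_fn  # expensive check (lock + lookup)
        self._validated = set()

    def ensure_valid(self, name):
        if name in self._validated:
            return True               # cached: skip revalidation
        if self._validate(name):
            self._validated.add(name)
            return True
        return False


calls = []

def validate(name):
    calls.append(name)
    return name.startswith("A_sub")

cache = FeedbackSlotCache(validate)
assert cache.ensure_valid("A_sub1")
assert cache.ensure_valid("A_sub1")  # second call served from cache
assert calls == ["A_sub1"]           # underlying check ran only once
```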

Having said that, it is better to profile this in various scenarios,
e.g. by increasing the frequency of keepalive messages and/or in idle
subscriber cases where we try to send this new feedback message.

--
With Regards,
Amit Kapila.

#13Amit Kapila
amit.kapila16@gmail.com
In reply to: Zhijie Hou (Fujitsu) (#11)
Re: Conflict detection for update_deleted in logical replication

On Wed, Sep 11, 2024 at 11:07 AM Zhijie Hou (Fujitsu)
<houzj.fnst@fujitsu.com> wrote:

On Wednesday, September 11, 2024 1:03 PM shveta malik <shveta.malik@gmail.com> wrote:

Another query is about 3 node setup. I couldn't figure out what
would be feedback_slots setting when it is not bidirectional, as in
consider the case where there are three nodes A,B,C. Node C is
subscribing to both Node A and Node B. Node A and Node B are the
ones doing concurrent "update" and "delete" which will both be
replicated to Node C. In this case what will be the feedback_slots
setting on Node C? We don't have any slots here which will be
replicating changes from Node C to Node A and Node C to Node B. This
is given in [3] in your first email ([1])

Thanks for pointing this out; the link was a bit misleading. I think
the solution proposed in this thread is only used to allow detecting
update_deleted reliably in a bidirectional cluster. For
non-bidirectional cases, it would be trickier to predict the timing
till when we should retain the dead tuples.

So in brief, this solution is only for bidirectional setup? For non-bidirectional,
feedback_slots is non-configurable and thus irrelevant.

Right.

One possible idea to address the non-bidirectional case raised by
Shveta is to use a time-based cut-off to remove dead tuples. As
mentioned earlier in my email [1], we can define a new GUC parameter
say vacuum_committs_age which would indicate that we will allow rows
to be removed only if the modified time of the tuple as indicated by
committs module is greater than the vacuum_committs_age. We could keep
this parameter a table-level option without introducing a GUC as this
may not apply to all tables. I checked and found that some other
replication solutions like GoldenGate also allowed similar parameters
(tombstone_deletes) to be specified at table level [2]. The other
advantage of allowing it at table level is that it won't hamper the
performance of hot-pruning or vacuum in general. Note, I am careful
here because to decide whether to remove a dead tuple or not we need
to compare its committs_time both during hot-pruning and vacuum.
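As a rough illustration of the proposed time-based cut-off (hypothetical names; the real check would run inside hot-pruning and vacuum, using the commit timestamp recorded by the committs module):

```python
# Hypothetical sketch of the vacuum_committs_age rule: a dead tuple may be
# removed only once its commit timestamp is older than the retention window.
from datetime import datetime, timedelta

def can_remove_dead_tuple(commit_ts, now, vacuum_committs_age):
    # Keep the tuple while it is still within the retention window.
    return (now - commit_ts) > vacuum_committs_age
```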

Note that tombstone_deletes is a general concept used by replication
solutions to detect update_deleted conflicts, and time-based purging is
recommended. See [3][4]. We previously discussed having tombstone
tables to keep the deleted records information but it was suggested to
prevent the vacuum from removing the required dead tuples as that
would be simpler than inventing a new kind of tables/store for
tombstone_deletes [5]. So, we came up with the idea of feedback slots
discussed in this email but that didn't work out in all cases and
appears difficult to configure as pointed out by Shveta. So, now, we
are back to one of the other ideas [1] discussed previously to solve
this problem.

Thoughts?

[1]: /messages/by-id/CAA4eK1Lj-PWrP789KnKxZydisHajd38rSihWXO8MVBLDwxG1Kg@mail.gmail.com
[2]:
BEGIN
DBMS_GOLDENGATE_ADM.ALTER_AUTO_CDR(
schema_name => 'hr',
table_name => 'employees',
tombstone_deletes => TRUE);
END;
/
[3]: https://en.wikipedia.org/wiki/Tombstone_(data_store)
[4]: https://docs.oracle.com/en/middleware/goldengate/core/19.1/oracle-db/automatic-conflict-detection-and-resolution1.html#GUID-423C6EE8-1C62-4085-899C-8454B8FB9C92
[5]: /messages/by-id/e4cdb849-d647-4acf-aabe-7049ae170fbf@enterprisedb.com

--
With Regards,
Amit Kapila.

#14shveta malik
shveta.malik@gmail.com
In reply to: Amit Kapila (#13)
Re: Conflict detection for update_deleted in logical replication

On Fri, Sep 13, 2024 at 11:38 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

So in brief, this solution is only for bidirectional setup? For non-bidirectional,
feedback_slots is non-configurable and thus irrelevant.

Right.

One possible idea to address the non-bidirectional case raised by
Shveta is to use a time-based cut-off to remove dead tuples. As
mentioned earlier in my email [1], we can define a new GUC parameter
say vacuum_committs_age which would indicate that we will allow rows
to be removed only if the modified time of the tuple as indicated by
committs module is greater than the vacuum_committs_age. We could keep
this parameter a table-level option without introducing a GUC as this
may not apply to all tables. I checked and found that some other
replication solutions like GoldenGate also allowed similar parameters
(tombstone_deletes) to be specified at table level [2]. The other
advantage of allowing it at table level is that it won't hamper the
performance of hot-pruning or vacuum in general. Note, I am careful
here because to decide whether to remove a dead tuple or not we need
to compare its committs_time both during hot-pruning and vacuum.

+1 on the idea, but IIUC this value doesn’t need to be significant; it
can be limited to just a few minutes, i.e., one sufficient to handle
replication delays caused by network lag or other factors, assuming
clock skew has already been addressed.

This new parameter is necessary only for cases where an UPDATE and
DELETE on the same row occur concurrently, but the replication order
to a third node is not preserved, which could result in data
divergence. Consider the following example:

Node A:
T1: INSERT INTO t (id, value) VALUES (1,1); (10.01 AM)
T2: DELETE FROM t WHERE id = 1; (10.03 AM)

Node B:
T3: UPDATE t SET value = 2 WHERE id = 1; (10.02 AM)

Assume a third node (Node C) subscribes to both Node A and Node B. The
"correct" order of messages received by Node C would be T1-T3-T2, but
it could also receive them in the order T1-T2-T3, wherein say T3 is
received with a lag of say 2 mins. In such a scenario, T3 should be
able to recognize that the row was deleted by T2 on Node C, thereby
detecting the update-deleted conflict and skipping the apply.

The 'vacuum_committs_age' parameter should account for this lag, which
could lead to the order reversal of UPDATE and DELETE operations.

Any subsequent attempt to update the same row after conflict detection
and resolution should not pose an issue. For example, if Node A
triggers the following at 10:20 AM:
UPDATE t SET value = 3 WHERE id = 1;

Since the row has already been deleted, the UPDATE will not proceed
and therefore will not generate a replication operation on the other
nodes, indicating that vacuum need not preserve the dead row for this
long.
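The scenario above can be modeled with a toy apply step (purely illustrative; the real detection would look up dead tuples and their commit timestamps rather than a tombstone set):

```python
# Node C applies T1, then T2, then the late-arriving T3. With the delete's
# tombstone still retained, the late UPDATE is recognized as update_deleted
# and skipped; without it, only update_missing can be reported.

def apply_update(table, tombstones, key, value):
    if key in table:
        table[key] = value
        return "applied"
    if key in tombstones:
        return "update_deleted"   # row known to be deleted: skip the update
    return "update_missing"       # no trace of the row remains

table, tombstones = {}, set()
table[1] = 1                      # T1: INSERT (10.01 AM)
del table[1]; tombstones.add(1)   # T2: DELETE (10.03 AM)
result = apply_update(table, tombstones, 1, 2)   # T3: late UPDATE (10.02 AM)
```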

thanks
Shveta

#15Masahiko Sawada
sawada.mshk@gmail.com
In reply to: shveta malik (#14)
Re: Conflict detection for update_deleted in logical replication

On Fri, Sep 13, 2024 at 12:56 AM shveta malik <shveta.malik@gmail.com> wrote:

On Fri, Sep 13, 2024 at 11:38 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

So in brief, this solution is only for bidirectional setup? For non-bidirectional,
feedback_slots is non-configurable and thus irrelevant.

Right.

One possible idea to address the non-bidirectional case raised by
Shveta is to use a time-based cut-off to remove dead tuples. As
mentioned earlier in my email [1], we can define a new GUC parameter
say vacuum_committs_age which would indicate that we will allow rows
to be removed only if the modified time of the tuple as indicated by
committs module is greater than the vacuum_committs_age. We could keep
this parameter a table-level option without introducing a GUC as this
may not apply to all tables. I checked and found that some other
replication solutions like GoldenGate also allowed similar parameters
(tombstone_deletes) to be specified at table level [2]. The other
advantage of allowing it at table level is that it won't hamper the
performance of hot-pruning or vacuum in general. Note, I am careful
here because to decide whether to remove a dead tuple or not we need
to compare its committs_time both during hot-pruning and vacuum.

+1 on the idea,

I agree that this idea is much simpler than the idea originally
proposed in this thread.

IIUC vacuum_committs_age specifies a time rather than an XID age. But
how can we implement it? If it ends up affecting the vacuum cutoff, we
should be careful not to end up with the same result of
vacuum_defer_cleanup_age that was discussed before [1]. Also, I think
the implementation should not affect the performance of
ComputeXidHorizons().

but IIUC this value doesn’t need to be significant; it
can be limited to just a few minutes. The one which is sufficient to
handle replication delays caused by network lag or other factors,
assuming clock skew has already been addressed.

I think that in a non-bidirectional case the value could need to be a
large number. Is that right?

Regards,

[1]: /messages/by-id/20230317230930.nhsgk3qfk7f4axls@awork3.anarazel.de

--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com

#16Amit Kapila
amit.kapila16@gmail.com
In reply to: Masahiko Sawada (#15)
Re: Conflict detection for update_deleted in logical replication

On Tue, Sep 17, 2024 at 6:08 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

On Fri, Sep 13, 2024 at 12:56 AM shveta malik <shveta.malik@gmail.com> wrote:

On Fri, Sep 13, 2024 at 11:38 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

So in brief, this solution is only for bidirectional setup? For non-bidirectional,
feedback_slots is non-configurable and thus irrelevant.

Right.

One possible idea to address the non-bidirectional case raised by
Shveta is to use a time-based cut-off to remove dead tuples. As
mentioned earlier in my email [1], we can define a new GUC parameter
say vacuum_committs_age which would indicate that we will allow rows
to be removed only if the modified time of the tuple as indicated by
committs module is greater than the vacuum_committs_age. We could keep
this parameter a table-level option without introducing a GUC as this
may not apply to all tables. I checked and found that some other
replication solutions like GoldenGate also allowed similar parameters
(tombstone_deletes) to be specified at table level [2]. The other
advantage of allowing it at table level is that it won't hamper the
performance of hot-pruning or vacuum in general. Note, I am careful
here because to decide whether to remove a dead tuple or not we need
to compare its committs_time both during hot-pruning and vacuum.

+1 on the idea,

I agree that this idea is much simpler than the idea originally
proposed in this thread.

IIUC vacuum_committs_age specifies a time rather than an XID age.

Your understanding is correct that vacuum_committs_age specifies a time.

But
how can we implement it? If it ends up affecting the vacuum cutoff, we
should be careful not to end up with the same result of
vacuum_defer_cleanup_age that was discussed before[1]. Also, I think
the implementation needs not to affect the performance of
ComputeXidHorizons().

I haven't thought about the implementation details yet but I think
during pruning (for example in heap_prune_satisfies_vacuum()), apart
from checking if the tuple satisfies
HeapTupleSatisfiesVacuumHorizon(), we should also check if the tuple's
committs is greater than configured vacuum_committs_age (for the
table) to decide whether tuple can be removed. One thing to consider
is what to do in case of aggressive vacuum where we expect
relfrozenxid to be advanced to FreezeLimit (at a minimum). We may want
to just ignore vacuum_committs_age during aggressive vacuum and LOG if
we end up removing some tuple. This will allow users to retain deleted
tuples by respecting the freeze limits, which also avoids xid
wraparound. I think we can't retain tuples forever if the user
misconfigures vacuum_committs_age, and to avoid that we can keep a
maximum limit on this parameter of, say, an hour or so. Also, users can
tune freeze parameters if they want to retain tuples for longer.
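Putting the pieces of this paragraph together, the decision could look roughly like the following (illustrative Python; the names, the return values, and the one-hour cap are assumptions taken from the text, not actual PostgreSQL code):

```python
from datetime import datetime, timedelta

MAX_COMMITTS_AGE = timedelta(hours=1)   # assumed upper limit on the parameter

def prune_decision(is_dead, commit_ts, now, committs_age, aggressive):
    if not is_dead:
        return "keep"   # HeapTupleSatisfiesVacuumHorizon() says still needed
    if aggressive:
        # Aggressive vacuum ignores the retention window (and would LOG the
        # removal) so relfrozenxid can advance and xid wraparound is avoided.
        return "remove"
    effective_age = min(committs_age, MAX_COMMITTS_AGE)  # clamp misconfiguration
    return "remove" if (now - commit_ts) > effective_age else "keep"
```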

but IIUC this value doesn’t need to be significant; it
can be limited to just a few minutes. The one which is sufficient to
handle replication delays caused by network lag or other factors,
assuming clock skew has already been addressed.

I think that in a non-bidirectional case the value could need to be a
large number. Is that right?

As per my understanding, even for non-bidirectional cases, the value
should be small. For example, in the case pointed out by Shveta [1],
where the updates from 2 nodes are received by a third node, this
setting is expected to be small. This setting primarily deals with
concurrent transactions on multiple nodes, so it should be small but I
could be missing something.

[1]: /messages/by-id/CAJpy0uAzzOzhXGH-zBc7Zt8ndXRf6r4OnLzgRrHyf8cvd+fpwg@mail.gmail.com

--
With Regards,
Amit Kapila.

#17Masahiko Sawada
sawada.mshk@gmail.com
In reply to: Amit Kapila (#16)
Re: Conflict detection for update_deleted in logical replication

On Mon, Sep 16, 2024 at 11:53 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Sep 17, 2024 at 6:08 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

On Fri, Sep 13, 2024 at 12:56 AM shveta malik <shveta.malik@gmail.com> wrote:

On Fri, Sep 13, 2024 at 11:38 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

So in brief, this solution is only for bidirectional setup? For non-bidirectional,
feedback_slots is non-configurable and thus irrelevant.

Right.

One possible idea to address the non-bidirectional case raised by
Shveta is to use a time-based cut-off to remove dead tuples. As
mentioned earlier in my email [1], we can define a new GUC parameter
say vacuum_committs_age which would indicate that we will allow rows
to be removed only if the modified time of the tuple as indicated by
committs module is greater than the vacuum_committs_age. We could keep
this parameter a table-level option without introducing a GUC as this
may not apply to all tables. I checked and found that some other
replication solutions like GoldenGate also allowed similar parameters
(tombstone_deletes) to be specified at table level [2]. The other
advantage of allowing it at table level is that it won't hamper the
performance of hot-pruning or vacuum in general. Note, I am careful
here because to decide whether to remove a dead tuple or not we need
to compare its committs_time both during hot-pruning and vacuum.

+1 on the idea,

I agree that this idea is much simpler than the idea originally
proposed in this thread.

IIUC vacuum_committs_age specifies a time rather than an XID age.

Your understanding is correct that vacuum_committs_age specifies a time.

But
how can we implement it? If it ends up affecting the vacuum cutoff, we
should be careful not to end up with the same result of
vacuum_defer_cleanup_age that was discussed before[1]. Also, I think
the implementation needs not to affect the performance of
ComputeXidHorizons().

I haven't thought about the implementation details yet but I think
during pruning (for example in heap_prune_satisfies_vacuum()), apart
from checking if the tuple satisfies
HeapTupleSatisfiesVacuumHorizon(), we should also check if the tuple's
committs is greater than configured vacuum_committs_age (for the
table) to decide whether tuple can be removed.

Sounds very costly. I think we need to do performance tests. Even if
the vacuum gets slower only on the particular table having the
vacuum_committs_age setting, it would affect overall autovacuum
performance. Also, it would affect HOT pruning performance.

but IIUC this value doesn’t need to be significant; it
can be limited to just a few minutes. The one which is sufficient to
handle replication delays caused by network lag or other factors,
assuming clock skew has already been addressed.

I think that in a non-bidirectional case the value could need to be a
large number. Is that right?

As per my understanding, even for non-bidirectional cases, the value
should be small. For example, in the case, pointed out by Shveta [1],
where the updates from 2 nodes are received by a third node, this
setting is expected to be small. This setting primarily deals with
concurrent transactions on multiple nodes, so it should be small but I
could be missing something.

I might be missing something but the scenario I was thinking of is
something below.

Suppose that we setup uni-directional logical replication between Node
A and Node B (e.g., Node A -> Node B) and both nodes have the same row
with key = 1:

Node A:
T1: UPDATE t SET val = 2 WHERE key = 1; (10:00 AM)
-> This change is applied on Node B at 10:01 AM.

Node B:
T2: DELETE FROM t WHERE key = 1; (05:00 AM)

If a vacuum runs on Node B at 06:00 AM, the change of T1 coming from
Node A would raise an "update_missing" conflict. On the other hand, if
a vacuum runs on Node B at 11:00 AM, the change would raise an
"update_deleted" conflict. It looks like whether we detect an
"update_deleted" or an "update_missing" conflict depends on the timing
of vacuum, and to avoid such a situation, we would need to set
vacuum_committs_age to more than 5 hours.
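The dependence on vacuum timing can be shown with a toy timeline (hours taken from the example; illustrative only):

```python
# Whether Node B reports update_deleted or update_missing for T1 depends only
# on whether a vacuum removed the DELETE's dead row before the remote UPDATE
# arrived (DELETE at 05:00, UPDATE applied at 10:01).

def conflict_on_node_b(vacuum_hour, delete_hour=5.0, update_arrival_hour=10.01):
    # The dead row survives unless vacuum ran between the DELETE and the
    # arrival of the remote UPDATE.
    dead_row_present = not (delete_hour < vacuum_hour < update_arrival_hour)
    return "update_deleted" if dead_row_present else "update_missing"
```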

Regards,

--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com

#18Amit Kapila
amit.kapila16@gmail.com
In reply to: Masahiko Sawada (#17)
Re: Conflict detection for update_deleted in logical replication

On Tue, Sep 17, 2024 at 11:24 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

On Mon, Sep 16, 2024 at 11:53 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Sep 17, 2024 at 6:08 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

I haven't thought about the implementation details yet but I think
during pruning (for example in heap_prune_satisfies_vacuum()), apart
from checking if the tuple satisfies
HeapTupleSatisfiesVacuumHorizon(), we should also check if the tuple's
committs is greater than configured vacuum_committs_age (for the
table) to decide whether tuple can be removed.

Sounds very costly. I think we need to do performance tests. Even if
the vacuum gets slower only on the particular table having the
vacuum_committs_age setting, it would affect overall autovacuum
performance. Also, it would affect HOT pruning performance.

Agreed that we should do some performance testing and additionally
think of any better way to implement. I think the cost won't be much
if the tuples to be removed are from a single transaction because the
required commit_ts information would be cached but when the tuples are
from different transactions, we could see a noticeable impact. We need
to test to say anything concrete on this.

but IIUC this value doesn’t need to be significant; it
can be limited to just a few minutes. The one which is sufficient to
handle replication delays caused by network lag or other factors,
assuming clock skew has already been addressed.

I think that in a non-bidirectional case the value could need to be a
large number. Is that right?

As per my understanding, even for non-bidirectional cases, the value
should be small. For example, in the case, pointed out by Shveta [1],
where the updates from 2 nodes are received by a third node, this
setting is expected to be small. This setting primarily deals with
concurrent transactions on multiple nodes, so it should be small but I
could be missing something.

I might be missing something but the scenario I was thinking of is
something below.

Suppose that we setup uni-directional logical replication between Node
A and Node B (e.g., Node A -> Node B) and both nodes have the same row
with key = 1:

Node A:
T1: UPDATE t SET val = 2 WHERE key = 1; (10:00 AM)
-> This change is applied on Node B at 10:01 AM.

Node B:
T2: DELETE FROM t WHERE key = 1; (05:00 AM)

If a vacuum runs on Node B at 06:00 AM, the change of T1 coming from
Node A would raise an "update_missing" conflict. On the other hand, if
a vacuum runs on Node B at 11:00 AM, the change would raise an
"update_deleted" conflict. It looks whether we detect an
"update_deleted" or an "updated_missing" depends on the timing of
vacuum, and to avoid such a situation, we would need to set
vacuum_committs_age to more than 5 hours.

Yeah, in this case, it would detect a different conflict (if we don't
set vacuum_committs_age to greater than 5 hours) but as per my
understanding, the primary purpose of conflict detection and
resolution is to avoid data inconsistency in a bi-directional setup.
Assume, in the above case it is a bi-directional setup, then we want
to have the same data in both nodes. Now, if there are other cases,
like the one you mentioned, that require detecting the conflict
reliably, then I agree this value could be large and probably not the
best way to achieve it. I think we can mention in the docs that the
primary purpose of this is to achieve data consistency among
bi-directional kind of setups.

Having said that, even in the above case, the result should be the same
whether the vacuum has removed the row or not. Say the vacuum has not
yet removed the row (due to vacuum_committs_age or otherwise); then,
because the incoming update has a later timestamp, we will convert the
update to an insert as per the last_update_wins resolution method, so
the conflict will effectively be treated as update_missing. And say the
vacuum has removed the row and the conflict detected is update_missing;
then also we will convert the update to an insert. In short, if the
UPDATE has a lower commit-ts, the DELETE should win, and if the UPDATE
has a higher commit-ts, the UPDATE should win.

So, we can expect data consistency in bidirectional cases and expect a
deterministic behavior in other cases (e.g. the final data in a table
does not depend on the order of applying the transactions from other
nodes).
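The argument above boils down to a resolution rule like the following sketch: when the incoming UPDATE is newer, the outcome is the same whether or not the dead row was already vacuumed (names are illustrative):

```python
# Last-update-wins resolution for a remote UPDATE against a locally deleted
# (or already vacuumed) row.

def resolve_update(update_ts, delete_ts, dead_row_present):
    # update_deleted with an older remote UPDATE: the DELETE wins, skip it.
    if dead_row_present and update_ts < delete_ts:
        return "skip"
    # update_missing, or a newer remote UPDATE: convert the UPDATE to INSERT.
    return "convert_to_insert"
```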

--
With Regards,
Amit Kapila.

#19Masahiko Sawada
sawada.mshk@gmail.com
In reply to: Amit Kapila (#18)
Re: Conflict detection for update_deleted in logical replication

On Tue, Sep 17, 2024 at 9:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Sep 17, 2024 at 11:24 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

On Mon, Sep 16, 2024 at 11:53 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Sep 17, 2024 at 6:08 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

I haven't thought about the implementation details yet but I think
during pruning (for example in heap_prune_satisfies_vacuum()), apart
from checking if the tuple satisfies
HeapTupleSatisfiesVacuumHorizon(), we should also check if the tuple's
committs is greater than configured vacuum_committs_age (for the
table) to decide whether tuple can be removed.

Sounds very costly. I think we need to do performance tests. Even if
the vacuum gets slower only on the particular table having the
vacuum_committs_age setting, it would affect overall autovacuum
performance. Also, it would affect HOT pruning performance.

Agreed that we should do some performance testing and additionally
think of any better way to implement. I think the cost won't be much
if the tuples to be removed are from a single transaction because the
required commit_ts information would be cached but when the tuples are
from different transactions, we could see a noticeable impact. We need
to test to say anything concrete on this.

Agreed.

but IIUC this value doesn’t need to be significant; it
can be limited to just a few minutes. The one which is sufficient to
handle replication delays caused by network lag or other factors,
assuming clock skew has already been addressed.

I think that in a non-bidirectional case the value could need to be a
large number. Is that right?

As per my understanding, even for non-bidirectional cases, the value
should be small. For example, in the case, pointed out by Shveta [1],
where the updates from 2 nodes are received by a third node, this
setting is expected to be small. This setting primarily deals with
concurrent transactions on multiple nodes, so it should be small but I
could be missing something.

I might be missing something but the scenario I was thinking of is
something below.

Suppose that we setup uni-directional logical replication between Node
A and Node B (e.g., Node A -> Node B) and both nodes have the same row
with key = 1:

Node A:
T1: UPDATE t SET val = 2 WHERE key = 1; (10:00 AM)
-> This change is applied on Node B at 10:01 AM.

Node B:
T2: DELETE FROM t WHERE key = 1; (05:00 AM)

If a vacuum runs on Node B at 06:00 AM, the change of T1 coming from
Node A would raise an "update_missing" conflict. On the other hand, if
a vacuum runs on Node B at 11:00 AM, the change would raise an
"update_deleted" conflict. It looks whether we detect an
"update_deleted" or an "updated_missing" depends on the timing of
vacuum, and to avoid such a situation, we would need to set
vacuum_committs_age to more than 5 hours.

Yeah, in this case, it would detect a different conflict (if we don't
set vacuum_committs_age to greater than 5 hours) but as per my
understanding, the primary purpose of conflict detection and
resolution is to avoid data inconsistency in a bi-directional setup.
Assume, in the above case it is a bi-directional setup, then we want
to have the same data in both nodes. Now, if there are other cases
like the one you mentioned that require to detect the conflict
reliably than I agree this value could be large and probably not the
best way to achieve it. I think we can mention in the docs that the
primary purpose of this is to achieve data consistency among
bi-directional kind of setups.

Having said that even in the above case, the result should be the same
whether the vacuum has removed the row or not. Say, if the vacuum has
not yet removed the row (due to vacuum_committs_age or otherwise) then
also because the incoming update has a later timestamp, we will
convert the update to insert as per last_update_wins resolution
method, so the conflict will be considered as update_missing. And,
say, the vacuum has removed the row and the conflict detected is
update_missing, then also we will convert the update to insert. In
short, if UPDATE has lower commit-ts, DELETE should win and if UPDATE
has higher commit-ts, UPDATE should win.

So, we can expect data consistency in bidirectional cases and expect a
deterministic behavior in other cases (e.g. the final data in a table
does not depend on the order of applying the transactions from other
nodes).

Agreed.

I think that such a time-based configuration parameter would be a
reasonable solution. The current concerns are that it might affect
vacuum performance and lead to a similar bug we had with
vacuum_defer_cleanup_age.

Regards,

--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com

#20Zhijie Hou (Fujitsu)
houzj.fnst@fujitsu.com
In reply to: Masahiko Sawada (#19)
RE: Conflict detection for update_deleted in logical replication

-----Original Message-----
From: Masahiko Sawada <sawada.mshk@gmail.com>
Sent: Friday, September 20, 2024 2:49 AM
To: Amit Kapila <amit.kapila16@gmail.com>
Cc: shveta malik <shveta.malik@gmail.com>; Hou, Zhijie/侯 志杰
<houzj.fnst@fujitsu.com>; pgsql-hackers <pgsql-hackers@postgresql.org>
Subject: Re: Conflict detection for update_deleted in logical replication

On Tue, Sep 17, 2024 at 9:29 PM Amit Kapila <amit.kapila16@gmail.com>
wrote:

On Tue, Sep 17, 2024 at 11:24 PM Masahiko Sawada

<sawada.mshk@gmail.com> wrote:

On Mon, Sep 16, 2024 at 11:53 PM Amit Kapila

<amit.kapila16@gmail.com> wrote:

On Tue, Sep 17, 2024 at 6:08 AM Masahiko Sawada

<sawada.mshk@gmail.com> wrote:

I haven't thought about the implementation details yet but I think
during pruning (for example in heap_prune_satisfies_vacuum()),
apart from checking if the tuple satisfies
HeapTupleSatisfiesVacuumHorizon(), we should also check if the
tuple's committs is greater than configured vacuum_committs_age
(for the
table) to decide whether tuple can be removed.

Sounds very costly. I think we need to do performance tests. Even if
the vacuum gets slower only on the particular table having the
vacuum_committs_age setting, it would affect overall autovacuum
performance. Also, it would affect HOT pruning performance.

Agreed that we should do some performance testing and additionally
think of any better way to implement. I think the cost won't be much
if the tuples to be removed are from a single transaction because the
required commit_ts information would be cached but when the tuples are
from different transactions, we could see a noticeable impact. We need
to test to say anything concrete on this.

Agreed.

but IIUC this value doesn’t need to be significant; it can be
limited to just a few minutes. The one which is sufficient to
handle replication delays caused by network lag or other
factors, assuming clock skew has already been addressed.

I think that in a non-bidirectional case the value could need to
be a large number. Is that right?

As per my understanding, even for non-bidirectional cases, the
value should be small. For example, in the case, pointed out by
Shveta [1], where the updates from 2 nodes are received by a third
node, this setting is expected to be small. This setting primarily
deals with concurrent transactions on multiple nodes, so it should
be small but I could be missing something.

I might be missing something but the scenario I was thinking of is
something below.

Suppose that we setup uni-directional logical replication between
Node A and Node B (e.g., Node A -> Node B) and both nodes have the
same row with key = 1:

Node A:
T1: UPDATE t SET val = 2 WHERE key = 1; (10:00 AM)
-> This change is applied on Node B at 10:01 AM.

Node B:
T2: DELETE FROM t WHERE key = 1; (05:00 AM)

If a vacuum runs on Node B at 06:00 AM, the change of T1 coming from
Node A would raise an "update_missing" conflict. On the other hand,
if a vacuum runs on Node B at 11:00 AM, the change would raise an
"update_deleted" conflict. It looks whether we detect an
"update_deleted" or an "updated_missing" depends on the timing of
vacuum, and to avoid such a situation, we would need to set
vacuum_committs_age to more than 5 hours.

Yeah, in this case, it would detect a different conflict (if we don't
set vacuum_committs_age to greater than 5 hours) but as per my
understanding, the primary purpose of conflict detection and
resolution is to avoid data inconsistency in a bi-directional setup.
Assume, in the above case it is a bi-directional setup, then we want
to have the same data in both nodes. Now, if there are other cases
like the one you mentioned that require to detect the conflict
reliably than I agree this value could be large and probably not the
best way to achieve it. I think we can mention in the docs that the
primary purpose of this is to achieve data consistency among
bi-directional kind of setups.

Having said that even in the above case, the result should be the same
whether the vacuum has removed the row or not. Say, if the vacuum has
not yet removed the row (due to vacuum_committs_age or otherwise) then
also because the incoming update has a later timestamp, we will
convert the update to insert as per last_update_wins resolution
method, so the conflict will be considered as update_missing. And,
say, the vacuum has removed the row and the conflict detected is
update_missing, then also we will convert the update to insert. In
short, if UPDATE has lower commit-ts, DELETE should win and if UPDATE
has higher commit-ts, UPDATE should win.

So, we can expect data consistency in bidirectional cases and expect a
deterministic behavior in other cases (e.g. the final data in a table
does not depend on the order of applying the transactions from other
nodes).

Agreed.

I think that such a time-based configuration parameter would be a reasonable
solution. The current concerns are that it might affect vacuum performance and
lead to a similar bug we had with vacuum_defer_cleanup_age.

Thanks for the feedback!

I am working on the POC patch and doing some initial performance tests on this idea.
I will share the results after finishing.

Apart from the vacuum_committs_age idea, we’ve given more thought to our
approach for retaining dead tuples and have come up with another idea that can
reliably detect conflicts without requiring users to choose a wise value for
vacuum_committs_age. This new idea could also reduce the performance
impact. Thanks a lot to Amit for the off-list discussion.

The concept of the new idea is that dead tuples are only useful for
detecting conflicts when applying *concurrent* transactions from remote
nodes. Any subsequent UPDATE from a remote node after removing the dead tuples
should have a later timestamp, meaning it's reasonable to detect an
update_missing scenario and convert the UPDATE to an INSERT when applying it.

To achieve the above, we can create an additional replication slot on the
subscriber side, maintained by the apply worker. This slot is used to retain
the dead tuples. The apply worker will advance the slot.xmin after confirming
that all the concurrent transactions on the publisher have been applied locally.

The process of advancing the slot.xmin could be:

1) The apply worker calls GetRunningTransactionData() to get the
'oldestRunningXid' and considers this as 'candidate_xmin'.
2) The apply worker sends a new message to the walsender to request the latest
WAL flush position (GetFlushRecPtr) on the publisher, and saves it to
'candidate_remote_wal_lsn'. Here we could introduce a new feedback message or
extend the existing keepalive message (e.g. extend the requestReply bit in the
keepalive message to add a 'request_wal_position' value).
3) The apply worker can continue to apply changes. After applying all the WAL
up to 'candidate_remote_wal_lsn', the apply worker can then advance the
slot.xmin to 'candidate_xmin'.
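The three steps above can be simulated roughly as follows. This is a minimal Python sketch under the stated design; the class, method names, and LSN/xid values are all invented for illustration, and the real logic would live in the apply worker in C:

```python
# Minimal simulation of the three-step slot.xmin advancement described
# above. All names (ApplyWorker, start_cycle, etc.) are invented.

class ApplyWorker:
    def __init__(self):
        self.slot_xmin = None              # xmin currently pinned by the slot
        self.candidate_xmin = None
        self.candidate_remote_wal_lsn = None

    def start_cycle(self, oldest_running_xid, remote_flush_lsn):
        # Step 1: take oldestRunningXid as the candidate xmin.
        self.candidate_xmin = oldest_running_xid
        # Step 2: record the publisher's latest WAL flush position.
        self.candidate_remote_wal_lsn = remote_flush_lsn

    def apply_up_to(self, applied_remote_lsn):
        # Step 3: once everything up to candidate_remote_wal_lsn has been
        # applied locally, it is safe to advance slot.xmin to the candidate.
        if (self.candidate_xmin is not None
                and applied_remote_lsn >= self.candidate_remote_wal_lsn):
            self.slot_xmin = self.candidate_xmin
            self.candidate_xmin = None

worker = ApplyWorker()
worker.start_cycle(oldest_running_xid=1000, remote_flush_lsn=500)
worker.apply_up_to(applied_remote_lsn=400)   # not caught up yet
print(worker.slot_xmin)                      # None: dead tuples still retained
worker.apply_up_to(applied_remote_lsn=500)   # caught up to the remote flush LSN
print(worker.slot_xmin)                      # 1000: xmin advanced
```

The point of the ordering is that dead tuples stay visible to conflict detection until every remote transaction that was running at step 1 has been applied.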

This approach ensures that dead tuples are not removed until all concurrent
transactions have been applied. It can be effective for both bidirectional and
non-bidirectional replication cases.

We could introduce a boolean subscription option (retain_dead_tuples) to
control whether this feature is enabled. Each subscription intending to detect
update_deleted conflicts should set retain_dead_tuples to true.

The following explains how it works in different cases to achieve data
consistency:

--
2 nodes, bidirectional case 1:
--
Node A:
T1: INSERT INTO t (id, value) VALUES (1,1); ts=10.00 AM
T2: DELETE FROM t WHERE id = 1; ts=10.02 AM

Node B:
T3: UPDATE t SET value = 2 WHERE id = 1; ts=10.01 AM

subscription retain_dead_tuples = true/false

After executing T2, the apply worker on Node A will check the latest WAL flush
location on Node B. By that time, T3 should have finished, so the xmin
will be advanced only after applying the WAL that is later than T3. So, the
dead tuple will not be removed before applying T3, which means the
update_deleted conflict can be detected.

--
2 nodes, bidirectional case 2:
--
Node A:
T1: INSERT INTO t (id, value) VALUES (1,1); ts=10.00 AM
T2: DELETE FROM t WHERE id = 1; ts=10.01 AM

Node B:
T3: UPDATE t SET value = 2 WHERE id = 1; ts=10.02 AM

After executing T2, the apply worker on Node A will request the latest WAL
flush location on Node B. T3 is either running concurrently or has not
started. In both cases, T3 must have a later timestamp. So, even if the
dead tuple is removed in this case and update_missing is detected, the default
resolution is to convert the UPDATE to an INSERT, which is OK because the data
are still consistent on Node A and B.

--
3 nodes, non-bidirectional, Node C subscribes to both Node A and Node B:
--

Node A:
T1: INSERT INTO t (id, value) VALUES (1,1); ts=10.00 AM
T2: DELETE FROM t WHERE id = 1; ts=10.01 AM

Node B:
T3: UPDATE t SET value = 2 WHERE id = 1; ts=10.02 AM

Node C:
apply T1, T2, T3

After applying T2, the apply worker on Node C will check the latest WAL flush
location on Node B. By that time, T3 should have finished, so the xmin
will be advanced only after applying the WAL that is later than T3. So, the
dead tuple will not be removed before applying T3, which means the
update_deleted conflict can be detected.
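The conflict classification used in the cases above can be illustrated with a toy lookup over live and retained dead tuples. This is a hedged sketch with invented names; the real check would scan the table for dead tuples matching the old key of the incoming UPDATE:

```python
# Toy illustration of the classification described above: if the target
# row is gone but a matching dead tuple is still retained, report
# update_deleted; otherwise report update_missing. Names are invented.

def classify_update_conflict(key, live_tuples, retained_dead_tuples):
    if key in live_tuples:
        return "no_conflict"      # target found; apply the update normally
    if key in retained_dead_tuples:
        return "update_deleted"   # a dead tuple matching the old key exists
    return "update_missing"       # no trace of the row at all

# After T2 on Node A: id=1 was deleted, but the dead tuple is retained
# because slot.xmin has not yet advanced past T3.
print(classify_update_conflict(1, live_tuples=set(),
                               retained_dead_tuples={1}))    # update_deleted
# If the dead tuple had already been vacuumed away:
print(classify_update_conflict(1, live_tuples=set(),
                               retained_dead_tuples=set()))  # update_missing
```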

Your feedback on this idea would be greatly appreciated.

Best Regards,
Hou zj

#406shveta malik
shveta.malik@gmail.com
In reply to: Tom Lane (#404)
#407Zhijie Hou (Fujitsu)
houzj.fnst@fujitsu.com
In reply to: Amit Kapila (#405)
#408Zhijie Hou (Fujitsu)
houzj.fnst@fujitsu.com
In reply to: shveta malik (#399)
#409Zhijie Hou (Fujitsu)
houzj.fnst@fujitsu.com
In reply to: shveta malik (#400)
#410shveta malik
shveta.malik@gmail.com
In reply to: Zhijie Hou (Fujitsu) (#408)
#411Amit Kapila
amit.kapila16@gmail.com
In reply to: Zhijie Hou (Fujitsu) (#408)
#412Zhijie Hou (Fujitsu)
houzj.fnst@fujitsu.com
In reply to: Amit Kapila (#411)
#413Zhijie Hou (Fujitsu)
houzj.fnst@fujitsu.com
In reply to: shveta malik (#410)
#414Amit Kapila
amit.kapila16@gmail.com
In reply to: Zhijie Hou (Fujitsu) (#412)
#415Zhijie Hou (Fujitsu)
houzj.fnst@fujitsu.com
In reply to: Zhijie Hou (Fujitsu) (#407)
#416shveta malik
shveta.malik@gmail.com
In reply to: Zhijie Hou (Fujitsu) (#415)
#417Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#414)
#418Masahiko Sawada
sawada.mshk@gmail.com
In reply to: Dilip Kumar (#417)
#419Zhijie Hou (Fujitsu)
houzj.fnst@fujitsu.com
In reply to: Masahiko Sawada (#418)
#420Amit Kapila
amit.kapila16@gmail.com
In reply to: Zhijie Hou (Fujitsu) (#419)
#421shveta malik
shveta.malik@gmail.com
In reply to: Zhijie Hou (Fujitsu) (#419)
#422Zhijie Hou (Fujitsu)
houzj.fnst@fujitsu.com
In reply to: shveta malik (#421)
#423Zhijie Hou (Fujitsu)
houzj.fnst@fujitsu.com
In reply to: Amit Kapila (#420)
#424Amit Kapila
amit.kapila16@gmail.com
In reply to: Zhijie Hou (Fujitsu) (#422)
#425shveta malik
shveta.malik@gmail.com
In reply to: Amit Kapila (#424)
#426Zhijie Hou (Fujitsu)
houzj.fnst@fujitsu.com
In reply to: shveta malik (#425)
#427Zhijie Hou (Fujitsu)
houzj.fnst@fujitsu.com
In reply to: Amit Kapila (#424)
#428Amit Kapila
amit.kapila16@gmail.com
In reply to: Zhijie Hou (Fujitsu) (#427)
#429Zhijie Hou (Fujitsu)
houzj.fnst@fujitsu.com
In reply to: Amit Kapila (#428)
#430Zhijie Hou (Fujitsu)
houzj.fnst@fujitsu.com
In reply to: Zhijie Hou (Fujitsu) (#429)
#431shveta malik
shveta.malik@gmail.com
In reply to: Zhijie Hou (Fujitsu) (#430)
#432Amit Kapila
amit.kapila16@gmail.com
In reply to: shveta malik (#431)