Re: [HACKERS] logical decoding of two-phase transactions

Started by Nikhil Sontakke over 7 years ago · 424 messages
#1 Nikhil Sontakke <nikhils@2ndquadrant.com>
4 attachment(s)

Hi,

PFA, latest patchset which incorporates the additional feedback.

There's an additional test case in
0005-Additional-test-case-to-demonstrate-decoding-rollbac.patch which
uses a sleep in the "change" plugin API to allow a concurrent rollback
on the 2PC being currently decoded. Andres generally doesn't like this
approach :-), but there are no timing/interlocking issues now, and the
sleep just helps us do a concurrent rollback, so it might be ok now,
all things considered. Anyways, it's an additional patch for now.

Yea, I still don't think it's ok. The tests won't be reliable. There's
ways to make this reliable, e.g. by forcing a lock to be acquired that's
externally held or such. Might even be doable just with a weird custom
datatype.

Ok, I will look at ways to do away with the sleep.

The attached patchset implements a non-sleep-based approach by
sending the 2PC XID to the pg_logical_slot_get_changes() function as
an option for the test_decoding plugin. So, an example invocation
will now look like:

SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL,
NULL, 'skip-empty-xacts', '1', 'check-xid', '$xid2pc');

If the test_decoding pg_decode_change() API sees a valid xid
argument, it will wait for that transaction to be aborted. Another
backend can then come in and merrily abort this ongoing 2PC in the
background. Once it's aborted, the pg_decode_change API will go ahead
and hit an ERROR in the systable scan APIs. That should take care of
Andres' concern about using sleep in the tests. The relevant tap test
has been added to this patchset.

@@ -423,6 +423,16 @@ systable_getnext(SysScanDesc sysscan)
else
htup = heap_getnext(sysscan->scan, ForwardScanDirection);

+     /*
+      * If CheckXidAlive is valid, then we check if it aborted. If it did, we
+      * error out
+      */
+     if (TransactionIdIsValid(CheckXidAlive) &&
+                     TransactionIdDidAbort(CheckXidAlive))
+                     ereport(ERROR,
+                             (errcode(ERRCODE_TRANSACTION_ROLLBACK),
+                              errmsg("transaction aborted during system catalog scan")));
+
return htup;
}

Don't we have to check TransactionIdIsInProgress() first? C.f. header
comments in tqual.c. Note this is also not guaranteed to be correct
after a crash (where no clog entry will exist for an aborted xact), but
we probably shouldn't get here in that case - but better be safe.

I suspect it'd be better reformulated as
TransactionIdIsValid(CheckXidAlive) &&
!TransactionIdIsInProgress(CheckXidAlive) &&
!TransactionIdDidCommit(CheckXidAlive)

What do you think?

Modified the checks as per the above suggestion.
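
As a hedged illustration of the reformulated check (not the patch itself), the abort decision reduces to a small predicate; here the in-progress/committed lookups are passed in as plain booleans, standing in for the real TransactionIdIsInProgress()/TransactionIdDidCommit() calls:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t TransactionId;
#define InvalidTransactionId ((TransactionId) 0)

/*
 * Sketch of the check added to systable_getnext(): error out only when a
 * CheckXidAlive is set and that xid is neither running nor committed,
 * i.e. it aborted (or crashed without a clog entry). The two status
 * booleans stand in for TransactionIdIsInProgress()/TransactionIdDidCommit().
 */
static bool
check_xid_aborted(TransactionId check_xid, bool in_progress, bool committed)
{
	return check_xid != InvalidTransactionId && !in_progress && !committed;
}
```

Checking in-progress first matters because, per the tqual.c header comments, an xid must not be treated as aborted while it may still be running.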

I was wondering if anything else would be needed for user-defined
catalog tables..

We don't need to do anything else for user-defined catalog tables
since they will also get accessed via the systable_* scan APIs.

Hmm, lemme see if we can do it outside of the plugin. But note that a
plugin might want to decode some 2PC at prepare time and another at
"commit prepared" time.

The test_decoding pg_decode_filter_prepare() API implements a simple
filter strategy now. If the GID contains a substring "nodecode", then
it filters out decoding of such a 2PC at prepare time. Have added
steps to test this in the relevant test case in this patch.
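
The filtering strategy described above is just a substring match on the GID; a minimal sketch of that rule (the real callback also receives the decoding context and txn):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/*
 * Sketch of the test_decoding filter_prepare strategy: if the GID of a
 * two-phase transaction contains "nodecode", skip decoding it at PREPARE
 * time (it will then be decoded at COMMIT PREPARED time instead).
 * Returning true means "filter out".
 */
static bool
filter_prepare_by_gid(const char *gid)
{
	return strstr(gid, "nodecode") != NULL;
}
```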

I believe this patchset handles all pending issues along with relevant
test cases. Comments, further feedback appreciated.

Regards,
Nikhils
--
Nikhil Sontakke http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

Attachments:

0001-Cleaning-up-of-flags-in-ReorderBufferTXN-structure.patch (application/octet-stream)
0002-Support-decoding-of-two-phase-transactions-at-PREPAR.patch (application/octet-stream)
0003-Gracefully-handle-concurrent-aborts-of-uncommitted-t.patch (application/octet-stream)
0004-Teach-test_decoding-plugin-to-work-with-2PC.patch (application/octet-stream)
#2 Petr Jelinek <petr.jelinek@2ndquadrant.com>
In reply to: Nikhil Sontakke (#1)

On 01/08/18 16:00, Nikhil Sontakke wrote:

I was wondering if anything else would be needed for user-defined
catalog tables..

We don't need to do anything else for user-defined catalog tables
since they will also get accessed via the systable_* scan APIs.

They can be, but currently they might not be. So this requires at least
big fat warning in docs and description on how to access user catalogs
from plugins correctly (ie to always use systable_* API on them). It
would be nice if we could check for it in Assert builds at least.

--
Petr Jelinek http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#3 Andres Freund <andres@anarazel.de>
In reply to: Petr Jelinek (#2)

On 2018-08-01 21:55:18 +0200, Petr Jelinek wrote:

On 01/08/18 16:00, Nikhil Sontakke wrote:

I was wondering if anything else would be needed for user-defined
catalog tables..

We don't need to do anything else for user-defined catalog tables
since they will also get accessed via the systable_* scan APIs.

They can be, but currently they might not be. So this requires at least
big fat warning in docs and description on how to access user catalogs
from plugins correctly (ie to always use systable_* API on them). It
would be nice if we could check for it in Assert builds at least.

Yea, I agree. I think we should just consider putting similar checks in
the general scan APIs. With an unlikely() and the easy predictability of
these checks, I think we should be fine, overhead-wise.
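
As a sketch of what such a guard could look like in a hot scan path (the unlikely() macro here mirrors the __builtin_expect-based definition Postgres uses on GCC/Clang; the abort test itself is simplified to a boolean):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Branch-prediction hint, as defined for GCC/Clang builds. */
#define unlikely(x) __builtin_expect((x) != 0, 0)

typedef uint32_t TransactionId;
#define InvalidTransactionId ((TransactionId) 0)

/*
 * Hypothetical hot-path guard for a general scan API: CheckXidAlive is
 * almost always invalid, so the whole condition is hinted as unlikely
 * and the common case stays a cheap, well-predicted branch.
 */
static bool
scan_must_error(TransactionId check_xid_alive, bool xid_aborted)
{
	if (unlikely(check_xid_alive != InvalidTransactionId && xid_aborted))
		return true;			/* caller would ereport(ERROR) here */
	return false;
}
```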

Greetings,

Andres Freund

#4 Nikhil Sontakke <nikhils@2ndquadrant.com>
In reply to: Andres Freund (#3)
4 attachment(s)

They can be, but currently they might not be. So this requires at least
big fat warning in docs and description on how to access user catalogs
from plugins correctly (ie to always use systable_* API on them). It
would be nice if we could check for it in Assert builds at least.

Ok, modified the sgml documentation for the above.

Yea, I agree. I think we should just consider putting similar checks in
the general scan APIs. With an unlikely() and the easy predictability of
these checks, I think we should be fine, overhead-wise.

Ok, added unlikely() checks in the heap_* scan APIs.

Revised patchset attached.

Regards,
Nikhils

Greetings,

Andres Freund

--
Nikhil Sontakke http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

Attachments:

0003-Gracefully-handle-concurrent-aborts-of-uncommitted-t.patch (application/octet-stream)
0004-Teach-test_decoding-plugin-to-work-with-2PC.patch (application/octet-stream)
0001-Cleaning-up-of-flags-in-ReorderBufferTXN-structure.patch (application/octet-stream)
0002-Support-decoding-of-two-phase-transactions-at-PREPAR.patch (application/octet-stream)
#5 Arseny Sher <a.sher@postgrespro.ru>
In reply to: Nikhil Sontakke (#4)

Hello,

I have looked through the patches. I will first describe relatively
serious issues I see and then proceed with small nitpicking.

- On decoding of aborted xacts. The idea to throw an error once we
detect the abort is appealing, however I think you will have problems
with subxacts in the current implementation. What if a subxact issues
DDL and then aborts, but the main transaction successfully commits?

- Decoding transactions at PREPARE record changes rules of the "we ship
all commits after lsn 'x'" game. Namely, it will break initial
tablesync: what if consistent snapshot was formed *after* PREPARE, but
before COMMIT PREPARED, and the plugin decides to employ 2pc? Instead
of getting initial contents + a continuous stream of changes, the receiver
will miss the prepared xact contents and raise 'prepared xact doesn't
exist' error. I think the starting point to address this is to forbid
two-phase decoding of xacts with lsn of PREPARE less than
snapbuilder's start_decoding_at.

- Currently we will call abort_prepared cb even if we failed to actually
prepare xact due to concurrent abort. I think it is confusing for
users. We should either handle this by remembering not to invoke
abort_prepared in these cases or at least document this behaviour,
leaving this problem to the receiver side.

- I find it suspicious that DecodePrepare completely ignores actions of
SnapBuildCommitTxn. For example, to execute invalidations, the latter
sets base snapshot if our xact (or subxacts) did DDL and the snapshot
not set yet. My fantasy doesn't hint me the concrete example
where this would burn at the moment, but it should be considered.

Now, the bikeshedding.

First patch:
- I am one of those people upthread who don't think that converting
flags to bitmask is beneficial -- especially given that many of them
are mutually exclusive, e.g. xact can't be committed and aborted at
the same time. Apparently you have left this to the committer though.

Second patch:
- Applying gives me
Applying: Support decoding of two-phase transactions at PREPARE
.git/rebase-apply/patch:871: trailing whitespace.

+      row. The <function>change_cb</function> callback may access system or
+      user catalog tables to aid in the process of outputting the row
+      modification details. In case of decoding a prepared (but yet
+      uncommitted) transaction or decoding of an uncommitted transaction, this
+      change callback is ensured sane access to catalog tables regardless of
+      simultaneous rollback by another backend of this very same transaction.

I don't think we should explain this, at least in such words. As
mentioned upthread, we should warn about allowed systable_* accesses
instead. Same for message_cb.

+	/*
+	 * Tell the reorderbuffer about the surviving subtransactions. We need to
+	 * do this because the main transaction itself has not committed since we
+	 * are in the prepare phase right now. So we need to be sure the snapshot
+	 * is setup correctly for the main transaction in case all changes
+	 * happened in subtransactions
+	 */

While we do certainly need to associate subxacts here, the explanation
looks weird to me. I would leave just the 'Tell the reorderbuffer about
the surviving subtransactions' as in DecodeCommit.

}
-
/*
* There's a speculative insertion remaining, just clean in up, it
* can't have been successful, otherwise we'd gotten a confirmation

Spurious newline deletion.

- I would rename ReorderBufferCommitInternal to ReorderBufferReplay:
we replay the xact there, not commit.

- If xact is empty, we will not prepare it (and call cb),
even if the output plugin asked us. However, we will call
commit_prepared cb.

- ReorderBufferTxnIsPrepared and ReorderBufferPrepareNeedSkip do the
same and should be merged with comments explaining that the answer
must be stable.

- filter_prepare_cb callback existence is checked in both decode.c and
in filter_prepare_cb_wrapper.

+	/*
+	 * The transaction may or may not exist (during restarts for example).
+	 * Anyways, 2PC transactions do not contain any reorderbuffers. So allow
+	 * it to be created below.
+	 */

Code around looks sane, but I think that ReorderBufferTXN for our xact
must *not* exist at this moment: if we are going to COMMIT/ABORT
PREPARED it, it must have been replayed and RBTXN purged immediately
after. Also, instead of misty '2PC transactions do not contain any
reorderbuffers' I would say something like 'create dummy
ReorderBufferTXN to pass it to the callback'.

- In DecodeAbort:
+	/*
+	 * If it's ROLLBACK PREPARED then handle it via callbacks.
+	 */
+	if (TransactionIdIsValid(xid) &&
+		!SnapBuildXactNeedsSkip(ctx->snapshot_builder, buf->origptr) &&
+

How can xid be invalid here?

- It might be worthwile to put the check
+		!SnapBuildXactNeedsSkip(ctx->snapshot_builder, buf->origptr) &&
+		parsed->dbId == ctx->slot->data.database &&
+		!FilterByOrigin(ctx, origin_id) &&

which appears 3 times now into separate function.

+	 * two-phase transactions - we either have to have all of them or none.
+	 * The filter_prepare callback is optional, but can only be defined when

Kind of controversial (all of them or none, but optional), might be
formulated more accurately.

+	/*
+	 * Capabilities of the output plugin.
+	 */
+	bool        enable_twophase;

I would rename this to 'supports_twophase' since this is not an option
but a description of the plugin capabilities.

+	/* filter_prepare is optional, but requires two-phase decoding */
+	if ((ctx->callbacks.filter_prepare_cb != NULL) && (!ctx->enable_twophase))
+		ereport(ERROR,
+				(errmsg("Output plugin does not support two-phase decoding, but "
+						"registered filter_prepared callback.")));

Don't think we need to check that...

+		 * Otherwise call either PREPARE (for twophase transactions) or COMMIT
+		 * (for regular ones).
+		 */
+		if (rbtxn_rollback(txn))
+			rb->abort(rb, txn, commit_lsn);

This is the dead code since we don't have decoding of in-progress xacts
yet.

Third patch:
+/*
+ * An xid value pointing to a possibly ongoing or a prepared transaction.
+ * Currently used in logical decoding.  It's possible that such transactions
+ * can get aborted while the decoding is ongoing.
+ */

I would explain here that this xid is checked for abort after each
catalog scan, and sent for the details to SetupHistoricSnapshot.

+	/*
+	 * If CheckXidAlive is valid, then we check if it aborted. If it did, we
+	 * error out
+	 */
+	if (TransactionIdIsValid(CheckXidAlive) &&
+			!TransactionIdIsInProgress(CheckXidAlive) &&
+			!TransactionIdDidCommit(CheckXidAlive))
+			ereport(ERROR,
+				(errcode(ERRCODE_TRANSACTION_ROLLBACK),
+				 errmsg("transaction aborted during system catalog scan")));

Probably centralize checks in one function? As well as 'We don't expect
direct calls to heap_fetch...' ones.

P.S. Looks like you have torn the thread chain: In-Reply-To header of
mail [1] is missing. Please don't do that.

[1]: /messages/by-id/CAMGcDxeqEpWj3fTXwqhSwBdXd2RS9jzwWscO-XbeCfso6ts3+Q@mail.gmail.com

--
Arseny Sher
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#6 Andres Freund <andres@anarazel.de>
In reply to: Arseny Sher (#5)

On 2018-08-06 21:06:13 +0300, Arseny Sher wrote:

Hello,

I have looked through the patches. I will first describe relatively
serious issues I see and then proceed with small nitpicking.

- On decoding of aborted xacts. The idea to throw an error once we
detect the abort is appealing, however I think you will have problems
with subxacts in the current implementation. What if a subxact issues
DDL and then aborts, but the main transaction successfully commits?

I don't see a fundamental issue here. I've not reviewed the current
patchset meaningfully, however. Do you see a fundamental issue here?

- Decoding transactions at PREPARE record changes rules of the "we ship
all commits after lsn 'x'" game. Namely, it will break initial
tablesync: what if consistent snapshot was formed *after* PREPARE, but
before COMMIT PREPARED, and the plugin decides to employ 2pc? Instead
of getting initial contents + a continuous stream of changes, the receiver
will miss the prepared xact contents and raise 'prepared xact doesn't
exist' error. I think the starting point to address this is to forbid
two-phase decoding of xacts with lsn of PREPARE less than
snapbuilder's start_decoding_at.

Yea, that sounds like it needs to be addressed.

- Currently we will call abort_prepared cb even if we failed to actually
prepare xact due to concurrent abort. I think it is confusing for
users. We should either handle this by remembering not to invoke
abort_prepared in these cases or at least document this behaviour,
leaving this problem to the receiver side.

What precisely do you mean by "concurrent abort"?

- I find it suspicious that DecodePrepare completely ignores actions of
SnapBuildCommitTxn. For example, to execute invalidations, the latter
sets base snapshot if our xact (or subxacts) did DDL and the snapshot
not set yet. My fantasy doesn't hint me the concrete example
where this would burn at the moment, but it should be considered.

Yea, I think this needs to mirror the actions (and thus generalize the
code to avoid duplication)

Now, the bikeshedding.

First patch:
- I am one of those people upthread who don't think that converting
flags to bitmask is beneficial -- especially given that many of them
are mutually exclusive, e.g. xact can't be committed and aborted at
the same time. Apparently you have left this to the committer though.

Similar.

- Andres

#7 Arseny Sher <a.sher@postgrespro.ru>
In reply to: Andres Freund (#6)

Andres Freund <andres@anarazel.de> writes:

- On decoding of aborted xacts. The idea to throw an error once we
detect the abort is appealing, however I think you will have problems
with subxacts in the current implementation. What if a subxact issues
DDL and then aborts, but the main transaction successfully commits?

I don't see a fundamental issue here. I've not reviewed the current
patchset meaningfully, however. Do you see a fundamental issue here?

Hmm, yes, this is not an issue for this patch because after reading
PREPARE record we know all aborted subxacts and won't try to decode
their changes. However, this will be raised once we decide to decode
in-progress transactions. Checking for all subxids is expensive;
moreover, WAL doesn't provide all of them until commit... it might be
easier to prevent vacuuming of aborted stuff while decoding needs it.
Matter for another patch, anyway.

- Currently we will call abort_prepared cb even if we failed to actually
prepare xact due to concurrent abort. I think it is confusing for
users. We should either handle this by remembering not to invoke
abort_prepared in these cases or at least document this behaviour,
leaving this problem to the receiver side.

What precisely do you mean by "concurrent abort"?

With current patch, the following is possible:
* We start decoding of some prepared xact;
* Xact aborts (ABORT PREPARED) for any reason;
* Decoding processs notices this on catalog scan and calls abort()
callback;
* Later decoding process reads abort record and calls abort_prepared
callback.

--
Arseny Sher
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#8 Nikhil Sontakke <nikhils@2ndquadrant.com>
In reply to: Arseny Sher (#5)

Hi Arseny,

- Decoding transactions at PREPARE record changes rules of the "we ship
all commits after lsn 'x'" game. Namely, it will break initial
tablesync: what if consistent snapshot was formed *after* PREPARE, but
before COMMIT PREPARED, and the plugin decides to employ 2pc? Instead
of getting initial contents + a continuous stream of changes, the receiver
will miss the prepared xact contents and raise 'prepared xact doesn't
exist' error. I think the starting point to address this is to forbid
two-phase decoding of xacts with lsn of PREPARE less than
snapbuilder's start_decoding_at.

It will be the job of the plugin to return a consistent answer for
every GID that is encountered. In this case, the plugin will decode
the transaction at COMMIT PREPARED time and not at PREPARE time.

- Currently we will call abort_prepared cb even if we failed to actually
prepare xact due to concurrent abort. I think it is confusing for
users. We should either handle this by remembering not to invoke
abort_prepared in these cases or at least document this behaviour,
leaving this problem to the receiver side.

The point is, when we reach the "ROLLBACK PREPARED", we have no idea
if the "PREPARE" was aborted by this rollback happening concurrently.
So it's possible that the 2PC has been successfully decoded and we
would have to send the rollback to the other side which would need to
check if it needs to rollback locally.

- I find it suspicious that DecodePrepare completely ignores actions of
SnapBuildCommitTxn. For example, to execute invalidations, the latter
sets base snapshot if our xact (or subxacts) did DDL and the snapshot
not set yet. My fantasy doesn't hint me the concrete example
where this would burn at the moment, but it should be considered.

I had discussed this area with Petr and we didn't see any issues as well, then.

Now, the bikeshedding.

First patch:
- I am one of those people upthread who don't think that converting
flags to bitmask is beneficial -- especially given that many of them
are mutually exclusive, e.g. xact can't be committed and aborted at
the same time. Apparently you have left this to the committer though.

Hmm, there seems to be divided opinion on this. I am willing to go
back to using the booleans if there's opposition and if the committer
so wishes. Note that this patch will end up adding 4/5 more booleans
in that case (we add new ones for prepare, commit prepare, abort,
rollback prepare etc).

Second patch:
- Applying gives me
Applying: Support decoding of two-phase transactions at PREPARE
.git/rebase-apply/patch:871: trailing whitespace.

+      row. The <function>change_cb</function> callback may access system or
+      user catalog tables to aid in the process of outputting the row
+      modification details. In case of decoding a prepared (but yet
+      uncommitted) transaction or decoding of an uncommitted transaction, this
+      change callback is ensured sane access to catalog tables regardless of
+      simultaneous rollback by another backend of this very same transaction.

I don't think we should explain this, at least in such words. As
mentioned upthread, we should warn about allowed systable_* accesses
instead. Same for message_cb.

Looks like you are looking at an earlier patchset. The latest patchset
has removed the above.

+       /*
+        * Tell the reorderbuffer about the surviving subtransactions. We need to
+        * do this because the main transaction itself has not committed since we
+        * are in the prepare phase right now. So we need to be sure the snapshot
+        * is setup correctly for the main transaction in case all changes
+        * happened in subtransactions
+        */

While we do certainly need to associate subxacts here, the explanation
looks weird to me. I would leave just the 'Tell the reorderbuffer about
the surviving subtransactions' as in DecodeCommit.

}
-
/*
* There's a speculative insertion remaining, just clean in up, it
* can't have been successful, otherwise we'd gotten a confirmation

Spurious newline deletion.

- I would rename ReorderBufferCommitInternal to ReorderBufferReplay:
we replay the xact there, not commit.

- If xact is empty, we will not prepare it (and call cb),
even if the output plugin asked us. However, we will call
commit_prepared cb.

- ReorderBufferTxnIsPrepared and ReorderBufferPrepareNeedSkip do the
same and should be merged with comments explaining that the answer
must be stable.

- filter_prepare_cb callback existence is checked in both decode.c and
in filter_prepare_cb_wrapper.

+       /*
+        * The transaction may or may not exist (during restarts for example).
+        * Anyways, 2PC transactions do not contain any reorderbuffers. So allow
+        * it to be created below.
+        */

Code around looks sane, but I think that ReorderBufferTXN for our xact
must *not* exist at this moment: if we are going to COMMIT/ABORT
PREPARED it, it must have been replayed and RBTXN purged immediately
after. Also, instead of misty '2PC transactions do not contain any
reorderbuffers' I would say something like 'create dummy
ReorderBufferTXN to pass it to the callback'.

- In DecodeAbort:
+       /*
+        * If it's ROLLBACK PREPARED then handle it via callbacks.
+        */
+       if (TransactionIdIsValid(xid) &&
+               !SnapBuildXactNeedsSkip(ctx->snapshot_builder, buf->origptr) &&
+

How can xid be invalid here?

- It might be worthwile to put the check
+               !SnapBuildXactNeedsSkip(ctx->snapshot_builder, buf->origptr) &&
+               parsed->dbId == ctx->slot->data.database &&
+               !FilterByOrigin(ctx, origin_id) &&

which appears 3 times now into separate function.

+        * two-phase transactions - we either have to have all of them or none.
+        * The filter_prepare callback is optional, but can only be defined when

Kind of controversial (all of them or none, but optional), might be
formulated more accurately.

+       /*
+        * Capabilities of the output plugin.
+        */
+       bool        enable_twophase;

I would rename this to 'supports_twophase' since this is not an option
but a description of the plugin capabilities.

+       /* filter_prepare is optional, but requires two-phase decoding */
+       if ((ctx->callbacks.filter_prepare_cb != NULL) && (!ctx->enable_twophase))
+               ereport(ERROR,
+                               (errmsg("Output plugin does not support two-phase decoding, but "
+                                               "registered filter_prepared callback.")));

Don't think we need to check that...

+                * Otherwise call either PREPARE (for twophase transactions) or COMMIT
+                * (for regular ones).
+                */
+               if (rbtxn_rollback(txn))
+                       rb->abort(rb, txn, commit_lsn);

This is the dead code since we don't have decoding of in-progress xacts
yet.

Yes, the above check can be done away with.

Third patch:
+/*
+ * An xid value pointing to a possibly ongoing or a prepared transaction.
+ * Currently used in logical decoding.  It's possible that such transactions
+ * can get aborted while the decoding is ongoing.
+ */

I would explain here that this xid is checked for abort after each
catalog scan, and sent for the details to SetupHistoricSnapshot.

+       /*
+        * If CheckXidAlive is valid, then we check if it aborted. If it did, we
+        * error out
+        */
+       if (TransactionIdIsValid(CheckXidAlive) &&
+                       !TransactionIdIsInProgress(CheckXidAlive) &&
+                       !TransactionIdDidCommit(CheckXidAlive))
+                       ereport(ERROR,
+                               (errcode(ERRCODE_TRANSACTION_ROLLBACK),
+                                errmsg("transaction aborted during system catalog scan")));

Probably centralize checks in one function? As well as 'We don't expect
direct calls to heap_fetch...' ones.

P.S. Looks like you have torn the thread chain: In-Reply-To header of
mail [1] is missing. Please don't do that.

That wasn't me. I was also annoyed and surprised to see a new email
thread separate from the earlier one containing 100 or so messages.

Regards,
Nikhils
--
Nikhil Sontakke http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#9 Arseny Sher <a.sher@postgrespro.ru>
In reply to: Nikhil Sontakke (#8)

Nikhil Sontakke <nikhils@2ndquadrant.com> writes:

- Decoding transactions at PREPARE record changes rules of the "we ship
all commits after lsn 'x'" game. Namely, it will break initial
tablesync: what if consistent snapshot was formed *after* PREPARE, but
before COMMIT PREPARED, and the plugin decides to employ 2pc? Instead
of getting initial contents + a continuous stream of changes, the receiver
will miss the prepared xact contents and raise 'prepared xact doesn't
exist' error. I think the starting point to address this is to forbid
two-phase decoding of xacts with lsn of PREPARE less than
snapbuilder's start_decoding_at.

It will be the job of the plugin to return a consistent answer for
every GID that is encountered. In this case, the plugin will decode
the transaction at COMMIT PREPARED time and not at PREPARE time.

I can't imagine a scenario in which a plugin would want to send COMMIT
PREPARED instead of replaying the xact fully on the CP record, given it
had never seen the PREPARE record. On the other hand, tracking such
situations on the plugin's side would make the plugin's life unnecessarily
complicated: either it has to dig into snapbuilder/slot internals to learn
when the snapshot became consistent (which currently is impossible, as
this lsn is not saved anywhere btw), or it must fsync each of its
decisions to do or not to do 2PC.

Technically, my concern covers not only tablesync, but just plain
decoding start: we don't want to ship COMMIT PREPARED if the downstream
had never had chance to see PREPARE.

As for tablesync, looking at current implementation I contemplate that
we would need to do something along the following lines:
- Tablesync worker performs COPY.
- It then speaks with main apply worker to arrange (origin)
lsn of sync point, as it does now.
- Tablesync worker applies changes up to arranged lsn; it never uses
two-phase decoding, all xacts are replayed on COMMIT PREPARED.
Moreover, instead of going into SYNCDONE state immediately after
reaching needed lsn, it stops replaying usual commits but continues
to receive changes to finish all transactions which were prepared
before sync point (we would need some additional support from
reorderbuffer to learn when this happens). Only then it goes into
SYNCDONE.
- Behaviour of the main apply worker doesn't change: it
ignores changes of the table in question before sync point and
applies them after sync point. It also can use 2PC decoding of any
transaction or not, as it desires.
I believe this approach would implement tablesync correctly (all changes
are applied, but only once) with minimal fuss.
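
The proposed flow could be sketched as a small state machine; everything here (state names, transition inputs) is illustrative of the proposal, not the actual tablesync worker code:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Illustrative states for the proposal above: after the arranged lsn is
 * reached, the worker keeps draining xacts prepared before the sync
 * point instead of going SYNCDONE immediately.
 */
typedef enum
{
	SYNC_COPYING,			/* initial COPY of the table */
	SYNC_CATCHUP,			/* replaying changes up to the arranged lsn */
	SYNC_DRAIN_PREPARED,	/* finishing xacts prepared before sync point */
	SYNC_DONE
} SyncState;

static SyncState
sync_advance(SyncState s, bool reached_lsn, int prepared_before_syncpoint)
{
	switch (s)
	{
		case SYNC_COPYING:
			return SYNC_CATCHUP;
		case SYNC_CATCHUP:
			if (!reached_lsn)
				return SYNC_CATCHUP;
			return prepared_before_syncpoint > 0 ? SYNC_DRAIN_PREPARED : SYNC_DONE;
		case SYNC_DRAIN_PREPARED:
			return prepared_before_syncpoint > 0 ? SYNC_DRAIN_PREPARED : SYNC_DONE;
		default:
			return SYNC_DONE;
	}
}
```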

- Currently we will call abort_prepared cb even if we failed to actually
prepare xact due to concurrent abort. I think it is confusing for
users. We should either handle this by remembering not to invoke
abort_prepared in these cases or at least document this behaviour,
leaving this problem to the receiver side.

The point is, when we reach the "ROLLBACK PREPARED", we have no idea
if the "PREPARE" was aborted by this rollback happening concurrently.
So it's possible that the 2PC has been successfully decoded and we
would have to send the rollback to the other side which would need to
check if it needs to rollback locally.

I understand this. But I find this confusing for the users, so I propose
to
- Either document that "you might get abort_prepared cb called even
after abort cb was invoked for the same transaction";
- Or consider adding some infrastructure to reorderbuffer to
remember not to call abort_prepared in these cases. Due to possible
reboots, I think this means that we need not to
ReorderBufferCleanupTXN immediately after failed attempt to replay
xact on PREPARE, but mark it as 'aborted' and keep it until we see
ABORT PREPARED record. If we see that xact is marked as aborted, we
don't call abort_prepared_cb. That way even if the decoder restarts
in between, we will see PREPARE in WAL, inquire xact status (even
if we skip it as already replayed) and mark it as aborted again.
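
A hedged sketch of that second option: keep a per-xact flag around until the ABORT PREPARED record arrives, and consult it before invoking abort_prepared. The struct, field, and function names below are invented for illustration:

```c
#include <assert.h>
#include <stdbool.h>

/* Minimal stand-in for the relevant bit of ReorderBufferTXN. */
typedef struct
{
	bool		concurrently_aborted;	/* replay failed due to concurrent abort */
} DemoTXN;

/*
 * Called when replay at PREPARE errors out because the xact aborted
 * underneath us: remember that, instead of cleaning the txn up at once.
 */
static void
mark_concurrent_abort(DemoTXN *txn)
{
	txn->concurrently_aborted = true;
}

/*
 * On the ABORT PREPARED record: invoke abort_prepared only if the abort
 * was not already reported via the abort callback.
 */
static bool
should_call_abort_prepared(const DemoTXN *txn)
{
	return !txn->concurrently_aborted;
}
```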

- I find it suspicious that DecodePrepare completely ignores actions of
SnapBuildCommitTxn. For example, to execute invalidations, the latter
sets base snapshot if our xact (or subxacts) did DDL and the snapshot
not set yet. My fantasy doesn't hint me the concrete example
where this would burn at the moment, but it should be considered.

I had discussed this area with Petr and we didn't see any issues as well, then.

Ok, simplifying, what SnapBuildCommitTxn practically does is
* Decide whether we are interested in tracking this xact effects, and
if we are, mark it as committed.
* Build and distribute snapshot to all RBTXNs, if it is important.
* Set base snap of our xact if it did DDL, to execute invalidations
during replay.

I see that we don't need to do first two bullets during DecodePrepare:
xact effects are still invisible to everyone but itself after
PREPARE. As for a xact seeing its own changes, that is implemented via
logging cmin/cmax, and we don't need to mark the xact as committed for
that (cf. ReorderBufferCopySnap).

Regarding the third point... I think in 2PC decoding we might need to
execute invalidations twice:
1) After replaying the xact on PREPARE, to forget about catalog changes
the xact did -- it is not yet committed and must be invisible to
other xacts until COMMIT PREPARED. In the latest patchset invalidations
are executed only if there is at least one change in the xact (it has a
base snap). It looks fine: we can't spoil catalogs if there was nothing
to decode. Better to explain that somewhere.
2) After decoding COMMIT PREPARED to make changes visible. In current
patchset it is always done. Actually, *this* is the reason
RBTXN might already exist when we enter ReorderBufferFinishPrepared,
not "(during restarts for example)" as comment says there:
if there were inval messages, RBTXN will be created
in DecodeCommit during their addition.

BTW, "that we might need to execute invalidations, add snapshot" in
SnapBuildCommitTxn looks like a kludge to me: I suppose it is better to
do that at ReorderBufferXidSetCatalogChanges.

Now, another issue is registering xact as committed in
SnapBuildCommitTxn during COMMIT PREPARED processing. Since RBTXN is
always purged after xact replay on PREPARE, the only medium we have for
noticing catalog changes during COMMIT PREPARED is invalidation messages
attached to the CP record. This raises the following question.
* If there is a guarantee that whenever xact makes catalog changes it
generates invalidation messages, then this code is fine. However,
currently ReorderBufferXidSetCatalogChanges is also called on
XLOG_HEAP_INPLACE processing and in SnapBuildProcessNewCid, which
is useless if such guarantee exists.
* If, on the other hand, there is no such guarantee, this code is
broken.

- I am one of those people upthread who don't think that converting
flags to bitmask is beneficial -- especially given that many of them
are mutually exclusive, e.g. xact can't be committed and aborted at
the same time. Apparently you have left this to the committer though.

Hmm, there seems to be divided opinion on this. I am willing to go
back to using the booleans if there's opposition and if the committer
so wishes. Note that this patch will end up adding 4/5 more booleans
in that case (we add new ones for prepare, commit prepare, abort,
rollback prepare etc).

Well, you can unite mutually exclusive fields into one enum or char with
macros defining possible values. Transaction can't be committed and
aborted at the same time, etc.

+      row. The <function>change_cb</function> callback may access system or
+      user catalog tables to aid in the process of outputting the row
+      modification details. In case of decoding a prepared (but yet
+      uncommitted) transaction or decoding of an uncommitted transaction, this
+      change callback is ensured sane access to catalog tables regardless of
+      simultaneous rollback by another backend of this very same transaction.

I don't think we should explain this, at least in such words. As
mentioned upthread, we should warn about allowed systable_* accesses
instead. Same for message_cb.

Looks like you are looking at an earlier patchset. The latest patchset
has removed the above.

I see, sorry.

--
Arseny Sher
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#10Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Arseny Sher (#9)
1 attachment(s)

Hello,

Trying to revive this patch which attempts to support logical decoding of
two phase transactions. I've rebased and polished Nikhil's patch on the
current HEAD. Some of the logic in the previous patchset has already been
committed as part of large-in-progress transaction commits, like the
handling of concurrent aborts, so all that logic has been left out. I think
some of the earlier comments have already been addressed or are no longer
relevant. Do have a look at the patch and let me know what you think. I will
try and address any pending issues going forward.

regards,
Ajin Cherian
Fujitsu Australia

Attachments:

0001-Support-decoding-of-two-phase-transactions-at-PREPAR.patchapplication/octet-stream; name=0001-Support-decoding-of-two-phase-transactions-at-PREPAR.patch
#11Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Ajin Cherian (#10)

On Mon, Sep 7, 2020 at 10:54 AM Ajin Cherian <itsajin@gmail.com> wrote:

Hello,

Trying to revive this patch which attempts to support logical decoding of two phase transactions. I've rebased and polished Nikhil's patch on the current HEAD. Some of the logic in the previous patchset has already been committed as part of large-in-progress transaction commits, like the handling of concurrent aborts, so all that logic has been left out.

I am not sure about your point related to concurrent aborts. I think
we need some changes related to this patch. Have you tried to test
this behavior? Basically, we have the below code in
ReorderBufferProcessTXN() which will be hit for concurrent aborts, and
currently, the Asserts shown below will fail.

if (errdata->sqlerrcode == ERRCODE_TRANSACTION_ROLLBACK)
{
/*
* This error can only occur when we are sending the data in
* streaming mode and the streaming is not finished yet.
*/
Assert(streaming);
Assert(stream_started);

Nikhil has a test for the same
(0004-Teach-test_decoding-plugin-to-work-with-2PC.Jan4) in his last
email [1]. You might want to use it to test this behavior. I think you
can also keep the tests as a separate patch as Nikhil had.

One other comment:
===================
@@ -27,6 +27,7 @@ typedef struct OutputPluginOptions
{
OutputPluginOutputType output_type;
bool receive_rewrites;
+ bool enable_twophase;
} OutputPluginOptions;
..
..
@@ -684,6 +699,33 @@ startup_cb_wrapper(LogicalDecodingContext *ctx,
OutputPluginOptions *opt, bool i
/* do the actual work: call callback */
ctx->callbacks.startup_cb(ctx, opt, is_init);

+ /*
+ * If the plugin claims to support two-phase transactions, then
+ * check that the plugin implements all callbacks necessary to decode
+ * two-phase transactions - we either have to have all of them or none.
+ * The filter_prepare callback is optional, but can only be defined when
+ * two-phase decoding is enabled (i.e. the three other callbacks are
+ * defined).
+ */
+ if (opt->enable_twophase)
+ {
+ int twophase_callbacks = (ctx->callbacks.prepare_cb != NULL) +
+ (ctx->callbacks.commit_prepared_cb != NULL) +
+ (ctx->callbacks.abort_prepared_cb != NULL);
+
+ /* Plugins with incorrect number of two-phase callbacks are broken. */
+ if ((twophase_callbacks != 3) && (twophase_callbacks != 0))
+ ereport(ERROR,
+ (errmsg("Output plugin registered only %d twophase callbacks. ",
+ twophase_callbacks)));
+ }

I don't know why the patch has used this way to implement an option to
enable two-phase. Can't we use how we implement 'stream-changes'
option in commit 7259736a6e? Just refer how we set ctx->streaming and
you can use a similar way to set this parameter.

--
With Regards,
Amit Kapila.

#12Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Amit Kapila (#11)
2 attachment(s)

On Mon, Sep 7, 2020 at 11:17 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

Nikhil has a test for the same
(0004-Teach-test_decoding-plugin-to-work-with-2PC.Jan4) in his last
email [1]. You might want to use it to test this behavior. I think you
can also keep the tests as a separate patch as Nikhil had.

Done. I've added the tests and also tweaked code to make sure that the
aborts during 2-phase commits are also handled.

I don't know why the patch has used this way to implement an option to
enable two-phase. Can't we use how we implement 'stream-changes'
option in commit 7259736a6e? Just refer how we set ctx->streaming and
you can use a similar way to set this parameter.

Done, I've moved the checks for callbacks to inside the corresponding
wrappers.

Regards,
Ajin Cherian
Fujitsu Australia

Attachments:

0001-Support-decoding-of-two-phase-transactions-at-PREPAR.patchapplication/octet-stream; name=0001-Support-decoding-of-two-phase-transactions-at-PREPAR.patch
0002-Tap-test-to-test-concurrent-aborts-during-2-phase-co.patchapplication/octet-stream; name=0002-Tap-test-to-test-concurrent-aborts-during-2-phase-co.patch
#13Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Ajin Cherian (#12)

On Wed, Sep 9, 2020 at 3:33 PM Ajin Cherian <itsajin@gmail.com> wrote:

On Mon, Sep 7, 2020 at 11:17 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

Nikhil has a test for the same
(0004-Teach-test_decoding-plugin-to-work-with-2PC.Jan4) in his last
email [1]. You might want to use it to test this behavior. I think you
can also keep the tests as a separate patch as Nikhil had.

Done. I've added the tests and also tweaked code to make sure that the aborts during 2 phase commits are also handled.

Okay, I'll look into your changes but before that today, I have gone
through this entire thread to check if there are any design problems
and found that there were two major issues in the original proposal,
(a) one was to handle concurrent aborts which I think we should be
able to deal in a way similar to what we have done for decoding of
in-progress transactions and (b) what if someone specifically locks
pg_class or pg_attribute in exclusive mode (say by LOCK pg_attribute
...), it seems the deadlock can happen in that case [0]. AFAIU, people
seem to think if there is no realistic scenario where deadlock can
happen apart from user explicitly locking the system catalog then we
might be able to get away by just ignoring such xacts to be decoded at
prepare time or would block it in some other way as any way that will
block the entire system. I am not sure what is the right thing but
something has to be done to avoid any sort of deadlock for this.

Another thing, I noticed is that originally we have subscriber-side
support as well, see [1] (see *pgoutput* patch) but later dropped it
due to some reasons [2]. I think we should have pgoutput support as
well, so see what is required to get that incorporated.

I would also like to summarize my thinking on the usefulness of this
feature. One of the authors of this patch, Stas, wants this for a
conflict-free logical replication, see more details [3]. Craig seems
to suggest [3] that this will allow us to avoid conflicting schema
changes at different nodes though it is not clear to me if that is
possible without some external code support because we don't send
schema changes in logical replication, maybe Craig can shed some light
on this. Another use-case, I am thinking is if this can be used for
scaling-out reads as well. Because of 2PC, we can ensure that on
subscribers we have all the data committed on the master. Now, we can
design a system where different nodes are owners of some set of tables
and we can always get the data of those tables reliably from those
nodes, and then one can have some external process that will route the
reads accordingly. I know that the last idea is a bit of a hand-waving
but it seems to be possible after this feature.

[0]: /messages/by-id/20170328012546.473psm6546bgsi2c@alap3.anarazel.de
[1]: /messages/by-id/CAMGcDxchx=0PeQBVLzrgYG2AQ49QSRxHj5DCp7yy0QrJR0S0nA@mail.gmail.com
[2]: /messages/by-id/CAMGcDxc-kuO9uq0zRCRwbHWBj_rePY9=raR7M9pZGWoj9EOGdg@mail.gmail.com
[3]: /messages/by-id/CAMsr+YHQzGxnR-peT4SbX2-xiG2uApJMTgZ4a3TiRBM6COyfqg@mail.gmail.com

--
With Regards,
Amit Kapila.

#14Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Amit Kapila (#13)
3 attachment(s)

On Sat, Sep 12, 2020 at 9:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

Another thing, I noticed is that originally we have subscriber-side
support as well, see [1] (see *pgoutput* patch) but later dropped it
due to some reasons [2]. I think we should have pgoutput support as
well, so see what is required to get that incorporated.

I have added the rebased patch-set for pgoutput and subscriber side
changes as well. This also includes a test case in subscriber.

regards,
Ajin Cherian

Attachments:

0001-Support-decoding-of-two-phase-transactions.patchapplication/octet-stream; name=0001-Support-decoding-of-two-phase-transactions.patch
0002-Tap-test-to-test-concurrent-aborts-during-2-phase-co.patchapplication/octet-stream; name=0002-Tap-test-to-test-concurrent-aborts-during-2-phase-co.patch
0003-pgoutput-output-plugin-support-for-logical-decoding-.patchapplication/octet-stream; name=0003-pgoutput-output-plugin-support-for-logical-decoding-.patch
#15Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Ajin Cherian (#12)

On Wed, Sep 9, 2020 at 3:33 PM Ajin Cherian <itsajin@gmail.com> wrote:

On Mon, Sep 7, 2020 at 11:17 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

Nikhil has a test for the same
(0004-Teach-test_decoding-plugin-to-work-with-2PC.Jan4) in his last
email [1]. You might want to use it to test this behavior. I think you
can also keep the tests as a separate patch as Nikhil had.

Done. I've added the tests and also tweaked code to make sure that the aborts during 2 phase commits are also handled.

I don't think it is complete yet.
*
* This error can only occur when we are sending the data in
  * streaming mode and the streaming is not finished yet.
  */
- Assert(streaming);
- Assert(stream_started);
+ Assert(streaming || rbtxn_prepared(txn));
+ Assert(stream_started  || rbtxn_prepared(txn));

Here, you have updated the code but comments are still not updated.

*
@@ -2370,10 +2391,19 @@ ReorderBufferProcessTXN(ReorderBuffer *rb,
ReorderBufferTXN *txn,
errdata = NULL;
curtxn->concurrent_abort = true;

- /* Reset the TXN so that it is allowed to stream remaining data. */
- ReorderBufferResetTXN(rb, txn, snapshot_now,
-   command_id, prev_lsn,
-   specinsert);
+ /* If streaming, reset the TXN so that it is allowed to stream
remaining data. */
+ if (streaming && stream_started)
+ {
+ ReorderBufferResetTXN(rb, txn, snapshot_now,
+   command_id, prev_lsn,
+   specinsert);
+ }
+ else
+ {
+ elog(LOG, "stopping decoding of %s (%u)",
+ txn->gid[0] != '\0'? txn->gid:"", txn->xid);
+ rb->abort(rb, txn, commit_lsn);
+ }

I don't think we need to perform abort here. Later we will anyway
encounter the WAL for Rollback Prepared for which we will call
abort_prepared_cb. As we have set the 'concurrent_abort' flag, it will
allow us to skip all the intermediate records. Here, we need only
enough state in ReorderBufferTxn that it can be later used for
ReorderBufferFinishPrepared(). Basically, you need functionality
similar to ReorderBufferTruncateTXN where except for invalidations you
can free memory for everything else. You can either write a new
function ReorderBufferTruncatePreparedTxn or pass another bool
parameter in ReorderBufferTruncateTXN to indicate it is prepared_xact
and then clean up additional things that are not required for prepared
xact.

*
Similarly, I don't understand why we need below code:
ReorderBufferProcessTXN()
{
..
+ if (rbtxn_rollback(txn))
+ rb->abort(rb, txn, commit_lsn);
..
}

There is nowhere we are setting the RBTXN_ROLLBACK flag, so how will
this check be true? If we decide to remove this code then don't forget
to update the comments.

*
If my previous two comments are correct then I don't think we need the
below interface.
+    <sect3 id="logicaldecoding-output-plugin-abort">
+     <title>Transaction Abort Callback</title>
+
+     <para>
+      The required <function>abort_cb</function> callback is called whenever
+      a transaction abort has to be initiated. This can happen if we are
+      decoding a transaction that has been prepared for two-phase commit and
+      a concurrent rollback happens while we are decoding it.
+<programlisting>
+typedef void (*LogicalDecodeAbortCB) (struct LogicalDecodingContext *ctx,
+                                       ReorderBufferTXN *txn,
+                                       XLogRecPtr abort_lsn);

I don't know why the patch has used this way to implement an option to
enable two-phase. Can't we use how we implement 'stream-changes'
option in commit 7259736a6e? Just refer how we set ctx->streaming and
you can use a similar way to set this parameter.

Done, I've moved the checks for callbacks to inside the corresponding wrappers.

This is not what I suggested. Please study the commit 7259736a6e and
see how streaming option is implemented. I want later subscribers can
specify whether they want transactions to be decoded at prepare time
similar to what we have done for streaming. Also, search for
ctx->streaming in the code and see how it is set to get the idea.

Note: Please use version number while sending patches, you can use
something like git format-patch -N -v n to do that. It makes it easier
for the reviewer to compare it with the previous version.

Few other comments:
===================
1.
ReorderBufferProcessTXN()
{
..
if (streaming)
{
ReorderBufferTruncateTXN(rb, txn);

/* Reset the CheckXidAlive */
CheckXidAlive = InvalidTransactionId;
}
else
ReorderBufferCleanupTXN(rb, txn);
..
}

I don't think we can perform ReorderBufferCleanupTXN for the prepared
transactions because if we have removed the ReorderBufferTxn before
commit, the later code might not consider such a transaction in the
system and compute the wrong value of restart_lsn for a slot.
Basically, in SnapBuildProcessRunningXacts() when we call
ReorderBufferGetOldestTXN(), it should show the ReorderBufferTxn of
the prepared transaction which is not yet committed but because we
have removed it after prepare, it won't get that TXN and then that
leads to wrong computation of restart_lsn. Once we start from a wrong
point in WAL, the snapshot built was incorrect which will lead to the
wrong result. This is the same reason why the patch is not doing
ReorderBufferForget in DecodePrepare when we decide to skip the
transaction. Also, here, we need to set CheckXidAlive =
InvalidTransactionId; for prepared xact as well.

2. Have you thought about the interaction of streaming with prepared
transactions? You can try writing some tests using pg_logical* APIs
and see the behaviour. For ex. there is no handling in
ReorderBufferStreamCommit for the same. I think you need to introduce
stream_prepare API similar to stream_commit and then use the same.

3.
- if (streaming)
+ if (streaming || rbtxn_prepared(change->txn))
  {
  curtxn = change->txn;
  SetupCheckXidLive(curtxn->xid);
@@ -2249,7 +2254,6 @@ ReorderBufferProcessTXN(ReorderBuffer *rb,
ReorderBufferTXN *txn,
  break;
  }
  }
-
  /*

Spurious line removal.

--
With Regards,
Amit Kapila.

#16Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Ajin Cherian (#14)

On Tue, Sep 15, 2020 at 5:27 PM Ajin Cherian <itsajin@gmail.com> wrote:

On Sat, Sep 12, 2020 at 9:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

Another thing, I noticed is that originally we have subscriber-side
support as well, see [1] (see *pgoutput* patch) but later dropped it
due to some reasons [2]. I think we should have pgoutput support as
well, so see what is required to get that incorporated.

I have added the rebased patch-set for pgoutput and subscriber side changes as well. This also includes a test case in subscriber.

As mentioned in my email there were some reasons due to which the
support has been left for later, have you checked those and if so, can
you please explain how you have addressed those or why they are not
relevant now if that is the case?

--
With Regards,
Amit Kapila.

#17Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Amit Kapila (#15)

On Tue, Sep 15, 2020 at 10:43 PM Amit Kapila <amit.kapila16@gmail.com>
wrote:

Few other comments:
===================
1.
ReorderBufferProcessTXN()
{
..
if (streaming)
{
ReorderBufferTruncateTXN(rb, txn);

/* Reset the CheckXidAlive */
CheckXidAlive = InvalidTransactionId;
}
else
ReorderBufferCleanupTXN(rb, txn);
..
}

I don't think we can perform ReorderBufferCleanupTXN for the prepared
transactions because if we have removed the ReorderBufferTxn before
commit, the later code might not consider such a transaction in the
system and compute the wrong value of restart_lsn for a slot.
Basically, in SnapBuildProcessRunningXacts() when we call
ReorderBufferGetOldestTXN(), it should show the ReorderBufferTxn of
the prepared transaction which is not yet committed but because we
have removed it after prepare, it won't get that TXN and then that
leads to wrong computation of restart_lsn. Once we start from a wrong
point in WAL, the snapshot built was incorrect which will lead to the
wrong result. This is the same reason why the patch is not doing
ReorderBufferForget in DecodePrepare when we decide to skip the
transaction. Also, here, we need to set CheckXidAlive =
InvalidTransactionId; for prepared xact as well.

Just to confirm what you are expecting here: so after we send out the
prepared transaction to the plugin, you are suggesting NOT to do a
ReorderBufferCleanupTXN -- but what should we do instead? Are you
suggesting to do what you suggested
as part of concurrent abort handling? Something equivalent
to ReorderBufferTruncateTXN()? remove all changes of the transaction but
keep the invalidations and tuplecids etc? Do you think we should have a new
flag in txn to indicate that this transaction has already been decoded?
(prepare_decoded?) Any other special handling you think is required?

regards,
Ajin Cherian
Fujitsu Australia

#18Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Ajin Cherian (#17)

On Thu, Sep 17, 2020 at 2:02 PM Ajin Cherian <itsajin@gmail.com> wrote:

On Tue, Sep 15, 2020 at 10:43 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

Few other comments:
===================
1.
ReorderBufferProcessTXN()
{
..
if (streaming)
{
ReorderBufferTruncateTXN(rb, txn);

/* Reset the CheckXidAlive */
CheckXidAlive = InvalidTransactionId;
}
else
ReorderBufferCleanupTXN(rb, txn);
..
}

I don't think we can perform ReorderBufferCleanupTXN for the prepared
transactions because if we have removed the ReorderBufferTxn before
commit, the later code might not consider such a transaction in the
system and compute the wrong value of restart_lsn for a slot.
Basically, in SnapBuildProcessRunningXacts() when we call
ReorderBufferGetOldestTXN(), it should show the ReorderBufferTxn of
the prepared transaction which is not yet committed but because we
have removed it after prepare, it won't get that TXN and then that
leads to wrong computation of restart_lsn. Once we start from a wrong
point in WAL, the snapshot built was incorrect which will lead to the
wrong result. This is the same reason why the patch is not doing
ReorderBufferForget in DecodePrepare when we decide to skip the
transaction. Also, here, we need to set CheckXidAlive =
InvalidTransactionId; for prepared xact as well.

Just to confirm what you are expecting here. so after we send out the prepare transaction to the plugin, you are suggesting to NOT do a ReorderBufferCleanupTXN, but what to do instead?. Are you suggesting to do what you suggested
as part of concurrent abort handling?

Yes.

Something equivalent to ReorderBufferTruncateTXN()? remove all changes of the transaction but keep the invalidations and tuplecids etc?

I don't think you need tuplecids. I have checked
ReorderBufferFinishPrepared() and that seems to require only
invalidations, check if anything else is required.

Do you think we should have a new flag in txn to indicate that this transaction has already been decoded? (prepare_decoded?)

Yeah, I think that would be better. How about if name the new variable
as cleanup_prepared?

--
With Regards,
Amit Kapila.

#19Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Amit Kapila (#15)
3 attachment(s)

On Tue, Sep 15, 2020 at 10:43 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

I don't think it is complete yet.
*
* This error can only occur when we are sending the data in
* streaming mode and the streaming is not finished yet.
*/
- Assert(streaming);
- Assert(stream_started);
+ Assert(streaming || rbtxn_prepared(txn));
+ Assert(stream_started  || rbtxn_prepared(txn));

Here, you have updated the code but comments are still not updated.

Updated the comments.

I don't think we need to perform abort here. Later we will anyway
encounter the WAL for Rollback Prepared for which we will call
abort_prepared_cb. As we have set the 'concurrent_abort' flag, it will
allow us to skip all the intermediate records. Here, we need only
enough state in ReorderBufferTxn that it can be later used for
ReorderBufferFinishPrepared(). Basically, you need functionality
similar to ReorderBufferTruncateTXN where except for invalidations you
can free memory for everything else. You can either write a new
function ReorderBufferTruncatePreparedTxn or pass another bool
parameter in ReorderBufferTruncateTXN to indicate it is prepared_xact
and then clean up additional things that are not required for prepared
xact.

Added a new parameter to ReorderBufferTruncateTXN for
prepared transactions and did cleanup of tuplecids as well; I have
left snapshots and invalidations.
As a result of this, I also had to create a new function
ReorderBufferCleanupPreparedTXN which will clean up the rest as part
of the FinishPrepared handling, since we can't call
ReorderBufferCleanupTXN again after this.

*
Similarly, I don't understand why we need below code:
ReorderBufferProcessTXN()
{
..
+ if (rbtxn_rollback(txn))
+ rb->abort(rb, txn, commit_lsn);
..
}

There is nowhere we are setting the RBTXN_ROLLBACK flag, so how will
this check be true? If we decide to remove this code then don't forget
to update the comments.

Removed.

*
If my previous two comments are correct then I don't think we need the
below interface.
+    <sect3 id="logicaldecoding-output-plugin-abort">
+     <title>Transaction Abort Callback</title>
+
+     <para>
+      The required <function>abort_cb</function> callback is called whenever
+      a transaction abort has to be initiated. This can happen if we are
+      decoding a transaction that has been prepared for two-phase commit and
+      a concurrent rollback happens while we are decoding it.
+<programlisting>
+typedef void (*LogicalDecodeAbortCB) (struct LogicalDecodingContext *ctx,
+                                       ReorderBufferTXN *txn,
+                                       XLogRecPtr abort_lsn);

Removed.

I don't know why the patch has used this way to implement an option to
enable two-phase. Can't we use how we implement 'stream-changes'
option in commit 7259736a6e? Just refer how we set ctx->streaming and
you can use a similar way to set this parameter.

Done, I've moved the checks for callbacks to inside the corresponding wrappers.

This is not what I suggested. Please study the commit 7259736a6e and
see how streaming option is implemented. I want later subscribers can
specify whether they want transactions to be decoded at prepare time
similar to what we have done for streaming. Also, search for
ctx->streaming in the code and see how it is set to get the idea.

Changed it similar to ctx->streaming logic.

Note: Please use version number while sending patches, you can use
something like git format-patch -N -v n to do that. It makes it easier
for the reviewer to compare it with the previous version.

Done.

Few other comments:
===================
1.
ReorderBufferProcessTXN()
{
..
if (streaming)
{
ReorderBufferTruncateTXN(rb, txn);

/* Reset the CheckXidAlive */
CheckXidAlive = InvalidTransactionId;
}
else
ReorderBufferCleanupTXN(rb, txn);
..
}

I don't think we can perform ReorderBufferCleanupTXN for the prepared
transactions because if we have removed the ReorderBufferTxn before
commit, the later code might not consider such a transaction in the
system and compute the wrong value of restart_lsn for a slot.
Basically, in SnapBuildProcessRunningXacts() when we call
ReorderBufferGetOldestTXN(), it should show the ReorderBufferTxn of
the prepared transaction which is not yet committed but because we
have removed it after prepare, it won't get that TXN and then that
leads to wrong computation of restart_lsn. Once we start from a wrong
point in WAL, the snapshot built was incorrect which will lead to the
wrong result. This is the same reason why the patch is not doing
ReorderBufferForget in DecodePrepare when we decide to skip the
transaction. Also, here, we need to set CheckXidAlive =
InvalidTransactionId; for prepared xact as well.

Updated as suggested above.

2. Have you thought about the interaction of streaming with prepared
transactions? You can try writing some tests using pg_logical* APIs
and see the behaviour. For ex. there is no handling in
ReorderBufferStreamCommit for the same. I think you need to introduce
stream_prepare API similar to stream_commit and then use the same.

This is pending. I will look at it in the next iteration. Also pending
is the investigation as to why the pgoutput changes were not added
initially.

regards,
Ajin Cherian
Fujitsu Australia

Attachments:

v4-0002-Tap-test-to-test-concurrent-aborts-during-2-phase.patchapplication/octet-stream; name=v4-0002-Tap-test-to-test-concurrent-aborts-during-2-phase.patch
v4-0001-Support-decoding-of-two-phase-transactions.patchapplication/octet-stream; name=v4-0001-Support-decoding-of-two-phase-transactions.patch
v4-0003-pgoutput-output-plugin-support-for-logical-decodi.patchapplication/octet-stream; name=v4-0003-pgoutput-output-plugin-support-for-logical-decodi.patch
#20Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Amit Kapila (#18)

On Thu, Sep 17, 2020 at 10:35 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

Yeah, I think that would be better. How about if name the new variable
as cleanup_prepared?

I haven't added a new flag to indicate that the prepare was cleaned
up, as that wasn't really necessary. Instead, I used a new function to
do partial cleanup of whatever was not done in the truncate. If you
think, using a flag and doing special handling in
ReorderBufferCleanupTXN was a better idea, let me know.

regards,
Ajin Cherian
Fujitsu Australia

#21Dilip Kumar
Dilip Kumar
dilipbalaut@gmail.com
In reply to: Ajin Cherian (#19)

On Fri, Sep 18, 2020 at 6:02 PM Ajin Cherian <itsajin@gmail.com> wrote:

I have reviewed v4-0001 patch and I have a few comments. I haven't
yet completely reviewed the patch.

1.
+ /*
+ * Process invalidation messages, even if we're not interested in the
+ * transaction's contents, since the various caches need to always be
+ * consistent.
+ */
+ if (parsed->nmsgs > 0)
+ {
+ if (!ctx->fast_forward)
+ ReorderBufferAddInvalidations(ctx->reorder, xid, buf->origptr,
+   parsed->nmsgs, parsed->msgs);
+ ReorderBufferXidSetCatalogChanges(ctx->reorder, xid, buf->origptr);
+ }
+

I think we don't need to add prepare-time invalidation messages as
now we are already logging the invalidations at the command level and
adding them to the reorder buffer.

2.

+ /*
+ * Tell the reorderbuffer about the surviving subtransactions. We need to
+ * do this because the main transaction itself has not committed since we
+ * are in the prepare phase right now. So we need to be sure the snapshot
+ * is setup correctly for the main transaction in case all changes
+ * happened in subtransanctions
+ */
+ for (i = 0; i < parsed->nsubxacts; i++)
+ {
+ ReorderBufferCommitChild(ctx->reorder, xid, parsed->subxacts[i],
+ buf->origptr, buf->endptr);
+ }
+
+ if (SnapBuildXactNeedsSkip(ctx->snapshot_builder, buf->origptr) ||
+ (parsed->dbId != InvalidOid && parsed->dbId != ctx->slot->data.database) ||
+ ctx->fast_forward || FilterByOrigin(ctx, origin_id))
+ return;

Do we need to call ReorderBufferCommitChild if we are skipping this transaction?
I think the below check should be before calling ReorderBufferCommitChild.

3.

+ /*
+ * If it's ROLLBACK PREPARED then handle it via callbacks.
+ */
+ if (TransactionIdIsValid(xid) &&
+ !SnapBuildXactNeedsSkip(ctx->snapshot_builder, buf->origptr) &&
+ parsed->dbId == ctx->slot->data.database &&
+ !FilterByOrigin(ctx, origin_id) &&
+ ReorderBufferTxnIsPrepared(ctx->reorder, xid, parsed->twophase_gid))
+ {
+ ReorderBufferFinishPrepared(ctx->reorder, xid, buf->origptr, buf->endptr,
+ commit_time, origin_id, origin_lsn,
+ parsed->twophase_gid, false);
+ return;
+ }

I think we have already checked !SnapBuildXactNeedsSkip, parsed->dbId
== ctx->slot->data.database, and !FilterByOrigin in DecodePrepare,
so if those are not true then we wouldn't have prepared this
transaction, i.e., ReorderBufferTxnIsPrepared will be false. So why do
we need to recheck these conditions?

4.

+ /* If streaming, reset the TXN so that it is allowed to stream
remaining data. */
+ if (streaming && stream_started)
+ {
+ ReorderBufferResetTXN(rb, txn, snapshot_now,
+   command_id, prev_lsn,
+   specinsert);
+ }
+ else
+ {
+ elog(LOG, "stopping decoding of %s (%u)",
+ txn->gid[0] != '\0'? txn->gid:"", txn->xid);
+ ReorderBufferTruncateTXN(rb, txn, true);
+ }

Why is just checking (streaming) not enough? I agree that if we are coming
here in streaming mode then stream_started must be true,
but we already have an assert for that.

--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com

#22Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Ajin Cherian (#19)

On Fri, Sep 18, 2020 at 6:02 PM Ajin Cherian <itsajin@gmail.com> wrote:

On Tue, Sep 15, 2020 at 10:43 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

I don't think it is complete yet.
*
* This error can only occur when we are sending the data in
* streaming mode and the streaming is not finished yet.
*/
- Assert(streaming);
- Assert(stream_started);
+ Assert(streaming || rbtxn_prepared(txn));
+ Assert(stream_started  || rbtxn_prepared(txn));

Here, you have updated the code but comments are still not updated.

Updated the comments.

I don't think we need to perform abort here. Later we will anyway
encounter the WAL for Rollback Prepared for which we will call
abort_prepared_cb. As we have set the 'concurrent_abort' flag, it will
allow us to skip all the intermediate records. Here, we need only
enough state in ReorderBufferTxn that it can be later used for
ReorderBufferFinishPrepared(). Basically, you need functionality
similar to ReorderBufferTruncateTXN where except for invalidations you
can free memory for everything else. You can either write a new
function ReorderBufferTruncatePreparedTxn or pass another bool
parameter in ReorderBufferTruncateTXN to indicate it is prepared_xact
and then clean up additional things that are not required for prepared
xact.

Added a new parameter to ReorderBufferTruncateTXN for
prepared transactions and did cleanup of tuplecids as well; I have
left snapshots and transactions.
As a result of this, I also had to create a new function
ReorderBufferCleanupPreparedTXN which will clean up the rest as part
of FinishPrepared handling as we can't call
ReorderBufferCleanupTXN again after this.

Why can't we call ReorderBufferCleanupTXN() from
ReorderBufferFinishPrepared after your changes?

+ * If streaming, keep the remaining info - transactions, tuplecids,
invalidations and
+ * snapshots. If after a PREPARE, keep only the invalidations and snapshots.
  */
 static void
-ReorderBufferTruncateTXN(ReorderBuffer *rb, ReorderBufferTXN *txn)
+ReorderBufferTruncateTXN(ReorderBuffer *rb, ReorderBufferTXN *txn,
bool txn_prepared)

Why do we need even snapshot for Prepared transactions? Also, note
that in the comment there is no space before you start a new line.

I don't know why the patch has implemented the option to
enable two-phase this way. Can't we do it the way the 'stream-changes'
option is implemented in commit 7259736a6e? Just refer to how we set
ctx->streaming and you can use a similar way to set this parameter.

Done, I've moved the checks for callbacks to inside the corresponding wrappers.

This is not what I suggested. Please study commit 7259736a6e and
see how the streaming option is implemented. I want subscribers to later
be able to specify whether they want transactions to be decoded at prepare
time, similar to what we have done for streaming. Also, search for
ctx->streaming in the code and see how it is set to get the idea.

Changed it similar to ctx->streaming logic.

Hmm, I still don't see the relevant changes in pg_decode_startup().

--
With Regards,
Amit Kapila.

#23Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#21)

On Sun, Sep 20, 2020 at 11:01 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Fri, Sep 18, 2020 at 6:02 PM Ajin Cherian <itsajin@gmail.com> wrote:

3.

+ /*
+ * If it's ROLLBACK PREPARED then handle it via callbacks.
+ */
+ if (TransactionIdIsValid(xid) &&
+ !SnapBuildXactNeedsSkip(ctx->snapshot_builder, buf->origptr) &&
+ parsed->dbId == ctx->slot->data.database &&
+ !FilterByOrigin(ctx, origin_id) &&
+ ReorderBufferTxnIsPrepared(ctx->reorder, xid, parsed->twophase_gid))
+ {
+ ReorderBufferFinishPrepared(ctx->reorder, xid, buf->origptr, buf->endptr,
+ commit_time, origin_id, origin_lsn,
+ parsed->twophase_gid, false);
+ return;
+ }

I think we have already checked !SnapBuildXactNeedsSkip, parsed->dbId
== ctx->slot->data.database, and !FilterByOrigin in DecodePrepare,
so if those are not true then we wouldn't have prepared this
transaction, i.e., ReorderBufferTxnIsPrepared will be false. So why do
we need to recheck these conditions?

Yeah, probably we should have Assert for below three conditions:
+ !SnapBuildXactNeedsSkip(ctx->snapshot_builder, buf->origptr) &&
+ parsed->dbId == ctx->slot->data.database &&
+ !FilterByOrigin(ctx, origin_id) &&

Your other comments make sense to me.

--
With Regards,
Amit Kapila.

#24Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Amit Kapila (#22)

Why can't we call ReorderBufferCleanupTXN() from
ReorderBufferFinishPrepared after your changes?

Since the truncate already removed the changes, it would fail on the
below Assert in ReorderBufferCleanupTXN()
/* Check we're not mixing changes from different transactions. */
Assert(change->txn == txn);

regards.
Ajin Cherian
Fujitsu Australia

#25Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Ajin Cherian (#24)

On Mon, Sep 21, 2020 at 12:36 PM Ajin Cherian <itsajin@gmail.com> wrote:

Why can't we call ReorderBufferCleanupTXN() from
ReorderBufferFinishPrepared after your changes?

Since the truncate already removed the changes, it would fail on the
below Assert in ReorderBufferCleanupTXN()
/* Check we're not mixing changes from different transactions. */
Assert(change->txn == txn);

The changes list should be empty by that time because we remove each
change from the list; see the code "dlist_delete(&change->node);" in
ReorderBufferTruncateTXN. If you are hitting the Assert as you
mentioned, then I think the problem is something else.

--
With Regards,
Amit Kapila.

#26Dilip Kumar
Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#23)

On Mon, Sep 21, 2020 at 10:20 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Sun, Sep 20, 2020 at 11:01 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Fri, Sep 18, 2020 at 6:02 PM Ajin Cherian <itsajin@gmail.com> wrote:

3.

+ /*
+ * If it's ROLLBACK PREPARED then handle it via callbacks.
+ */
+ if (TransactionIdIsValid(xid) &&
+ !SnapBuildXactNeedsSkip(ctx->snapshot_builder, buf->origptr) &&
+ parsed->dbId == ctx->slot->data.database &&
+ !FilterByOrigin(ctx, origin_id) &&
+ ReorderBufferTxnIsPrepared(ctx->reorder, xid, parsed->twophase_gid))
+ {
+ ReorderBufferFinishPrepared(ctx->reorder, xid, buf->origptr, buf->endptr,
+ commit_time, origin_id, origin_lsn,
+ parsed->twophase_gid, false);
+ return;
+ }

I think we have already checked !SnapBuildXactNeedsSkip, parsed->dbId
== ctx->slot->data.database, and !FilterByOrigin in DecodePrepare,
so if those are not true then we wouldn't have prepared this
transaction, i.e., ReorderBufferTxnIsPrepared will be false. So why do
we need to recheck these conditions?

Yeah, probably we should have Assert for below three conditions:
+ !SnapBuildXactNeedsSkip(ctx->snapshot_builder, buf->origptr) &&
+ parsed->dbId == ctx->slot->data.database &&
+ !FilterByOrigin(ctx, origin_id) &&

+1

--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com

#27Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Dilip Kumar (#21)

On Sun, Sep 20, 2020 at 3:31 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:

+ /*
+ * If it's ROLLBACK PREPARED then handle it via callbacks.
+ */
+ if (TransactionIdIsValid(xid) &&
+ !SnapBuildXactNeedsSkip(ctx->snapshot_builder, buf->origptr) &&
+ parsed->dbId == ctx->slot->data.database &&
+ !FilterByOrigin(ctx, origin_id) &&
+ ReorderBufferTxnIsPrepared(ctx->reorder, xid, parsed->twophase_gid))
+ {
+ ReorderBufferFinishPrepared(ctx->reorder, xid, buf->origptr, buf->endptr,
+ commit_time, origin_id, origin_lsn,
+ parsed->twophase_gid, false);
+ return;
+ }

I think we have already checked !SnapBuildXactNeedsSkip, parsed->dbId
== ctx->slot->data.database, and !FilterByOrigin in DecodePrepare,
so if those are not true then we wouldn't have prepared this
transaction, i.e., ReorderBufferTxnIsPrepared will be false. So why do
we need to recheck these conditions?

We could enter DecodeAbort even without a prepare, as the code is
common for both XLOG_XACT_ABORT and XLOG_XACT_ABORT_PREPARED. So, the
conditions !SnapBuildXactNeedsSkip, parsed->dbId
== ctx->slot->data.database, and !FilterByOrigin could be true even though
the transaction is not prepared; in that case we don't need to call
ReorderBufferFinishPrepared (with the commit flag false) but should call
ReorderBufferAbort instead. But I think there is a problem: if those
conditions are in fact false, then we should return without trying to
abort via ReorderBufferAbort. What do you think?

I agree with all your other comments.

regards,
Ajin
Fujitsu Australia

#28Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Ajin Cherian (#27)

On Mon, Sep 21, 2020 at 3:45 PM Ajin Cherian <itsajin@gmail.com> wrote:

On Sun, Sep 20, 2020 at 3:31 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:

+ /*
+ * If it's ROLLBACK PREPARED then handle it via callbacks.
+ */
+ if (TransactionIdIsValid(xid) &&
+ !SnapBuildXactNeedsSkip(ctx->snapshot_builder, buf->origptr) &&
+ parsed->dbId == ctx->slot->data.database &&
+ !FilterByOrigin(ctx, origin_id) &&
+ ReorderBufferTxnIsPrepared(ctx->reorder, xid, parsed->twophase_gid))
+ {
+ ReorderBufferFinishPrepared(ctx->reorder, xid, buf->origptr, buf->endptr,
+ commit_time, origin_id, origin_lsn,
+ parsed->twophase_gid, false);
+ return;
+ }

I think we have already checked !SnapBuildXactNeedsSkip, parsed->dbId
== ctx->slot->data.database, and !FilterByOrigin in DecodePrepare,
so if those are not true then we wouldn't have prepared this
transaction, i.e., ReorderBufferTxnIsPrepared will be false. So why do
we need to recheck these conditions?

We could enter DecodeAbort even without a prepare, as the code is
common for both XLOG_XACT_ABORT and XLOG_XACT_ABORT_PREPARED. So, the
conditions !SnapBuildXactNeedsSkip, parsed->dbId
== ctx->slot->data.database, and !FilterByOrigin could be true even though
the transaction is not prepared; in that case we don't need to call
ReorderBufferFinishPrepared (with the commit flag false) but should call
ReorderBufferAbort instead. But I think there is a problem: if those
conditions are in fact false, then we should return without trying to
abort via ReorderBufferAbort. What do you think?

I think we need to call ReorderBufferAbort at least to clean up the
TXN. Also, if what you are saying is correct then that should be true
without this patch as well, no? If so, we don't need to worry about it
as far as this patch is concerned.

--
With Regards,
Amit Kapila.

#29Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Amit Kapila (#28)

On Mon, Sep 21, 2020 at 9:24 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

I think we need to call ReorderBufferAbort at least to clean up the
TXN. Also, if what you are saying is correct then that should be true
without this patch as well, no? If so, we don't need to worry about it
as far as this patch is concerned.

Yes, that is true. So I will change this check to:

if (TransactionIdIsValid(xid) &&
ReorderBufferTxnIsPrepared(ctx->reorder, xid, parsed->twophase_gid)

regards,
Ajin Cherian
Fujitsu Australia

#30Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Ajin Cherian (#29)

On Mon, Sep 21, 2020 at 5:23 PM Ajin Cherian <itsajin@gmail.com> wrote:

On Mon, Sep 21, 2020 at 9:24 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

I think we need to call ReorderBufferAbort at least to clean up the
TXN. Also, if what you are saying is correct then that should be true
without this patch as well, no? If so, we don't need to worry about it
as far as this patch is concerned.

Yes, that is true. So I will change this check to:

if (TransactionIdIsValid(xid) &&
ReorderBufferTxnIsPrepared(ctx->reorder, xid, parsed->twophase_gid)

Yeah, and add the Assert for the skip conditions as asked above.

--
With Regards,
Amit Kapila.

#31Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Dilip Kumar (#21)
3 attachment(s)

On Sun, Sep 20, 2020 at 3:31 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:

1.
+ /*
+ * Process invalidation messages, even if we're not interested in the
+ * transaction's contents, since the various caches need to always be
+ * consistent.
+ */
+ if (parsed->nmsgs > 0)
+ {
+ if (!ctx->fast_forward)
+ ReorderBufferAddInvalidations(ctx->reorder, xid, buf->origptr,
+   parsed->nmsgs, parsed->msgs);
+ ReorderBufferXidSetCatalogChanges(ctx->reorder, xid, buf->origptr);
+ }
+

I think we don't need to add prepare-time invalidation messages, as we
are now already logging the invalidations at the command level and adding
them to the reorder buffer.

Removed.

2.

+ /*
+ * Tell the reorderbuffer about the surviving subtransactions. We need to
+ * do this because the main transaction itself has not committed since we
+ * are in the prepare phase right now. So we need to be sure the snapshot
+ * is setup correctly for the main transaction in case all changes
+ * happened in subtransactions
+ */
+ for (i = 0; i < parsed->nsubxacts; i++)
+ {
+ ReorderBufferCommitChild(ctx->reorder, xid, parsed->subxacts[i],
+ buf->origptr, buf->endptr);
+ }
+
+ if (SnapBuildXactNeedsSkip(ctx->snapshot_builder, buf->origptr) ||
+ (parsed->dbId != InvalidOid && parsed->dbId != ctx->slot->data.database) ||
+ ctx->fast_forward || FilterByOrigin(ctx, origin_id))
+ return;

Do we need to call ReorderBufferCommitChild if we are skipping this transaction?
I think the below check should be before calling ReorderBufferCommitChild.

Done.

3.

+ /*
+ * If it's ROLLBACK PREPARED then handle it via callbacks.
+ */
+ if (TransactionIdIsValid(xid) &&
+ !SnapBuildXactNeedsSkip(ctx->snapshot_builder, buf->origptr) &&
+ parsed->dbId == ctx->slot->data.database &&
+ !FilterByOrigin(ctx, origin_id) &&
+ ReorderBufferTxnIsPrepared(ctx->reorder, xid, parsed->twophase_gid))
+ {
+ ReorderBufferFinishPrepared(ctx->reorder, xid, buf->origptr, buf->endptr,
+ commit_time, origin_id, origin_lsn,
+ parsed->twophase_gid, false);
+ return;
+ }

I think we have already checked !SnapBuildXactNeedsSkip, parsed->dbId
== ctx->slot->data.database, and !FilterByOrigin in DecodePrepare,
so if those are not true then we wouldn't have prepared this
transaction, i.e., ReorderBufferTxnIsPrepared will be false. So why do
we need to recheck these conditions?

I didn't change this, as I am seeing cases where the Abort is getting
called for transactions that need to be skipped. I also see that the
same check is there both in DecodePrepare and DecodeCommit.
So, while the same transactions were not getting prepared or
committed, it tries to do ROLLBACK PREPARED (as part of the finish
prepared handling). The check in ReorderBufferTxnIsPrepared() is
also not proper. I will need to relook at
this logic in a future patch.

4.

+ /* If streaming, reset the TXN so that it is allowed to stream
remaining data. */
+ if (streaming && stream_started)
+ {
+ ReorderBufferResetTXN(rb, txn, snapshot_now,
+   command_id, prev_lsn,
+   specinsert);
+ }
+ else
+ {
+ elog(LOG, "stopping decoding of %s (%u)",
+ txn->gid[0] != '\0'? txn->gid:"", txn->xid);
+ ReorderBufferTruncateTXN(rb, txn, true);
+ }

Why is just checking (streaming) not enough? I agree that if we are coming
here in streaming mode then stream_started must be true,
but we already have an assert for that.

Changed.

Amit,

I have also changed the test_decoding startup to support two-phase commits
only if specified, similar to how it was done for streaming. I have
also changed the test cases accordingly. However, I have not added it
to the pgoutput startup, as that would require CREATE SUBSCRIPTION
changes. I will do that in a future patch. Some other pending changes
are:

1. Remove snapshots on prepare truncate.
2. Look at why ReorderBufferCleanupTXN is failing after a
ReorderBufferTruncateTXN
3. Add prepare support to streaming

regards,
Ajin Cherian
Fujitsu Australia

Attachments:

v5-0001-Support-decoding-of-two-phase-transactions.patch (application/octet-stream)
v5-0002-Tap-test-to-test-concurrent-aborts-during-2-phase.patch (application/octet-stream)
v5-0003-pgoutput-output-plugin-support-for-logical-decodi.patch (application/octet-stream)
#32Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Ajin Cherian (#31)

On Tue, Sep 22, 2020 at 5:18 PM Ajin Cherian <itsajin@gmail.com> wrote:

On Sun, Sep 20, 2020 at 3:31 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:

3.

+ /*
+ * If it's ROLLBACK PREPARED then handle it via callbacks.
+ */
+ if (TransactionIdIsValid(xid) &&
+ !SnapBuildXactNeedsSkip(ctx->snapshot_builder, buf->origptr) &&
+ parsed->dbId == ctx->slot->data.database &&
+ !FilterByOrigin(ctx, origin_id) &&
+ ReorderBufferTxnIsPrepared(ctx->reorder, xid, parsed->twophase_gid))
+ {
+ ReorderBufferFinishPrepared(ctx->reorder, xid, buf->origptr, buf->endptr,
+ commit_time, origin_id, origin_lsn,
+ parsed->twophase_gid, false);
+ return;
+ }

I think we have already checked !SnapBuildXactNeedsSkip, parsed->dbId
== ctx->slot->data.database, and !FilterByOrigin in DecodePrepare,
so if those are not true then we wouldn't have prepared this
transaction, i.e., ReorderBufferTxnIsPrepared will be false. So why do
we need to recheck these conditions?

I didn't change this, as I am seeing cases where the Abort is getting
called for transactions that need to be skipped. I also see that the
same check is there both in DecodePrepare and DecodeCommit.
So, while the same transactions were not getting prepared or
committed, it tries to do ROLLBACK PREPARED (as part of the finish
prepared handling). The check in ReorderBufferTxnIsPrepared() is
also not proper.

If the transaction is prepared, which you can ensure via
ReorderBufferTxnIsPrepared() (considering you have a proper check in
that function), it should not require skipping the transaction in
Abort. One way it could happen is if you clean up the ReorderBufferTxn
in Prepare, which you were doing in an earlier version of the patch and
which I pointed out was wrong; if you have changed that then I don't know
why it could fail. Maybe someplace else during prepare the patch is
freeing it. Just check that.

I will need to relook
this logic again in a future patch.

No problem. I think you can handle the other comments and then we can
come back to this and you might want to share the exact details of the
test (may be a narrow down version of the original test) and I or
someone else might be able to help you with that.

--
With Regards,
Amit Kapila.

#33Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Amit Kapila (#32)
4 attachment(s)

On Wed, Sep 23, 2020 at 2:39 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

No problem. I think you can handle the other comments and then we can
come back to this and you might want to share the exact details of the
test (may be a narrow down version of the original test) and I or
someone else might be able to help you with that.

--
With Regards,
Amit Kapila.

I have added a new patch for supporting two-phase commit semantics in
the streaming APIs for the logical decoding plugins. I have added 3
APIs
1. stream_prepare
2. stream_commit_prepared
3. stream_abort_prepared

I have also added support for the new APIs in the test_decoding
plugin. I have not yet added it to pgoutput.

I have also added a fix for the error I saw while calling
ReorderBufferCleanupTXN as part of FinishPrepared handling. As a
result I have removed the function I added earlier,
ReorderBufferCleanupPreparedTXN.
Please have a look at the new changes and let me know what you think.

I will continue to look at:

1. Remove snapshots on prepare truncate.
2. Bug seen while abort of prepared transaction, the prepared flag is
lost, and not able to make out that it was a previously prepared
transaction.

regards,
Ajin Cherian
Fujitsu Australia

Attachments:

v6-0001-Support-decoding-of-two-phase-transactions.patch (application/octet-stream)
v6-0002-Tap-test-to-test-concurrent-aborts-during-2-phase.patch (application/octet-stream)
v6-0004-Support-two-phase-commits-in-streaming-mode-in-lo.patch (application/octet-stream)
v6-0003-pgoutput-output-plugin-support-for-logical-decodi.patch (application/octet-stream)
#34Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Ajin Cherian (#33)

On Mon, Sep 28, 2020 at 1:13 PM Ajin Cherian <itsajin@gmail.com> wrote:

On Wed, Sep 23, 2020 at 2:39 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

I have added a new patch for supporting two-phase commit semantics in
the streaming APIs for the logical decoding plugins. I have added 3
APIs
1. stream_prepare
2. stream_commit_prepared
3. stream_abort_prepared

I have also added support for the new APIs in the test_decoding
plugin. I have not yet added it to pgoutput.

I have also added a fix for the error I saw while calling
ReorderBufferCleanupTXN as part of FinishPrepared handling. As a
result I have removed the function I added earlier,
ReorderBufferCleanupPreparedTXN.

Can you explain what was the problem and how you fixed it?

Please have a look at the new changes and let me know what you think.

I will continue to look at:

1. Remove snapshots on prepare truncate.
2. Bug seen while abort of prepared transaction, the prepared flag is
lost, and not able to make out that it was a previously prepared
transaction.

And the support of new APIs in pgoutput, right?

--
With Regards,
Amit Kapila.

#35Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Amit Kapila (#34)

On Mon, Sep 28, 2020 at 6:36 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Mon, Sep 28, 2020 at 1:13 PM Ajin Cherian <itsajin@gmail.com> wrote:

On Wed, Sep 23, 2020 at 2:39 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

I have added a new patch for supporting two-phase commit semantics in
the streaming APIs for the logical decoding plugins. I have added 3
APIs
1. stream_prepare
2. stream_commit_prepared
3. stream_abort_prepared

I have also added support for the new APIs in the test_decoding
plugin. I have not yet added it to pgoutput.

I have also added a fix for the error I saw while calling
ReorderBufferCleanupTXN as part of FinishPrepared handling. As a
result I have removed the function I added earlier,
ReorderBufferCleanupPreparedTXN.

Can you explain what was the problem and how you fixed it?

When I added the changes for cleaning up tuplecids in
ReorderBufferTruncateTXN, I was not deleting each entry from the list
(dlist_delete), only calling ReorderBufferReturnChange to free the
memory. This logic was copied from ReorderBufferCleanupTXN; there the
lists are all cleaned up at the end, so the delete was not present in
each list's cleanup logic.

Please have a look at the new changes and let me know what you think.

I will continue to look at:

1. Remove snapshots on prepare truncate.
2. Bug seen while abort of prepared transaction, the prepared flag is
lost, and not able to make out that it was a previously prepared
transaction.

And the support of new APIs in pgoutput, right?

Yes, that also.

regards,
Ajin Cherian
Fujitsu Australia

#36Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Amit Kapila (#32)

On Wed, Sep 23, 2020 at 2:39 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

If the transaction is prepared, which you can ensure via
ReorderBufferTxnIsPrepared() (considering you have a proper check in
that function), it should not require skipping the transaction in
Abort. One way it could happen is if you clean up the ReorderBufferTxn
in Prepare, which you were doing in an earlier version of the patch and
which I pointed out was wrong; if you have changed that then I don't know
why it could fail. Maybe someplace else during prepare the patch is
freeing it. Just check that.

I had a look at this problem. The problem happens when decoding is
done after a prepare but before the corresponding rollback
prepared/commit prepared.
For eg:

Begin;
<change 1>
<change 2>
PREPARE TRANSACTION '<prepare#1>';
SELECT data FROM pg_logical_slot_get_changes(...);
:
:
ROLLBACK PREPARED '<prepare#1>';
SELECT data FROM pg_logical_slot_get_changes(...);

Since the prepare is consumed in the first call to
pg_logical_slot_get_changes, when it is encountered again in
the second call it is skipped (as already decoded) in DecodePrepare,
and the txn->flags are not set to
reflect the fact that it was prepared. The same behaviour is seen when
it is COMMIT PREPARED after the original prepare was consumed.
Initially I was thinking about the following approach to fix it in DecodePrepare
Approach 1:
1. Break the big Skip check in DecodePrepare into 2 parts.
Return if the following conditions are true:
If (parsed->dbId != InvalidOid && parsed->dbId !=
ctx->slot->data.database) ||
ctx->fast_forward || FilterByOrigin(ctx, origin_id))

2. Check If this condition is true:
SnapBuildXactNeedsSkip(ctx->snapshot_builder, buf->origptr)

Then this means we are skipping because this has already
been decoded, then instead of returning, call a new function
ReorderBufferMarkPrepare() which will only update the flags in the txn
to indicate that the transaction is prepared
Then later in DecodeAbort or DecodeCommit, we can confirm
that the transaction has been Prepared by checking if the flag is set
and call ReorderBufferFinishPrepared appropriately.

But then, thinking about this some more, I thought of a second approach.
Approach 2:
If the only purpose of all this was to differentiate between
Abort vs. Rollback Prepared and Commit vs. Commit Prepared, then we don't
need this. We already know the exact operation
in DecodeXactOp and can differentiate there. We only
overloaded DecodeAbort and DecodeCommit for convenience; we can always
call these functions with an extra flag to denote that we are either
committing or aborting a
previously prepared transaction, and call
ReorderBufferFinishPrepared accordingly.

Let me know your thoughts on these two approaches or any other
suggestions on this.

regards,
Ajin Cherian
Fujitsu Australia

#37Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Ajin Cherian (#36)

On Tue, Sep 29, 2020 at 5:08 PM Ajin Cherian <itsajin@gmail.com> wrote:

On Wed, Sep 23, 2020 at 2:39 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

If the transaction is prepared, which you can ensure via
ReorderBufferTxnIsPrepared() (considering you have a proper check in
that function), it should not require skipping the transaction in
Abort. One way it could happen is if you clean up the ReorderBufferTxn
in Prepare, which you were doing in an earlier version of the patch and
which I pointed out was wrong; if you have changed that then I don't know
why it could fail. Maybe someplace else during prepare the patch is
freeing it. Just check that.

I had a look at this problem. The problem happens when decoding is
done after a prepare but before the corresponding rollback
prepared/commit prepared.
For eg:

Begin;
<change 1>
<change 2>
PREPARE TRANSACTION '<prepare#1>';
SELECT data FROM pg_logical_slot_get_changes(...);
:
:
ROLLBACK PREPARED '<prepare#1>';
SELECT data FROM pg_logical_slot_get_changes(...);

Since the prepare is consumed in the first call to
pg_logical_slot_get_changes, when it is encountered again in
the second call it is skipped (as already decoded) in DecodePrepare,
and the txn->flags are not set to
reflect the fact that it was prepared. The same behaviour is seen when
it is COMMIT PREPARED after the original prepare was consumed.
Initially I was thinking about the following approach to fix it in DecodePrepare
Approach 1:
1. Break the big Skip check in DecodePrepare into 2 parts.
Return if the following conditions are true:
If (parsed->dbId != InvalidOid && parsed->dbId !=
ctx->slot->data.database) ||
ctx->fast_forward || FilterByOrigin(ctx, origin_id))

2. Check If this condition is true:
SnapBuildXactNeedsSkip(ctx->snapshot_builder, buf->origptr)

Then this means we are skipping because this has already
been decoded, then instead of returning, call a new function
ReorderBufferMarkPrepare() which will only update the flags in the txn
to indicate that the transaction is prepared
Then later in DecodeAbort or DecodeCommit, we can confirm
that the transaction has been Prepared by checking if the flag is set
and call ReorderBufferFinishPrepared appropriately.

But then, thinking about this some more, I thought of a second approach.
Approach 2:
If the only purpose of all this was to differentiate between
Abort vs. Rollback Prepared and Commit vs. Commit Prepared, then we don't
need this. We already know the exact operation
in DecodeXactOp and can differentiate there. We only
overloaded DecodeAbort and DecodeCommit for convenience; we can always
call these functions with an extra flag to denote that we are either
committing or aborting a
previously prepared transaction, and call
ReorderBufferFinishPrepared accordingly.

The second approach sounds better, but if there is not much
you want to reuse from DecodeCommit/DecodeAbort then you can even
write new functions DecodeCommitPrepared/DecodeAbortPrepared. OTOH, if
there is common code among them then passing the flag would be the
better way.

--
With Regards,
Amit Kapila.

#38Dilip Kumar
Dilip Kumar
dilipbalaut@gmail.com
In reply to: Ajin Cherian (#33)

On Mon, Sep 28, 2020 at 1:13 PM Ajin Cherian <itsajin@gmail.com> wrote:

On Wed, Sep 23, 2020 at 2:39 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

No problem. I think you can handle the other comments and then we can
come back to this and you might want to share the exact details of the
test (may be a narrow down version of the original test) and I or
someone else might be able to help you with that.

--
With Regards,
Amit Kapila.

I have added a new patch for supporting 2 phase commit semantics in
the streaming APIs for the logical decoding plugins. I have added 3
APIs
1. stream_prepare
2. stream_commit_prepared
3. stream_abort_prepared

I have also added support for the new APIs in the test_decoding
plugin. I have not yet added it to pgoutput.

I have also added a fix for the error I saw while calling
ReorderBufferCleanupTXN as part of FinishPrepared handling. As a
result I have removed the function I added earlier,
ReorderBufferCleanupPreparedTXN.
Please have a look at the new changes and let me know what you think.

I will continue to look at:

1. Remove snapshots on prepare truncate.
2. Bug seen while aborting a prepared transaction: the prepared flag is
lost, so we cannot tell that it was a previously prepared
transaction.

I have started looking into your latest patches; as of now I have a
few comments.

v6-0001

@@ -1987,7 +2072,7 @@ ReorderBufferProcessTXN(ReorderBuffer *rb,
ReorderBufferTXN *txn,
prev_lsn = change->lsn;

  /* Set the current xid to detect concurrent aborts. */
- if (streaming)
+ if (streaming || rbtxn_prepared(change->txn))
  {
  curtxn = change->txn;
  SetupCheckXidLive(curtxn->xid);
@@ -2249,7 +2334,6 @@ ReorderBufferProcessTXN(ReorderBuffer *rb,
ReorderBufferTXN *txn,
  break;
  }
  }
-

For a streaming transaction we need to check the xid every time because
there could be a concurrent subtransaction abort, but
for two-phase we don't need to call SetupCheckXidLive every time,
because we are sure that the transaction is going to be
the same throughout the processing.

Apart from this I have also noticed a couple of cosmetic changes

+ {
+ xl_xact_parsed_prepare parsed;
+ xl_xact_prepare *xlrec;
+ /* check that output plugin is capable of twophase decoding */
+ if (!ctx->enable_twophase)
+ {
+ ReorderBufferProcessXid(reorder, XLogRecGetXid(r), buf->origptr);
+ break;
+ }

Add one blank line after the variable declarations.

- /* remove potential on-disk data, and deallocate */
+    /*
+     * remove potential on-disk data, and deallocate.
+     *
+     * We remove it even for prepared transactions (GID is enough to
+     * commit/abort those later).
+     */
+
  ReorderBufferCleanupTXN(rb, txn);

Comment not aligned properly

v6-0003

+LookupGXact(const char *gid)
+{
+ int i;
+
+ LWLockAcquire(TwoPhaseStateLock, LW_EXCLUSIVE);
+
+ for (i = 0; i < TwoPhaseState->numPrepXacts; i++)
+ {
+ GlobalTransaction gxact = TwoPhaseState->prepXacts[i];

I think we should take an LW_SHARED lock here, no?

--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com

#39Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#38)

On Tue, Sep 29, 2020 at 8:04 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:

I have started looking into your latest patches; as of now I have a
few comments.

v6-0001

@@ -1987,7 +2072,7 @@ ReorderBufferProcessTXN(ReorderBuffer *rb,
ReorderBufferTXN *txn,
prev_lsn = change->lsn;

/* Set the current xid to detect concurrent aborts. */
- if (streaming)
+ if (streaming || rbtxn_prepared(change->txn))
{
curtxn = change->txn;
SetupCheckXidLive(curtxn->xid);
@@ -2249,7 +2334,6 @@ ReorderBufferProcessTXN(ReorderBuffer *rb,
ReorderBufferTXN *txn,
break;
}
}
-

For a streaming transaction we need to check the xid every time because
there could be a concurrent subtransaction abort, but
for two-phase we don't need to call SetupCheckXidLive every time,
because we are sure that the transaction is going to be
the same throughout the processing.

While decoding transactions at 'prepare' time there could be multiple
sub-transactions like in the case below. Won't that be impacted if we
follow your suggestion here?

postgres=# Begin;
BEGIN
postgres=*# insert into t1 values(1,'aaa');
INSERT 0 1
postgres=*# savepoint s1;
SAVEPOINT
postgres=*# insert into t1 values(2,'aaa');
INSERT 0 1
postgres=*# savepoint s2;
SAVEPOINT
postgres=*# insert into t1 values(3,'aaa');
INSERT 0 1
postgres=*# Prepare Transaction 'foo';
PREPARE TRANSACTION

--
With Regards,
Amit Kapila.

#40Dilip Kumar
Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#39)

On Wed, Sep 30, 2020 at 2:36 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

While decoding transactions at 'prepare' time there could be multiple
sub-transactions like in the case below. Won't that be impacted if we
follow your suggestion here?

postgres=# Begin;
BEGIN
postgres=*# insert into t1 values(1,'aaa');
INSERT 0 1
postgres=*# savepoint s1;
SAVEPOINT
postgres=*# insert into t1 values(2,'aaa');
INSERT 0 1
postgres=*# savepoint s2;
SAVEPOINT
postgres=*# insert into t1 values(3,'aaa');
INSERT 0 1
postgres=*# Prepare Transaction 'foo';
PREPARE TRANSACTION

But once we prepare the transaction, we cannot roll back an individual
subtransaction. We can only roll back the main transaction, so instead
of setting each individual subxact as CheckXidLive, we can just set the
main XID; then there is no need to check on every command. Just set it
before we start processing.

--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com

#41Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#40)

On Wed, Sep 30, 2020 at 2:46 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:

But once we prepare the transaction, we cannot roll back an individual
subtransaction.

Sure, but a Rollback can come before the Prepare, as in the case below,
and it will appear as a concurrent abort (assume there is some DDL that
changes the table before the Rollback statement) because it has
already been done by the backend, and that needs to be caught by this
mechanism.

Begin;
insert into t1 values(1,'aaa');
savepoint s1;
insert into t1 values(2,'aaa');
savepoint s2;
insert into t1 values(3,'aaa');
Rollback to savepoint s2;
insert into t1 values(4,'aaa');
Prepare Transaction 'foo';

--
With Regards,
Amit Kapila.

#42Dilip Kumar
Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#41)

On Wed, Sep 30, 2020 at 3:08 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

But once we prepare the transaction, we cannot roll back an individual
subtransaction.

Sure, but a Rollback can come before the Prepare, as in the case below,
and it will appear as a concurrent abort (assume there is some DDL that
changes the table before the Rollback statement) because it has
already been done by the backend, and that needs to be caught by this
mechanism.

Begin;
insert into t1 values(1,'aaa');
savepoint s1;
insert into t1 values(2,'aaa');
savepoint s2;
insert into t1 values(3,'aaa');
Rollback to savepoint s2;
insert into t1 values(4,'aaa');
Prepare Transaction 'foo';

If we are streaming on the prepare, that means we must have decoded
that rollback WAL, which means we should have removed the
ReorderBufferTXN for those subxacts.

--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com

#43Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#42)

On Wed, Sep 30, 2020 at 3:12 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:

But once we prepare the transaction, we cannot roll back an individual
subtransaction.

Sure, but a Rollback can come before the Prepare, as in the case below,
and it will appear as a concurrent abort (assume there is some DDL that
changes the table before the Rollback statement) because it has
already been done by the backend, and that needs to be caught by this
mechanism.

Begin;
insert into t1 values(1,'aaa');
savepoint s1;
insert into t1 values(2,'aaa');
savepoint s2;
insert into t1 values(3,'aaa');
Rollback to savepoint s2;
insert into t1 values(4,'aaa');
Prepare Transaction 'foo';

If we are streaming on the prepare, that means we must have decoded
that rollback WAL, which means we should have removed the
ReorderBufferTXN for those subxacts.

Okay, valid point. We can avoid setting it for each sub-transaction in
that case, but OTOH even if we allow setting it, there shouldn't be any
bug.

--
With Regards,
Amit Kapila.

#44Dilip Kumar
Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#43)

On Wed, Sep 30, 2020 at 3:27 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

If we are streaming on the prepare, that means we must have decoded
that rollback WAL, which means we should have removed the
ReorderBufferTXN for those subxacts.

Okay, valid point. We can avoid setting it for each sub-transaction in
that case, but OTOH even if we allow setting it, there shouldn't be any
bug.

Right, there will not be any bug, just an optimization.

--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com

#45Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Ajin Cherian (#33)
1 attachment(s)

Hello Ajin.

I have done some review of the v6 patches.

I had some difficulty replying with my review comments to the OSS list,
so I am putting them as an attachment here.

Kind Regards,
Peter Smith
Fujitsu Australia

Attachments:

OSS-List-v6-review-comments-20201006.txttext/plain; charset=US-ASCII; name=OSS-List-v6-review-comments-20201006.txt
#46Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Peter Smith (#45)

Hello Ajin.

I have gone through the v6 patch changes and have a list of review
comments below.

Apologies for the length of this email - I know that many of the
following comments are trivial, but I figured I should either just
ignore everything cosmetic, or list everything regardless. I chose the
latter.

There may be some duplication where the same review comment is written
for multiple files and/or where the same file is in your multiple
patches.

Kind Regards.
Peter Smith
Fujitsu Australia

[BEGIN]

==========
Patch V6-0001, File: contrib/test_decoding/expected/prepared.out (so
prepared.sql also)
==========

COMMENT
Line 30 - The INSERT INTO test_prepared1 VALUES (2); is kind of
strange because it is not really part of the prior test nor the
following test. Maybe it would be better to have a comment describing
the purpose of this isolated INSERT and to also consume the data from
the slot so it does not get jumbled with the data of the following
(abort) test.

;

COMMENT
Line 53 - Same comment for this test INSERT INTO test_prepared1 VALUES
(4); It kind of has nothing really to do with either the prior (abort)
test nor the following (ddl) test.

;

COMMENT
Line 60 - Seems to check which locks are held for the test_prepared_1
table while the transaction is in progress. Maybe it would be better
to have more comments describing what is expected here and why.

;

COMMENT
Line 88 - There is a comment in the test saying "-- We should see '7'
before '5' in our results since it commits first." but I did not see
any test code that actually verifies that happens.

;

QUESTION
Line 120 - I did not really understand the SQL checking the pg_class.
I expected this would be checking table 'test_prepared1' instead. Can
you explain it?
SELECT 'pg_class' AS relation, locktype, mode
FROM pg_locks
WHERE locktype = 'relation'
AND relation = 'pg_class'::regclass;
relation | locktype | mode
----------+----------+------
(0 rows)

;

QUESTION
Line 139 - SET statement_timeout = '1s'; is 1 second short enough
here for this test, or might it be that these statements would be
completed in less than one second anyhow?

;

QUESTION
Line 163 - How is this testing a SAVEPOINT? Or is it only to check
that the SAVEPOINT command is not part of the replicated changes?

;

COMMENT
Line 175 - Missing underscore in comment. Code requires also underscore:
"nodecode" --> "_nodecode"

==========
Patch V6-0001, File: contrib/test_decoding/test_decoding.c
==========

COMMENT
Line 43
@@ -36,6 +40,7 @@ typedef struct
bool skip_empty_xacts;
bool xact_wrote_changes;
bool only_local;
+ TransactionId check_xid; /* track abort of this txid */
} TestDecodingData;

The "check_xid" seems a meaningless name. Check what?
IIUC maybe should be something like "check_xid_aborted"

;

COMMENT
Line 105
@ -88,6 +93,19 @@ static void
pg_decode_stream_truncate(LogicalDecodingContext *ctx,
 ReorderBufferTXN *txn,
 int nrelations, Relation relations[],
 ReorderBufferChange *change);
+static bool pg_decode_filter_prepare(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn,

Remove extra blank line after these functions

;

COMMENT
Line 149
@@ -116,6 +134,11 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
 cb->stream_change_cb = pg_decode_stream_change;
 cb->stream_message_cb = pg_decode_stream_message;
 cb->stream_truncate_cb = pg_decode_stream_truncate;
+ cb->filter_prepare_cb = pg_decode_filter_prepare;
+ cb->prepare_cb = pg_decode_prepare_txn;
+ cb->commit_prepared_cb = pg_decode_commit_prepared_txn;
+ cb->abort_prepared_cb = pg_decode_abort_prepared_txn;
+
 }

There is a confusing mix of terminology where sometimes things are
referred as ROLLBACK/rollback and other times apparently the same
operation is referred as ABORT/abort. I do not know the root cause of
this mixture. IIUC maybe the internal functions and protocol generally
use the term "abort", whereas the SQL syntax is "ROLLBACK"... but
where those two terms collide in the middle it gets quite confusing.

At least I thought the names of the "callbacks" which get exposed to
the user (e.g. in the help) might be better if they would match the
SQL.
"abort_prepared_cb" --> "rollback_prepared_cb"

There are similar review comments like this below where the
alternating terms caused me some confusion.

~

Also, Remove the extra blank line before the end of the function.

;

COMMENT
Line 267
@ -227,6 +252,42 @@ pg_decode_startup(LogicalDecodingContext *ctx,
OutputPluginOptions *opt,
 errmsg("could not parse value \"%s\" for parameter \"%s\"",
 strVal(elem->arg), elem->defname)));
 }
+ else if (strcmp(elem->defname, "two-phase-commit") == 0)
+ {
+ if (elem->arg == NULL)
+ continue;

IMO the "check-xid" code might be better rearranged so the NULL check
is first instead of if/else.
e.g.
if (elem->arg == NULL)
ereport(FATAL,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
errmsg("check-xid needs an input value")));
~

Also, is it really supposed to be FATAL instead or ERROR. That is not
the same as the other surrounding code.

;

COMMENT
Line 296
if (data->check_xid <= 0)
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
errmsg("Specify positive value for parameter \"%s\","
" you specified \"%s\"",
elem->defname, strVal(elem->arg))));

The code checking for <= 0 seems over-complicated. Because conversion
was using strtoul() I fail to see how this can ever be < 0. Wouldn't
it be easier to simply test the result of the strtoul() function?

BEFORE: if (errno == EINVAL || errno == ERANGE)
AFTER: if (data->check_xid == 0)

~

Also, should this be FATAL? Everything else similar is ERROR.

;

COMMENT
(general)
I don't recall seeing any of these decoding options (e.g.
"two-phase-commit", "check-xid") documented anywhere.
So how can a user even know these options exist so they can use them?
Perhaps options should be described on this page?
https://www.postgresql.org/docs/13/functions-admin.html#FUNCTIONS-REPLICATION

;

COMMENT
(general)
"check-xid" is a meaningless option name. Maybe something like
"checked-xid-aborted" is more useful?
Suggest changing the member, the option, and the error messages to
match some better name.

;

COMMENT
Line 314
@@ -238,6 +299,7 @@ pg_decode_startup(LogicalDecodingContext *ctx,
OutputPluginOptions *opt,
}

ctx->streaming &= enable_streaming;
+ ctx->enable_twophase &= enable_2pc;
}

The "ctx->enable_twophase" is inconsistent naming with the
"ctx->streaming" member.
"enable_twophase" --> "twophase"

;

COMMENT
Line 374
@@ -297,6 +359,94 @@ pg_decode_commit_txn(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn,
OutputPluginWrite(ctx, true);
}

+
+/*
+ * Filter out two-phase transactions.
+ *
+ * Each plugin can implement its own filtering logic. Here
+ * we demonstrate a simple logic by checking the GID. If the
+ * GID contains the "_nodecode" substring, then we filter
+ * it out.
+ */
+static bool
+pg_decode_filter_prepare(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,

Remove the extra preceding blank line.

~

I did not find anything in the help about "_nodecode". Should it be
there or is this deliberately not documented feature?

;

QUESTION
Line 440
+pg_decode_abort_prepared_txn(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn,

Is this a wrong comment
"ABORT PREPARED" --> "ROLLBACK PREPARED" ??

;

COMMENT
Line 620
@@ -455,6 +605,22 @@ pg_decode_change(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn,
}
data->xact_wrote_changes = true;

+ /* if check_xid is specified */
+ if (TransactionIdIsValid(data->check_xid))
+ {
+ elog(LOG, "waiting for %u to abort", data->check_xid);
+ while (TransactionIdIsInProgress(dat

The check_xid seems a meaningless name, and the comment "/* if
check_xid is specified */" was not helpful either.
IIUC purpose of this is to check that the nominated xid always is rolled back.
So the appropriate name may be more like "check-xid-aborted".

;

==========
Patch V6-0001, File: doc/src/sgml/logicaldecoding.sgml
==========

COMMENT/QUESTION
Section 48.6.1
@ -387,6 +387,10 @@ typedef struct OutputPluginCallbacks
 LogicalDecodeTruncateCB truncate_cb;
 LogicalDecodeCommitCB commit_cb;
 LogicalDecodeMessageCB message_cb;
+ LogicalDecodeFilterPrepareCB filter_prepare_cb;

Confused by the mixing of terminologies "abort" and "rollback".
Why is it LogicalDecodeAbortPreparedCB instead of
LogicalDecodeRollbackPreparedCB?
Why is it abort_prepared_cb instead of rollback_prepared_cb;?

I thought everything the user sees should be ROLLBACK/rollback (like
the SQL) regardless of what the internal functions might be called.

;

COMMENT
Section 48.6.1
The begin_cb, change_cb and commit_cb callbacks are required, while
startup_cb, filter_by_origin_cb, truncate_cb, and shutdown_cb are
optional. If truncate_cb is not set but a TRUNCATE is to be decoded,
the action will be ignored.

The 1st paragraph beneath the typedef does not mention the newly added
callbacks to say if they are required or optional.

;

COMMENT
Section 48.6.4.5
Section 48.6.4.6
Section 48.6.4.7
@@ -578,6 +588,55 @@ typedef void (*LogicalDecodeCommitCB) (struct
LogicalDecodingContext *ctx,
</para>
</sect3>

+ <sect3 id="logicaldecoding-output-plugin-prepare">
+    <sect3 id="logicaldecoding-output-plugin-commit-prepared">
+    <sect3 id="logicaldecoding-output-plugin-abort-prepared">
+<programlisting>

The wording and titles are a bit backwards compared to the others.
e.g. previously was "Transaction Begin" (not "Begin Transaction") and
"Transaction End" (not "End Transaction").

So for consistently following the existing IMO should change these new
titles (and wording) to:
- "Commit Prepared Transaction Callback" --> "Transaction Commit
Prepared Callback"
- "Rollback Prepared Transaction Callback" --> "Transaction Rollback
Prepared Callback"
- "whenever a commit prepared transaction has been decoded" -->
"whenever a transaction commit prepared has been decoded"
- "whenever a rollback prepared transaction has been decoded." -->
"whenever a transaction rollback prepared has been decoded."

;

==========
Patch V6-0001, File: src/backend/replication/logical/decode.c
==========

COMMENT
Line 74
@@ -70,6 +70,9 @@ static void DecodeCommit(LogicalDecodingContext
*ctx, XLogRecordBuffer *buf,
 xl_xact_parsed_commit *parsed, TransactionId xid);
 static void DecodeAbort(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
 xl_xact_parsed_abort *parsed, TransactionId xid);
+static void DecodePrepare(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
+ xl_xact_parsed_prepare * parsed);

The 2nd line of DecodePrepare is misaligned by one space.

;

COMMENT
Line 321
@@ -312,17 +315,34 @@ DecodeXactOp(LogicalDecodingContext *ctx,
XLogRecordBuffer *buf)
 }
 break;
 case XLOG_XACT_PREPARE:
+ {
+ xl_xact_parsed_prepare parsed;
+ xl_xact_prepare *xlrec;
+ /* check that output plugin is capable of twophase decoding */

"twophase" --> "two-phase"

~

Also, add a blank line after the declarations.

;

==========
Patch V6-0001, File: src/backend/replication/logical/logical.c
==========

COMMENT
Line 249
@@ -225,6 +237,19 @@ StartupDecodingContext(List *output_plugin_options,
(ctx->callbacks.stream_message_cb != NULL) ||
(ctx->callbacks.stream_truncate_cb != NULL);

+ /*
+ * To support two phase logical decoding, we require
prepare/commit-prepare/abort-prepare
+ * callbacks. The filter-prepare callback is optional. We however
enable two phase logical
+ * decoding when at least one of the methods is enabled so that we
can easily identify
+ * missing methods.

The terminology is generally well known as "two-phase" (with the
hyphen) https://en.wikipedia.org/wiki/Two-phase_commit_protocol so
let's be consistent for all the patch code comments. Please search the
code and correct this in all places, even where I might have missed to
identify it.

"two phase" --> "two-phase"

;

COMMENT
Line 822
@@ -782,6 +807,111 @@ commit_cb_wrapper(ReorderBuffer *cache,
ReorderBufferTXN *txn,
}

 static void
+prepare_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ XLogRecPtr prepare_lsn)

"support 2 phase" --> "supports two-phase" in the comment

;

COMMENT
Line 844
Code condition seems strange and/or broken.
if (ctx->enable_twophase && ctx->callbacks.prepare_cb == NULL)
Because if the flag is null then this condition is skipped.
But then if the callback was also NULL then attempting to call it to
"do the actual work" will give NPE.

~

Also, I wonder should this check be the first thing in this function?
Because if it fails does it even make sense that all the errcallback
code was set up?
E.g errcallback.arg potentially is left pointing to a stack variable
on a stack that no longer exists.

;

COMMENT
Line 857
+commit_prepared_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,

"support 2 phase" --> "supports two-phase" in the comment

~

Also, Same potential trouble with the condition:
if (ctx->enable_twophase && ctx->callbacks.commit_prepared_cb == NULL)
Same as previously asked. Should this check be first thing in this function?

;

COMMENT
Line 892
+abort_prepared_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,

"support 2 phase" --> "supports two-phase" in the comment

~

Same potential trouble with the condition:
if (ctx->enable_twophase && ctx->callbacks.abort_prepared_cb == NULL)
Same as previously asked. Should this check be the first thing in this function?

;

COMMENT
Line 1013
@@ -858,6 +988,51 @@ truncate_cb_wrapper(ReorderBuffer *cache,
ReorderBufferTXN *txn,
error_context_stack = errcallback.previous;
}

+static bool
+filter_prepare_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ TransactionId xid, const char *gid)

Fix wording in comment:
"twophase" --> "two-phase transactions"
"twophase transactions" --> "two-phase transactions"

==========
Patch V6-0001, File: src/backend/replication/logical/reorderbuffer.c
==========

COMMENT
Line 255
@@ -251,7 +251,8 @@ static Size
ReorderBufferRestoreChanges(ReorderBuffer *rb, ReorderBufferTXN *txn
 static void ReorderBufferRestoreChange(ReorderBuffer *rb,
ReorderBufferTXN *txn,
 char *change);
 static void ReorderBufferRestoreCleanup(ReorderBuffer *rb,
ReorderBufferTXN *txn);
-static void ReorderBufferTruncateTXN(ReorderBuffer *rb, ReorderBufferTXN *txn);
+static void ReorderBufferTruncateTXN(ReorderBuffer *rb, ReorderBufferTXN *txn,
+ bool txn_prepared);

The alignment is inconsistent. One more space needed before "bool txn_prepared"

;

COMMENT
Line 417
@@ -413,6 +414,11 @@ ReorderBufferReturnTXN(ReorderBuffer *rb,
ReorderBufferTXN *txn)
}

 /* free data that's contained */
+ if (txn->gid != NULL)
+ {
+ pfree(txn->gid);
+ txn->gid = NULL;
+ }

Should add the blank link before this new code, as it was before.

;

COMMENT
Line 1564
@ -1502,12 +1561,14 @@ ReorderBufferCleanupTXN(ReorderBuffer *rb,
ReorderBufferTXN *txn)
}

 /*
- * Discard changes from a transaction (and subtransactions), after streaming
- * them. Keep the remaining info - transactions, tuplecids, invalidations and
- * snapshots.
+ * Discard changes from a transaction (and subtransactions), either
after streaming or
+ * after a PREPARE.

typo "snapshots.If" -> "snapshots. If"

;

COMMENT/QUESTION
Line 1590
@@ -1526,7 +1587,7 @@ ReorderBufferTruncateTXN(ReorderBuffer *rb,
ReorderBufferTXN *txn)
Assert(rbtxn_is_known_subxact(subtxn));
Assert(subtxn->nsubtxns == 0);

- ReorderBufferTruncateTXN(rb, subtxn);
+ ReorderBufferTruncateTXN(rb, subtxn, txn_prepared);
 }

There are some code paths here I did not understand how they match the comments.
Because this function is recursive it seems that it may be called
where the 2nd parameter txn is a sub-transaction.

But then this seems at odds with some of the other code comments of
this function which are processing the txn without ever testing is it
really toplevel or not:

e.g. Line 1593 "/* cleanup changes in the toplevel txn */"
e.g. Line 1632 "They are always stored in the toplevel transaction."

;

COMMENT
Line 1644
@@ -1560,9 +1621,33 @@ ReorderBufferTruncateTXN(ReorderBuffer *rb,
ReorderBufferTXN *txn)
 * about the toplevel xact (we send the XID in all messages), but we never
 * stream XIDs of empty subxacts.
 */
- if ((!txn->toptxn) || (txn->nentries_mem != 0))
+ if ((!txn_prepared) && ((!txn->toptxn) || (txn->nentries_mem != 0)))
 txn->txn_flags |= RBTXN_IS_STREAMED;

+ if (txn_prepared)

/* remove the change from it's containing list */
typo "it's" --> "its"

;

QUESTION
Line 1977
@@ -1880,7 +1965,7 @@ ReorderBufferResetTXN(ReorderBuffer *rb,
ReorderBufferTXN *txn,
 ReorderBufferChange *specinsert)
 {
 /* Discard the changes that we just streamed */
- ReorderBufferTruncateTXN(rb, txn);
+ ReorderBufferTruncateTXN(rb, txn, false);

How do you know the 3rd parameter - i.e. txn_prepared - should be
hardwired false here?
e.g. I thought that maybe rbtxn_prepared(txn) can be true here.

;

COMMENT
Line 2345
@@ -2249,7 +2334,6 @@ ReorderBufferProcessTXN(ReorderBuffer *rb,
ReorderBufferTXN *txn,
break;
}
}
-
/*

Looks like an accidental blank line deletion. This should be put back as it was.

;

COMMENT/QUESTION
Line 2374
@@ -2278,7 +2362,16 @@ ReorderBufferProcessTXN(ReorderBuffer *rb,
ReorderBufferTXN *txn,
 }
 }
 else
- rb->commit(rb, txn, commit_lsn);
+ {
+ /*
+ * Call either PREPARE (for twophase transactions) or COMMIT
+ * (for regular ones).

"twophase" --> "two-phase"

~

Also, I was confused by the apparent assumption that streaming and 2PC
are mutually exclusive...
e.g. if a transaction is both streaming AND 2PC, then it won't do rb->prepare()

;

QUESTION
Line 2424
@@ -2319,11 +2412,17 @@ ReorderBufferProcessTXN(ReorderBuffer *rb,
ReorderBufferTXN *txn,
 */
 if (streaming)
 {
- ReorderBufferTruncateTXN(rb, txn);
+ ReorderBufferTruncateTXN(rb, txn, false);

/* Reset the CheckXidAlive */
CheckXidAlive = InvalidTransactionId;
}
+ else if (rbtxn_prepared(txn))

I was confused by the exclusiveness of streaming/2PC.
e.g. if streaming AND 2PC happen at the same time, how can you pass
false as the 3rd param to ReorderBufferTruncateTXN?

;

COMMENT
Line 2463
@@ -2352,17 +2451,18 @@ ReorderBufferProcessTXN(ReorderBuffer *rb,
ReorderBufferTXN *txn,

 /*
 * The error code ERRCODE_TRANSACTION_ROLLBACK indicates a concurrent
- * abort of the (sub)transaction we are streaming. We need to do the
+ * abort of the (sub)transaction we are streaming or preparing. We need to do the
 * cleanup and return gracefully on this error, see SetupCheckXidLive.
 */

"twoi phase" --> "two-phase"

;

QUESTIONS
Line 2482
@@ -2370,10 +2470,19 @@ ReorderBufferProcessTXN(ReorderBuffer *rb,
ReorderBufferTXN *txn,
errdata = NULL;
curtxn->concurrent_abort = true;

- /* Reset the TXN so that it is allowed to stream remaining data. */
- ReorderBufferResetTXN(rb, txn, snapshot_now,
- command_id, prev_lsn,
- specinsert);
+ /* If streaming, reset the TXN so that it is allowed to stream remaining data. */
+ if (streaming)

Re: /* If streaming, reset the TXN so that it is allowed to stream remaining data. */
I was confused by the exclusiveness of streaming/2PC.
Is it not possible for the streaming flag and rbtxn_prepared(txn) to be
true at the same time?

~

elog(LOG, "stopping decoding of %s (%u)",
txn->gid[0] != '\0'? txn->gid:"", txn->xid);

Is this a safe operation, or do you also need to test txn->gid is not NULL?

;

COMMENT
Line 2606
+ReorderBufferPrepare(ReorderBuffer *rb, TransactionId xid,

"twophase" --> "two-phase"

;

QUESTION
Line 2655
+ReorderBufferFinishPrepared(ReorderBuffer *rb, TransactionId xid,

"This is used to handle COMMIT/ABORT PREPARED"
Should that say "COMMIT/ROLLBACK PREPARED"?

;

COMMENT
Line 2668

"Anyways, 2PC transactions" --> "Anyway, two-phase transactions"

;

COMMENT
Line 2765
@@ -2495,7 +2731,13 @@ ReorderBufferAbort(ReorderBuffer *rb,
TransactionId xid, XLogRecPtr lsn)
/* cosmetic... */
txn->final_lsn = lsn;

- /* remove potential on-disk data, and deallocate */
+ /*
+ * remove potential on-disk data, and deallocate.
+ *

Remove the blank line between the comment and the code.

==========
Patch V6-0001, File: src/include/replication/logical.h
==========

COMMENT
Line 89

"two phase" -> "two-phase"

;

COMMENT
Line 89

For consistency with the previous member naming, the new member should
really just be called "twophase" rather than "enable_twophase".
;

==========
Patch V6-0001, File: src/include/replication/output_plugin.h
==========

QUESTION
Line 106

As previously asked, why is the callback function/typedef referred to
as AbortPrepared instead of RollbackPrepared?
It does not match the SQL and the function comment, and seems only to
add some unnecessary confusion.

;

==========
Patch V6-0001, File: src/include/replication/reorderbuffer.h
==========

QUESTION
Line 116
@@ -162,9 +163,13 @@ typedef struct ReorderBufferChange
 #define RBTXN_HAS_CATALOG_CHANGES 0x0001
 #define RBTXN_IS_SUBXACT 0x0002
 #define RBTXN_IS_SERIALIZED 0x0004
-#define RBTXN_IS_STREAMED 0x0008
-#define RBTXN_HAS_TOAST_INSERT 0x0010
-#define RBTXN_HAS_SPEC_INSERT 0x0020
+#define RBTXN_PREPARE 0x0008
+#define RBTXN_COMMIT_PREPARED 0x0010
+#define RBTXN_ROLLBACK_PREPARED 0x0020
+#define RBTXN_COMMIT 0x0040
+#define RBTXN_IS_STREAMED 0x0080
+#define RBTXN_HAS_TOAST_INSERT 0x0100
+#define RBTXN_HAS_SPEC_INSERT 0x0200

I was wondering why, when adding new flags, some of the existing flag
masks were also altered.
I am assuming this is ok because they are never persisted and are only
used in the protocol (??)

;

COMMENT
Line 226
@@ -218,6 +223,15 @@ typedef struct ReorderBufferChange
((txn)->txn_flags & RBTXN_IS_STREAMED) != 0 \
)

+/* is this txn prepared? */
+#define rbtxn_prepared(txn) (txn->txn_flags & RBTXN_PREPARE)
+/* was this prepared txn committed in the meanwhile? */
+#define rbtxn_commit_prepared(txn) (txn->txn_flags & RBTXN_COMMIT_PREPARED)
+/* was this prepared txn aborted in the meanwhile? */
+#define rbtxn_rollback_prepared(txn) (txn->txn_flags & RBTXN_ROLLBACK_PREPARED)
+/* was this txn committed in the meanwhile? */
+#define rbtxn_commit(txn) (txn->txn_flags & RBTXN_COMMIT)
+

Probably all the "txn->txn_flags" here might be more safely written
with parentheses in the macro like "(txn)->txn_flags".

~

Also, start all comments with a capital letter. And what is the meaning
of "in the meanwhile"?

;

COMMENT
Line 410
@@ -390,6 +407,39 @@ typedef void (*ReorderBufferCommitCB) (ReorderBuffer *rb,
ReorderBufferTXN *txn,
XLogRecPtr commit_lsn);

The format is inconsistent with all other callback signatures here,
where the 1st arg was on the same line as the typedef.

;

COMMENT
Line 440-442

Excessive blank lines following this change?

;

COMMENT
Line 638
@@ -571,6 +631,15 @@ void
ReorderBufferXidSetCatalogChanges(ReorderBuffer *, TransactionId xid,
XLog
bool ReorderBufferXidHasCatalogChanges(ReorderBuffer *, TransactionId xid);
bool ReorderBufferXidHasBaseSnapshot(ReorderBuffer *, TransactionId xid);

+bool ReorderBufferPrepareNeedSkip(ReorderBuffer *rb, TransactionId xid,
+ const char *gid);
+bool ReorderBufferTxnIsPrepared(ReorderBuffer *rb, TransactionId xid,
+ const char *gid);
+void ReorderBufferPrepare(ReorderBuffer *rb, TransactionId xid,
+ XLogRecPtr commit_lsn, XLogRecPtr end_lsn,
+ TimestampTz commit_time,
+ RepOriginId origin_id, XLogRecPtr origin_lsn,
+ char *gid);

Not aligned consistently with other function prototypes.

;

==========
Patch V6-0003, File: src/backend/access/transam/twophase.c
==========

COMMENT
Line 551
@@ -548,6 +548,37 @@ MarkAsPrepared(GlobalTransaction gxact, bool lock_held)
}

 /*
+ * LookupGXact
+ * Check if the prepared transaction with the given GID is around
+ */
+bool
+LookupGXact(const char *gid)

There is potential to refactor/simplify this code:
e.g.

bool
LookupGXact(const char *gid)
{
    int         i;
    bool        found = false;

    LWLockAcquire(TwoPhaseStateLock, LW_EXCLUSIVE);
    for (i = 0; i < TwoPhaseState->numPrepXacts; i++)
    {
        GlobalTransaction gxact = TwoPhaseState->prepXacts[i];

        /* Ignore not-yet-valid GIDs */
        if (gxact->valid && strcmp(gxact->gid, gid) == 0)
        {
            found = true;
            break;
        }
    }
    LWLockRelease(TwoPhaseStateLock);
    return found;
}

;

==========
Patch V6-0003, File: src/backend/replication/logical/proto.c
==========

COMMENT
Line 86
@@ -72,12 +72,17 @@ logicalrep_read_begin(StringInfo in,
LogicalRepBeginData *begin_data)
*/
void
logicalrep_write_commit(StringInfo out, ReorderBufferTXN *txn,
- XLogRecPtr commit_lsn)

Since the flags are now used, this code comment is wrong:
"/* send the flags field (unused for now) */"

;

COMMENT
Line 129
@ -106,6 +115,77 @@ logicalrep_read_commit(StringInfo in,
LogicalRepCommitData *commit_data)
}

 /*
+ * Write PREPARE to the output stream.
+ */
+void
+logicalrep_write_prepare(StringInfo out, ReorderBufferTXN *txn,

"2PC transactions" --> "two-phase commit transactions"

;

COMMENT
Line 133

Assert(strlen(txn->gid) > 0);
Shouldn't that assertion also check that txn->gid is not NULL (to
prevent a NULL-pointer dereference in case gid was NULL)?

;

COMMENT
Line 177
+logicalrep_read_prepare(StringInfo in, LogicalRepPrepareData * prepare_data)

prepare_data->prepare_type = flags;
This code may be OK but it does seem a bit of an abuse of the flags.

e.g. Are they flags or are they really enum values?
e.g. And if they are effectively enums (it appears they are) then it
seemed inconsistent that |= was used when they were previously assigned
directly.

;

==========
Patch V6-0003, File: src/backend/replication/logical/worker.c
==========

COMMENT
Line 757
@@ -749,6 +753,141 @@ apply_handle_commit(StringInfo s)
pgstat_report_activity(STATE_IDLE, NULL);
}

+static void
+apply_handle_prepare_txn(LogicalRepPrepareData * prepare_data)
+{
+ Assert(prepare_data->prepare_lsn == remote_final_lsn);

Missing function comment to say this is called from apply_handle_prepare.

;

COMMENT
Line 798
+apply_handle_commit_prepared_txn(LogicalRepPrepareData * prepare_data)

Missing function comment to say this is called from apply_handle_prepare.

;

COMMENT
Line 824
+apply_handle_rollback_prepared_txn(LogicalRepPrepareData * prepare_data)

Missing function comment to say this is called from apply_handle_prepare.

==========
Patch V6-0003, File: src/backend/replication/pgoutput/pgoutput.c
==========

COMMENT
Line 50
@@ -47,6 +47,12 @@ static void pgoutput_truncate(LogicalDecodingContext *ctx,
 ReorderBufferChange *change);
 static bool pgoutput_origin_filter(LogicalDecodingContext *ctx,
 RepOriginId origin_id);
+static void pgoutput_prepare_txn(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr prepare_lsn);

The parameter indentation (2nd lines) does not match everything else
in this context.

;

COMMENT
Line 152
@@ -143,6 +149,10 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
 cb->change_cb = pgoutput_change;
 cb->truncate_cb = pgoutput_truncate;
 cb->commit_cb = pgoutput_commit_txn;
+
+ cb->prepare_cb = pgoutput_prepare_txn;
+ cb->commit_prepared_cb = pgoutput_commit_prepared_txn;
+ cb->abort_prepared_cb = pgoutput_abort_prepared_txn;

Remove the unnecessary blank line.

;

QUESTION
Line 386
@@ -373,7 +383,49 @@ pgoutput_commit_txn(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn,
OutputPluginUpdateProgress(ctx);

 OutputPluginPrepareWrite(ctx, true);
- logicalrep_write_commit(ctx->out, txn, commit_lsn);
+ logicalrep_write_commit(ctx->out, txn, commit_lsn, true);

Is the is_commit parameter of logicalrep_write_commit ever passed as false?
If yes, where?
If no, then what is the point of it?

;

COMMENT
Line 408
+pgoutput_commit_prepared_txn(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,

Since this function is identical to pg_output_prepare, it might be
better to either
1. just leave this as a wrapper to delegate to that function
2. remove this one entirely and assign the callback to the common
pgoutput_prepare_txn

;

COMMENT
Line 419
+pgoutput_abort_prepared_txn(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,

Since this function is identical to pg_output_prepare, it might be
better to either
1. just leave this as a wrapper to delegate to that function
2. remove this one entirely and assign the callback to the common
pgoutput_prepare_txn

;

COMMENT
Line 419
+pgoutput_abort_prepared_txn(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,

Shouldn't this comment say "ROLLBACK PREPARED"?

;

==========
Patch V6-0003, File: src/include/replication/logicalproto.h
==========

QUESTION
Line 101
@@ -87,20 +87,55 @@ typedef struct LogicalRepBeginData
TransactionId xid;
} LogicalRepBeginData;

+/* Commit (and abort) information */

#define LOGICALREP_IS_ABORT 0x02
Is there a good reason why this is not called:
#define LOGICALREP_IS_ROLLBACK 0x02

;

COMMENT
Line 105

((flags == LOGICALREP_IS_COMMIT) || (flags == LOGICALREP_IS_ABORT))

Macros would be safer if flags are in parentheses
(((flags) == LOGICALREP_IS_COMMIT) || ((flags) == LOGICALREP_IS_ABORT))

;

COMMENT
Line 115

Unexpected whitespace for the typedef
"} LogicalRepPrepareData;"

;

COMMENT
Line 122
/* prepare can be exactly one of PREPARE, [COMMIT|ABORT] PREPARED*/
#define PrepareFlagsAreValid(flags) \
((flags == LOGICALREP_IS_PREPARE) || \
(flags == LOGICALREP_IS_COMMIT_PREPARED) || \
(flags == LOGICALREP_IS_ROLLBACK_PREPARED))

There is confusing mixture in macros and comments of ABORT and ROLLBACK terms
"[COMMIT|ABORT] PREPARED" --> "[COMMIT|ROLLBACK] PREPARED"

~

Also, it would be safer if flags are in parentheses
(((flags) == LOGICALREP_IS_PREPARE) || \
((flags) == LOGICALREP_IS_COMMIT_PREPARED) || \
((flags) == LOGICALREP_IS_ROLLBACK_PREPARED))

;

==========
Patch V6-0003, File: src/test/subscription/t/020_twophase.pl
==========

COMMENT
Line 131 - # check inserts are visible

Isn't this supposed to be checking for rows 12 and 13, instead of 11 and 12?

;

==========
Patch V6-0004, File: contrib/test_decoding/test_decoding.c
==========

COMMENT
Line 81
@@ -78,6 +78,15 @@ static void
pg_decode_stream_stop(LogicalDecodingContext *ctx,
 static void pg_decode_stream_abort(LogicalDecodingContext *ctx,
 ReorderBufferTXN *txn,
 XLogRecPtr abort_lsn);
+static void pg_decode_stream_prepare(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn,
+ XLogRecPtr commit_lsn);
+static

All these functions have a 3rd parameter called commit_lsn, even
though the functions are not commit-related. It seems like a cut/paste
error.

;

COMMENT
Line 142
@@ -130,6 +139,9 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
 cb->stream_start_cb = pg_decode_stream_start;
 cb->stream_stop_cb = pg_decode_stream_stop;
 cb->stream_abort_cb = pg_decode_stream_abort;
+ cb->stream_prepare_cb = pg_decode_stream_prepare;
+ cb->stream_commit_prepared_cb = pg_decode_stream_commit_prepared;
+ cb->stream_abort_prepared_cb = pg_decode_stream_abort_prepared;
 cb->stream_commit_cb = pg_decode_stream_commit;

Can the "cb->stream_abort_prepared_cb" be changed to
"cb->stream_rollback_prepared_cb"?

;

COMMENT
Line 827
@@ -812,6 +824,78 @@ pg_decode_stream_abort(LogicalDecodingContext *ctx,
}

 static void
+pg_decode_stream_prepare(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn,
+ XLogRecPtr commit_lsn)
+{
+ TestDecodingData *data = ctx->output_plugin_pr

The commit_lsn (3rd parameter) is unused and seems like a cut/paste name error.

;

COMMENT
Line 875
+pg_decode_stream_abort_prepared(LogicalDecodingContext *ctx,

The commit_lsn (3rd parameter) is unused and seems like a cut/paste name error.

;

==========
Patch V6-0004, File: doc/src/sgml/logicaldecoding.sgml
==========

COMMENT
48.6.1
@@ -396,6 +396,9 @@ typedef struct OutputPluginCallbacks
 LogicalDecodeStreamStartCB stream_start_cb;
 LogicalDecodeStreamStopCB stream_stop_cb;
 LogicalDecodeStreamAbortCB stream_abort_cb;
+ LogicalDecodeStreamPrepareCB stream_prepare_cb;
+ LogicalDecodeStreamCommitPreparedCB stream_commit_prepared_cb;
+ LogicalDecodeStreamAbortPreparedCB stream_abort_prepared_cb;

Same question as in previous review comments - why use the
terminology "abort" instead of "rollback"?

;

COMMENT
48.6.1
@@ -418,7 +421,9 @@ typedef void (*LogicalOutputPluginInit) (struct
OutputPluginCallbacks *cb);
 in-progress transactions. The <function>stream_start_cb</function>,
 <function>stream_stop_cb</function>, <function>stream_abort_cb</function>,
 <function>stream_commit_cb</function> and <function>stream_change_cb</function>
- are required, while <function>stream_message_cb</function> and
+ are required, while <function>stream_message_cb</function>,
+ <function>stream_prepare_cb</function>,
<function>stream_commit_prepared_cb</function>,
+ <function>stream_abort_prepared_cb</function>,

Missing "and".
... "stream_abort_prepared_cb, stream_truncate_cb are optional." -->
"stream_abort_prepared_cb, and stream_truncate_cb are optional."

;

COMMENT
Section 48.6.4.16
Section 48.6.4.17
Section 48.6.4.18
@@ -839,6 +844,45 @@ typedef void (*LogicalDecodeStreamAbortCB)
(struct LogicalDecodingContext *ctx,
</para>
</sect3>

+ <sect3 id="logicaldecoding-output-plugin-stream-prepare">
+ <title>Stream Prepare Callback</title>
+ <para>
+ The <function>stream_prepare_cb</function> callback is called to prepare
+ a previously streamed transaction as part of a two phase commit.
+<programlisting>
+typedef void (*LogicalDecodeStreamPrepareCB) (struct LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn,
+ XLogRecPtr abort_lsn);
+</programlisting>
+ </para>
+ </sect3>
+
+ <sect3 id="logicaldecoding-output-plugin-stream-commit-prepared">
+ <title>Stream Commit Prepared Callback</title>
+ <para>
+ The <function>stream_commit_prepared_cb</function> callback is
called to commit prepared
+ a previously streamed transaction as part of a two phase commit.
+<programlisting>
+typedef void (*LogicalDecodeStreamCommitPreparedCB) (struct LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn,
+ XLogRecPtr abort_lsn);
+</programlisting>
+ </para>
+ </sect3>
+
+ <sect3 id="logicaldecoding-output-plugin-stream-abort-prepared">
+ <title>Stream Abort Prepared Callback</title>
+ <para>
+ The <function>stream_abort_prepared_cb</function> callback is called
to abort prepared
+ a previously streamed transaction as part of a two phase commit.
+<programlisting>
+typedef void (*LogicalDecodeStreamAbortPreparedCB) (struct LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn,
+ XLogRecPtr abort_lsn);
+</programlisting>
+ </para>
+ </sect3>

1. Everywhere it says "two phase" commit, it should be consistently
replaced with "two-phase" commit (with the hyphen)

2. Search for "abort_lsn" parameter. It seems to be overused
(cut/paste error) even when the API is unrelated to abort

3. 48.6.4.17 and 48.6.4.18
Is this wording ok? Is the word "prepared" even necessary here?
- "... called to commit prepared a previously streamed transaction ..."
- "... called to abort prepared a previously streamed transaction ..."

;

COMMENT
Section 48.9
@@ -1017,9 +1061,13 @@ OutputPluginWrite(ctx, true);
 When streaming an in-progress transaction, the changes (and messages) are
 streamed in blocks demarcated by <function>stream_start_cb</function>
 and <function>stream_stop_cb</function> callbacks. Once all the decoded
- changes are transmitted, the transaction is committed using the
- <function>stream_commit_cb</function> callback (or possibly aborted using
- the <function>stream_abort_cb</function> callback).
+ changes are transmitted, the transaction can be committed using the
+ the <function>stream_commit_cb</function> callback

"two phase" --> "two-phase"

~

Also, Missing period on end of sentence.
"or aborted using the stream_abort_prepared_cb" --> "or aborted using
the stream_abort_prepared_cb."

;

==========
Patch V6-0004, File: src/backend/replication/logical/logical.c
==========

COMMENT
Line 84
@@ -81,6 +81,12 @@ static void stream_stop_cb_wrapper(ReorderBuffer
*cache, ReorderBufferTXN *txn,
 XLogRecPtr last_lsn);
 static void stream_abort_cb_wrapper(ReorderBuffer *cache,
ReorderBufferTXN *txn,
 XLogRecPtr abort_lsn);
+static void stream_prepare_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ XLogRecPtr commit_lsn);
+static void stream_commit_prepared_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ XLogRecPtr commit_lsn);
+static void stream_abort_prepared_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ XLogRecPtr commit_lsn);

The 3rd parameter is always "commit_lsn" even for API unrelated to
commit, so seems like cut/paste error.

;

COMMENT
Line 1246
@@ -1231,6 +1243,105 @@ stream_abort_cb_wrapper(ReorderBuffer *cache,
ReorderBufferTXN *txn,
}

 static void
+stream_prepare_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ XLogRecPtr commit_lsn)
+{
+ LogicalDecodingContext *ctx = cache->private_data;
+ LogicalErrorCallbackState state;

Misnamed parameter "commit_lsn" ?

~

Also, Line 1272
There seems to be some missing integrity checking to make sure the
callback is not NULL.
A null callback will give an NPE when the wrapper attempts to call it.

;

COMMENT
Line 1305
+static void
+stream_commit_prepared_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,

There seems to be some missing integrity checking to make sure the
callback is not NULL.
A null callback will give an NPE when the wrapper attempts to call it.

;

COMMENT
Line 1312
+static void
+stream_abort_prepared_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,

Misnamed parameter "commit_lsn" ?

~

Also, Line 1338
There seems to be some missing integrity checking to make sure the
callback is not NULL.
A null callback will give an NPE when the wrapper attempts to call it.

==========
Patch V6-0004, File: src/backend/replication/logical/reorderbuffer.c
==========

COMMENT
Line 2684
@@ -2672,15 +2681,31 @@ ReorderBufferFinishPrepared(ReorderBuffer *rb,
TransactionId xid,
txn->gid = palloc(strlen(gid) + 1); /* trailing '\0' */
strcpy(txn->gid, gid);

- if (is_commit)
+ if (rbtxn_is_streamed(txn))
 {
- txn->txn_flags |= RBTXN_COMMIT_PREPARED;
- rb->commit_prepared(rb, txn, commit_lsn);
+ if (is_commit)
+ {
+ txn->txn_flags |= RBTXN_COMMIT_PREPARED;

The setting/checking of the flags could be refactored if you wanted to
write less code:
e.g.
if (is_commit)
    txn->txn_flags |= RBTXN_COMMIT_PREPARED;
else
    txn->txn_flags |= RBTXN_ROLLBACK_PREPARED;

if (rbtxn_is_streamed(txn) && rbtxn_commit_prepared(txn))
    rb->stream_commit_prepared(rb, txn, commit_lsn);
else if (rbtxn_is_streamed(txn) && rbtxn_rollback_prepared(txn))
    rb->stream_abort_prepared(rb, txn, commit_lsn);
else if (rbtxn_commit_prepared(txn))
    rb->commit_prepared(rb, txn, commit_lsn);
else if (rbtxn_rollback_prepared(txn))
    rb->abort_prepared(rb, txn, commit_lsn);

;

==========
Patch V6-0004, File: src/include/replication/output_plugin.h
==========

COMMENT
Line 171
@@ -157,6 +157,33 @@ typedef void (*LogicalDecodeStreamAbortCB)
(struct LogicalDecodingContext *ctx,
XLogRecPtr abort_lsn);

 /*
+ * Called to prepare changes streamed to remote node from in-progress
+ * transaction. This is called as part of a two-phase commit and only when
+ * two-phased commits are supported
+ */

1. Missing period at the end of all these comments.

2. Is the part that says "and only when two-phased commits are
supported" necessary to say? It seems redundant since the comment
already says it is called as part of a two-phase commit.

;

==========
Patch V6-0004, File: src/include/replication/reorderbuffer.h
==========

COMMENT
Line 467
@@ -466,6 +466,24 @@ typedef void (*ReorderBufferStreamAbortCB) (
ReorderBufferTXN *txn,
XLogRecPtr abort_lsn);

+/* prepare streamed transaction callback signature */
+typedef void (*ReorderBufferStreamPrepareCB) (
+ ReorderBuffer *rb,
+ ReorderBufferTXN *txn,
+ XLogRecPtr commit_lsn);
+
+/* prepare streamed transaction callback signature */
+typedef void (*ReorderBufferStreamCommitPreparedCB) (
+ ReorderBuffer *rb,
+ ReorderBufferTXN *txn,
+ XLogRecPtr commit_lsn);
+
+/* prepare streamed transaction callback signature */
+typedef void (*ReorderBufferStreamAbortPreparedCB) (
+ ReorderBuffer *rb,
+ ReorderBufferTXN *txn,
+ XLogRecPtr commit_lsn);

Cut/paste error - the same comment is repeated 3 times?

[END]

#47Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Nikhil Sontakke (#1)

On Tue, Oct 6, 2020 at 10:23 AM Peter.B.Smith@fujitsu.com
<Peter.B.Smith@fujitsu.com> wrote:

[BEGIN]

==========
Patch V6-0001, File: contrib/test_decoding/expected/prepared.out (so
prepared.sql also)
==========

COMMENT
Line 30 - The INSERT INTO test_prepared1 VALUES (2); is kind of
strange because it is not really part of the prior test nor the
following test. Maybe it would be better to have a comment describing
the purpose of this isolated INSERT and to also consume the data from
the slot so it does not get jumbled with the data of the following
(abort) test.

;

COMMENT
Line 53 - Same comment for this test INSERT INTO test_prepared1 VALUES
(4); it really has nothing to do with either the prior (abort)
test or the following (ddl) test.

;

COMMENT
Line 60 - Seems to check which locks are held for the test_prepared_1
table while the transaction is in progress. Maybe it would be better
to have more comments describing what is expected here and why.

;

COMMENT
Line 88 - There is a comment in the test saying "-- We should see '7'
before '5' in our results since it commits first." but I did not see
any test code that actually verifies that happens.

;

All the above comments are genuine and I think it is mostly because
the author has blindly modified the existing tests without completely
understanding the intent of the test. I suggest we write a completely
new regression file (decode_prepared.sql) for these and just copy
whatever is required from prepared.sql. Once we do that we might also
want to rename existing prepared.sql to decode_commit_prepared.sql or
something like that. I think modifying the existing test appears to be
quite ugly and also it is changing the intent of the existing tests.

QUESTION
Line 120 - I did not really understand the SQL checking the pg_class.
I expected this would be checking table 'test_prepared1' instead. Can
you explain it?
SELECT 'pg_class' AS relation, locktype, mode
FROM pg_locks
WHERE locktype = 'relation'
AND relation = 'pg_class'::regclass;
relation | locktype | mode
----------+----------+------
(0 rows)

;

Yes, I also think your expectation is correct and this should be on
'test_prepared_1'.

QUESTION
Line 139 - SET statement_timeout = '1s'; is 1 second short enough
here for this test, or might it be that these statements would be
completed in less than one second anyhow?

;

Good question. I think we have to mention the reason why logical
decoding is not blocked when it needs to acquire a shared lock on the
table while the previous commands already hold an exclusive lock on
it. I am not sure if I am missing something but, like you, it is not
clear to me either what this test intends to do, so surely more
commentary is required.

QUESTION
Line 163 - How is this testing a SAVEPOINT? Or is it only to check
that the SAVEPOINT command is not part of the replicated changes?

;

It is more about testing that subtransactions will not create a
problem during decoding.

COMMENT
Line 175 - Missing underscore in comment. The code also requires the underscore:
"nodecode" --> "_nodecode"

makes sense.

==========
Patch V6-0001, File: contrib/test_decoding/test_decoding.c
==========

COMMENT
Line 43
@@ -36,6 +40,7 @@ typedef struct
bool skip_empty_xacts;
bool xact_wrote_changes;
bool only_local;
+ TransactionId check_xid; /* track abort of this txid */
} TestDecodingData;

The "check_xid" name seems meaningless. Check what?
IIUC it should maybe be something like "check_xid_aborted".

;

COMMENT
Line 105
@ -88,6 +93,19 @@ static void
pg_decode_stream_truncate(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn,
int nrelations, Relation relations[],
ReorderBufferChange *change);
+static bool pg_decode_filter_prepare(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn,

Remove extra blank line after these functions

;

The above two sounds reasonable suggestions.

COMMENT
Line 149
@@ -116,6 +134,11 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->stream_change_cb = pg_decode_stream_change;
cb->stream_message_cb = pg_decode_stream_message;
cb->stream_truncate_cb = pg_decode_stream_truncate;
+ cb->filter_prepare_cb = pg_decode_filter_prepare;
+ cb->prepare_cb = pg_decode_prepare_txn;
+ cb->commit_prepared_cb = pg_decode_commit_prepared_txn;
+ cb->abort_prepared_cb = pg_decode_abort_prepared_txn;
+
}

There is a confusing mix of terminology where sometimes things are
referred as ROLLBACK/rollback and other times apparently the same
operation is referred as ABORT/abort. I do not know the root cause of
this mixture. IIUC maybe the internal functions and protocol generally
use the term "abort", whereas the SQL syntax is "ROLLBACK"... but
where those two terms collide in the middle it gets quite confusing.

At least I thought the names of the "callbacks" which get exposed to
the user (e.g. in the help) might be better if they matched the SQL.
"abort_prepared_cb" --> "rollback_prepared_cb"

This suggestion sounds reasonable. I think the "abort" terminology is
there to entertain the case where, due to an error, we need to roll
back the transaction. I think it is better if we use the 'rollback'
terminology in the exposed functions. We already have a function named
stream_abort_cb in the code which we might also want to rename, but
that is a separate thing and we can deal with it in a separate patch.

There are similar review comments like this below where the
alternating terms caused me some confusion.

~

Also, Remove the extra blank line before the end of the function.

;

COMMENT
Line 267
@ -227,6 +252,42 @@ pg_decode_startup(LogicalDecodingContext *ctx,
OutputPluginOptions *opt,
errmsg("could not parse value \"%s\" for parameter \"%s\"",
strVal(elem->arg), elem->defname)));
}
+ else if (strcmp(elem->defname, "two-phase-commit") == 0)
+ {
+ if (elem->arg == NULL)
+ continue;

IMO the "check-xid" code might be better rearranged so the NULL check
is first instead of if/else.
e.g.
if (elem->arg == NULL)
ereport(FATAL,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
errmsg("check-xid needs an input value")));
~

Also, is it really supposed to be FATAL instead of ERROR? That is not
the same as the other surrounding code.

;

+1.

COMMENT
Line 296
if (data->check_xid <= 0)
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
errmsg("Specify positive value for parameter \"%s\","
" you specified \"%s\"",
elem->defname, strVal(elem->arg))));

The code checking for <= 0 seems over-complicated. Because the
conversion uses strtoul(), I fail to see how the result can ever be
< 0. Wouldn't it be easier to simply test the result of the strtoul()
function?

BEFORE: if (errno == EINVAL || errno == ERANGE)
AFTER: if (data->check_xid == 0)

Better to use TransactionIdIsValid(data->check_xid) here.

~

Also, should this be FATAL? Everything else similar is ERROR.

;

It should be an error.

COMMENT
(general)
I don't recall seeing any of these decoding options (e.g.
"two-phase-commit", "check-xid") documented anywhere.
So how can a user even know these options exist so they can use them?
Perhaps options should be described on this page?
https://www.postgresql.org/docs/13/functions-admin.html#FUNCTIONS-REPLICATION

;

I think we should do what we are doing for the other options; if they
are not documented, then why document this one separately? I guess we
can make a case to document all the existing options and write a
separate patch for that.

COMMENT
(general)
"check-xid" is a meaningless option name. Maybe something like
"checked-xid-aborted" is more useful?
Suggest changing the member, the option, and the error messages to
match some better name.

;

COMMENT
Line 314
@@ -238,6 +299,7 @@ pg_decode_startup(LogicalDecodingContext *ctx,
OutputPluginOptions *opt,
}

ctx->streaming &= enable_streaming;
+ ctx->enable_twophase &= enable_2pc;
}

The "ctx->enable_twophase" naming is inconsistent with the
"ctx->streaming" member.
"enable_twophase" --> "twophase"

;

+1.

COMMENT
Line 374
@@ -297,6 +359,94 @@ pg_decode_commit_txn(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn,
OutputPluginWrite(ctx, true);
}

+
+/*
+ * Filter out two-phase transactions.
+ *
+ * Each plugin can implement its own filtering logic. Here
+ * we demonstrate a simple logic by checking the GID. If the
+ * GID contains the "_nodecode" substring, then we filter
+ * it out.
+ */
+static bool
+pg_decode_filter_prepare(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,

Remove the extra preceding blank line.

~

I did not find anything in the help about "_nodecode". Should it be
there or is this deliberately not documented feature?

;

I guess we can document it along with filter_prepare API, if not
already documented.

QUESTION
Line 440
+pg_decode_abort_prepared_txn(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn,

Is this a wrong comment
"ABORT PREPARED" --> "ROLLBACK PREPARED" ??

;

COMMENT
Line 620
@@ -455,6 +605,22 @@ pg_decode_change(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn,
}
data->xact_wrote_changes = true;

+ /* if check_xid is specified */
+ if (TransactionIdIsValid(data->check_xid))
+ {
+ elog(LOG, "waiting for %u to abort", data->check_xid);
+ while (TransactionIdIsInProgress(data->check_xid))

The check_xid seems a meaningless name, and the comment "/* if
check_xid is specified */" was not helpful either.
IIUC purpose of this is to check that the nominated xid always is rolled back.
So the appropriate name may be more like "check-xid-aborted".

;

Yeah, this part deserves better comments.
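
For reference, the wait loop being discussed boils down to something like
the sketch below (the in-progress check is stubbed out so the snippet is
self-contained; the real code calls TransactionIdIsInProgress() and sleeps
between polls, and the subsequent systable scan is then expected to ERROR
out once the xid has aborted):

```c
#include <stdint.h>

typedef uint32_t TransactionId;

/*
 * Stand-in for the real TransactionIdIsInProgress(); here a simple
 * countdown simulates a transaction that aborts after three polls.
 */
static int	polls_remaining = 3;

static int
xid_is_in_progress(TransactionId xid)
{
	(void) xid;
	return polls_remaining-- > 0;
}

/*
 * Sketch of the loop discussed above: spin until the nominated xid is no
 * longer in progress, i.e. it has been rolled back by a concurrent
 * backend. Returns the number of polls performed.
 */
static int
wait_for_xid_abort(TransactionId check_xid_aborted)
{
	int			polled = 0;

	while (xid_is_in_progress(check_xid_aborted))
		polled++;				/* real code: pg_usleep(10000L); */

	return polled;
}
```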

--
With Regards,
Amit Kapila.

#48Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Nikhil Sontakke (#1)

On Tue, Oct 6, 2020 at 10:23 AM Peter.B.Smith@fujitsu.com
<Peter.B.Smith@fujitsu.com> wrote:

==========
Patch V6-0001, File: doc/src/sgml/logicaldecoding.sgml
==========

COMMENT/QUESTION
Section 48.6.1
@ -387,6 +387,10 @@ typedef struct OutputPluginCallbacks
LogicalDecodeTruncateCB truncate_cb;
LogicalDecodeCommitCB commit_cb;
LogicalDecodeMessageCB message_cb;
+ LogicalDecodeFilterPrepareCB filter_prepare_cb;

Confused by the mixing of terminologies "abort" and "rollback".
Why is it LogicalDecodeAbortPreparedCB instead of
LogicalDecodeRollbackPreparedCB?
Why is it abort_prepared_cb instead of rollback_prepared_cb;?

I thought everything the user sees should be ROLLBACK/rollback (like
the SQL) regardless of what the internal functions might be called.

;

Fair enough.

COMMENT
Section 48.6.1
The begin_cb, change_cb and commit_cb callbacks are required, while
startup_cb, filter_by_origin_cb, truncate_cb, and shutdown_cb are
optional. If truncate_cb is not set but a TRUNCATE is to be decoded,
the action will be ignored.

The 1st paragraph beneath the typedef does not mention the newly added
callbacks to say if they are required or optional.

;

Yeah, in the code comments it was mentioned but it is missed here; see the
comment "To support two phase logical decoding, we require
prepare/commit-prepare/abort-prepare callbacks. The filter-prepare
callback is optional.". I think instead of directly editing the above
paragraph we can write a new one similar to what we have done for
streaming of large in-progress transactions (Refer <para> An output
plugin may also define functions to support streaming of large,
in-progress transactions.).

COMMENT
Section 48.6.4.5
Section 48.6.4.6
Section 48.6.4.7
@@ -578,6 +588,55 @@ typedef void (*LogicalDecodeCommitCB) (struct
LogicalDecodingContext *ctx,
</para>
</sect3>

+ <sect3 id="logicaldecoding-output-plugin-prepare">
+    <sect3 id="logicaldecoding-output-plugin-commit-prepared">
+    <sect3 id="logicaldecoding-output-plugin-abort-prepared">
+<programlisting>

The wording and titles are a bit backwards compared to the others.
e.g. previously was "Transaction Begin" (not "Begin Transaction") and
"Transaction End" (not "End Transaction").

So for consistently following the existing IMO should change these new
titles (and wording) to:
- "Commit Prepared Transaction Callback" --> "Transaction Commit
Prepared Callback"
- "Rollback Prepared Transaction Callback" --> "Transaction Rollback
Prepared Callback"

makes sense.

- "whenever a commit prepared transaction has been decoded" -->
"whenever a transaction commit prepared has been decoded"
- "whenever a rollback prepared transaction has been decoded." -->
"whenever a transaction rollback prepared has been decoded."

;

I don't find above suggestions much better than current wording. How
about below instead?

"whenever the commit of a transaction prepared for two-phase
commit is decoded"
"whenever the rollback of a transaction prepared for two-phase
commit is decoded"

Also, related to this:
+    <sect3 id="logicaldecoding-output-plugin-commit-prepared">
+     <title>Commit Prepared Transaction Callback</title>
+
+     <para>
+      The optional <function>commit_prepared_cb</function> callback
is called whenever
+      a commit prepared transaction has been decoded. The
<parameter>gid</parameter> field,
+      which is part of the <parameter>txn</parameter> parameter can
be used in this
+      callback.
+<programlisting>
+typedef void (*LogicalDecodeCommitPreparedCB) (struct
LogicalDecodingContext *ctx,
+                                               ReorderBufferTXN *txn,
+                                               XLogRecPtr commit_lsn);
+</programlisting>
+     </para>
+    </sect3>
+
+    <sect3 id="logicaldecoding-output-plugin-abort-prepared">
+     <title>Rollback Prepared Transaction Callback</title>
+
+     <para>
+      The optional <function>abort_prepared_cb</function> callback is
called whenever
+      a rollback prepared transaction has been decoded. The
<parameter>gid</parameter> field,
+      which is part of the <parameter>txn</parameter> parameter can
be used in this
+      callback.
+<programlisting>

Both the above are not optional as per code and I think code is
correct. I think the documentation is wrong here.

==========
Patch V6-0001, File: src/backend/replication/logical/decode.c
==========

COMMENT
Line 74
@@ -70,6 +70,9 @@ static void DecodeCommit(LogicalDecodingContext
*ctx, XLogRecordBuffer *buf,
xl_xact_parsed_commit *parsed, TransactionId xid);
static void DecodeAbort(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
xl_xact_parsed_abort *parsed, TransactionId xid);
+static void DecodePrepare(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
+ xl_xact_parsed_prepare * parsed);

The 2nd line of DecodePrepare is misaligned by one space.

;

Yeah, probably pgindent is the answer. Ajin, can you please run
pgindent on all the patches?

COMMENT
Line 321
@@ -312,17 +315,34 @@ DecodeXactOp(LogicalDecodingContext *ctx,
XLogRecordBuffer *buf)
}
break;
case XLOG_XACT_PREPARE:
+ {
+ xl_xact_parsed_prepare parsed;
+ xl_xact_prepare *xlrec;
+ /* check that output plugin is capable of twophase decoding */

"twophase" --> "two-phase"

~

Also, add a blank line after the declarations.

;

==========
Patch V6-0001, File: src/backend/replication/logical/logical.c
==========

COMMENT
Line 249
@@ -225,6 +237,19 @@ StartupDecodingContext(List *output_plugin_options,
(ctx->callbacks.stream_message_cb != NULL) ||
(ctx->callbacks.stream_truncate_cb != NULL);

+ /*
+ * To support two phase logical decoding, we require
prepare/commit-prepare/abort-prepare
+ * callbacks. The filter-prepare callback is optional. We however
enable two phase logical
+ * decoding when at least one of the methods is enabled so that we
can easily identify
+ * missing methods.

The terminology is generally well known as "two-phase" (with the
hyphen) https://en.wikipedia.org/wiki/Two-phase_commit_protocol so
let's be consistent for all the patch code comments. Please search the
code and correct this in all places, even where I might have missed to
identify it.

"two phase" --> "two-phase"

;

COMMENT
Line 822
@@ -782,6 +807,111 @@ commit_cb_wrapper(ReorderBuffer *cache,
ReorderBufferTXN *txn,
}

static void
+prepare_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ XLogRecPtr prepare_lsn)

"support 2 phase" --> "supports two-phase" in the comment

;

COMMENT
Line 844
Code condition seems strange and/or broken.
if (ctx->enable_twophase && ctx->callbacks.prepare_cb == NULL)
Because if the flag is null then this condition is skipped.
But then if the callback was also NULL then attempting to call it to
"do the actual work" will give NPE.

~

Also, I wonder should this check be the first thing in this function?
Because if it fails does it even make sense that all the errcallback
code was set up? E.g. errcallback.arg potentially is left pointing to a
stack variable on a stack that no longer exists.

;

Right, I think we should have an Assert(ctx->enable_twophase) in the
beginning and then have the check (ctx->callbacks.prepare_cb == NULL)
at its current place. Refer to any of the streaming APIs (for ex.
stream_stop_cb_wrapper).
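
The suggested ordering can be sketched as below (the struct, the callback
signature, and the error report are simplified stand-ins for the real
PostgreSQL types; in the actual wrapper the missing-callback case would
ereport(ERROR, ...) and the errcallback setup would follow the Assert):

```c
#include <assert.h>
#include <stddef.h>
#include <stdio.h>

typedef struct LogicalDecodingContext
{
	int			enable_twophase;
	void		(*prepare_cb) (void);
} LogicalDecodingContext;

static void
prepare_cb_wrapper(LogicalDecodingContext *ctx)
{
	/* Suggested ordering: assert first, before any errcallback setup. */
	assert(ctx->enable_twophase);

	/* A missing callback is reported before doing any real work. */
	if (ctx->prepare_cb == NULL)
	{
		fprintf(stderr, "output plugin did not register a prepare callback\n");
		return;
	}

	/* ... set up errcallback here, then do the actual work ... */
	ctx->prepare_cb();
}
```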

COMMENT
Line 857
+commit_prepared_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,

"support 2 phase" --> "supports two-phase" in the comment

~

Also, Same potential trouble with the condition:
if (ctx->enable_twophase && ctx->callbacks.commit_prepared_cb == NULL)
Same as previously asked. Should this check be first thing in this function?

;

Yeah, so the same solution as mentioned above can be used.

COMMENT
Line 892
+abort_prepared_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,

"support 2 phase" --> "supports two-phase" in the comment

~

Same potential trouble with the condition:
if (ctx->enable_twophase && ctx->callbacks.abort_prepared_cb == NULL)
Same as previously asked. Should this check be the first thing in this function?

;

Again the same solution can be used.

COMMENT
Line 1013
@@ -858,6 +988,51 @@ truncate_cb_wrapper(ReorderBuffer *cache,
ReorderBufferTXN *txn,
error_context_stack = errcallback.previous;
}

+static bool
+filter_prepare_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ TransactionId xid, const char *gid)

Fix wording in comment:
"twophase" --> "two-phase transactions"
"twophase transactions" --> "two-phase transactions"

==========
Patch V6-0001, File: src/backend/replication/logical/reorderbuffer.c
==========

COMMENT
Line 255
@@ -251,7 +251,8 @@ static Size
ReorderBufferRestoreChanges(ReorderBuffer *rb, ReorderBufferTXN *txn
static void ReorderBufferRestoreChange(ReorderBuffer *rb,
ReorderBufferTXN *txn,
char *change);
static void ReorderBufferRestoreCleanup(ReorderBuffer *rb,
ReorderBufferTXN *txn);
-static void ReorderBufferTruncateTXN(ReorderBuffer *rb, ReorderBufferTXN *txn);
+static void ReorderBufferTruncateTXN(ReorderBuffer *rb, ReorderBufferTXN *txn,
+ bool txn_prepared);

The alignment is inconsistent. One more space needed before "bool txn_prepared"

;

COMMENT
Line 417
@@ -413,6 +414,11 @@ ReorderBufferReturnTXN(ReorderBuffer *rb,
ReorderBufferTXN *txn)
}

/* free data that's contained */
+ if (txn->gid != NULL)
+ {
+ pfree(txn->gid);
+ txn->gid = NULL;
+ }

Should add the blank link before this new code, as it was before.

;

COMMENT
Line 1564
@ -1502,12 +1561,14 @@ ReorderBufferCleanupTXN(ReorderBuffer *rb,
ReorderBufferTXN *txn)
}

/*
- * Discard changes from a transaction (and subtransactions), after streaming
- * them. Keep the remaining info - transactions, tuplecids, invalidations and
- * snapshots.
+ * Discard changes from a transaction (and subtransactions), either
after streaming or
+ * after a PREPARE.

typo "snapshots.If" -> "snapshots. If"

;

COMMENT/QUESTION
Line 1590
@@ -1526,7 +1587,7 @@ ReorderBufferTruncateTXN(ReorderBuffer *rb,
ReorderBufferTXN *txn)
Assert(rbtxn_is_known_subxact(subtxn));
Assert(subtxn->nsubtxns == 0);

- ReorderBufferTruncateTXN(rb, subtxn);
+ ReorderBufferTruncateTXN(rb, subtxn, txn_prepared);
}

There are some code paths here I did not understand how they match the comments.
Because this function is recursive it seems that it may be called
where the 2nd parameter txn is a sub-transaction.

But then this seems at odds with some of the other code comments of
this function, which talk about processing the txn without ever testing
whether it is really toplevel or not:

e.g. Line 1593 "/* cleanup changes in the toplevel txn */"

I think this comment is wrong but this is not the fault of this patch.

e.g. Line 1632 "They are always stored in the toplevel transaction."

;

This seems to be correct and we probably need an Assert that the
transaction is a top-level transaction.

COMMENT
Line 1644
@@ -1560,9 +1621,33 @@ ReorderBufferTruncateTXN(ReorderBuffer *rb,
ReorderBufferTXN *txn)
* about the toplevel xact (we send the XID in all messages), but we never
* stream XIDs of empty subxacts.
*/
- if ((!txn->toptxn) || (txn->nentries_mem != 0))
+ if ((!txn_prepared) && ((!txn->toptxn) || (txn->nentries_mem != 0)))
txn->txn_flags |= RBTXN_IS_STREAMED;

+ if (txn_prepared)

/* remove the change from it's containing list */
typo "it's" --> "its"

;

QUESTION
Line 1977
@@ -1880,7 +1965,7 @@ ReorderBufferResetTXN(ReorderBuffer *rb,
ReorderBufferTXN *txn,
ReorderBufferChange *specinsert)
{
/* Discard the changes that we just streamed */
- ReorderBufferTruncateTXN(rb, txn);
+ ReorderBufferTruncateTXN(rb, txn, false);

How do you know the 3rd parameter - i.e. txn_prepared - should be
hardwired false here?
e.g. I thought that maybe rbtxn_prepared(txn) can be true here.

;

COMMENT
Line 2345
@@ -2249,7 +2334,6 @@ ReorderBufferProcessTXN(ReorderBuffer *rb,
ReorderBufferTXN *txn,
break;
}
}
-
/*

Looks like accidental blank line deletion. This should be put back how it was

;

COMMENT/QUESTION
Line 2374
@@ -2278,7 +2362,16 @@ ReorderBufferProcessTXN(ReorderBuffer *rb,
ReorderBufferTXN *txn,
}
}
else
- rb->commit(rb, txn, commit_lsn);
+ {
+ /*
+ * Call either PREPARE (for twophase transactions) or COMMIT
+ * (for regular ones).

"twophase" --> "two-phase"

~

Also, I was confused by the apparent assumption of exclusiveness of
streaming and 2PC...
e.g. what if streaming AND 2PC then it won't do rb->prepare()

;

QUESTION
Line 2424
@@ -2319,11 +2412,17 @@ ReorderBufferProcessTXN(ReorderBuffer *rb,
ReorderBufferTXN *txn,
*/
if (streaming)
{
- ReorderBufferTruncateTXN(rb, txn);
+ ReorderBufferTruncateTXN(rb, txn, false);

/* Reset the CheckXidAlive */
CheckXidAlive = InvalidTransactionId;
}
+ else if (rbtxn_prepared(txn))

I was confused by the exclusiveness of streaming/2PC.
e.g. what if streaming AND 2PC at the same time - how can you pass false
as 3rd param to ReorderBufferTruncateTXN?

;

Yeah, this and any other handling that assumes both can't be true
together is wrong.

COMMENT
Line 2463
@@ -2352,17 +2451,18 @@ ReorderBufferProcessTXN(ReorderBuffer *rb,
ReorderBufferTXN *txn,

/*
* The error code ERRCODE_TRANSACTION_ROLLBACK indicates a concurrent
- * abort of the (sub)transaction we are streaming. We need to do the
+ * abort of the (sub)transaction we are streaming or preparing. We
need to do the
* cleanup and return gracefully on this error, see SetupCheckXidLive.
*/

"twoi phase" --> "two-phase"

;

QUESTIONS
Line 2482
@@ -2370,10 +2470,19 @@ ReorderBufferProcessTXN(ReorderBuffer *rb,
ReorderBufferTXN *txn,
errdata = NULL;
curtxn->concurrent_abort = true;

- /* Reset the TXN so that it is allowed to stream remaining data. */
- ReorderBufferResetTXN(rb, txn, snapshot_now,
- command_id, prev_lsn,
- specinsert);
+ /* If streaming, reset the TXN so that it is allowed to stream
remaining data. */
+ if (streaming)

Re: /* If streaming, reset the TXN so that it is allowed to stream
remaining data. */
I was confused by the exclusiveness of streaming/2PC.
Is it not possible for the streaming flag and rbtxn_prepared(txn) to be
true at the same time?

Yeah, I think it is not correct to assume that both can't be true at
the same time. But when 'prepared' is true, irrespective of whether
streaming is true or not, we can use the ReorderBufferTruncateTXN() API
instead of the Reset API.

~

elog(LOG, "stopping decoding of %s (%u)",
txn->gid[0] != '\0'? txn->gid:"", txn->xid);

Is this a safe operation, or do you also need to test txn->gid is not NULL?

;

I think if 'prepared' is true then we can assume it to be non-NULL,
otherwise, not.

I am responding to your email in phases so that we can have a
discussion on specific points if required, and I am slightly afraid
that the email might bounce as it happened in your case when you
sent such a long email.

--
With Regards,
Amit Kapila.

#49Robert Haas
Robert Haas
robertmhaas@gmail.com
In reply to: Amit Kapila (#47)

On Wed, Oct 7, 2020 at 1:24 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

There is a confusing mix of terminology where sometimes things are
referred as ROLLBACK/rollback and other times apparently the same
operation is referred as ABORT/abort. I do not know the root cause of
this mixture. IIUC maybe the internal functions and protocol generally
use the term "abort", whereas the SQL syntax is "ROLLBACK"... but
where those two terms collide in the middle it gets quite confusing.

At least I thought the names of the "callbacks" which get exposed to
the user (e.g. in the help) might be better if they would match the
SQL.
"abort_prepared_cb" --> "rollback_prepared_db"

This suggestion sounds reasonable. I think it is to entertain the case
where, due to an error, we need to roll back the transaction. I think it
is better if we use 'rollback' terminology in the exposed functions. We
already have a function with the name stream_abort_cb in the code
which we also might want to rename, but that is a separate thing and we
can deal with it in a separate patch.

So, for an ordinary transaction, rollback implies an explicit user
action, but an abort could either be an explicit user action (ABORT;
or ROLLBACK;) or an error. I agree that calling that case "abort"
rather than "rollback" is better. However, the situation is a bit
different for a prepared transaction: no error can prevent such a
transaction from being committed. That is the whole point of being
able to prepare transactions. So it is not unreasonable to think of
use "rollback" rather than "abort" for prepared transactions, but I
think it would be wrong in other cases. On the other hand, using
"abort" for all the cases also doesn't seem bad to me. It's true that
there is no ABORT PREPARED command at the SQL level, but I don't think
that is very important. I don't feel wrong saying that ROLLBACK
PREPARED causes a transaction abort.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

#50Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Robert Haas (#49)

On Thu, Oct 8, 2020 at 6:14 AM Robert Haas <robertmhaas@gmail.com> wrote:

So, for an ordinary transaction, rollback implies an explicit user
action, but an abort could either be an explicit user action (ABORT;
or ROLLBACK;) or an error. I agree that calling that case "abort"
rather than "rollback" is better. However, the situation is a bit
different for a prepared transaction: no error can prevent such a
transaction from being committed. That is the whole point of being
able to prepare transactions. So it is not unreasonable to think of
use "rollback" rather than "abort" for prepared transactions, but I
think it would be wrong in other cases. On the other hand, using
"abort" for all the cases also doesn't seem bad to me. It's true that
there is no ABORT PREPARED command at the SQL level, but I don't think
that is very important. I don't feel wrong saying that ROLLBACK
PREPARED causes a transaction abort.

So, as I understand it, you don't object to renaming the callback APIs for
ROLLBACK PREPARED transactions to "rollback_prepared_cb" but keeping
the "stream_abort" as such. This was what I was planning on doing.
I was just writing this up, so wanted to confirm.

regards,
Ajin Cherian
Fujitsu Australia

#51Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Nikhil Sontakke (#1)

On Tue, Oct 6, 2020 at 10:23 AM Peter.B.Smith@fujitsu.com
<Peter.B.Smith@fujitsu.com> wrote:

==========
Patch V6-0001, File: src/include/replication/reorderbuffer.h
==========

QUESTION
Line 116
@@ -162,9 +163,13 @@ typedef struct ReorderBufferChange
#define RBTXN_HAS_CATALOG_CHANGES 0x0001
#define RBTXN_IS_SUBXACT 0x0002
#define RBTXN_IS_SERIALIZED 0x0004
-#define RBTXN_IS_STREAMED 0x0008
-#define RBTXN_HAS_TOAST_INSERT 0x0010
-#define RBTXN_HAS_SPEC_INSERT 0x0020
+#define RBTXN_PREPARE 0x0008
+#define RBTXN_COMMIT_PREPARED 0x0010
+#define RBTXN_ROLLBACK_PREPARED 0x0020
+#define RBTXN_COMMIT 0x0040
+#define RBTXN_IS_STREAMED 0x0080
+#define RBTXN_HAS_TOAST_INSERT 0x0100
+#define RBTXN_HAS_SPEC_INSERT 0x0200

I was wondering why when adding new flags, some of the existing flag
masks were also altered.
I am assuming this is ok because they are never persisted but are only
used in the protocol (??)

;

This is bad even though there is no direct problem. I don't think we
need to change the existing ones; we can add the new ones at the end,
with the numbering starting where the last one ends.
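
A sketch of what that would look like (the specific bit values for the new
flags are illustrative, not necessarily what the patch ends up with): the
existing RBTXN_* bits stay untouched and the two-phase flags are appended
after the last one.

```c
/* existing flags, values unchanged */
#define RBTXN_HAS_CATALOG_CHANGES 0x0001
#define RBTXN_IS_SUBXACT          0x0002
#define RBTXN_IS_SERIALIZED       0x0004
#define RBTXN_IS_STREAMED         0x0008
#define RBTXN_HAS_TOAST_INSERT    0x0010
#define RBTXN_HAS_SPEC_INSERT     0x0020
/* new two-phase flags, appended without renumbering the old ones */
#define RBTXN_PREPARE             0x0040
#define RBTXN_COMMIT_PREPARED     0x0080
#define RBTXN_ROLLBACK_PREPARED   0x0100
#define RBTXN_COMMIT              0x0200
```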

COMMENT
Line 133

Assert(strlen(txn->gid) > 0);
Shouldn't that assertion also check txn->gid is not NULL (to prevent
NPE in case gid was NULL)

;

I think that would be better and a stronger Assertion than the current one.
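
The strengthened assertion could look like the sketch below (the struct is
a stand-in for the real ReorderBufferTXN): checking the pointer before
dereferencing it in strlen() avoids the NPE the review is worried about.

```c
#include <assert.h>
#include <string.h>

/* Minimal stand-in for the real ReorderBufferTXN. */
typedef struct ReorderBufferTXN
{
	const char *gid;
} ReorderBufferTXN;

static void
check_gid(const ReorderBufferTXN *txn)
{
	/* pointer check first, so strlen() can never dereference NULL */
	assert(txn->gid != NULL && strlen(txn->gid) > 0);
}
```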

COMMENT
Line 177
+logicalrep_read_prepare(StringInfo in, LogicalRepPrepareData * prepare_data)

prepare_data->prepare_type = flags;
This code may be OK but it does seem a bit of an abuse of the flags.

e.g. Are they flags or are the really enum values?
e.g. And if they are effectively enums (it appears they are) then
seemed inconsistent that |= was used when they were previously
assigned.

;

I don't understand this point. As far as I can see at the time of
write (logicalrep_write_prepare()), the patch has used |=, and at the
time of reading (logicalrep_read_prepare()) it has used assignment
which seems correct from the code perspective. Do you have a better
proposal?

COMMENT
Line 408
+pgoutput_commit_prepared_txn(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn,

Since all this function is identical to pg_output_prepare it might be
better to either
1. just leave this as a wrapper to delegate to that function
2. remove this one entirely and assign the callback to the common
pgoutput_prepare_txn

;

I think this is because as of now the patch uses the same function and
protocol message to send both Prepare and Commit/Rollback Prepare
which I am not sure is the right thing. I suggest keeping that code as
it is for now. Let's first try to figure out if it is a good idea to
overload the same protocol message and use flags to distinguish the
actual message. Also, I don't know whether prepare_lsn is required
at commit time.

COMMENT
Line 419
+pgoutput_abort_prepared_txn(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,

Since all this function is identical to pg_output_prepare if might be
better to either
1. just leave this as a wrapper to delegate to that function
2. remove this one entirely and assign the callback to the common
pgoutput_prepare_txn

;

Due to reasons mentioned for the previous comment, let's keep this
also as it is for now.

--
With Regards,
Amit Kapila.

#52Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Amit Kapila (#51)

On Thu, Oct 8, 2020 at 5:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

COMMENT
Line 177
+logicalrep_read_prepare(StringInfo in, LogicalRepPrepareData * prepare_data)

prepare_data->prepare_type = flags;
This code may be OK but it does seem a bit of an abuse of the flags.

e.g. Are they flags or are the really enum values?
e.g. And if they are effectively enums (it appears they are) then
seemed inconsistent that |= was used when they were previously
assigned.

;

I don't understand this point. As far as I can see at the time of
write (logicalrep_write_prepare()), the patch has used |=, and at the
time of reading (logicalrep_read_prepare()) it has used assignment
which seems correct from the code perspective. Do you have a better
proposal?

OK. I will explain what I was thinking when I wrote that review comment.

I agree all is "correct" from a code perspective.

But IMO using bit arithmetic implies that different combinations are
also possible, whereas in current code they are not.
So code is kind of having a bet each-way - sometimes treating "flags"
as bit flags and sometimes as enums.

e.g. If these flags are not really bit flags at all then the
logicalrep_write_prepare() code might just as well be written as
below:

BEFORE
if (rbtxn_commit_prepared(txn))
flags |= LOGICALREP_IS_COMMIT_PREPARED;
else if (rbtxn_rollback_prepared(txn))
flags |= LOGICALREP_IS_ROLLBACK_PREPARED;
else
flags |= LOGICALREP_IS_PREPARE;

/* Make sure exactly one of the expected flags is set. */
if (!PrepareFlagsAreValid(flags))
elog(ERROR, "unrecognized flags %u in prepare message", flags);

AFTER
if (rbtxn_commit_prepared(txn))
flags = LOGICALREP_IS_COMMIT_PREPARED;
else if (rbtxn_rollback_prepared(txn))
flags = LOGICALREP_IS_ROLLBACK_PREPARED;
else
flags = LOGICALREP_IS_PREPARE;

~

OTOH, if you really do want to anticipate having future flag bit
combinations then maybe the PrepareFlagsAreValid() macro ought to to
be tweaked accordingly, and the logicalrep_read_prepare() code maybe
should look more like below:

BEFORE
/* set the action (reuse the constants used for the flags) */
prepare_data->prepare_type = flags;

AFTER
/* set the action (reuse the constants used for the flags) */
prepare_data->prepare_type =
flags & LOGICALREP_IS_COMMIT_PREPARED ? LOGICALREP_IS_COMMIT_PREPARED :
flags & LOGICALREP_IS_ROLLBACK_PREPARED ? LOGICALREP_IS_ROLLBACK_PREPARED :
LOGICALREP_IS_PREPARE;
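
If the bit-flag interpretation is kept, a PrepareFlagsAreValid() along
these lines (a sketch; the flag values and the real macro in the patch may
differ) makes the "exactly one flag set" rule explicit:

```c
/* Hypothetical values matching the LOGICALREP_IS_* constants above. */
#define LOGICALREP_IS_PREPARE           0x01
#define LOGICALREP_IS_COMMIT_PREPARED   0x02
#define LOGICALREP_IS_ROLLBACK_PREPARED 0x04

/*
 * "Exactly one flag set" is the power-of-two test, plus a check that no
 * unknown bits are present.
 */
#define PrepareFlagsAreValid(flags) \
	((flags) != 0 && ((flags) & ((flags) - 1)) == 0 && \
	 ((flags) & ~(LOGICALREP_IS_PREPARE | \
				  LOGICALREP_IS_COMMIT_PREPARED | \
				  LOGICALREP_IS_ROLLBACK_PREPARED)) == 0)
```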

Kind Regards.
Peter Smith
Fujitsu Australia

#53Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Peter Smith (#52)

On Fri, Oct 9, 2020 at 5:45 AM Peter Smith <smithpb2250@gmail.com> wrote:

On Thu, Oct 8, 2020 at 5:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

COMMENT
Line 177
+logicalrep_read_prepare(StringInfo in, LogicalRepPrepareData * prepare_data)

prepare_data->prepare_type = flags;
This code may be OK but it does seem a bit of an abuse of the flags.

e.g. Are they flags or are the really enum values?
e.g. And if they are effectively enums (it appears they are) then
seemed inconsistent that |= was used when they were previously
assigned.

;

I don't understand this point. As far as I can see at the time of
write (logicalrep_write_prepare()), the patch has used |=, and at the
time of reading (logicalrep_read_prepare()) it has used assignment
which seems correct from the code perspective. Do you have a better
proposal?

OK. I will explain my thinking when I wrote that review comment.

I agree all is "correct" from a code perspective.

But IMO using bit arithmetic implies that different combinations are
also possible, whereas in current code they are not.
So code is kind of having a bet each-way - sometimes treating "flags"
as bit flags and sometimes as enums.

e.g. If these flags are not really bit flags at all then the
logicalrep_write_prepare() code might just as well be written as
below:

BEFORE
if (rbtxn_commit_prepared(txn))
flags |= LOGICALREP_IS_COMMIT_PREPARED;
else if (rbtxn_rollback_prepared(txn))
flags |= LOGICALREP_IS_ROLLBACK_PREPARED;
else
flags |= LOGICALREP_IS_PREPARE;

/* Make sure exactly one of the expected flags is set. */
if (!PrepareFlagsAreValid(flags))
elog(ERROR, "unrecognized flags %u in prepare message", flags);

AFTER
if (rbtxn_commit_prepared(txn))
flags = LOGICALREP_IS_COMMIT_PREPARED;
else if (rbtxn_rollback_prepared(txn))
flags = LOGICALREP_IS_ROLLBACK_PREPARED;
else
flags = LOGICALREP_IS_PREPARE;

~

OTOH, if you really do want to anticipate having future flag bit
combinations

I don't anticipate more combinations; rather, I am not yet sure whether
we want to distinguish these operations with flags or have separate
messages for each of these operations. I think for now we can go with
your proposal above.

--
With Regards,
Amit Kapila.

#54Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Amit Kapila (#47)
4 attachment(s)

On Wed, Oct 7, 2020 at 4:24 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

All the above comments are genuine and I think it is mostly because
the author has blindly modified the existing tests without completely
understanding the intent of the test. I suggest we write a completely
new regression file (decode_prepared.sql) for these and just copy
whatever is required from prepared.sql. Once we do that we might also
want to rename existing prepared.sql to decode_commit_prepared.sql or
something like that. I think modifying the existing test appears to be
quite ugly and also it is changing the intent of the existing tests.

Updated this. Kept the original prepared.sql untouched and added a new
regression file called two_phase.sql
which is specific to test cases with the new flag two-phase-commit.

QUESTION
Line 120 - I did not really understand the SQL checking the pg_class.
I expected this would be checking table 'test_prepared1' instead. Can
you explain it?
SELECT 'pg_class' AS relation, locktype, mode
FROM pg_locks
WHERE locktype = 'relation'
AND relation = 'pg_class'::regclass;
relation | locktype | mode
----------+----------+------
(0 rows)

;

Yes, I also think your expectation is correct and this should be on
'test_prepared_1'.

Updated

QUESTION
Line 139 - SET statement_timeout = '1s'; is 1 second short enough
here for this test, or might it be that these statements would be
completed in less than one seconds anyhow?

;

Good question. I think we have to mention the reason why logical
decoding is not blocked while it needs to acquire a shared lock on the
table and the previous commands already held an exclusive lock on the
table. I am not sure if I am missing something but like you, it is not
clear to me as well what this test intends to do, so surely more
commentary is required.

Updated.

QUESTION
Line 163 - How is this testing a SAVEPOINT? Or is it only to check
that the SAVEPOINT command is not part of the replicated changes?

;

It is more of testing that subtransactions will not create a problem
while decoding.

Updated with a testcase that actually does a rollback to a savepoint

COMMENT
Line 175 - Missing underscore in comment. Code requires also underscore:
"nodecode" --> "_nodecode"

makes sense.

Updated.

==========
Patch V6-0001, File: contrib/test_decoding/test_decoding.c
==========

COMMENT
Line 43
@@ -36,6 +40,7 @@ typedef struct
bool skip_empty_xacts;
bool xact_wrote_changes;
bool only_local;
+ TransactionId check_xid; /* track abort of this txid */
} TestDecodingData;

The "check_xid" seems a meaningless name. Check what?
IIUC maybe should be something like "check_xid_aborted"

Updated.

;

COMMENT
Line 105
@ -88,6 +93,19 @@ static void
pg_decode_stream_truncate(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn,
int nrelations, Relation relations[],
ReorderBufferChange *change);
+static bool pg_decode_filter_prepare(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn,

Remove extra blank line after these functions

;

The above two sounds reasonable suggestions.

Updated.

COMMENT
Line 149
@@ -116,6 +134,11 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->stream_change_cb = pg_decode_stream_change;
cb->stream_message_cb = pg_decode_stream_message;
cb->stream_truncate_cb = pg_decode_stream_truncate;
+ cb->filter_prepare_cb = pg_decode_filter_prepare;
+ cb->prepare_cb = pg_decode_prepare_txn;
+ cb->commit_prepared_cb = pg_decode_commit_prepared_txn;
+ cb->abort_prepared_cb = pg_decode_abort_prepared_txn;
+
}

There is a confusing mix of terminology where sometimes things are
referred as ROLLBACK/rollback and other times apparently the same
operation is referred as ABORT/abort. I do not know the root cause of
this mixture. IIUC maybe the internal functions and protocol generally
use the term "abort", whereas the SQL syntax is "ROLLBACK"... but
where those two terms collide in the middle it gets quite confusing.

At least I thought the names of the "callbacks" which get exposed to
the user (e.g. in the help) might be better if they would match the
SQL.
"abort_prepared_cb" --> "rollback_prepared_db"

This suggestion sounds reasonable. I think it is to entertain the case
where, due to an error, we need to roll back the transaction. I think it
is better if we use 'rollback' terminology in the exposed functions. We
already have a function with the name stream_abort_cb in the code
which we also might want to rename, but that is a separate thing and we
can deal with it in a separate patch.

Changed the callback names from abort_prepared to rollback_prepared
and stream_abort_prepared to stream_rollback_prepared.

There are similar review comments like this below where the
alternating terms caused me some confusion.

~

Also, Remove the extra blank line before the end of the function.

;

COMMENT
Line 267
@ -227,6 +252,42 @@ pg_decode_startup(LogicalDecodingContext *ctx,
OutputPluginOptions *opt,
errmsg("could not parse value \"%s\" for parameter \"%s\"",
strVal(elem->arg), elem->defname)));
}
+ else if (strcmp(elem->defname, "two-phase-commit") == 0)
+ {
+ if (elem->arg == NULL)
+ continue;

IMO the "check-xid" code might be better rearranged so the NULL check
is first instead of if/else.
e.g.
if (elem->arg == NULL)
ereport(FATAL,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
errmsg("check-xid needs an input value")));
~

Also, is it really supposed to be FATAL instead or ERROR. That is not
the same as the other surrounding code.

;

+1.

Updated.

COMMENT
Line 296
if (data->check_xid <= 0)
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
errmsg("Specify positive value for parameter \"%s\","
" you specified \"%s\"",
elem->defname, strVal(elem->arg))));

The code checking for <= 0 seems over-complicated. Because conversion
was using strtoul() I fail to see how this can ever be < 0. Wouldn't
it be easier to simply test the result of the strtoul() function?

BEFORE: if (errno == EINVAL || errno == ERANGE)
AFTER: if (data->check_xid == 0)

Better to use TransactionIdIsValid(data->check_xid) here.

Updated.

~

Also, should this be FATAL? Everything else similar is ERROR.

;

It should be an error.

Updated

COMMENT
(general)
I don't recall seeing any of these decoding options (e.g.
"two-phase-commit", "check-xid") documented anywhere.
So how can a user even know these options exist so they can use them?
Perhaps options should be described on this page?
https://www.postgresql.org/docs/13/functions-admin.html#FUNCTIONS-REPLICATION

;

I think we should do what we are doing for other options: if they are
not documented, then there is no reason to document this one
separately. I guess we can make a case to document all the existing
options and write a separate patch for that.

I didn't see any of the test_decoding options in the documentation,
as these seem specific to the test_decoding plugin used in testing.
https://www.postgresql.org/docs/13/test-decoding.html

COMMENT
(general)
"check-xid" is a meaningless option name. Maybe something like
"checked-xid-aborted" is more useful?
Suggest changing the member, the option, and the error messages to
match some better name.

Updated.

;

COMMENT
Line 314
@@ -238,6 +299,7 @@ pg_decode_startup(LogicalDecodingContext *ctx,
OutputPluginOptions *opt,
}

ctx->streaming &= enable_streaming;
+ ctx->enable_twophase &= enable_2pc;
}

The "ctx->enable_twophase" is inconsistent naming with the
"ctx->streaming" member.
"enable_twophase" --> "twophase"

;

+1.

Updated

COMMENT
Line 374
@@ -297,6 +359,94 @@ pg_decode_commit_txn(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn,
OutputPluginWrite(ctx, true);
}

+
+/*
+ * Filter out two-phase transactions.
+ *
+ * Each plugin can implement its own filtering logic. Here
+ * we demonstrate a simple logic by checking the GID. If the
+ * GID contains the "_nodecode" substring, then we filter
+ * it out.
+ */
+static bool
+pg_decode_filter_prepare(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,

Remove the extra preceding blank line.

Updated.

~

I did not find anything in the help about "_nodecode". Should it be
there or is this deliberately not documented feature?

;

I guess we can document it along with filter_prepare API, if not
already documented.

Again, this seems to be specific to test_decoding and is an example of
one way to implement a filter_prepare callback.

QUESTION
Line 440
+pg_decode_abort_prepared_txn(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn,

Is this a wrong comment
"ABORT PREPARED" --> "ROLLBACK PREPARED" ??

;

COMMENT
Line 620
@@ -455,6 +605,22 @@ pg_decode_change(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn,
}
data->xact_wrote_changes = true;

+ /* if check_xid is specified */
+ if (TransactionIdIsValid(data->check_xid))
+ {
+ elog(LOG, "waiting for %u to abort", data->check_xid);
+ while (TransactionIdIsInProgress(dat

The check_xid seems a meaningless name, and the comment "/* if
check_xid is specified */" was not helpful either.
IIUC purpose of this is to check that the nominated xid always is rolled back.
So the appropriate name may be more like "check-xid-aborted".

;

Yeah, this part deserves better comments.

Updated.

Other than these first batch of review comments from Peter Smith, I've
also updated new functions in decode.c for DecodeCommitPrepared
and DecodeAbortPrepared as agreed in a previous review comment by
Amit and Dilip.
I've also incorporated Dilip's comment on acquiring SHARED lock rather
than EXCLUSIVE lock while looking for transaction matching Gid.
Since Peter's comments are many, I'll be sending patch updates in
parts addressing his comments.

regards,
Ajin Cherian
Fujitsu Australia

Attachments:

v7-0003-pgoutput-support-for-logical-decoding-of-2pc.patch
v7-0001-Support-decoding-of-two-phase-transactions.patch
v7-0004-Support-two-phase-commits-in-streaming-mode-of-lo.patch
v7-0002-Tap-test-to-test-concurrent-aborts-during-2-phase.patch
#55Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Peter Smith (#46)
4 attachment(s)

On Wed, Oct 7, 2020 at 9:36 AM Peter Smith <smithpb2250@gmail.com> wrote:

==========
Patch V6-0001, File: doc/src/sgml/logicaldecoding.sgml
==========

COMMENT/QUESTION
Section 48.6.1
@ -387,6 +387,10 @@ typedef struct OutputPluginCallbacks
LogicalDecodeTruncateCB truncate_cb;
LogicalDecodeCommitCB commit_cb;
LogicalDecodeMessageCB message_cb;
+ LogicalDecodeFilterPrepareCB filter_prepare_cb;

Confused by the mixing of terminologies "abort" and "rollback".
Why is it LogicalDecodeAbortPreparedCB instead of
LogicalDecodeRollbackPreparedCB?
Why is it abort_prepared_cb instead of rollback_prepared_cb;?

I thought everything the user sees should be ROLLBACK/rollback (like
the SQL) regardless of what the internal functions might be called.

;

Modified.

COMMENT
Section 48.6.1
The begin_cb, change_cb and commit_cb callbacks are required, while
startup_cb, filter_by_origin_cb, truncate_cb, and shutdown_cb are
optional. If truncate_cb is not set but a TRUNCATE is to be decoded,
the action will be ignored.

The 1st paragraph beneath the typedef does not mention the newly added
callbacks to say if they are required or optional.

Added a new para for this.

;

COMMENT
Section 48.6.4.5
Section 48.6.4.6
Section 48.6.4.7
@@ -578,6 +588,55 @@ typedef void (*LogicalDecodeCommitCB) (struct
LogicalDecodingContext *ctx,
</para>
</sect3>

+ <sect3 id="logicaldecoding-output-plugin-prepare">
+    <sect3 id="logicaldecoding-output-plugin-commit-prepared">
+    <sect3 id="logicaldecoding-output-plugin-abort-prepared">
+<programlisting>

The wording and titles are a bit backwards compared to the others.
e.g. previously was "Transaction Begin" (not "Begin Transaction") and
"Transaction End" (not "End Transaction").

So for consistently following the existing IMO should change these new
titles (and wording) to:
- "Commit Prepared Transaction Callback" --> "Transaction Commit
Prepared Callback"
- "Rollback Prepared Transaction Callback" --> "Transaction Rollback
Prepared Callback"
- "whenever a commit prepared transaction has been decoded" -->
"whenever a transaction commit prepared has been decoded"
- "whenever a rollback prepared transaction has been decoded." -->
"whenever a transaction rollback prepared has been decoded."

;

Updated to this

==========
Patch V6-0001, File: src/backend/replication/logical/decode.c
==========

COMMENT
Line 74
@@ -70,6 +70,9 @@ static void DecodeCommit(LogicalDecodingContext
*ctx, XLogRecordBuffer *buf,
xl_xact_parsed_commit *parsed, TransactionId xid);
static void DecodeAbort(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
xl_xact_parsed_abort *parsed, TransactionId xid);
+static void DecodePrepare(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
+ xl_xact_parsed_prepare * parsed);

The 2nd line of DecodePrepare is misaligned by one space.

;

COMMENT
Line 321
@@ -312,17 +315,34 @@ DecodeXactOp(LogicalDecodingContext *ctx,
XLogRecordBuffer *buf)
}
break;
case XLOG_XACT_PREPARE:
+ {
+ xl_xact_parsed_prepare parsed;
+ xl_xact_prepare *xlrec;
+ /* check that output plugin is capable of twophase decoding */

"twophase" --> "two-phase"

~

Also, add a blank line after the declarations.

;

==========
Patch V6-0001, File: src/backend/replication/logical/logical.c
==========

COMMENT
Line 249
@@ -225,6 +237,19 @@ StartupDecodingContext(List *output_plugin_options,
(ctx->callbacks.stream_message_cb != NULL) ||
(ctx->callbacks.stream_truncate_cb != NULL);

+ /*
+ * To support two phase logical decoding, we require
prepare/commit-prepare/abort-prepare
+ * callbacks. The filter-prepare callback is optional. We however
enable two phase logical
+ * decoding when at least one of the methods is enabled so that we
can easily identify
+ * missing methods.

The terminology is generally well known as "two-phase" (with the
hyphen) https://en.wikipedia.org/wiki/Two-phase_commit_protocol so
let's be consistent for all the patch code comments. Please search the
code and correct this in all places, even where I might have missed to
identify it.

"two phase" --> "two-phase"

;

COMMENT
Line 822
@@ -782,6 +807,111 @@ commit_cb_wrapper(ReorderBuffer *cache,
ReorderBufferTXN *txn,
}

static void
+prepare_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ XLogRecPtr prepare_lsn)

"support 2 phase" --> "supports two-phase" in the comment

;

COMMENT
Line 844
Code condition seems strange and/or broken.
if (ctx->enable_twophase && ctx->callbacks.prepare_cb == NULL)
Because if the flag is null then this condition is skipped.
But then if the callback was also NULL then attempting to call it to
"do the actual work" will give NPE.

~

Also, I wonder should this check be the first thing in this function?
Because if it fails does it even make sense that all the errcallback
code was set up?
E.g errcallback.arg potentially is left pointing to a stack variable
on a stack that no longer exists.

Updated accordingly.

;

COMMENT
Line 857
+commit_prepared_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,

"support 2 phase" --> "supports two-phase" in the comment

~

Also, Same potential trouble with the condition:
if (ctx->enable_twophase && ctx->callbacks.commit_prepared_cb == NULL)
Same as previously asked. Should this check be first thing in this function?

;

COMMENT
Line 892
+abort_prepared_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,

"support 2 phase" --> "supports two-phase" in the comment

~

Same potential trouble with the condition:
if (ctx->enable_twophase && ctx->callbacks.abort_prepared_cb == NULL)
Same as previously asked. Should this check be the first thing in this function?

;

COMMENT
Line 1013
@@ -858,6 +988,51 @@ truncate_cb_wrapper(ReorderBuffer *cache,
ReorderBufferTXN *txn,
error_context_stack = errcallback.previous;
}

+static bool
+filter_prepare_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ TransactionId xid, const char *gid)

Fix wording in comment:
"twophase" --> "two-phase transactions"
"twophase transactions" --> "two-phase transactions"

Updated accordingly.

==========
Patch V6-0001, File: src/backend/replication/logical/reorderbuffer.c
==========

COMMENT
Line 255
@@ -251,7 +251,8 @@ static Size
ReorderBufferRestoreChanges(ReorderBuffer *rb, ReorderBufferTXN *txn
static void ReorderBufferRestoreChange(ReorderBuffer *rb,
ReorderBufferTXN *txn,
char *change);
static void ReorderBufferRestoreCleanup(ReorderBuffer *rb,
ReorderBufferTXN *txn);
-static void ReorderBufferTruncateTXN(ReorderBuffer *rb, ReorderBufferTXN *txn);
+static void ReorderBufferTruncateTXN(ReorderBuffer *rb, ReorderBufferTXN *txn,
+ bool txn_prepared);

The alignment is inconsistent. One more space needed before "bool txn_prepared"

;

COMMENT
Line 417
@@ -413,6 +414,11 @@ ReorderBufferReturnTXN(ReorderBuffer *rb,
ReorderBufferTXN *txn)
}

/* free data that's contained */
+ if (txn->gid != NULL)
+ {
+ pfree(txn->gid);
+ txn->gid = NULL;
+ }

Should add the blank link before this new code, as it was before.

;

COMMENT
Line 1564
@ -1502,12 +1561,14 @@ ReorderBufferCleanupTXN(ReorderBuffer *rb,
ReorderBufferTXN *txn)
}

/*
- * Discard changes from a transaction (and subtransactions), after streaming
- * them. Keep the remaining info - transactions, tuplecids, invalidations and
- * snapshots.
+ * Discard changes from a transaction (and subtransactions), either
after streaming or
+ * after a PREPARE.

typo "snapshots.If" -> "snapshots. If"

;

Updated accordingly.

COMMENT/QUESTION
Line 1590
@@ -1526,7 +1587,7 @@ ReorderBufferTruncateTXN(ReorderBuffer *rb,
ReorderBufferTXN *txn)
Assert(rbtxn_is_known_subxact(subtxn));
Assert(subtxn->nsubtxns == 0);

- ReorderBufferTruncateTXN(rb, subtxn);
+ ReorderBufferTruncateTXN(rb, subtxn, txn_prepared);
}

There are some code paths here where I did not understand how they
match the comments. Because this function is recursive, it seems that
it may be called where the 2nd parameter txn is a sub-transaction.

But then this seems at odds with some of the other code comments of
this function which are processing the txn without ever testing is it
really toplevel or not:

e.g. Line 1593 "/* cleanup changes in the toplevel txn */"
e.g. Line 1632 "They are always stored in the toplevel transaction."

;

I see that another commit in between has updated this now.

COMMENT
Line 1644
@@ -1560,9 +1621,33 @@ ReorderBufferTruncateTXN(ReorderBuffer *rb,
ReorderBufferTXN *txn)
* about the toplevel xact (we send the XID in all messages), but we never
* stream XIDs of empty subxacts.
*/
- if ((!txn->toptxn) || (txn->nentries_mem != 0))
+ if ((!txn_prepared) && ((!txn->toptxn) || (txn->nentries_mem != 0)))
txn->txn_flags |= RBTXN_IS_STREAMED;

+ if (txn_prepared)

/* remove the change from it's containing list */
typo "it's" --> "its"

Updated.

;

QUESTION
Line 1977
@@ -1880,7 +1965,7 @@ ReorderBufferResetTXN(ReorderBuffer *rb,
ReorderBufferTXN *txn,
ReorderBufferChange *specinsert)
{
/* Discard the changes that we just streamed */
- ReorderBufferTruncateTXN(rb, txn);
+ ReorderBufferTruncateTXN(rb, txn, false);

How do you know the 3rd parameter - i.e. txn_prepared - should be
hardwired false here?
e.g. I thought that maybe rbtxn_prepared(txn) can be true here.

;

This particular function is only called when streaming and not when
handling a prepared transaction.

COMMENT
Line 2345
@@ -2249,7 +2334,6 @@ ReorderBufferProcessTXN(ReorderBuffer *rb,
ReorderBufferTXN *txn,
break;
}
}
-
/*

Looks like accidental blank line deletion. This should be put back how it was

;

COMMENT/QUESTION
Line 2374
@@ -2278,7 +2362,16 @@ ReorderBufferProcessTXN(ReorderBuffer *rb,
ReorderBufferTXN *txn,
}
}
else
- rb->commit(rb, txn, commit_lsn);
+ {
+ /*
+ * Call either PREPARE (for twophase transactions) or COMMIT
+ * (for regular ones).

"twophase" --> "two-phase"

~

Updated.

Also, I was confused by the apparent assumption of exclusiveness of
streaming and 2PC...
e.g. what if streaming AND 2PC then it won't do rb->prepare()

;

QUESTION
Line 2424
@@ -2319,11 +2412,17 @@ ReorderBufferProcessTXN(ReorderBuffer *rb,
ReorderBufferTXN *txn,
*/
if (streaming)
{
- ReorderBufferTruncateTXN(rb, txn);
+ ReorderBufferTruncateTXN(rb, txn, false);

/* Reset the CheckXidAlive */
CheckXidAlive = InvalidTransactionId;
}
+ else if (rbtxn_prepared(txn))

I was confused by the exclusiveness of streaming/2PC.
e.g. what if streaming AND 2PC at same time - how can you pass false
as 3rd param to ReorderBufferTruncateTXN?

ReorderBufferProcessTXN can only be called when streaming individual
commands, not for streaming a prepare or a commit. Streaming of a
prepare or a commit is handled as part of ReorderBufferStreamCommit.

;

COMMENT
Line 2463
@@ -2352,17 +2451,18 @@ ReorderBufferProcessTXN(ReorderBuffer *rb,
ReorderBufferTXN *txn,

/*
* The error code ERRCODE_TRANSACTION_ROLLBACK indicates a concurrent
- * abort of the (sub)transaction we are streaming. We need to do the
+ * abort of the (sub)transaction we are streaming or preparing. We
need to do the
* cleanup and return gracefully on this error, see SetupCheckXidLive.
*/

"twoi phase" --> "two-phase"

;

QUESTIONS
Line 2482
@@ -2370,10 +2470,19 @@ ReorderBufferProcessTXN(ReorderBuffer *rb,
ReorderBufferTXN *txn,
errdata = NULL;
curtxn->concurrent_abort = true;

- /* Reset the TXN so that it is allowed to stream remaining data. */
- ReorderBufferResetTXN(rb, txn, snapshot_now,
- command_id, prev_lsn,
- specinsert);
+ /* If streaming, reset the TXN so that it is allowed to stream
remaining data. */
+ if (streaming)

Re: /* If streaming, reset the TXN so that it is allowed to stream
remaining data. */
I was confused by the exclusiveness of streaming/2PC.
Is it not possible for streaming flags and rbtxn_prepared(txn) true at
the same time?

Same as above.

~

elog(LOG, "stopping decoding of %s (%u)",
txn->gid[0] != '\0'? txn->gid:"", txn->xid);

Is this a safe operation, or do you also need to test txn->gid is not NULL?

Since this code path is reached only when not streaming, and therefore
rbtxn_prepared(txn) holds, gid has to be non-NULL.

;

COMMENT
Line 2606
+ReorderBufferPrepare(ReorderBuffer *rb, TransactionId xid,

"twophase" --> "two-phase"

;

QUESTION
Line 2655
+ReorderBufferFinishPrepared(ReorderBuffer *rb, TransactionId xid,

"This is used to handle COMMIT/ABORT PREPARED"
Should that say "COMMIT/ROLLBACK PREPARED"?

;

COMMENT
Line 2668

"Anyways, 2PC transactions" --> "Anyway, two-phase transactions"

;

COMMENT
Line 2765
@@ -2495,7 +2731,13 @@ ReorderBufferAbort(ReorderBuffer *rb,
TransactionId xid, XLogRecPtr lsn)
/* cosmetic... */
txn->final_lsn = lsn;

- /* remove potential on-disk data, and deallocate */
+ /*
+ * remove potential on-disk data, and deallocate.
+ *

Remove the blank between the comment and code.

==========
Patch V6-0001, File: src/include/replication/logical.h
==========

COMMENT
Line 89

"two phase" -> "two-phase"

;

COMMENT
Line 89

For consistency with the previous member naming really the new member
should just be called "twophase" rather than "enable_twophase"

;

Updated accordingly.

==========
Patch V6-0001, File: src/include/replication/output_plugin.h
==========

QUESTION
Line 106

As previously asked, why is the callback function/typedef referred as
AbortPrepared instead of RollbackPrepared?
It does not match the SQL and the function comment, and seems only to
add some unnecessary confusion.

;

==========
Patch V6-0001, File: src/include/replication/reorderbuffer.h
==========

QUESTION
Line 116
@@ -162,9 +163,13 @@ typedef struct ReorderBufferChange
#define RBTXN_HAS_CATALOG_CHANGES 0x0001
#define RBTXN_IS_SUBXACT 0x0002
#define RBTXN_IS_SERIALIZED 0x0004
-#define RBTXN_IS_STREAMED 0x0008
-#define RBTXN_HAS_TOAST_INSERT 0x0010
-#define RBTXN_HAS_SPEC_INSERT 0x0020
+#define RBTXN_PREPARE 0x0008
+#define RBTXN_COMMIT_PREPARED 0x0010
+#define RBTXN_ROLLBACK_PREPARED 0x0020
+#define RBTXN_COMMIT 0x0040
+#define RBTXN_IS_STREAMED 0x0080
+#define RBTXN_HAS_TOAST_INSERT 0x0100
+#define RBTXN_HAS_SPEC_INSERT 0x0200

I was wondering why when adding new flags, some of the existing flag
masks were also altered.
I am assuming this is ok because they are never persisted but are only
used in the protocol (??)

;

COMMENT
Line 226
@@ -218,6 +223,15 @@ typedef struct ReorderBufferChange
((txn)->txn_flags & RBTXN_IS_STREAMED) != 0 \
)

+/* is this txn prepared? */
+#define rbtxn_prepared(txn) (txn->txn_flags & RBTXN_PREPARE)
+/* was this prepared txn committed in the meanwhile? */
+#define rbtxn_commit_prepared(txn) (txn->txn_flags & RBTXN_COMMIT_PREPARED)
+/* was this prepared txn aborted in the meanwhile? */
+#define rbtxn_rollback_prepared(txn) (txn->txn_flags & RBTXN_ROLLBACK_PREPARED)
+/* was this txn committed in the meanwhile? */
+#define rbtxn_commit(txn) (txn->txn_flags & RBTXN_COMMIT)
+

Probably all the "txn->txn_flags" here might be more safely written
with parentheses in the macro like "(txn)->txn_flags".

~

Also, Start all comments with capital. And what is the meaning "in the
meanwhile?"

;

COMMENT
Line 410
@@ -390,6 +407,39 @@ typedef void (*ReorderBufferCommitCB) (ReorderBuffer *rb,
ReorderBufferTXN *txn,
XLogRecPtr commit_lsn);

The format is inconsistent with all other callback signatures here,
where the 1st arg was on the same line as the typedef.

;

COMMENT
Line 440-442

Excessive blank lines following this change?

;

COMMENT
Line 638
@@ -571,6 +631,15 @@ void
ReorderBufferXidSetCatalogChanges(ReorderBuffer *, TransactionId xid,
XLog
bool ReorderBufferXidHasCatalogChanges(ReorderBuffer *, TransactionId xid);
bool ReorderBufferXidHasBaseSnapshot(ReorderBuffer *, TransactionId xid);

+bool ReorderBufferPrepareNeedSkip(ReorderBuffer *rb, TransactionId xid,
+ const char *gid);
+bool ReorderBufferTxnIsPrepared(ReorderBuffer *rb, TransactionId xid,
+ const char *gid);
+void ReorderBufferPrepare(ReorderBuffer *rb, TransactionId xid,
+ XLogRecPtr commit_lsn, XLogRecPtr end_lsn,
+ TimestampTz commit_time,
+ RepOriginId origin_id, XLogRecPtr origin_lsn,
+ char *gid);

Not aligned consistently with other function prototypes.

;

Updated

==========
Patch V6-0003, File: src/backend/access/transam/twophase.c
==========

COMMENT
Line 551
@@ -548,6 +548,37 @@ MarkAsPrepared(GlobalTransaction gxact, bool lock_held)
}

/*
+ * LookupGXact
+ * Check if the prepared transaction with the given GID is around
+ */
+bool
+LookupGXact(const char *gid)

There is potential to refactor/simplify this code:
e.g.

bool
LookupGXact(const char *gid)
{
int i;
bool found = false;

LWLockAcquire(TwoPhaseStateLock, LW_EXCLUSIVE);
for (i = 0; i < TwoPhaseState->numPrepXacts; i++)
{
GlobalTransaction gxact = TwoPhaseState->prepXacts[i];
/* Ignore not-yet-valid GIDs */
if (gxact->valid && strcmp(gxact->gid, gid) == 0)
{
found = true;
break;
}
}
LWLockRelease(TwoPhaseStateLock);
return found;
}

;

Updated accordingly.

==========
Patch V6-0003, File: src/backend/replication/logical/proto.c
==========

COMMENT
Line 86
@@ -72,12 +72,17 @@ logicalrep_read_begin(StringInfo in,
LogicalRepBeginData *begin_data)
*/
void
logicalrep_write_commit(StringInfo out, ReorderBufferTXN *txn,
- XLogRecPtr commit_lsn)

Since now the flags are used the code comment is wrong.
"/* send the flags field (unused for now) */"

;

COMMENT
Line 129
@ -106,6 +115,77 @@ logicalrep_read_commit(StringInfo in,
LogicalRepCommitData *commit_data)
}

/*
+ * Write PREPARE to the output stream.
+ */
+void
+logicalrep_write_prepare(StringInfo out, ReorderBufferTXN *txn,

"2PC transactions" --> "two-phase commit transactions"

;

Updated

COMMENT
Line 133

Assert(strlen(txn->gid) > 0);
Shouldn't that assertion also check txn->gid is not NULL (to prevent
NPE in case gid was NULL)

In this case txn->gid has to be non-NULL.

;

COMMENT
Line 177
+logicalrep_read_prepare(StringInfo in, LogicalRepPrepareData * prepare_data)

prepare_data->prepare_type = flags;
This code may be OK but it does seem a bit of an abuse of the flags.

e.g. Are they flags or are they really enum values?
e.g. And if they are effectively enums (it appears they are) then it
seems inconsistent that |= was used when they were previously
assigned.

;

I have not updated this as according to Amit this might require
refactoring again.

==========
Patch V6-0003, File: src/backend/replication/logical/worker.c
==========

COMMENT
Line 757
@@ -749,6 +753,141 @@ apply_handle_commit(StringInfo s)
pgstat_report_activity(STATE_IDLE, NULL);
}

+static void
+apply_handle_prepare_txn(LogicalRepPrepareData * prepare_data)
+{
+ Assert(prepare_data->prepare_lsn == remote_final_lsn);

Missing function comment to say this is called from apply_handle_prepare.

;

COMMENT
Line 798
+apply_handle_commit_prepared_txn(LogicalRepPrepareData * prepare_data)

Missing function comment to say this is called from apply_handle_prepare.

;

COMMENT
Line 824
+apply_handle_rollback_prepared_txn(LogicalRepPrepareData * prepare_data)

Missing function comment to say this is called from apply_handle_prepare.

Updated.

==========
Patch V6-0003, File: src/backend/replication/pgoutput/pgoutput.c
==========

COMMENT
Line 50
@@ -47,6 +47,12 @@ static void pgoutput_truncate(LogicalDecodingContext *ctx,
ReorderBufferChange *change);
static bool pgoutput_origin_filter(LogicalDecodingContext *ctx,
RepOriginId origin_id);
+static void pgoutput_prepare_txn(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr prepare_lsn);

The parameter indentation (2nd lines) does not match everything else
in this context.

;

COMMENT
Line 152
@@ -143,6 +149,10 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->change_cb = pgoutput_change;
cb->truncate_cb = pgoutput_truncate;
cb->commit_cb = pgoutput_commit_txn;
+
+ cb->prepare_cb = pgoutput_prepare_txn;
+ cb->commit_prepared_cb = pgoutput_commit_prepared_txn;
+ cb->abort_prepared_cb = pgoutput_abort_prepared_txn;

Remove the unnecessary blank line.

;

QUESTION
Line 386
@@ -373,7 +383,49 @@ pgoutput_commit_txn(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn,
OutputPluginUpdateProgress(ctx);

OutputPluginPrepareWrite(ctx, true);
- logicalrep_write_commit(ctx->out, txn, commit_lsn);
+ logicalrep_write_commit(ctx->out, txn, commit_lsn, true);

Is the is_commit parameter of logicalrep_write_commit ever passed as false?
If yes, where?
If no, the what is the point of it?

It was dead code from an earlier version. I have removed it, updated
accordingly.

;

COMMENT
Line 408
+pgoutput_commit_prepared_txn(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn,

Since all this function is identical to pg_output_prepare it might be
better to either
1. just leave this as a wrapper to delegate to that function
2. remove this one entirely and assign the callback to the common
pgoutput_prepare_txn

;

I have not changed this, as according to Amit this might require refactoring.

COMMENT
Line 419
+pgoutput_abort_prepared_txn(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,

Since all this function is identical to pg_output_prepare if might be
better to either
1. just leave this as a wrapper to delegate to that function
2. remove this one entirely and assign the callback to the common
pgoutput_prepare_tx

;

Same as above.

COMMENT
Line 419
+pgoutput_abort_prepared_txn(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,

Shouldn't this comment say be "ROLLBACK PREPARED"?

;

Updated.

==========
Patch V6-0003, File: src/include/replication/logicalproto.h
==========

QUESTION
Line 101
@@ -87,20 +87,55 @@ typedef struct LogicalRepBeginData
TransactionId xid;
} LogicalRepBeginData;

+/* Commit (and abort) information */

#define LOGICALREP_IS_ABORT 0x02
Is there a good reason why this is not called:
#define LOGICALREP_IS_ROLLBACK 0x02

;

Removed.

COMMENT
Line 105

((flags == LOGICALREP_IS_COMMIT) || (flags == LOGICALREP_IS_ABORT))

Macros would be safer if flags are in parentheses
(((flags) == LOGICALREP_IS_COMMIT) || ((flags) == LOGICALREP_IS_ABORT))

;

COMMENT
Line 115

Unexpected whitespace for the typedef
"} LogicalRepPrepareData;"

;

COMMENT
Line 122
/* prepare can be exactly one of PREPARE, [COMMIT|ABORT] PREPARED*/
#define PrepareFlagsAreValid(flags) \
((flags == LOGICALREP_IS_PREPARE) || \
(flags == LOGICALREP_IS_COMMIT_PREPARED) || \
(flags == LOGICALREP_IS_ROLLBACK_PREPARED))

There is confusing mixture in macros and comments of ABORT and ROLLBACK terms
"[COMMIT|ABORT] PREPARED" --> "[COMMIT|ROLLBACK] PREPARED"

~

Also, it would be safer if flags are in parentheses
(((flags) == LOGICALREP_IS_PREPARE) || \
((flags) == LOGICALREP_IS_COMMIT_PREPARED) || \
((flags) == LOGICALREP_IS_ROLLBACK_PREPARED))

;

updated.

==========
Patch V6-0003, File: src/test/subscription/t/020_twophase.pl
==========

COMMENT
Line 131 - # check inserts are visible

Isn't this supposed to be checking for rows 12 and 13, instead of 11 and 12?

;

Updated.

==========
Patch V6-0004, File: contrib/test_decoding/test_decoding.c
==========

COMMENT
Line 81
@@ -78,6 +78,15 @@ static void
pg_decode_stream_stop(LogicalDecodingContext *ctx,
static void pg_decode_stream_abort(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn,
XLogRecPtr abort_lsn);
+static void pg_decode_stream_prepare(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn,
+ XLogRecPtr commit_lsn);
+static

All these functions have a 3rd parameter called commit_lsn. Even
though the functions are not commit related. It seems like a cut/paste
error.

;

COMMENT
Line 142
@@ -130,6 +139,9 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->stream_start_cb = pg_decode_stream_start;
cb->stream_stop_cb = pg_decode_stream_stop;
cb->stream_abort_cb = pg_decode_stream_abort;
+ cb->stream_prepare_cb = pg_decode_stream_prepare;
+ cb->stream_commit_prepared_cb = pg_decode_stream_commit_prepared;
+ cb->stream_abort_prepared_cb = pg_decode_stream_abort_prepared;
cb->stream_commit_cb = pg_decode_stream_commit;

Can the "cb->stream_abort_prepared_cb" be changed to
"cb->stream_rollback_prepared_cb"?

;

COMMENT
Line 827
@@ -812,6 +824,78 @@ pg_decode_stream_abort(LogicalDecodingContext *ctx,
}

static void
+pg_decode_stream_prepare(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn,
+ XLogRecPtr commit_lsn)
+{
+ TestDecodingData *data = ctx->output_plugin_pr

The commit_lsn (3rd parameter) is unused and seems like a cut/paste name error.

;

COMMENT
Line 875
+pg_decode_stream_abort_prepared(LogicalDecodingContext *ctx,

The commit_lsn (3rd parameter) is unused and seems like a cut/paste name error.

;

Updated.

==========
Patch V6-0004, File: doc/src/sgml/logicaldecoding.sgml
==========

COMMENT
48.6.1
@@ -396,6 +396,9 @@ typedef struct OutputPluginCallbacks
LogicalDecodeStreamStartCB stream_start_cb;
LogicalDecodeStreamStopCB stream_stop_cb;
LogicalDecodeStreamAbortCB stream_abort_cb;
+ LogicalDecodeStreamPrepareCB stream_prepare_cb;
+ LogicalDecodeStreamCommitPreparedCB stream_commit_prepared_cb;
+ LogicalDecodeStreamAbortPreparedCB stream_abort_prepared_cb;

Same question from previous review comments - why using the
terminology "abort" instead of "rollback"

;

COMMENT
48.6.1
@@ -418,7 +421,9 @@ typedef void (*LogicalOutputPluginInit) (struct
OutputPluginCallbacks *cb);
in-progress transactions. The <function>stream_start_cb</function>,
<function>stream_stop_cb</function>, <function>stream_abort_cb</function>,
<function>stream_commit_cb</function> and <function>stream_change_cb</function>
- are required, while <function>stream_message_cb</function> and
+ are required, while <function>stream_message_cb</function>,
+ <function>stream_prepare_cb</function>,
<function>stream_commit_prepared_cb</function>,
+ <function>stream_abort_prepared_cb</function>,

Missing "and".
... "stream_abort_prepared_cb, stream_truncate_cb are optional." -->
"stream_abort_prepared_cb, and stream_truncate_cb are optional."

;

COMMENT
Section 48.6.4.16
Section 48.6.4.17
Section 48.6.4.18
@@ -839,6 +844,45 @@ typedef void (*LogicalDecodeStreamAbortCB)
(struct LogicalDecodingContext *ctx,
</para>
</sect3>

+ <sect3 id="logicaldecoding-output-plugin-stream-prepare">
+ <title>Stream Prepare Callback</title>
+ <para>
+ The <function>stream_prepare_cb</function> callback is called to prepare
+ a previously streamed transaction as part of a two phase commit.
+<programlisting>
+typedef void (*LogicalDecodeStreamPrepareCB) (struct
LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn,
+ XLogRecPtr abort_lsn);
+</programlisting>
+ </para>
+ </sect3>
+
+ <sect3 id="logicaldecoding-output-plugin-stream-commit-prepared">
+ <title>Stream Commit Prepared Callback</title>
+ <para>
+ The <function>stream_commit_prepared_cb</function> callback is
called to commit prepared
+ a previously streamed transaction as part of a two phase commit.
+<programlisting>
+typedef void (*LogicalDecodeStreamCommitPreparedCB) (struct
LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn,
+ XLogRecPtr abort_lsn);
+</programlisting>
+ </para>
+ </sect3>
+
+ <sect3 id="logicaldecoding-output-plugin-stream-abort-prepared">
+ <title>Stream Abort Prepared Callback</title>
+ <para>
+ The <function>stream_abort_prepared_cb</function> callback is called
to abort prepared
+ a previously streamed transaction as part of a two phase commit.
+<programlisting>
+typedef void (*LogicalDecodeStreamAbortPreparedCB) (struct
LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn,
+ XLogRecPtr abort_lsn);
+</programlisting>
+ </para>
+ </sect3>

1. Everywhere it says "two phase" commit should be consistently
replaced to say "two-phase" commit (with the hyphen)

2. Search for "abort_lsn" parameter. It seems to be overused
(cut/paste error) even when the API is unrelated to abort

3. 48.6.4.17 and 48.6.4.18
Is this wording ok? Is the word "prepared" even necessary here?
- "... called to commit prepared a previously streamed transaction ..."
- "... called to abort prepared a previously streamed transaction ..."

;

Updated accordingly.

COMMENT
Section 48.9
@@ -1017,9 +1061,13 @@ OutputPluginWrite(ctx, true);
When streaming an in-progress transaction, the changes (and messages) are
streamed in blocks demarcated by <function>stream_start_cb</function>
and <function>stream_stop_cb</function> callbacks. Once all the decoded
- changes are transmitted, the transaction is committed using the
- <function>stream_commit_cb</function> callback (or possibly aborted using
- the <function>stream_abort_cb</function> callback).
+ changes are transmitted, the transaction can be committed using the
+ the <function>stream_commit_cb</function> callback

"two phase" --> "two-phase"

~

Also, Missing period on end of sentence.
"or aborted using the stream_abort_prepared_cb" --> "or aborted using
the stream_abort_prepared_cb."

;

Updated accordingly.

==========
Patch V6-0004, File: src/backend/replication/logical/logical.c
==========

COMMENT
Line 84
@@ -81,6 +81,12 @@ static void stream_stop_cb_wrapper(ReorderBuffer
*cache, ReorderBufferTXN *txn,
XLogRecPtr last_lsn);
static void stream_abort_cb_wrapper(ReorderBuffer *cache,
ReorderBufferTXN *txn,
XLogRecPtr abort_lsn);
+static void stream_prepare_cb_wrapper(ReorderBuffer *cache,
ReorderBufferTXN *txn,
+ XLogRecPtr commit_lsn);
+static void stream_commit_prepared_cb_wrapper(ReorderBuffer *cache,
ReorderBufferTXN *txn,
+ XLogRecPtr commit_lsn);
+static void stream_abort_prepared_cb_wrapper(ReorderBuffer *cache,
ReorderBufferTXN *txn,
+ XLogRecPtr commit_lsn);

The 3rd parameter is always "commit_lsn" even for API unrelated to
commit, so seems like cut/paste error.

;

COMMENT
Line 1246
@@ -1231,6 +1243,105 @@ stream_abort_cb_wrapper(ReorderBuffer *cache,
ReorderBufferTXN *txn,
}

static void
+stream_prepare_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ XLogRecPtr commit_lsn)
+{
+ LogicalDecodingContext *ctx = cache->private_data;
+ LogicalErrorCallbackState state;

Misnamed parameter "commit_lsn" ?

~

Also, Line 1272
There seem to be some missing integrity checking to make sure the
callback is not NULL.
A null callback will give NPE when wrapper attempts to call it

;

COMMENT
Line 1305
+static void
+stream_commit_prepared_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,

There seem to be some missing integrity checking to make sure the
callback is not NULL.
A null callback will give NPE when wrapper attempts to call it.

;

COMMENT
Line 1312
+static void
+stream_abort_prepared_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,

Misnamed parameter "commit_lsn" ?

~

Also, Line 1338
There seem to be some missing integrity checking to make sure the
callback is not NULL.
A null callback will give NPE when wrapper attempts to call it.

Updated accordingly.

==========
Patch V6-0004, File: src/backend/replication/logical/reorderbuffer.c
==========

COMMENT
Line 2684
@@ -2672,15 +2681,31 @@ ReorderBufferFinishPrepared(ReorderBuffer *rb,
TransactionId xid,
txn->gid = palloc(strlen(gid) + 1); /* trailing '\0' */
strcpy(txn->gid, gid);

- if (is_commit)
+ if (rbtxn_is_streamed(txn))
{
- txn->txn_flags |= RBTXN_COMMIT_PREPARED;
- rb->commit_prepared(rb, txn, commit_lsn);
+ if (is_commit)
+ {
+ txn->txn_flags |= RBTXN_COMMIT_PREPARED;

The setting/checking of the flags could be refactored if you wanted to
write less code:
e.g.
if (is_commit)
txn->txn_flags |= RBTXN_COMMIT_PREPARED;
else
txn->txn_flags |= RBTXN_ROLLBACK_PREPARED;

if (rbtxn_is_streamed(txn) && rbtxn_commit_prepared(txn))
rb->stream_commit_prepared(rb, txn, commit_lsn);
else if (rbtxn_is_streamed(txn) && rbtxn_rollback_prepared(txn))
rb->stream_abort_prepared(rb, txn, commit_lsn);
else if (rbtxn_commit_prepared(txn))
rb->commit_prepared(rb, txn, commit_lsn);
else if (rbtxn_rollback_prepared(txn))
rb->abort_prepared(rb, txn, commit_lsn);

;

Updated accordingly.

==========
Patch V6-0004, File: src/include/replication/output_plugin.h
==========

COMMENT
Line 171
@@ -157,6 +157,33 @@ typedef void (*LogicalDecodeStreamAbortCB)
(struct LogicalDecodingContext *ctx,
XLogRecPtr abort_lsn);

/*
+ * Called to prepare changes streamed to remote node from in-progress
+ * transaction. This is called as part of a two-phase commit and only when
+ * two-phased commits are supported
+ */

1. Missing period all these comments.

2. Is the part that says "and only where two-phased commits are
supported" necessary to say? Is seems redundant since comments already
says called as part of a two-phase commit.

;

==========
Patch V6-0004, File: src/include/replication/reorderbuffer.h
==========

COMMENT
Line 467
@@ -466,6 +466,24 @@ typedef void (*ReorderBufferStreamAbortCB) (
ReorderBufferTXN *txn,
XLogRecPtr abort_lsn);

+/* prepare streamed transaction callback signature */
+typedef void (*ReorderBufferStreamPrepareCB) (
+ ReorderBuffer *rb,
+ ReorderBufferTXN *txn,
+ XLogRecPtr commit_lsn);
+
+/* prepare streamed transaction callback signature */
+typedef void (*ReorderBufferStreamCommitPreparedCB) (
+ ReorderBuffer *rb,
+ ReorderBufferTXN *txn,
+ XLogRecPtr commit_lsn);
+
+/* prepare streamed transaction callback signature */
+typedef void (*ReorderBufferStreamAbortPreparedCB) (
+ ReorderBuffer *rb,
+ ReorderBufferTXN *txn,
+ XLogRecPtr commit_lsn);

Cut/paste error - repeated same comment 3 times?

Updated accordingly.

[END]

I believe I have addressed all of Peter's comments. Peter, do have a
look and let me know if I missed anything or if you find anything
else. Thanks for your comments, much appreciated.

regards,
Ajin Cherian
Fujitsu Australia

Attachments:

v8-0001-Support-decoding-of-two-phase-transactions.patchapplication/octet-stream; name=v8-0001-Support-decoding-of-two-phase-transactions.patch
v8-0002-Tap-test-to-test-concurrent-aborts-during-2-phase.patchapplication/octet-stream; name=v8-0002-Tap-test-to-test-concurrent-aborts-during-2-phase.patch
v8-0004-Support-two-phase-commits-in-streaming-mode-in-lo.patchapplication/octet-stream; name=v8-0004-Support-two-phase-commits-in-streaming-mode-in-lo.patch
v8-0003-pgoutput-output-plugin-support-for-logical-decodi.patchapplication/octet-stream; name=v8-0003-pgoutput-output-plugin-support-for-logical-decodi.patch
#56Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Ajin Cherian (#55)

On Wed, Oct 14, 2020 at 6:15 PM Ajin Cherian <itsajin@gmail.com> wrote:

I think it will be easier to review this work if we can split the
patches according to the changes made in different layers. The first
patch could be changes made in output plugin and the corresponding
changes in test_decoding, see the similar commit of in-progress
transactions [1]. So you need to move corresponding changes from
v8-0001-Support-decoding-of-two-phase-transactions and
v8-0004-Support-two-phase-commits-in-streaming-mode-in-lo for this.
The second patch could be changes made in ReorderBuffer to support
this feature, see [2]. The third patch could be changes made to
support pgoutput and subscriber-side stuff, see [3]. What do you
think?

[1]: https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=45fdc9738b36d1068d3ad8fdb06436d6fd14436b
[2]: https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=7259736a6e5b7c7588fff9578370736a6648acbb
[3]: https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=464824323e57dc4b397e8b05854d779908b55304

--
With Regards,
Amit Kapila.

#57Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Amit Kapila (#56)
3 attachment(s)

On Thu, Oct 15, 2020 at 2:16 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Wed, Oct 14, 2020 at 6:15 PM Ajin Cherian <itsajin@gmail.com> wrote:

I think it will be easier to review this work if we can split the
patches according to the changes made in different layers. The first
patch could be changes made in output plugin and the corresponding
changes in test_decoding, see the similar commit of in-progress
transactions [1]. So you need to move corresponding changes from
v8-0001-Support-decoding-of-two-phase-transactions and
v8-0004-Support-two-phase-commits-in-streaming-mode-in-lo for this.
The second patch could be changes made in ReorderBuffer to support
this feature, see [2]. The third patch could be changes made to
support pgoutput and subscriber-side stuff, see [3]. What do you
think?

I agree. I have split the patches accordingly. Do have a look.
Pending work is:
1. Add pgoutput support for the new streaming two-phase commit APIs
2. Add test cases for two-phase commits with streaming for pub/sub and
test_decoding
3. Add CREATE SUBSCRIPTION command option to specify two-phase commits
rather than having it turned on by default.

regards,
Ajin Cherian
Fujitsu Australia

Attachments:

v9-0001-Support-decoding-of-two-phase-transactions.patchapplication/octet-stream; name=v9-0001-Support-decoding-of-two-phase-transactions.patch
v9-0003-pgoutput-plugin-support-for-logical-decoding-of-t.patchapplication/octet-stream; name=v9-0003-pgoutput-plugin-support-for-logical-decoding-of-t.patch
v9-0002-Backend-support-for-logical-decoding-of-two-phase.patchapplication/octet-stream; name=v9-0002-Backend-support-for-logical-decoding-of-two-phase.patch
#58Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Ajin Cherian (#57)
3 attachment(s)

Hello Ajin,

The v9 patches provided support for two-phase transactions for NON-streaming.

Now I have added STREAM support for two-phase transactions, and bumped
all patches to version v10.

(The 0001 and 0002 patches are unchanged. Only 0003 is changed).

--

There are a few TODO/FIXME comments in the code highlighting parts
needing some attention.

There is a #define DEBUG_STREAM_2PC useful for debugging, which I can
remove later.

All the patches have some whitespaces issues when applied. We can
resolve them as we go.

Please let me know any comments/feedback.

Kind Regards
Peter Smith.
Fujitsu Australia.

Attachments:

v10-0001-Support-2PC-txn-base.patchapplication/octet-stream; name=v10-0001-Support-2PC-txn-base.patch
v10-0002-Support-2PC-txn-backend-and-tests.patchapplication/octet-stream; name=v10-0002-Support-2PC-txn-backend-and-tests.patch
v10-0003-Support-2PC-txn-pgoutput.patchapplication/octet-stream; name=v10-0003-Support-2PC-txn-pgoutput.patch
#59Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Peter Smith (#58)

Hello Ajin.

I have gone through the v10 patches to verify if and how my previous
v6 review comments got addressed.

Some issues remain, and there are a few newly introduced ones.

Mostly it is all very minor stuff.

Please find my revised review comments below.

Kind Regards.
Peter Smith
Fujitsu Australia

---

V10 REVIEW COMMENTS FOLLOW

==========
Patch v10-0001, File: contrib/test_decoding/test_decoding.c
==========

COMMENT
Line 285
+ {
+ errno = 0;
+ data->check_xid_aborted = (TransactionId)
+ strtoul(strVal(elem->arg), NULL, 0);
+
+ if (!TransactionIdIsValid(data->check_xid_aborted))
+ ereport(ERROR,
+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ errmsg("check-xid-aborted is not a valid xid: \"%s\"",
+ strVal(elem->arg))));
+ }

I think it is risky to assign strtoul directly to the
check_xid_aborted member because it makes some internal assumption
that the invalid transaction is the same as the error return from
strtoul.

Maybe better to do in 2 steps like below:

BEFORE
errno = 0;
data->check_xid_aborted = (TransactionId)strtoul(strVal(elem->arg), NULL, 0);

AFTER
long xid;
errno = 0;
xid = strtoul(strVal(elem->arg), NULL, 0);
if (xid == 0 || errno != 0)
data->check_xid_aborted = InvalidTransactionId;
else
data->check_xid_aborted =(TransactionId)xid;

---

COMMENT
Line 430
+
+/* ABORT PREPARED callback */
+static void
+pg_decode_rollback_prepared_txn(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn,
+ XLogRecPtr abort_lsn)

Fix comment "ABORT PREPARED" --> "ROLLBACK PREPARED"

==========
Patch v10-0001, File: doc/src/sgml/logicaldecoding.sgml
==========

COMMENT
Section 48.6.1
Says:
An output plugin may also define functions to support streaming of
large, in-progress transactions. The stream_start_cb, stream_stop_cb,
stream_abort_cb, stream_commit_cb and stream_change_cb are required,
while stream_message_cb, stream_prepare_cb, stream_commit_prepared_cb,
stream_rollback_prepared_cb and stream_truncate_cb are optional.

An output plugin may also define functions to support two-phase
commits, which are decoded on PREPARE TRANSACTION. The prepare_cb,
commit_prepared_cb and rollback_prepared_cb callbacks are required,
while filter_prepare_cb is optional.

-

But is that correct? It seems strange/inconsistent to say that the 2PC
callbacks are mandatory for the non-streaming, but that they are
optional for streaming.

---

COMMENT
48.6.4.5 "Transaction Prepare Callback"
48.6.4.6 "Transaction Commit Prepared Callback"
48.6.4.7 "Transaction Rollback Prepared Callback"

There seems some confusion about what is optional and what is
mandatory. e.g. Why are the non-stream 2PC callbacks mandatory but the
stream 2PC callbacks are not? And also there is some inconsistency
with what is said in the paragraph at the top of the page versus what
each of the callback sections says wrt optional/mandatory.

The sub-sections 49.6.4.5, 49.6.4.6, 49.6.4.7 say those callbacks are
optional which IIUC Amit said is incorrect. This is similar to the
previous review comment

---

COMMENT
Section 48.6.4.7 "Transaction Rollback Prepared Callback"

parameter "abort_lsn" probably should be "rollback_lsn"

---

COMMENT
Section 49.6.4.18. "Stream Rollback Prepared Callback"
Says:
The stream_rollback_prepared_cb callback is called to abort a
previously streamed transaction as part of a two-phase commit.

maybe should say "is called to rollback"

==========
Patch v10-0001, File: src/backend/replication/logical/logical.c
==========

COMMENT
Line 252
Says: We however enable two phase logical...

"two phase" --> "two-phase"

--

COMMENT
Line 885
Line 923
Says: If the plugin support 2 phase commits...

"support 2 phase" --> "supports two-phase" in the comment. Same issue
occurs twice.

---

COMMENT
Line 830
Line 868
Line 906
Says:
/* We're only supposed to call this when two-phase commits are supported */

There is an extra space between the "are" and "supported" in the comment.
Same issue occurs 3 times.

---

COMMENT
Line 1023
+ /*
+ * Skip if decoding of two-phase at PREPARE time is not enabled. In that
+ * case all two-phase transactions are considered filtered out and will be
+ * applied as regular transactions at COMMIT PREPARED.
+ */

Comment still is missing the word "transactions"
"Skip if decoding of two-phase at PREPARE time is not enabled."
-> "Skip if decoding of two-phase transactions at PREPARE time is not enabled.

==========
Patch v10-0001, File: src/include/replication/reorderbuffer.h
==========

COMMENT
Line 459
/* abort prepared callback signature */
typedef void (*ReorderBufferRollbackPreparedCB) (
ReorderBuffer *rb,
ReorderBufferTXN *txn,
XLogRecPtr abort_lsn);

There is no alignment consistency here for
ReorderBufferRollbackPreparedCB. Some function args are directly under
the "(" and some are on the same line. This function code is neither.

---

COMMENT
Line 638
@@ -431,6 +486,24 @@ typedef void (*ReorderBufferStreamAbortCB) (
ReorderBufferTXN *txn,
XLogRecPtr abort_lsn);

+/* prepare streamed transaction callback signature */
+typedef void (*ReorderBufferStreamPrepareCB) (
+ ReorderBuffer *rb,
+ ReorderBufferTXN *txn,
+ XLogRecPtr prepare_lsn);
+
+/* prepare streamed transaction callback signature */
+typedef void (*ReorderBufferStreamCommitPreparedCB) (
+ ReorderBuffer *rb,
+ ReorderBufferTXN *txn,
+ XLogRecPtr commit_lsn);
+
+/* prepare streamed transaction callback signature */
+typedef void (*ReorderBufferStreamRollbackPreparedCB) (
+ ReorderBuffer *rb,
+ ReorderBufferTXN *txn,
+ XLogRecPtr rollback_lsn);

There is no inconsistent alignment with the arguments (compare how
other functions are aligned)

See:
- for ReorderBufferStreamCommitPreparedCB
- for ReorderBufferStreamRollbackPreparedCB
- for ReorderBufferPrepareNeedSkip
- for ReorderBufferTxnIsPrepared
- for ReorderBufferPrepare

---

COMMENT
Line 489
Line 495
Line 501
/* prepare streamed transaction callback signature */

Same comment cut/paste 3 times?
- for ReorderBufferStreamPrepareCB
- for ReorderBufferStreamCommitPreparedCB
- for ReorderBufferStreamRollbackPreparedCB

---

COMMENT
Line 457
/* abort prepared callback signature */
typedef void (*ReorderBufferRollbackPreparedCB) (
ReorderBuffer *rb,
ReorderBufferTXN *txn,
XLogRecPtr abort_lsn);

"abort" --> "rollback" in the function comment.

---

COMMENT
Line 269
/* In case of 2PC we need to pass GID to output plugin */

"2PC" --> "two-phase commit"

==========
Patch v10-0002, File: contrib/test_decoding/expected/two_phase.out (and .sql)
==========

COMMENT
General

It is a bit hard to see what are the main tests here are what are just
sub-parts of some test case.

e.g. It seems like the main tests are.

1. Test that decoding happens at PREPARE time
2. Test decoding of an aborted tx
3. Test a prepared tx which contains some DDL
4. Test decoding works while an uncommitted prepared tx with DDL exists
5. Test operations holding exclusive locks won't block decoding
6. Test savepoints and sub-transactions
7. Test "_nodecode" will defer the decoding until the commit time

Can the comments be made more obvious so it is easy to distinguish the
main tests from the steps of those tests?

---

COMMENT
Line 1
-- Test two-phased transactions, when two-phase-commit is enabled,
transactions are
-- decoded at PREPARE time rather than at COMMIT PREPARED time.

Some commas to be removed and this comment to be split into several sentences.

---

COMMENT
Line 19
-- should show nothing

Comment could be more informative. E.g. "Should show nothing because
the PREPARE has not happened yet"

---

COMMENT
Line 77

Looks like there is a missing comment about here that should say
something like "Show that the DDL does not appear in the decoding"

---

COMMENT
Line 160
-- test savepoints and sub-xacts as a result

The subsequent test is testing savepoints. But is it testing sub
transactions like the comment says?

==========
Patch v10-0002, File: contrib/test_decoding/t/001_twophase.pl
==========

COMMENT
General

I think basically there are only 2 tests in this file.
1. to check that the concurrent abort works.
2. to check that the prepared tx can span a server shutdown/restart

But the tests comments do not make this clear at all.
e.g. All the "#" comments look equally important although most of them
are just steps of each test case.
Can the comments be better to distinguish the tests versus the steps
of each test?

==========
Patch v10-0002, File: src/backend/replication/logical/decode.c
==========

COMMENT
Line 71
static void DecodeCommitPrepared(LogicalDecodingContext *ctx,
XLogRecordBuffer *buf,
xl_xact_parsed_commit *parsed, TransactionId xid);
static void DecodeAbort(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
xl_xact_parsed_abort *parsed, TransactionId xid);
static void DecodeAbortPrepared(LogicalDecodingContext *ctx,
XLogRecordBuffer *buf,
xl_xact_parsed_abort *parsed, TransactionId xid);
static void DecodePrepare(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
xl_xact_parsed_prepare * parsed);

The 2nd line or args are not aligned properly.
- for DecodeCommitPrepared
- for DecodeAbortPrepared
- for DecodePrepare

==========
Patch v10-0002, File: src/backend/replication/logical/reorderbuffer.c
==========

COMMENT
There are some parts of the code where in my v6 review I had a doubt
about the mutually exclusive treatment of the "streaming" flag and the
"rbtxn_prepared(txn)" state.

Basically I did not see how some parts of the code are treating NOT
streaming as implying 2PC etc because it defies my understanding that
2PC can also work in streaming mode. Perhaps the "streaming" flag has
a different meaning to how I interpret it? Or perhaps some functions
are guarding higher up and can only be called under certain
conditions?

Anyway, this confusion manifests in several parts of the code, none of
which was changed after my v6 review.

Affected code includes the following:

CASE 1
Wherever the ReorderBufferTruncateTXN(...) "prepared" flag (third
parameter) is hardwired true/false, I think there must be some
preceding Assert to guarantee the prepared state condition holds true.
There can't be any room for doubts like "but what will it do for
streamed 2PC..."
Line 1805 - ReorderBufferTruncateTXN(rb, txn, true); // if rbtxn_prepared(txn)
Line 1941 - ReorderBufferTruncateTXN(rb, txn, false); // state ??
Line 2389 - ReorderBufferTruncateTXN(rb, txn, false); // if streaming
Line 2396 - ReorderBufferTruncateTXN(rb, txn, true); // if not
streaming and if rbtxm_prepared(txn)
Line 2459 - ReorderBufferTruncateTXN(rb, txn, true); // if not streaming

~

CASE 2
Wherever the "streaming" flag is tested I don't really understand how
NOT streaming can automatically imply 2PC.
Line 2330 - if (streaming) // what about if it is streaming AND 2PC at
the same time?
Line 2387 - if (streaming) // what about if it is streaming AND 2PC at
the same time?
Line 2449 - if (streaming) // what about if it is streaming AND 2PC at
the same time?

~

Case 1 and Case 2 above overlap a fair bit. I just listed them so they
all get checked again.

Even if the code is thought to be currently OK I do still think
something should be done like:
a) add some more substantial comments to explain WHY the combination
of streaming and 2PC is not valid in the context
b) the Asserts to be strengthened to 100% guarantee that the streaming
and prepared states really are exclusive (if indeed they are). For
this point I thought the following Assert condition could be better:
Assert(streaming || rbtxn_prepared(txn));
Assert(stream_started || rbtxn_prepared(txn));
because as it is you still are left wondering if both streaming AND
rbtxn_prepared(txn) can be possible at the same time...

---

COMMENT
Line 2634
* Anyways, two-phase transactions do not contain any reorderbuffers.

"Anyways" --> "Anyway"

==========
Patch v10-0003, File: src/backend/access/transam/twophase.c
==========

COMMENT
Line 557
@@ -548,6 +548,33 @@ MarkAsPrepared(GlobalTransaction gxact, bool lock_held)
}

 /*
+ * LookupGXact
+ * Check if the prepared transaction with the given GID is around
+ */
+bool
+LookupGXact(const char *gid)
+{
+ int i;
+ bool found = false;

The variable declarations (i and found) are not aligned.

==========
Patch v10-0003, File: src/backend/replication/logical/proto.c
==========

COMMENT
Line 125
Line 205
Assert(strlen(txn->gid) > 0);

I suggested that the assertion should also check txn->gid is not NULL.
You replied "In this case txn->gid has to be non NULL".

But that is exactly what I said :-)
If it HAS to be non-NULL then why not just Assert that in code instead
of leaving the reader wondering?

"Assert(strlen(txn->gid) > 0);" --> "Assert(txn->gid && strlen(txn->gid) > 0);"
Same occurs several times.

---

COMMENT
Line 133
Line 213
if (rbtxn_commit_prepared(txn))
flags |= LOGICALREP_IS_COMMIT_PREPARED;
else if (rbtxn_rollback_prepared(txn))
flags |= LOGICALREP_IS_ROLLBACK_PREPARED;
else
flags |= LOGICALREP_IS_PREPARE;

Previously I wrote that the use of the bit flags on assignment in the
logicalrep_write_prepare was inconsistent with the way they are
treated when they are read. Really it should be using a direct
assignment instead of bit flags.

You said this is skipped anticipating a possible refactor. But IMO
this leaves the code in a half/half state. I think it is better to fix
it properly and if refactoring happens then deal with that at the
time.

The last comment I saw from Amit said to use my 1st proposal of direct
assignment instead of bit flag assignment.

(applies to both non-stream and stream functions)
- see logicalrep_write_prepare
- see logicalrep_write_stream_prepare

==========
Patch v10-0003, File: src/backend/replication/pgoutput/pgoutput.c
==========

COMMENT
Line 429
/*
* PREPARE callback
*/
static void
pgoutput_rollback_prepared_txn(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn,
XLogRecPtr prepare_lsn)
The function comment looks wrong.
Shouldn't this comment say be "ROLLBACK PREPARED callback"?

==========
Patch v10-0003, File: src/include/replication/logicalproto.h
==========

Line 115
#define PrepareFlagsAreValid(flags) \
((flags == LOGICALREP_IS_PREPARE) || \
(flags == LOGICALREP_IS_COMMIT_PREPARED) || \
(flags == LOGICALREP_IS_ROLLBACK_PREPARED))

Would be safer if all the references to flags are in parentheses
e.g. "flags" --> "(flags)"

[END]

#60Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Peter Smith (#58)

On Fri, Oct 16, 2020 at 5:21 PM Peter Smith <smithpb2250@gmail.com> wrote:

Hello Ajin,

The v9 patches provided support for two-phase transactions for NON-streaming.

Now I have added STREAM support for two-phase transactions, and bumped
all patches to version v10.

(The 0001 and 0002 patches are unchanged. Only 0003 is changed).

--

There are a few TODO/FIXME comments in the code highlighting parts
needing some attention.

There is a #define DEBUG_STREAM_2PC useful for debugging, which I can
remove later.

All the patches have some whitespaces issues when applied. We can
resolve them as we go.

Please let me know any comments/feedback.

Hi Peter,

Thanks for your patch. Some comments for your patch:

Comments:

src/backend/replication/logical/worker.c
@@ -888,6 +888,319 @@ apply_handle_prepare(StringInfo s)
+ /*
+ * FIXME - Following condition was in apply_handle_prepare_txn except
I found  it was ALWAYS IsTransactionState() == false
+ * The synchronization worker runs in single transaction. *
+ if (IsTransactionState() && !am_tablesync_worker())
+ */
+ if (!am_tablesync_worker())

Comment: I dont think a tablesync worker will use streaming, none of
the other stream APIs check this, this might not be relevant for
stream_prepare either.

+ /*
+ * ==================================================================================================
+ * The following chunk of code is largely cut/paste from the existing
apply_handle_prepare_commit_txn

Comment: Here, I think you meant apply_handle_stream_commit. Also
rather than duplicating this chunk of code, you could put it in a new
function.

+ /* open the spool file for the committed transaction */
+ changes_filename(path, MyLogicalRepWorker->subid, xid);

Comment: Here the comment should read "committed/prepared" rather than
"committed"

+ else
+ {
+ /* Process any invalidation messages that might have accumulated. */
+ AcceptInvalidationMessages();
+ maybe_reread_subscription();
+ }

Comment: This else block might not be necessary as a tablesync worker
will not initiate the streaming APIs.

+ BeginTransactionBlock();
+ CommitTransactionCommand();
+ StartTransactionCommand();

Comment: Rereading the code and the transaction state description in
src/backend/access/transam/README. I am not entirely sure if the
BeginTransactionBlock followed by CommitTransactionBlock is really
needed here.
I understand this code was copied over from apply_handle_prepare_txn,
but now looking back I'm not so sure if it is correct. The transaction
would have already begin as part of applying the changes, why begin it
again?
Maybe Amit could confirm this.

END

regards,
Ajin Cherian
Fujitsu Australia

#61Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Ajin Cherian (#60)

The PG docs for PREPARE TRANSACTION [1] don't say anything about an
empty (zero length) transaction-id.
e.g. PREPARE TRANSACTION '';
[1]: https://www.postgresql.org/docs/current/sql-prepare-transaction.html

~

Meanwhile, during testing I found the 2PC prepare hangs when an empty
id is used.

Now I am not sure does this represent some bug within the 2PC code, or
in fact should the PREPARE never have allowed an empty transaction-id
to be specified in the first place?

Thoughts?

Kind Regards
Peter Smith.
Fujitsu Australia.

#62Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Ajin Cherian (#60)

On Tue, Oct 20, 2020 at 4:32 PM Ajin Cherian <itsajin@gmail.com> wrote:

On Fri, Oct 16, 2020 at 5:21 PM Peter Smith <smithpb2250@gmail.com> wrote:

Comments:

src/backend/replication/logical/worker.c
@@ -888,6 +888,319 @@ apply_handle_prepare(StringInfo s)
+ /*
+ * FIXME - Following condition was in apply_handle_prepare_txn except
I found  it was ALWAYS IsTransactionState() == false
+ * The synchronization worker runs in single transaction. *
+ if (IsTransactionState() && !am_tablesync_worker())
+ */
+ if (!am_tablesync_worker())

Comment: I dont think a tablesync worker will use streaming, none of
the other stream APIs check this, this might not be relevant for
stream_prepare either.

Yes, I think this is right. See pgoutput_startup where we are
disabling the streaming for init phase. But it is always good to once
test this and ensure the same.

+ /*
+ * ==================================================================================================
+ * The following chunk of code is largely cut/paste from the existing
apply_handle_prepare_commit_txn

Comment: Here, I think you meant apply_handle_stream_commit. Also
rather than duplicating this chunk of code, you could put it in a new
function.

+ /* open the spool file for the committed transaction */
+ changes_filename(path, MyLogicalRepWorker->subid, xid);

Comment: Here the comment should read "committed/prepared" rather than
"committed"

+ else
+ {
+ /* Process any invalidation messages that might have accumulated. */
+ AcceptInvalidationMessages();
+ maybe_reread_subscription();
+ }

Comment: This else block might not be necessary as a tablesync worker
will not initiate the streaming APIs.

I think it is better to have an Assert here for streaming-mode?

+ BeginTransactionBlock();
+ CommitTransactionCommand();
+ StartTransactionCommand();

Comment: Rereading the code and the transaction state description in
src/backend/access/transam/README. I am not entirely sure if the
BeginTransactionBlock followed by CommitTransactionBlock is really
needed here.

Yeah, I also find this strange. I guess the patch is doing so because
it needs to call PrepareTransactionBlock later but I am not sure. How
can we call CommitTransactionCommand()? Won't it commit the on-going
transaction and make it visible before it is even visible on the
publisher? I think you can verify by having a breakpoint after
CommitTransactionCommand() and see if the changes for which we are
doing prepare become visible.

I understand this code was copied over from apply_handle_prepare_txn,
but now looking back I'm not so sure it is correct. The transaction
would have already begun as part of applying the changes, so why begin
it again?
Maybe Amit could confirm this.

I hope the above suggestions will help to proceed here.

--
With Regards,
Amit Kapila.

#63Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Peter Smith (#61)

On Wed, Oct 21, 2020 at 1:38 PM Peter Smith <smithpb2250@gmail.com> wrote:

The PG docs for PREPARE TRANSACTION [1] don't say anything about an
empty (zero length) transaction-id.
e.g. PREPARE TRANSACTION '';
[1] https://www.postgresql.org/docs/current/sql-prepare-transaction.html

~

Meanwhile, during testing I found the 2PC prepare hangs when an empty
id is used.

Can you please take an example to explain what you are trying to say?
I have tried the below and don't face any problem:

postgres=# Begin;
BEGIN
postgres=*# select txid_current();
txid_current
--------------
534
(1 row)
postgres=*# Prepare Transaction 'foo';
PREPARE TRANSACTION
postgres=# Commit Prepared 'foo';
COMMIT PREPARED
postgres=# Begin;
BEGIN
postgres=*# Prepare Transaction 'foo';
PREPARE TRANSACTION
postgres=# Commit Prepared 'foo';
COMMIT PREPARED

--
With Regards,
Amit Kapila.

#64Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Peter Smith (#59)

On Tue, Oct 20, 2020 at 9:46 AM Peter Smith <smithpb2250@gmail.com> wrote:

==========
Patch v10-0002, File: src/backend/replication/logical/reorderbuffer.c
==========

COMMENT
There are some parts of the code where in my v6 review I had a doubt
about the mutually exclusive treatment of the "streaming" flag and the
"rbtxn_prepared(txn)" state.

I am not sure about the exact specifics here but we can always prepare
a transaction that is streamed. I have to raise one more point in this
regard. Why do we need stream_commit_prepared_cb,
stream_rollback_prepared_cb callbacks? Do we need to do something
separate in pgoutput or otherwise for these APIs? If not, can't we use
a non-stream version of these APIs instead? There appears to be a
use-case for stream_prepare_cb which is to apply the existing changes
on the subscriber and call prepare, but I can't see a use case for the other
two APIs.

One minor comment:
v10-0001-Support-2PC-txn-base

1.
@@ -574,6 +655,11 @@ void ReorderBufferQueueMessage(ReorderBuffer *,
TransactionId, Snapshot snapsho
 void ReorderBufferCommit(ReorderBuffer *, TransactionId,
  XLogRecPtr commit_lsn, XLogRecPtr end_lsn,
  TimestampTz commit_time, RepOriginId origin_id, XLogRecPtr origin_lsn);
+void ReorderBufferFinishPrepared(ReorderBuffer *rb, TransactionId xid,
+                           XLogRecPtr commit_lsn, XLogRecPtr end_lsn,
+                           TimestampTz commit_time,
+                           RepOriginId origin_id, XLogRecPtr origin_lsn,
+                           char *gid, bool is_commit);
 void ReorderBufferAssignChild(ReorderBuffer *, TransactionId,
TransactionId, XLogRecPtr commit_lsn);
 void ReorderBufferCommitChild(ReorderBuffer *, TransactionId, TransactionId,
  XLogRecPtr commit_lsn, XLogRecPtr end_lsn);
@@ -597,6 +683,15 @@ void
ReorderBufferXidSetCatalogChanges(ReorderBuffer *, TransactionId xid,
XLog
 bool ReorderBufferXidHasCatalogChanges(ReorderBuffer *, TransactionId xid);
 bool ReorderBufferXidHasBaseSnapshot(ReorderBuffer *, TransactionId xid);
+bool ReorderBufferPrepareNeedSkip(ReorderBuffer *rb, TransactionId xid,
+ const char *gid);
+bool ReorderBufferTxnIsPrepared(ReorderBuffer *rb, TransactionId xid,
+    const char *gid);
+void ReorderBufferPrepare(ReorderBuffer *rb, TransactionId xid,
+ XLogRecPtr commit_lsn, XLogRecPtr end_lsn,
+ TimestampTz commit_time,
+ RepOriginId origin_id, XLogRecPtr origin_lsn,
+ char *gid);
 ReorderBufferTXN *ReorderBufferGetOldestTXN(ReorderBuf

I don't think these changes belong to this patch as the definition of
these functions is not part of this patch.

--
With Regards,
Amit Kapila.

#65Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Amit Kapila (#63)

On Wed, Oct 21, 2020 at 7:42 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Wed, Oct 21, 2020 at 1:38 PM Peter Smith <smithpb2250@gmail.com> wrote:

The PG docs for PREPARE TRANSACTION [1] don't say anything about an
empty (zero length) transaction-id.
e.g. PREPARE TRANSACTION '';
[1] https://www.postgresql.org/docs/current/sql-prepare-transaction.html

~

Meanwhile, during testing I found the 2PC prepare hangs when an empty
id is used.

Can you please take an example to explain what you are trying to say?

I was referring to an empty (zero length) transaction ID, not an empty
transaction.

The example was already given as PREPARE TRANSACTION '';

A longer example from my regress test is shown below. Using 2PC
pub/sub this will currently hang:

# --------------------
# Test using empty GID
# --------------------
# check that 2PC gets replicated to subscriber
$node_publisher->safe_psql('postgres',
"BEGIN;INSERT INTO tab_full VALUES (51);PREPARE TRANSACTION '';");
$node_publisher->poll_query_until('postgres', $caughtup_query)
or die "Timed out while waiting for subscriber to catch up";
# check that transaction is in prepared state on subscriber
$result =
$node_subscriber->safe_psql('postgres', "SELECT count(*) FROM
pg_prepared_xacts where gid = '';");
is($result, qq(1), 'transaction is prepared on subscriber');
# ROLLBACK
$node_publisher->safe_psql('postgres',
"ROLLBACK PREPARED '';");
# check that 2PC gets aborted on subscriber
$node_publisher->poll_query_until('postgres', $caughtup_query)
or die "Timed out while waiting for subscriber to catch up";
$result =
$node_subscriber->safe_psql('postgres', "SELECT count(*) FROM
pg_prepared_xacts where gid = '';");
is($result, qq(0), 'transaction is aborted on subscriber');

~

Is that something that should be made to work for 2PC pub/sub, or was
Postgres PREPARE TRANSACTION statement wrong to allow the user to
specify an empty transaction ID in the first place?

Kind Regards
Peter Smith.
Fujitsu Australia.

#66Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Peter Smith (#65)

On Thu, Oct 22, 2020 at 4:58 AM Peter Smith <smithpb2250@gmail.com> wrote:

On Wed, Oct 21, 2020 at 7:42 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Wed, Oct 21, 2020 at 1:38 PM Peter Smith <smithpb2250@gmail.com> wrote:

The PG docs for PREPARE TRANSACTION [1] don't say anything about an
empty (zero length) transaction-id.
e.g. PREPARE TRANSACTION '';
[1] https://www.postgresql.org/docs/current/sql-prepare-transaction.html

~

Meanwhile, during testing I found the 2PC prepare hangs when an empty
id is used.

Can you please take an example to explain what you are trying to say?

I was referring to an empty (zero length) transaction ID, not an empty
transaction.

oh, I got it confused with the system-generated 32-bit TransactionId.
But now, I got what you were referring to.

The example was already given as PREPARE TRANSACTION '';

Is that something that should be made to work for 2PC pub/sub, or was
Postgres PREPARE TRANSACTION statement wrong to allow the user to
specify an empty transaction ID in the first place?

I don't see any problem with the empty transaction identifier used in
Prepare Transaction. This is just used as an identifier to uniquely
identify the transaction. If you try to use an empty string ('') more
than once for Prepare Transaction, it will give an error like below:
postgres=*# prepare transaction '';
ERROR: transaction identifier "" is already in use

So, I think this should work for pub/sub as well. Did you find out the
reason for the hang?

--
With Regards,
Amit Kapila.

#67Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Peter Smith (#59)
3 attachment(s)

On Tue, Oct 20, 2020 at 3:15 PM Peter Smith <smithpb2250@gmail.com> wrote:

Hello Ajin.

I have gone through the v10 patches to verify if and how my previous
v6 review comments got addressed.

Some issues remain, and there are a few newly introduced ones.

Mostly it is all very minor stuff.

Please find my revised review comments below.

Kind Regards.
Peter Smith
Fujitsu Australia

---

V10 REVIEW COMMENTS FOLLOW

==========
Patch v10-0001, File: contrib/test_decoding/test_decoding.c
==========

COMMENT
Line 285
+ {
+ errno = 0;
+ data->check_xid_aborted = (TransactionId)
+ strtoul(strVal(elem->arg), NULL, 0);
+
+ if (!TransactionIdIsValid(data->check_xid_aborted))
+ ereport(ERROR,
+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ errmsg("check-xid-aborted is not a valid xid: \"%s\"",
+ strVal(elem->arg))));
+ }

I think it is risky to assign strtoul directly to the
check_xid_aborted member because it makes an internal assumption
that InvalidTransactionId is the same as the error return from
strtoul.

Maybe better to do in 2 steps like below:

BEFORE
errno = 0;
data->check_xid_aborted = (TransactionId)strtoul(strVal(elem->arg), NULL, 0);

AFTER
long xid;
errno = 0;
xid = strtoul(strVal(elem->arg), NULL, 0);
if (xid == 0 || errno != 0)
data->check_xid_aborted = InvalidTransactionId;
else
data->check_xid_aborted =(TransactionId)xid;
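
As an aside, checking strtoul's endptr output would also reject trailing
junk. A minimal standalone sketch of this parsing pattern (the parse_xid
helper here is hypothetical, not the actual test_decoding code):

```c
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

typedef uint32_t TransactionId;
#define InvalidTransactionId ((TransactionId) 0)

/* Parse an xid string; return InvalidTransactionId on any error. */
static TransactionId
parse_xid(const char *str)
{
	char	   *endptr;
	unsigned long xid;

	errno = 0;
	xid = strtoul(str, &endptr, 0);

	/* reject empty input, trailing junk, overflow, and xid 0 (invalid) */
	if (endptr == str || *endptr != '\0' || errno != 0 ||
		xid == 0 || xid > UINT32_MAX)
		return InvalidTransactionId;

	return (TransactionId) xid;
}
```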

---

Updated accordingly.

COMMENT
Line 430
+
+/* ABORT PREPARED callback */
+static void
+pg_decode_rollback_prepared_txn(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn,
+ XLogRecPtr abort_lsn)

Fix comment "ABORT PREPARED" --> "ROLLBACK PREPARED"

Updated accordingly.

==========
Patch v10-0001, File: doc/src/sgml/logicaldecoding.sgml
==========

COMMENT
Section 48.6.1
Says:
An output plugin may also define functions to support streaming of
large, in-progress transactions. The stream_start_cb, stream_stop_cb,
stream_abort_cb, stream_commit_cb and stream_change_cb are required,
while stream_message_cb, stream_prepare_cb, stream_commit_prepared_cb,
stream_rollback_prepared_cb and stream_truncate_cb are optional.

An output plugin may also define functions to support two-phase
commits, which are decoded on PREPARE TRANSACTION. The prepare_cb,
commit_prepared_cb and rollback_prepared_cb callbacks are required,
while filter_prepare_cb is optional.

-

But is that correct? It seems strange/inconsistent to say that the 2PC
callbacks are mandatory for the non-streaming case, but that they are
optional for streaming.

Updated making all the 2PC callbacks mandatory.

---

COMMENT
48.6.4.5 "Transaction Prepare Callback"
48.6.4.6 "Transaction Commit Prepared Callback"
48.6.4.7 "Transaction Rollback Prepared Callback"

There seems some confusion about what is optional and what is
mandatory. e.g. Why are the non-stream 2PC callbacks mandatory but the
stream 2PC callbacks are not? And also there is some inconsistency
with what is said in the paragraph at the top of the page versus what
each of the callback sections says wrt optional/mandatory.

The sub-sections 49.6.4.5, 49.6.4.6, 49.6.4.7 say those callbacks are
optional which IIUC Amit said is incorrect. This is similar to the
previous review comment

---

Updated making all the 2PC callbacks mandatory.

COMMENT
Section 48.6.4.7 "Transaction Rollback Prepared Callback"

parameter "abort_lsn" probably should be "rollback_lsn"

---

COMMENT
Section 49.6.4.18. "Stream Rollback Prepared Callback"
Says:
The stream_rollback_prepared_cb callback is called to abort a
previously streamed transaction as part of a two-phase commit.

maybe should say "is called to rollback"

==========
Patch v10-0001, File: src/backend/replication/logical/logical.c
==========

COMMENT
Line 252
Says: We however enable two phase logical...

"two phase" --> "two-phase"

--

COMMENT
Line 885
Line 923
Says: If the plugin support 2 phase commits...

"support 2 phase" --> "supports two-phase" in the comment. Same issue
occurs twice.

---

COMMENT
Line 830
Line 868
Line 906
Says:
/* We're only supposed to call this when two-phase commits are supported */

There is an extra space between the "are" and "supported" in the comment.
Same issue occurs 3 times.

---

COMMENT
Line 1023
+ /*
+ * Skip if decoding of two-phase at PREPARE time is not enabled. In that
+ * case all two-phase transactions are considered filtered out and will be
+ * applied as regular transactions at COMMIT PREPARED.
+ */

Comment still is missing the word "transactions"
"Skip if decoding of two-phase at PREPARE time is not enabled."
-> "Skip if decoding of two-phase transactions at PREPARE time is not enabled.

Updated accordingly.

==========
Patch v10-0001, File: src/include/replication/reorderbuffer.h
==========

COMMENT
Line 459
/* abort prepared callback signature */
typedef void (*ReorderBufferRollbackPreparedCB) (
ReorderBuffer *rb,
ReorderBufferTXN *txn,
XLogRecPtr abort_lsn);

There is no alignment consistency here for
ReorderBufferRollbackPreparedCB. Some function args are directly under
the "(" and some are on the same line. This function code is neither.

---

COMMENT
Line 638
@@ -431,6 +486,24 @@ typedef void (*ReorderBufferStreamAbortCB) (
ReorderBufferTXN *txn,
XLogRecPtr abort_lsn);

+/* prepare streamed transaction callback signature */
+typedef void (*ReorderBufferStreamPrepareCB) (
+ ReorderBuffer *rb,
+ ReorderBufferTXN *txn,
+ XLogRecPtr prepare_lsn);
+
+/* prepare streamed transaction callback signature */
+typedef void (*ReorderBufferStreamCommitPreparedCB) (
+ ReorderBuffer *rb,
+ ReorderBufferTXN *txn,
+ XLogRecPtr commit_lsn);
+
+/* prepare streamed transaction callback signature */
+typedef void (*ReorderBufferStreamRollbackPreparedCB) (
+ ReorderBuffer *rb,
+ ReorderBufferTXN *txn,
+ XLogRecPtr rollback_lsn);

There is inconsistent alignment of the arguments (compare how
other functions are aligned).

See:
- for ReorderBufferStreamCommitPreparedCB
- for ReorderBufferStreamRollbackPreparedCB
- for ReorderBufferPrepareNeedSkip
- for ReorderBufferTxnIsPrepared
- for ReorderBufferPrepare

---

COMMENT
Line 489
Line 495
Line 501
/* prepare streamed transaction callback signature */

Same comment cut/paste 3 times?
- for ReorderBufferStreamPrepareCB
- for ReorderBufferStreamCommitPreparedCB
- for ReorderBufferStreamRollbackPreparedCB

---

COMMENT
Line 457
/* abort prepared callback signature */
typedef void (*ReorderBufferRollbackPreparedCB) (
ReorderBuffer *rb,
ReorderBufferTXN *txn,
XLogRecPtr abort_lsn);

"abort" --> "rollback" in the function comment.

---

COMMENT
Line 269
/* In case of 2PC we need to pass GID to output plugin */

"2PC" --> "two-phase commit"

Updated accordingly.

==========
Patch v10-0002, File: contrib/test_decoding/expected/two_phase.out (and .sql)
==========

COMMENT
General

It is a bit hard to see which are the main tests here and which are just
sub-parts of some test case.

e.g. It seems like the main tests are.

1. Test that decoding happens at PREPARE time
2. Test decoding of an aborted tx
3. Test a prepared tx which contains some DDL
4. Test decoding works while an uncommitted prepared tx with DDL exists
5. Test operations holding exclusive locks won't block decoding
6. Test savepoints and sub-transactions
7. Test "_nodecode" will defer the decoding until the commit time

Can the comments be made more obvious so it is easy to distinguish the
main tests from the steps of those tests?

---

COMMENT
Line 1
-- Test two-phased transactions, when two-phase-commit is enabled,
transactions are
-- decoded at PREPARE time rather than at COMMIT PREPARED time.

Some commas to be removed and this comment to be split into several sentences.

---

COMMENT
Line 19
-- should show nothing

Comment could be more informative. E.g. "Should show nothing because
the PREPARE has not happened yet"

---

COMMENT
Line 77

Looks like there is a missing comment about here that should say
something like "Show that the DDL does not appear in the decoding"

---

COMMENT
Line 160
-- test savepoints and sub-xacts as a result

The subsequent test is testing savepoints. But is it testing
sub-transactions like the comment says?

Updated accordingly.

==========
Patch v10-0002, File: contrib/test_decoding/t/001_twophase.pl
==========

COMMENT
General

I think basically there are only 2 tests in this file.
1. to check that the concurrent abort works.
2. to check that the prepared tx can span a server shutdown/restart

But the tests comments do not make this clear at all.
e.g. All the "#" comments look equally important although most of them
are just steps of each test case.
Can the comments be better to distinguish the tests versus the steps
of each test?

Updated accordingly.

==========
Patch v10-0002, File: src/backend/replication/logical/decode.c
==========

COMMENT
Line 71
static void DecodeCommitPrepared(LogicalDecodingContext *ctx,
XLogRecordBuffer *buf,
xl_xact_parsed_commit *parsed, TransactionId xid);
static void DecodeAbort(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
xl_xact_parsed_abort *parsed, TransactionId xid);
static void DecodeAbortPrepared(LogicalDecodingContext *ctx,
XLogRecordBuffer *buf,
xl_xact_parsed_abort *parsed, TransactionId xid);
static void DecodePrepare(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
xl_xact_parsed_prepare * parsed);

The 2nd line or args are not aligned properly.
- for DecodeCommitPrepared
- for DecodeAbortPrepared
- for DecodePrepare

Updated accordingly.

==========
Patch v10-0002, File: src/backend/replication/logical/reorderbuffer.c
==========

COMMENT
There are some parts of the code where in my v6 review I had a doubt
about the mutually exclusive treatment of the "streaming" flag and the
"rbtxn_prepared(txn)" state.

Basically I did not see how some parts of the code can treat NOT
streaming as implying 2PC etc., because my understanding is that
2PC can also work in streaming mode. Perhaps the "streaming" flag has
a different meaning to how I interpret it? Or perhaps some functions
are guarding higher up and can only be called under certain
conditions?

Anyway, this confusion manifests in several parts of the code, none of
which was changed after my v6 review.

Affected code includes the following:

CASE 1
Wherever the ReorderBufferTruncateTXN(...) "prepared" flag (third
parameter) is hardwired true/false, I think there must be some
preceding Assert to guarantee the prepared state condition holds true.
There can't be any room for doubts like "but what will it do for
streamed 2PC..."
Line 1805 - ReorderBufferTruncateTXN(rb, txn, true); // if rbtxn_prepared(txn)
Line 1941 - ReorderBufferTruncateTXN(rb, txn, false); // state ??
Line 2389 - ReorderBufferTruncateTXN(rb, txn, false); // if streaming
Line 2396 - ReorderBufferTruncateTXN(rb, txn, true); // if not
streaming and if rbtxm_prepared(txn)
Line 2459 - ReorderBufferTruncateTXN(rb, txn, true); // if not streaming

~

CASE 2
Wherever the "streaming" flag is tested I don't really understand how
NOT streaming can automatically imply 2PC.
Line 2330 - if (streaming) // what about if it is streaming AND 2PC at
the same time?
Line 2387 - if (streaming) // what about if it is streaming AND 2PC at
the same time?
Line 2449 - if (streaming) // what about if it is streaming AND 2PC at
the same time?

~

Case 1 and Case 2 above overlap a fair bit. I just listed them so they
all get checked again.

Even if the code is thought to be currently OK I do still think
something should be done like:
a) add some more substantial comments to explain WHY the combination
of streaming and 2PC is not valid in the context
b) the Asserts to be strengthened to 100% guarantee that the streaming
and prepared states really are exclusive (if indeed they are). For
this point I thought the following Assert condition could be better:
Assert(streaming || rbtxn_prepared(txn));
Assert(stream_started || rbtxn_prepared(txn));
because as it is you still are left wondering if both streaming AND
rbtxn_prepared(txn) can be possible at the same time...
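
To make the intended exclusivity explicit, the invariant could be
asserted as "exactly one of the two holds", sketched here with
stand-in booleans rather than the real reorderbuffer state:

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-ins for the real "streaming" flag and rbtxn_prepared(txn) state. */
static bool streaming;
static bool txn_prepared;

/* Exactly one of the two states must hold at this point. */
static void
check_invariant(void)
{
	assert(streaming || txn_prepared);		/* at least one holds */
	assert(!(streaming && txn_prepared));	/* ...but never both */
}
```

The second Assert is what rules out the "both at once" case that the
plain disjunction alone leaves open.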

---

Updated with more comments and a new Assert.

COMMENT
Line 2634
* Anyways, two-phase transactions do not contain any reorderbuffers.

"Anyways" --> "Anyway"

Updated.

==========
Patch v10-0003, File: src/backend/access/transam/twophase.c
==========

COMMENT
Line 557
@@ -548,6 +548,33 @@ MarkAsPrepared(GlobalTransaction gxact, bool lock_held)
}

/*
+ * LookupGXact
+ * Check if the prepared transaction with the given GID is around
+ */
+bool
+LookupGXact(const char *gid)
+{
+ int i;
+ bool found = false;

The variable declarations (i and found) are not aligned.

Updated.

==========
Patch v10-0003, File: src/backend/replication/logical/proto.c
==========

COMMENT
Line 125
Line 205
Assert(strlen(txn->gid) > 0);

I suggested that the assertion should also check txn->gid is not NULL.
You replied "In this case txn->gid has to be non NULL".

But that is exactly what I said :-)
If it HAS to be non-NULL then why not just Assert that in code instead
of leaving the reader wondering?

"Assert(strlen(txn->gid) > 0);" --> "Assert(tdx->gid && strlen(txn->gid) > 0);"
Same occurs several times.

---

Updated to check that gid is non-NULL, as zero strlen is actually a valid case.

COMMENT
Line 133
Line 213
if (rbtxn_commit_prepared(txn))
flags |= LOGICALREP_IS_COMMIT_PREPARED;
else if (rbtxn_rollback_prepared(txn))
flags |= LOGICALREP_IS_ROLLBACK_PREPARED;
else
flags |= LOGICALREP_IS_PREPARE;

Previously I wrote that the use of the bit flags on assignment in the
logicalrep_write_prepare was inconsistent with the way they are
treated when they are read. Really it should be using a direct
assignment instead of bit flags.

You said this is skipped anticipating a possible refactor. But IMO
this leaves the code in a half/half state. I think it is better to fix
it properly and if refactoring happens then deal with that at the
time.

The last comment I saw from Amit said to use my 1st proposal of direct
assignment instead of bit flag assignment.

(applies to both non-stream and stream functions)
- see logicalrep_write_prepare
- see logicalrep_write_stream_prepare

Updated accordingly.

==========
Patch v10-0003, File: src/backend/replication/pgoutput/pgoutput.c
==========

COMMENT
Line 429
/*
* PREPARE callback
*/
static void
pgoutput_rollback_prepared_txn(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn,
XLogRecPtr prepare_lsn)
The function comment looks wrong.
Shouldn't this comment say be "ROLLBACK PREPARED callback"?

==========
Patch v10-0003, File: src/include/replication/logicalproto.h
==========

Line 115
#define PrepareFlagsAreValid(flags) \
((flags == LOGICALREP_IS_PREPARE) || \
(flags == LOGICALREP_IS_COMMIT_PREPARED) || \
(flags == LOGICALREP_IS_ROLLBACK_PREPARED))

Would be safer if all the references to flags are in parentheses
e.g. "flags" --> "(flags)"

Updated accordingly.

Amit,
I have also modified the stream callback APIs to not include
stream_commit_prepared and stream_rollback_prepared, and instead use the
non-stream APIs for the same functionality.
I have also updated the test_decoding and pgoutput plugins accordingly.

regards,
Ajin Cherian
Fujitsu Australia

Attachments:

v11-0003-Support-2PC-txn-pgoutput.patchapplication/octet-stream; name=v11-0003-Support-2PC-txn-pgoutput.patch
v11-0001-Support-2PC-txn-base.patchapplication/octet-stream; name=v11-0001-Support-2PC-txn-base.patch
v11-0002-Support-2PC-txn-backend-and-tests.patchapplication/octet-stream; name=v11-0002-Support-2PC-txn-backend-and-tests.patch
#68Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Ajin Cherian (#67)

On Fri, Oct 23, 2020 at 3:41 PM Ajin Cherian <itsajin@gmail.com> wrote:

Amit,
I have also modified the stream callback APIs to not include
stream_commit_prepared and stream_rollback_prepared, and instead use the
non-stream APIs for the same functionality.
I have also updated the test_decoding and pgoutput plugins accordingly.

Thanks, I think you forgot to address one of my comments in the
previous email[1] (See "One minor comment .."). You have not even
responded to it.

[1]: /messages/by-id/CAA4eK1JzRvUX2XLEKo2f74Vjecnt6wq-kkk1OiyMJ5XjJN+GvQ@mail.gmail.com

--
With Regards,
Amit Kapila.

#69Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Ajin Cherian (#60)
3 attachment(s)

Hi Ajin.

I've addressed your review comments (details below) and bumped the
patch set to v12 attached.

I also added more test cases.

On Tue, Oct 20, 2020 at 10:02 PM Ajin Cherian <itsajin@gmail.com> wrote:

Thanks for your patch. Some comments for your patch:

Comments:

src/backend/replication/logical/worker.c
@@ -888,6 +888,319 @@ apply_handle_prepare(StringInfo s)
+ /*
+ * FIXME - Following condition was in apply_handle_prepare_txn except
I found  it was ALWAYS IsTransactionState() == false
+ * The synchronization worker runs in single transaction. *
+ if (IsTransactionState() && !am_tablesync_worker())
+ */
+ if (!am_tablesync_worker())

Comment: I dont think a tablesync worker will use streaming, none of
the other stream APIs check this, this might not be relevant for
stream_prepare either.

Updated

+ /*
+ * ==================================================================================================
+ * The following chunk of code is largely cut/paste from the existing
apply_handle_prepare_commit_txn

Comment: Here, I think you meant apply_handle_stream_commit.

Updated.

Also
rather than duplicating this chunk of code, you could put it in a new
function.

Code is refactored to share a common function for the spool file processing.

+ else
+ {
+ /* Process any invalidation messages that might have accumulated. */
+ AcceptInvalidationMessages();
+ maybe_reread_subscription();
+ }

Comment: This else block might not be necessary as a tablesync worker
will not initiate the streaming APIs.

Updated

~

Kind Regards,
Peter Smith
Fujitsu Australia

Attachments:

v12-0001-Support-2PC-txn-base.patchapplication/octet-stream; name=v12-0001-Support-2PC-txn-base.patch
v12-0002-Support-2PC-txn-backend-and-tests.patchapplication/octet-stream; name=v12-0002-Support-2PC-txn-backend-and-tests.patch
v12-0003-Support-2PC-txn-pgoutput.patchapplication/octet-stream; name=v12-0003-Support-2PC-txn-pgoutput.patch
#70Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Ajin Cherian (#67)

Hi Ajin.

I checked to see how my previous review comments (of v10) were
addressed by the latest patches (currently v12)

There are a couple of remaining items.

---

====================
v12-0001. File: doc/src/sgml/logicaldecoding.sgml
====================

COMMENT
Section 49.6.1
Says:
An output plugin may also define functions to support streaming of
large, in-progress transactions. The stream_start_cb, stream_stop_cb,
stream_abort_cb, stream_commit_cb, stream_change_cb, and
stream_prepare_cb are required, while stream_message_cb and
stream_truncate_cb are optional.

An output plugin may also define functions to support two-phase
commits, which are decoded on PREPARE TRANSACTION. The prepare_cb,
commit_prepared_cb and rollback_prepared_cb callbacks are required,
while filter_prepare_cb is optional.
~
I was not sure how the paragraphs are organised. e.g. 1st seems to be
about streams and 2nd seems to be about two-phase commit. But they are
not mutually exclusive, so I guess I thought it was odd that
stream_prepare_cb was not mentioned in the 2nd paragraph.

Or maybe it is OK as-is?

====================
v12-0002. File: contrib/test_decoding/expected/two_phase.out
====================

COMMENT
Line 26
PREPARE TRANSACTION 'test_prepared#1';
--
SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL,
NULL, 'two-phase-commit', '1', 'include-xids', '0',
'skip-empty-xacts', '1');
~
Seems like a missing comment to explain the expectation of that select.

---

COMMENT
Line 80
-- The insert should show the newly altered column.
~
Do you also need to mention something about the DDL not being present
in the decoding?

====================
v12-0002. File: src/backend/replication/logical/reorderbuffer.c
====================

COMMENT
Line 1807
/* Here we are streaming and part of the PREPARE of a two-phase commit
* The full cleanup will happen as part of the COMMIT PREPAREDs, so now
* just truncate txn by removing changes and tuple_cids
*/
~
Something seems strange about the first sentence of that comment

---

COMMENT
Line 1944
/* Discard the changes that we just streamed.
* This can only be called if streaming and not part of a PREPARE in
* a two-phase commit, so set prepared flag as false.
*/
~
I thought that since this comment is asserting various things, it
should also actually be written as a code Assert.

---

COMMENT
Line 2401
/*
* We are here due to one of the 3 scenarios:
* 1. As part of streaming in-progress transactions
* 2. Prepare of a two-phase commit
* 3. Commit of a transaction.
*
* If we are streaming the in-progress transaction then discard the
* changes that we just streamed, and mark the transactions as
* streamed (if they contained changes), set prepared flag as false.
* If part of a prepare of a two-phase commit set the prepared flag
* as true so that we can discard changes and cleanup tuplecids.
* Otherwise, remove all the
* changes and deallocate the ReorderBufferTXN.
*/
~
The above comment is beyond my understanding. Anything you could do to
simplify it would be good.

For example, when viewing this function in isolation I have never
understood why the streaming flag and rbtxn_prepared(txn) flag are not
possible to be set at the same time?

Perhaps the code is relying on just internal knowledge of how this
helper function gets called? And if it is just that, then IMO there
really should be some Asserts in the code to give more assurance about
that. (Or maybe use completely different flags to represent those 3
scenarios instead of bending the meanings of the existing flags)

====================
v12-0003. File: src/backend/access/transam/twophase.c
====================

COMMENT
Line 557
@@ -548,6 +548,33 @@ MarkAsPrepared(GlobalTransaction gxact, bool lock_held)
}

 /*
+ * LookupGXact
+ * Check if the prepared transaction with the given GID is around
+ */
+bool
+LookupGXact(const char *gid)
+{
+ int i;
+ bool found = false;
~
Alignment of the variable declarations in LookupGXact function

---

Kind Regards,
Peter Smith.
Fujitsu Australia

#71Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Peter Smith (#70)
3 attachment(s)

On Mon, Oct 26, 2020 at 6:49 PM Peter Smith <smithpb2250@gmail.com> wrote:

Hi Ajin.

I checked to see how my previous review comments (of v10) were
addressed by the latest patches (currently v12)

There are a couple of remaining items.

---

====================
v12-0001. File: doc/src/sgml/logicaldecoding.sgml
====================

COMMENT
Section 49.6.1
Says:
An output plugin may also define functions to support streaming of
large, in-progress transactions. The stream_start_cb, stream_stop_cb,
stream_abort_cb, stream_commit_cb, stream_change_cb, and
stream_prepare_cb are required, while stream_message_cb and
stream_truncate_cb are optional.

An output plugin may also define functions to support two-phase
commits, which are decoded on PREPARE TRANSACTION. The prepare_cb,
commit_prepared_cb and rollback_prepared_cb callbacks are required,
while filter_prepare_cb is optional.
~
I was not sure how the paragraphs are organised. e.g. 1st seems to be
about streams and 2nd seems to be about two-phase commit. But they are
not mutually exclusive, so I guess I thought it was odd that
stream_prepare_cb was not mentioned in the 2nd paragraph.

Or maybe it is OK as-is?

I've added stream_prepare_cb to the 2nd paragraph as well.

====================
v12-0002. File: contrib/test_decoding/expected/two_phase.out
====================

COMMENT
Line 26
PREPARE TRANSACTION 'test_prepared#1';
--
SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL,
NULL, 'two-phase-commit', '1', 'include-xids', '0',
'skip-empty-xacts', '1');
~
Seems like a missing comment to explain the expectation of that select.

---

Updated.

COMMENT
Line 80
-- The insert should show the newly altered column.
~
Do you also need to mention something about the DDL not being present
in the decoding?

Updated.

====================
v12-0002. File: src/backend/replication/logical/reorderbuffer.c
====================

COMMENT
Line 1807
/* Here we are streaming and part of the PREPARE of a two-phase commit
* The full cleanup will happen as part of the COMMIT PREPAREDs, so now
* just truncate txn by removing changes and tuple_cids
*/
~
Something seems strange about the first sentence of that comment

---

COMMENT
Line 1944
/* Discard the changes that we just streamed.
* This can only be called if streaming and not part of a PREPARE in
* a two-phase commit, so set prepared flag as false.
*/
~
Since this comment is asserting various things, I thought it should
also actually be written as a code Assert.

---

Added an assert.

COMMENT
Line 2401
/*
* We are here due to one of the 3 scenarios:
* 1. As part of streaming in-progress transactions
* 2. Prepare of a two-phase commit
* 3. Commit of a transaction.
*
* If we are streaming the in-progress transaction then discard the
* changes that we just streamed, and mark the transactions as
* streamed (if they contained changes), set prepared flag as false.
* If part of a prepare of a two-phase commit set the prepared flag
* as true so that we can discard changes and cleanup tuplecids.
* Otherwise, remove all the
* changes and deallocate the ReorderBufferTXN.
*/
~
The above comment is beyond my understanding. Anything you could do to
simplify it would be good.

For example, when viewing this function in isolation I have never
understood why the streaming flag and rbtxn_prepared(txn) flag are not
possible to be set at the same time?

Perhaps the code is relying on just internal knowledge of how this
helper function gets called? And if it is just that, then IMO there
really should be some Asserts in the code to give more assurance about
that. (Or maybe use completely different flags to represent those 3
scenarios instead of bending the meanings of the existing flags)

Left this for now; I'll probably re-look at this in a later review.
But just to explain: this function does the main decoding of a
transaction's changes.
When this decoding happens is the crux of both this feature and the
streaming of in-progress transactions feature. As of PG13, this
decoding only happens at commit time. With the streaming of
in-progress transactions feature, it also happens (if streaming is
enabled) when the memory limit for decoding transactions is crossed.
This 2PC feature adds decoding at the time of PREPARE TRANSACTION.
Now, if streaming is enabled and has already started as a result of
crossing the memory threshold, then there is no need to begin
streaming again at PREPARE, as the transaction being prepared has
already been streamed. That is why this function will not be called
when a streamed transaction is prepared as part of a two-phase
commit.

====================
v12-0003. File: src/backend/access/transam/twophase.c
====================

COMMENT
Line 557
@@ -548,6 +548,33 @@ MarkAsPrepared(GlobalTransaction gxact, bool lock_held)
}

/*
+ * LookupGXact
+ * Check if the prepared transaction with the given GID is around
+ */
+bool
+LookupGXact(const char *gid)
+{
+ int i;
+ bool found = false;
~
Alignment of the variable declarations in LookupGXact function

---

Updated.

Amit, I have also updated your comment about removing function
declaration from commit 1 and I've added it to commit 2. Also removed
whitespace errors.

regards,
Ajin Cherian
Fujitsu Australia

Attachments:

v13-0001-Support-2PC-txn-base.patchapplication/octet-stream; name=v13-0001-Support-2PC-txn-base.patch
v13-0002-Support-2PC-txn-backend-and-tests.patchapplication/octet-stream; name=v13-0002-Support-2PC-txn-backend-and-tests.patch
v13-0003-Support-2PC-txn-pgoutput.patchapplication/octet-stream; name=v13-0003-Support-2PC-txn-pgoutput.patch
#72Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Ajin Cherian (#71)
2 attachment(s)

FYI - Please find attached code coverage reports which I generated
(based on the v12 patches) after running the following tests:

1. cd contrib/test_decoding; make check

2. cd src/test/subscriber; make check

Kind Regards,
Peter Smith.
Fujitsu Australia


Attachments:

coverage_test_decoding.tar.gzapplication/gzip; name=coverage_test_decoding.tar.gz
coverage_replication.tar.gzapplication/gzip; name=coverage_replication.tar.gz
#73Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Ajin Cherian (#71)

Hi Ajin.

I have re-checked the v13 patches for how my remaining review comments
have been addressed.

On Tue, Oct 27, 2020 at 8:55 PM Ajin Cherian <itsajin@gmail.com> wrote:

====================
v12-0002. File: src/backend/replication/logical/reorderbuffer.c
====================

COMMENT
Line 2401
/*
* We are here due to one of the 3 scenarios:
* 1. As part of streaming in-progress transactions
* 2. Prepare of a two-phase commit
* 3. Commit of a transaction.
*
* If we are streaming the in-progress transaction then discard the
* changes that we just streamed, and mark the transactions as
* streamed (if they contained changes), set prepared flag as false.
* If part of a prepare of a two-phase commit set the prepared flag
* as true so that we can discard changes and cleanup tuplecids.
* Otherwise, remove all the
* changes and deallocate the ReorderBufferTXN.
*/
~
The above comment is beyond my understanding. Anything you could do to
simplify it would be good.

For example, when viewing this function in isolation I have never
understood why the streaming flag and rbtxn_prepared(txn) flag are not
possible to be set at the same time?

Perhaps the code is relying on just internal knowledge of how this
helper function gets called? And if it is just that, then IMO there
really should be some Asserts in the code to give more assurance about
that. (Or maybe use completely different flags to represent those 3
scenarios instead of bending the meanings of the existing flags)

Left this for now, probably re-look at this at a later review.
But just to explain; this function is what does the main decoding of
changes of a transaction.
At what point this decoding happens is what this feature and the
streaming in-progress feature is about. As of PG13, this decoding only
happens at commit time. With the streaming of in-progress txn feature,
this began to happen (if streaming enabled) at the time when the
memory limit for decoding transactions was crossed. This 2PC feature
is supporting decoding at the time of a PREPARE transaction.
Now, if streaming is enabled and streaming has started as a result of
crossing the memory threshold, then there is no need to
again begin streaming at a PREPARE transaction as the transaction that
is being prepared has already been streamed. Which is why this
function will not be called when a streaming transaction is prepared
as part of a two-phase commit.

AFAIK the last remaining issue now is only about the complexity of the
aforementioned code/comment. If you want to defer changing that until
we can come up with something better, then that is OK by me.

Apart from that I have no other pending review comments at this time.

Kind Regards,
Peter Smith.
Fujitsu Australia

#74Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Peter Smith (#73)

Hi Ajin.

Looking at v13 patches again I found a couple more review comments:

===

(1) COMMENT
File: src/backend/replication/logical/proto.c
Function: logicalrep_write_prepare
+ if (rbtxn_commit_prepared(txn))
+ flags = LOGICALREP_IS_COMMIT_PREPARED;
+ else if (rbtxn_rollback_prepared(txn))
+ flags = LOGICALREP_IS_ROLLBACK_PREPARED;
+ else
+ flags = LOGICALREP_IS_PREPARE;
+
+ /* Make sure exactly one of the expected flags is set. */
+ if (!PrepareFlagsAreValid(flags))
+ elog(ERROR, "unrecognized flags %u in prepare message", flags);

Since those flags are directly assigned, I think the subsequent if
(!PrepareFlagsAreValid(flags)) check is redundant.

===

(2) COMMENT
File: src/backend/replication/logical/proto.c
Function: logicalrep_write_stream_prepare
+/*
+ * Write STREAM PREPARE to the output stream.
+ * (For stream PREPARE, stream COMMIT PREPARED, stream ROLLBACK PREPARED)
+ */

I think the function comment is outdated because IIUC the stream
COMMIT PREPARED and stream ROLLBACK PREPARED are not being handled by
the function logicalrep_write_prepare. Since this approach seems
counter-intuitive, there needs to be an improved function comment to
explain what is going on.

===

(3) COMMENT
File: src/backend/replication/logical/proto.c
Function: logicalrep_read_stream_prepare
+/*
+ * Read STREAM PREPARE from the output stream.
+ * (For stream PREPARE, stream COMMIT PREPARED, stream ROLLBACK PREPARED)
+ */

This is the same as the previous review comment. The function comment
needs to explain the new handling for stream COMMIT PREPARED and
stream ROLLBACK PREPARED.

===

(4) COMMENT
File: src/backend/replication/logical/proto.c
Function: logicalrep_read_stream_prepare
+TransactionId
+logicalrep_read_stream_prepare(StringInfo in, LogicalRepPrepareData
*prepare_data)
+{
+ TransactionId xid;
+ uint8 flags;
+
+ xid = pq_getmsgint(in, 4);
+
+ /* read flags */
+ flags = pq_getmsgbyte(in);
+
+ if (!PrepareFlagsAreValid(flags))
+ elog(ERROR, "unrecognized flags %u in prepare message", flags);

I think the logicalrep_write_stream_prepare now can only assign the
flags = LOGICALREP_IS_PREPARE. So that means the check here for bad
flags should be changed to match.
BEFORE: if (!PrepareFlagsAreValid(flags))
AFTER: if (flags != LOGICALREP_IS_PREPARE)

===

(5) COMMENT
General
Since the COMMENTs (2), (3) and (4) are all caused by the refactoring
that was done for removal of the commit/rollback stream callbacks. I
do wonder if it might have worked out better just to leave the
logicalrep_read/write_stream_prepared as it was instead of mixing up
stream/no-stream handling. A check for stream/no-stream could possibly
have been made higher up.

For example:
static void
pgoutput_commit_prepared_txn(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
XLogRecPtr prepare_lsn)
{
OutputPluginUpdateProgress(ctx);

OutputPluginPrepareWrite(ctx, true);
if (ctx->streaming)
logicalrep_write_stream_prepare(ctx->out, txn, prepare_lsn);
else
logicalrep_write_prepare(ctx->out, txn, prepare_lsn);
OutputPluginWrite(ctx, true);
}

===

Kind Regards,
Peter Smith.
Fujitsu Australia

#75Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Peter Smith (#74)
1 attachment(s)

FYI - I have cross-checked all the v12 patch code changes against the
v12 code coverage resulting from running the patch tests

Those v12 code coverage results were posted in this thread previously [1].

The purpose of this study was to identify if / where there are any
gaps in the testing of this patch - e.g is there some code not
currently getting executed?

In general there seems to be quite high coverage of the normal
(non-error) code path, but there are a couple of gaps in the current
test coverage.

For details please find attached the study results. (MS Excel file)

===

[1]: /messages/by-id/CAHut+Pt6zB-YffCrMo7+ZOKn7C2yXkNYnuQTdbStEJJJXZZXaw@mail.gmail.com

Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

v12-patch-test-coverage-20201029.xlsxapplication/vnd.openxmlformats-officedocument.spreadsheetml.sheet; name=v12-patch-test-coverage-20201029.xlsx
#76Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Peter Smith (#74)
3 attachment(s)

On Thu, Oct 29, 2020 at 11:48 AM Peter Smith <smithpb2250@gmail.com> wrote:

Hi Ajin.

Looking at v13 patches again I found a couple more review comments:

===

(1) COMMENT
File: src/backend/replication/logical/proto.c
Function: logicalrep_write_prepare
+ if (rbtxn_commit_prepared(txn))
+ flags = LOGICALREP_IS_COMMIT_PREPARED;
+ else if (rbtxn_rollback_prepared(txn))
+ flags = LOGICALREP_IS_ROLLBACK_PREPARED;
+ else
+ flags = LOGICALREP_IS_PREPARE;
+
+ /* Make sure exactly one of the expected flags is set. */
+ if (!PrepareFlagsAreValid(flags))
+ elog(ERROR, "unrecognized flags %u in prepare message", flags);

Since those flags are directly assigned, I think the subsequent if
(!PrepareFlagsAreValid(flags)) check is redundant.

===

Updated this.

(2) COMMENT
File: src/backend/replication/logical/proto.c
Function: logicalrep_write_stream_prepare
+/*
+ * Write STREAM PREPARE to the output stream.
+ * (For stream PREPARE, stream COMMIT PREPARED, stream ROLLBACK PREPARED)
+ */

I think the function comment is outdated because IIUC the stream
COMMIT PREPARED and stream ROLLBACK PREPARED are not being handled by
the function logicalrep_write_prepare. Since this approach seems
counter-intuitive, there needs to be an improved function comment to
explain what is going on.

===

(3) COMMENT
File: src/backend/replication/logical/proto.c
Function: logicalrep_read_stream_prepare
+/*
+ * Read STREAM PREPARE from the output stream.
+ * (For stream PREPARE, stream COMMIT PREPARED, stream ROLLBACK PREPARED)
+ */

This is the same as the previous review comment. The function comment
needs to explain the new handling for stream COMMIT PREPARED and
stream ROLLBACK PREPARED.

===

I think it is more intuitive for these functions to only write/read
STREAM PREPARE, as the name suggests. Maybe it is the usage of flags
that is more confusing. More below.

(4) COMMENT
File: src/backend/replication/logical/proto.c
Function: logicalrep_read_stream_prepare
+TransactionId
+logicalrep_read_stream_prepare(StringInfo in, LogicalRepPrepareData
*prepare_data)
+{
+ TransactionId xid;
+ uint8 flags;
+
+ xid = pq_getmsgint(in, 4);
+
+ /* read flags */
+ flags = pq_getmsgbyte(in);
+
+ if (!PrepareFlagsAreValid(flags))
+ elog(ERROR, "unrecognized flags %u in prepare message", flags);

I think the logicalrep_write_stream_prepare now can only assign the
flags = LOGICALREP_IS_PREPARE. So that means the check here for bad
flags should be changed to match.
BEFORE: if (!PrepareFlagsAreValid(flags))
AFTER: if (flags != LOGICALREP_IS_PREPARE)

===

Updated.

(5) COMMENT
General
Since the COMMENTs (2), (3) and (4) are all caused by the refactoring
that was done for removal of the commit/rollback stream callbacks. I
do wonder if it might have worked out better just to leave the
logicalrep_read/write_stream_prepared as it was instead of mixing up
stream/no-stream handling. A check for stream/no-stream could possibly
have been made higher up.

For example:
static void
pgoutput_commit_prepared_txn(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
XLogRecPtr prepare_lsn)
{
OutputPluginUpdateProgress(ctx);

OutputPluginPrepareWrite(ctx, true);
if (ctx->streaming)
logicalrep_write_stream_prepare(ctx->out, txn, prepare_lsn);
else
logicalrep_write_prepare(ctx->out, txn, prepare_lsn);
OutputPluginWrite(ctx, true);
}

===

I think I'll keep this as is for now. Amit was talking about removing
the flags that overload PREPARE with COMMIT PREPARED and ROLLBACK
PREPARED, and having separate functions for each instead. I'll wait to
see if Amit thinks that is the way to go.

I've also added a new test case for test_decoding for streaming 2PC.
Removed the function ReorderBufferTxnIsPrepared, which was never
called (thanks to Peter's coverage report for spotting this), and
added stream_prepare to the list of callbacks that enable two-phase
commits.

regards,
Ajin Cherian
Fujitsu Australia

Attachments:

v14-0001-Support-2PC-txn-base.patchapplication/octet-stream; name=v14-0001-Support-2PC-txn-base.patch
v14-0002-Support-2PC-txn-backend-and-tests.patchapplication/octet-stream; name=v14-0002-Support-2PC-txn-backend-and-tests.patch
v14-0003-Support-2PC-txn-pgoutput.patchapplication/octet-stream; name=v14-0003-Support-2PC-txn-pgoutput.patch
#77Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Ajin Cherian (#71)

On Tue, Oct 27, 2020 at 3:25 PM Ajin Cherian <itsajin@gmail.com> wrote:

[v13 patch set]
Few comments on v13-0001-Support-2PC-txn-base. I haven't checked v14
version of patches so if you have fixed anything then ignore it.

1.
--- a/src/include/replication/reorderbuffer.h
+++ b/src/include/replication/reorderbuffer.h
@@ -10,6 +10,7 @@
 #define REORDERBUFFER_H
 #include "access/htup_details.h"
+#include "access/twophase.h"
 #include "lib/ilist.h"
 #include "storage/sinval.h"
 #include "utils/hsearch.h"
@@ -174,6 +175,9 @@ typedef struct ReorderBufferChange
 #define RBTXN_IS_STREAMED         0x0010
 #define RBTXN_HAS_TOAST_INSERT    0x0020
 #define RBTXN_HAS_SPEC_INSERT     0x0040
+#define RBTXN_PREPARE             0x0080
+#define RBTXN_COMMIT_PREPARED     0x0100
+#define RBTXN_ROLLBACK_PREPARED   0x0200

/* Does the transaction have catalog changes? */
#define rbtxn_has_catalog_changes(txn) \
@@ -233,6 +237,24 @@ typedef struct ReorderBufferChange
((txn)->txn_flags & RBTXN_IS_STREAMED) != 0 \
)

+/* Has this transaction been prepared? */
+#define rbtxn_prepared(txn) \
+( \
+ ((txn)->txn_flags & RBTXN_PREPARE) != 0 \
+)
+
+/* Has this prepared transaction been committed? */
+#define rbtxn_commit_prepared(txn) \
+( \
+ ((txn)->txn_flags & RBTXN_COMMIT_PREPARED) != 0 \
+)
+
+/* Has this prepared transaction been rollbacked? */
+#define rbtxn_rollback_prepared(txn) \
+( \
+ ((txn)->txn_flags & RBTXN_ROLLBACK_PREPARED) != 0 \
+)
+

I think the above changes should be moved to the second patch. There
is no use of these macros in this patch and moreover they appear to be
out-of-place.

2.
@@ -127,6 +152,7 @@ pg_decode_startup(LogicalDecodingContext *ctx,
OutputPluginOptions *opt,
ListCell *option;
TestDecodingData *data;
bool enable_streaming = false;
+ bool enable_2pc = false;

I think it is better to name this variable as enable_two_pc or enable_twopc.

3.
+ xid = strtoul(strVal(elem->arg), NULL, 0);
+ if (xid == 0 || errno != 0)
+ data->check_xid_aborted = InvalidTransactionId;
+ else
+ data->check_xid_aborted = (TransactionId)xid;
+
+ if (!TransactionIdIsValid(data->check_xid_aborted))
+ ereport(ERROR,
+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ errmsg("check-xid-aborted is not a valid xid: \"%s\"",
+ strVal(elem->arg))));

Can't we write this as below and get rid of xid variable:
data->check_xid_aborted= (TransactionId) strtoul(strVal(elem->arg), NULL, 0);
if (!TransactionIdIsValid(data->check_xid_aborted) || errno)
ereport..

4.
+ /* if check_xid_aborted is a valid xid, then it was passed in
+ * as an option to check if the transaction having this xid would be aborted.
+ * This is to test concurrent aborts.
+ */

multi-line comments have the first line as empty.

5.
+     <para>
+      The required <function>prepare_cb</function> callback is called whenever
+      a transaction which is prepared for two-phase commit has been
+      decoded. The <function>change_cb</function> callbacks for all modified
+      rows will have been called before this, if there have been any modified
+      rows.
+<programlisting>
+typedef void (*LogicalDecodePrepareCB) (struct LogicalDecodingContext *ctx,
+                                        ReorderBufferTXN *txn,
+                                        XLogRecPtr prepare_lsn);
+</programlisting>
+     </para>
+    </sect3>
+
+    <sect3 id="logicaldecoding-output-plugin-commit-prepared">
+     <title>Transaction Commit Prepared Callback</title>
+
+     <para>
+      The required <function>commit_prepared_cb</function> callback
is called whenever
+      a transaction commit prepared has been decoded. The
<parameter>gid</parameter> field,
+      which is part of the <parameter>txn</parameter> parameter can
be used in this
+      callback.

I think the last line "The <parameter>gid</parameter> field, which is
part of the <parameter>txn</parameter> parameter can be used in this
callback." in 'Transaction Commit Prepared Callback' should also be
present in 'Transaction Prepare Callback' as we are using the same in
the prepare API as well.

6.
+pg_decode_stream_prepare(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn,
+ XLogRecPtr prepare_lsn)
+{
+ TestDecodingData *data = ctx->output_plugin_private;
+
+ if (data->skip_empty_xacts && !data->xact_wrote_changes)
+ return;
+
+ OutputPluginPrepareWrite(ctx, true);
+
+ if (data->include_xids)
+ appendStringInfo(ctx->out, "preparing streamed transaction TXN %u", txn->xid);
+ else
+ appendStringInfo(ctx->out, "preparing streamed transaction");

I think we should include 'gid' as well in the above messages.

7.
@@ -221,12 +235,26 @@ StartupDecodingContext(List *output_plugin_options,
  ctx->streaming = (ctx->callbacks.stream_start_cb != NULL) ||
  (ctx->callbacks.stream_stop_cb != NULL) ||
  (ctx->callbacks.stream_abort_cb != NULL) ||
+ (ctx->callbacks.stream_prepare_cb != NULL) ||
  (ctx->callbacks.stream_commit_cb != NULL) ||
  (ctx->callbacks.stream_change_cb != NULL) ||
  (ctx->callbacks.stream_message_cb != NULL) ||
  (ctx->callbacks.stream_truncate_cb != NULL);
  /*
+ * To support two-phase logical decoding, we require
prepare/commit-prepare/abort-prepare
+ * callbacks. The filter-prepare callback is optional. We however
enable two-phase logical
+ * decoding when at least one of the methods is enabled so that we
can easily identify
+ * missing methods.
+ *
+ * We decide it here, but only check it later in the wrappers.
+ */
+ ctx->twophase = (ctx->callbacks.prepare_cb != NULL) ||
+ (ctx->callbacks.commit_prepared_cb != NULL) ||
+ (ctx->callbacks.rollback_prepared_cb != NULL) ||
+ (ctx->callbacks.filter_prepare_cb != NULL);
+

I think stream_prepare_cb should be checked for the 'twophase' flag
because we won't use this unless two-phase is enabled. Am I missing
something?

--
With Regards,
Amit Kapila.

#78Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Amit Kapila (#62)

On Wed, Oct 21, 2020 at 7:37 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

Comment: I dont think a tablesync worker will use streaming, none of
the other stream APIs check this, this might not be relevant for
stream_prepare either.

Yes, I think this is right. See pgoutput_startup where we are
disabling the streaming for init phase. But it is always good to once
test this and ensure the same.

I have tested this scenario and confirmed that even when the
subscriber is capable of streaming it does NOT do any streaming during
its tablesync phase.

Kind Regards
Peter Smith.
Fujitsu Australia

#79Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Amit Kapila (#77)
3 attachment(s)

On Thu, Oct 29, 2020 at 11:19 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Oct 27, 2020 at 3:25 PM Ajin Cherian <itsajin@gmail.com> wrote:

[v13 patch set]
Few comments on v13-0001-Support-2PC-txn-base. I haven't checked v14
version of patches so if you have fixed anything then ignore it.

1.
--- a/src/include/replication/reorderbuffer.h
+++ b/src/include/replication/reorderbuffer.h
@@ -10,6 +10,7 @@
#define REORDERBUFFER_H
#include "access/htup_details.h"
+#include "access/twophase.h"
#include "lib/ilist.h"
#include "storage/sinval.h"
#include "utils/hsearch.h"
@@ -174,6 +175,9 @@ typedef struct ReorderBufferChange
#define RBTXN_IS_STREAMED         0x0010
#define RBTXN_HAS_TOAST_INSERT    0x0020
#define RBTXN_HAS_SPEC_INSERT     0x0040
+#define RBTXN_PREPARE             0x0080
+#define RBTXN_COMMIT_PREPARED     0x0100
+#define RBTXN_ROLLBACK_PREPARED   0x0200

/* Does the transaction have catalog changes? */
#define rbtxn_has_catalog_changes(txn) \
@@ -233,6 +237,24 @@ typedef struct ReorderBufferChange
((txn)->txn_flags & RBTXN_IS_STREAMED) != 0 \
)

+/* Has this transaction been prepared? */
+#define rbtxn_prepared(txn) \
+( \
+ ((txn)->txn_flags & RBTXN_PREPARE) != 0 \
+)
+
+/* Has this prepared transaction been committed? */
+#define rbtxn_commit_prepared(txn) \
+( \
+ ((txn)->txn_flags & RBTXN_COMMIT_PREPARED) != 0 \
+)
+
+/* Has this prepared transaction been rollbacked? */
+#define rbtxn_rollback_prepared(txn) \
+( \
+ ((txn)->txn_flags & RBTXN_ROLLBACK_PREPARED) != 0 \
+)
+

I think the above changes should be moved to the second patch. There
is no use of these macros in this patch and moreover they appear to be
out-of-place.

Moved to second patch in the patchset.

2.
@@ -127,6 +152,7 @@ pg_decode_startup(LogicalDecodingContext *ctx,
OutputPluginOptions *opt,
ListCell *option;
TestDecodingData *data;
bool enable_streaming = false;
+ bool enable_2pc = false;

I think it is better to name this variable as enable_two_pc or enable_twopc.

Renamed it to enable_twophase so that it matches the ctx member
ctx->twophase.

3.
+ xid = strtoul(strVal(elem->arg), NULL, 0);
+ if (xid == 0 || errno != 0)
+ data->check_xid_aborted = InvalidTransactionId;
+ else
+ data->check_xid_aborted = (TransactionId)xid;
+
+ if (!TransactionIdIsValid(data->check_xid_aborted))
+ ereport(ERROR,
+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ errmsg("check-xid-aborted is not a valid xid: \"%s\"",
+ strVal(elem->arg))));

Can't we write this as below and get rid of xid variable:
data->check_xid_aborted= (TransactionId) strtoul(strVal(elem->arg), NULL, 0);
if (!TransactionIdIsValid(data->check_xid_aborted) || errno)
ereport..

Updated. Small change so that errno is checked first.

4.
+ /* if check_xid_aborted is a valid xid, then it was passed in
+ * as an option to check if the transaction having this xid would be aborted.
+ * This is to test concurrent aborts.
+ */

multi-line comments have the first line as empty.

Updated.

5.
+     <para>
+      The required <function>prepare_cb</function> callback is called whenever
+      a transaction which is prepared for two-phase commit has been
+      decoded. The <function>change_cb</function> callbacks for all modified
+      rows will have been called before this, if there have been any modified
+      rows.
+<programlisting>
+typedef void (*LogicalDecodePrepareCB) (struct LogicalDecodingContext *ctx,
+                                        ReorderBufferTXN *txn,
+                                        XLogRecPtr prepare_lsn);
+</programlisting>
+     </para>
+    </sect3>
+
+    <sect3 id="logicaldecoding-output-plugin-commit-prepared">
+     <title>Transaction Commit Prepared Callback</title>
+
+     <para>
+      The required <function>commit_prepared_cb</function> callback is
+      called whenever a transaction commit prepared has been decoded. The
+      <parameter>gid</parameter> field, which is part of the
+      <parameter>txn</parameter> parameter, can be used in this callback.

I think the last line "The <parameter>gid</parameter> field, which is
part of the <parameter>txn</parameter> parameter can be used in this
callback." in 'Transaction Commit Prepared Callback' should also be
present in 'Transaction Prepare Callback', as we are using the same in
the prepare API as well.

Updated.

6.
+pg_decode_stream_prepare(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn,
+ XLogRecPtr prepare_lsn)
+{
+ TestDecodingData *data = ctx->output_plugin_private;
+
+ if (data->skip_empty_xacts && !data->xact_wrote_changes)
+ return;
+
+ OutputPluginPrepareWrite(ctx, true);
+
+ if (data->include_xids)
+ appendStringInfo(ctx->out, "preparing streamed transaction TXN %u", txn->xid);
+ else
+ appendStringInfo(ctx->out, "preparing streamed transaction");

I think we should include 'gid' as well in the above messages.

Updated.

7.
@@ -221,12 +235,26 @@ StartupDecodingContext(List *output_plugin_options,
ctx->streaming = (ctx->callbacks.stream_start_cb != NULL) ||
(ctx->callbacks.stream_stop_cb != NULL) ||
(ctx->callbacks.stream_abort_cb != NULL) ||
+ (ctx->callbacks.stream_prepare_cb != NULL) ||
(ctx->callbacks.stream_commit_cb != NULL) ||
(ctx->callbacks.stream_change_cb != NULL) ||
(ctx->callbacks.stream_message_cb != NULL) ||
(ctx->callbacks.stream_truncate_cb != NULL);
/*
+ * To support two-phase logical decoding, we require
+ * prepare/commit-prepare/abort-prepare callbacks. The filter-prepare
+ * callback is optional. We however enable two-phase logical decoding
+ * when at least one of the methods is enabled so that we can easily
+ * identify missing methods.
+ *
+ * We decide it here, but only check it later in the wrappers.
+ */
+ ctx->twophase = (ctx->callbacks.prepare_cb != NULL) ||
+ (ctx->callbacks.commit_prepared_cb != NULL) ||
+ (ctx->callbacks.rollback_prepared_cb != NULL) ||
+ (ctx->callbacks.filter_prepare_cb != NULL);
+

I think stream_prepare_cb should be checked for the 'twophase' flag
because we won't use this unless two-phase is enabled. Am I missing
something?

Was fixed in v14.

regards,
Ajin Cherian
Fujitsu Australia

Attachments:

v15-0001-Support-2PC-txn-base.patch (application/octet-stream)
v15-0002-Support-2PC-txn-backend-and-tests.patch (application/octet-stream)
v15-0003-Support-2PC-txn-pgoutput.patch (application/octet-stream)
#80 Amit Kapila <amit.kapila16@gmail.com>
In reply to: Ajin Cherian (#79)

On Fri, Oct 30, 2020 at 2:46 PM Ajin Cherian <itsajin@gmail.com> wrote:

On Thu, Oct 29, 2020 at 11:19 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

6.
+pg_decode_stream_prepare(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn,
+ XLogRecPtr prepare_lsn)
+{
+ TestDecodingData *data = ctx->output_plugin_private;
+
+ if (data->skip_empty_xacts && !data->xact_wrote_changes)
+ return;
+
+ OutputPluginPrepareWrite(ctx, true);
+
+ if (data->include_xids)
+ appendStringInfo(ctx->out, "preparing streamed transaction TXN %u", txn->xid);
+ else
+ appendStringInfo(ctx->out, "preparing streamed transaction");

I think we should include 'gid' as well in the above messages.

Updated.

gid needs to be included in the case of 'include_xids' as well.

7.
@@ -221,12 +235,26 @@ StartupDecodingContext(List *output_plugin_options,
ctx->streaming = (ctx->callbacks.stream_start_cb != NULL) ||
(ctx->callbacks.stream_stop_cb != NULL) ||
(ctx->callbacks.stream_abort_cb != NULL) ||
+ (ctx->callbacks.stream_prepare_cb != NULL) ||
(ctx->callbacks.stream_commit_cb != NULL) ||
(ctx->callbacks.stream_change_cb != NULL) ||
(ctx->callbacks.stream_message_cb != NULL) ||
(ctx->callbacks.stream_truncate_cb != NULL);
/*
+ * To support two-phase logical decoding, we require
+ * prepare/commit-prepare/abort-prepare callbacks. The filter-prepare
+ * callback is optional. We however enable two-phase logical decoding
+ * when at least one of the methods is enabled so that we can easily
+ * identify missing methods.
+ *
+ * We decide it here, but only check it later in the wrappers.
+ */
+ ctx->twophase = (ctx->callbacks.prepare_cb != NULL) ||
+ (ctx->callbacks.commit_prepared_cb != NULL) ||
+ (ctx->callbacks.rollback_prepared_cb != NULL) ||
+ (ctx->callbacks.filter_prepare_cb != NULL);
+

I think stream_prepare_cb should be checked for the 'twophase' flag
because we won't use this unless two-phase is enabled. Am I missing
something?

Was fixed in v14.

But you still have it in the streaming check. I don't think we need
that for the streaming case.

Few other comments on v15-0002-Support-2PC-txn-backend-and-tests:
======================================================================
1. The functions DecodeCommitPrepared and DecodeAbortPrepared have a
lot of code similar to DecodeCommit/Abort. Can we merge these
functions?

2.
DecodeCommitPrepared()
{
..
+ * If filter check present and this needs to be skipped, do a regular commit.
+ */
+ if (ctx->callbacks.filter_prepare_cb &&
+ ReorderBufferPrepareNeedSkip(ctx->reorder, xid, parsed->twophase_gid))
+ {
+ ReorderBufferCommit(ctx->reorder, xid, buf->origptr, buf->endptr,
+ commit_time, origin_id, origin_lsn);
+ }
+ else
+ {
+ ReorderBufferFinishPrepared(ctx->reorder, xid, buf->origptr, buf->endptr,
+ commit_time, origin_id, origin_lsn,
+ parsed->twophase_gid, true);
+ }
+
+}

Can we expand the comment here to say why we need to do ReorderBufferCommit?

3. There are a lot of test cases in this patch which is a good thing
but can we split them into a separate patch for the time being as I
would like to focus on the core logic of the patch first. We can later
see if we need to retain all or part of those tests.

4. Please run pgindent on your patches.

--
With Regards,
Amit Kapila.

#81 Amit Kapila <amit.kapila16@gmail.com>
In reply to: Peter Smith (#73)

On Wed, Oct 28, 2020 at 10:50 AM Peter Smith <smithpb2250@gmail.com> wrote:

Hi Ajin.

I have re-checked the v13 patches for how my remaining review comments
have been addressed.

On Tue, Oct 27, 2020 at 8:55 PM Ajin Cherian <itsajin@gmail.com> wrote:

====================
v12-0002. File: src/backend/replication/logical/reorderbuffer.c
====================

COMMENT
Line 2401
/*
* We are here due to one of the 3 scenarios:
* 1. As part of streaming in-progress transactions
* 2. Prepare of a two-phase commit
* 3. Commit of a transaction.
*
* If we are streaming the in-progress transaction then discard the
* changes that we just streamed, and mark the transactions as
* streamed (if they contained changes), set prepared flag as false.
* If part of a prepare of a two-phase commit set the prepared flag
* as true so that we can discard changes and cleanup tuplecids.
* Otherwise, remove all the
* changes and deallocate the ReorderBufferTXN.
*/
~
The above comment is beyond my understanding. Anything you could do to
simplify it would be good.

For example, when viewing this function in isolation I have never
understood why the streaming flag and rbtxn_prepared(txn) flag are not
possible to be set at the same time?

Perhaps the code is relying on just internal knowledge of how this
helper function gets called? And if it is just that, then IMO there
really should be some Asserts in the code to give more assurance about
that. (Or maybe use completely different flags to represent those 3
scenarios instead of bending the meanings of the existing flags)

Left this for now, probably re-look at this at a later review.
But just to explain; this function is what does the main decoding of
changes of a transaction.
At what point this decoding happens is what this feature and the
streaming in-progress feature is about. As of PG13, this decoding only
happens at commit time. With the streaming of in-progress txn feature,
this began to happen (if streaming enabled) at the time when the
memory limit for decoding transactions was crossed. This 2PC feature
is supporting decoding at the time of a PREPARE transaction.
Now, if streaming is enabled and streaming has started as a result of
crossing the memory threshold, then there is no need to
again begin streaming at a PREPARE transaction as the transaction that
is being prepared has already been streamed.

I don't think this is true, think of a case where we need to send the
last set of changes along with PREPARE. In that case we need to stream
those changes at the time of PREPARE. If I am correct then as pointed
by Peter you need to change some comments and some of the assumptions
related to this you have in the patch.

Few more comments on the latest patch
(v15-0002-Support-2PC-txn-backend-and-tests)
=========================================================================
1.
@@ -274,6 +296,23 @@ DecodeXactOp(LogicalDecodingContext *ctx,
XLogRecordBuffer *buf)
DecodeAbort(ctx, buf, &parsed, xid);
break;
}
+ case XLOG_XACT_ABORT_PREPARED:
+ {

..
+
+ if (!TransactionIdIsValid(parsed.twophase_xid))
+ xid = XLogRecGetXid(r);
+ else
+ xid = parsed.twophase_xid;

I think we don't need this 'if' check here because you must have a
valid value of parsed.twophase_xid;. But, I think this will be moot if
you address the review comment in my previous email such that the
handling of XLOG_XACT_ABORT_PREPARED and XLOG_XACT_ABORT will be
combined as it is there without the patch.

2.
+DecodePrepare(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
+   xl_xact_parsed_prepare * parsed)
+{
..
+ if (SnapBuildXactNeedsSkip(ctx->snapshot_builder, buf->origptr) ||
+ (parsed->dbId != InvalidOid && parsed->dbId != ctx->slot->data.database) ||
+ ctx->fast_forward || FilterByOrigin(ctx, origin_id))
+ return;
+

I think this check is the same as the check in DecodeCommit, so you
can write some comments to indicate the same and also why we don't
need to call ReorderBufferForget here. One more thing is to note is
even if we don't need to call ReorderBufferForget here but still we
need to execute invalidations (which are present in top-level txn) for
the reasons mentioned in ReorderBufferForget. Also, if we do this,
don't forget to update the comment atop
ReorderBufferImmediateInvalidation.

3.
+ /* This is a PREPARED transaction, part of a two-phase commit.
+ * The full cleanup will happen as part of the COMMIT PREPAREDs, so now
+ * just truncate txn by removing changes and tuple_cids
+ */
+ ReorderBufferTruncateTXN(rb, txn, true);

The first line in the multi-line comment should be empty.

--
With Regards,
Amit Kapila.

#82 Amit Kapila <amit.kapila16@gmail.com>
In reply to: Amit Kapila (#81)

On Mon, Nov 2, 2020 at 4:11 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

Few Comments on v15-0003-Support-2PC-txn-pgoutput
===============================================
1. This patch needs to be rebased after commit 644f0d7cc9 and requires
some adjustments accordingly.

2.
if (flags != 0)
elog(ERROR, "unrecognized flags %u in commit message", flags);

+
/* read fields */
commit_data->commit_lsn = pq_getmsgint64(in);

Spurious line.

3.
@@ -720,6 +722,7 @@ apply_handle_commit(StringInfo s)
replorigin_session_origin_timestamp = commit_data.committime;

CommitTransactionCommand();
+
pgstat_report_stat(false);

Spurious line

4.
+static void
+apply_handle_prepare_txn(LogicalRepPrepareData * prepare_data)
+{
+ Assert(prepare_data->prepare_lsn == remote_final_lsn);
+
+ /* The synchronization worker runs in single transaction. */
+ if (IsTransactionState() && !am_tablesync_worker())
+ {
+ /* End the earlier transaction and start a new one */
+ BeginTransactionBlock();
+ CommitTransactionCommand();
+ StartTransactionCommand();

There is no explanation as to why you want to end the previous
transaction and start a new one. Even if we have to do so, we first
need to call BeginTransactionBlock before CommitTransactionCommand.

5.
- * Handle STREAM COMMIT message.
+ * Common spoolfile processing.
+ * Returns how many changes were applied.
  */
-static void
-apply_handle_stream_commit(StringInfo s)
+static int
+apply_spooled_messages(TransactionId xid, XLogRecPtr lsn)
 {
- TransactionId xid;

Can we have a separate patch for this as this can be committed before
main patch. This is a refactoring required for the main patch.

6.
@@ -57,7 +63,8 @@ static void pgoutput_stream_abort(struct
LogicalDecodingContext *ctx,
 static void pgoutput_stream_commit(struct LogicalDecodingContext *ctx,
     ReorderBufferTXN *txn,
     XLogRecPtr commit_lsn);
-
+static void pgoutput_stream_prepare_txn(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr prepare_lsn);

Spurious line removal.

--
With Regards,
Amit Kapila.

#83 Peter Smith <smithpb2250@gmail.com>
In reply to: Amit Kapila (#82)
3 attachment(s)

Hi Amit

I have rebased, split, and addressed (most of) the review comments of
the v15-0003 patch.

So the previous v15-0003 patch is now split into three as follows:
- v16-0001-Support-2PC-txn-spoolfile.patch
- v16-0002-Support-2PC-txn-pgoutput.patch
- v16-0003-Support-2PC-txn-subscriber-tests.patch

PSA.

Of course the previous v15-0001 and v15-0002 are still required before
applying these v16 patches. Later (v17?) we will combine these again
with what Ajin is currently working on to give the full suite of
patches which will have a consistent version number.

On Tue, Nov 3, 2020 at 4:41 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

Few Comments on v15-0003-Support-2PC-txn-pgoutput
===============================================
1. This patch needs to be rebased after commit 644f0d7cc9 and requires
some adjustments accordingly.

Done.

2.
if (flags != 0)
elog(ERROR, "unrecognized flags %u in commit message", flags);

+
/* read fields */
commit_data->commit_lsn = pq_getmsgint64(in);

Spurious line.

Fixed.

3.
@@ -720,6 +722,7 @@ apply_handle_commit(StringInfo s)
replorigin_session_origin_timestamp = commit_data.committime;

CommitTransactionCommand();
+
pgstat_report_stat(false);

Spurious line

Fixed.

4.
+static void
+apply_handle_prepare_txn(LogicalRepPrepareData * prepare_data)
+{
+ Assert(prepare_data->prepare_lsn == remote_final_lsn);
+
+ /* The synchronization worker runs in single transaction. */
+ if (IsTransactionState() && !am_tablesync_worker())
+ {
+ /* End the earlier transaction and start a new one */
+ BeginTransactionBlock();
+ CommitTransactionCommand();
+ StartTransactionCommand();

There is no explanation as to why you want to end the previous
transaction and start a new one. Even if we have to do so, we first
need to call BeginTransactionBlock before CommitTransactionCommand.

TODO

5.
- * Handle STREAM COMMIT message.
+ * Common spoolfile processing.
+ * Returns how many changes were applied.
*/
-static void
-apply_handle_stream_commit(StringInfo s)
+static int
+apply_spooled_messages(TransactionId xid, XLogRecPtr lsn)
{
- TransactionId xid;

Can we have a separate patch for this as this can be committed before
main patch. This is a refactoring required for the main patch.

Done.

6.
@@ -57,7 +63,8 @@ static void pgoutput_stream_abort(struct
LogicalDecodingContext *ctx,
static void pgoutput_stream_commit(struct LogicalDecodingContext *ctx,
ReorderBufferTXN *txn,
XLogRecPtr commit_lsn);
-
+static void pgoutput_stream_prepare_txn(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr prepare_lsn);

Spurious line removal.

Fixed.

---

Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

v16-0002-Support-2PC-txn-pgoutput.patch (application/octet-stream)
v16-0001-Support-2PC-txn-spoolfile.patch (application/octet-stream)
v16-0003-Support-2PC-txn-subscriber-tests.patch (application/octet-stream)
#84 Ajin Cherian <itsajin@gmail.com>
In reply to: Amit Kapila (#80)

On Fri, Oct 30, 2020 at 9:51 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Fri, Oct 30, 2020 at 2:46 PM Ajin Cherian <itsajin@gmail.com> wrote:

On Thu, Oct 29, 2020 at 11:19 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

6.
+pg_decode_stream_prepare(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn,
+ XLogRecPtr prepare_lsn)
+{
+ TestDecodingData *data = ctx->output_plugin_private;
+
+ if (data->skip_empty_xacts && !data->xact_wrote_changes)
+ return;
+
+ OutputPluginPrepareWrite(ctx, true);
+
+ if (data->include_xids)
+ appendStringInfo(ctx->out, "preparing streamed transaction TXN %u", txn->xid);
+ else
+ appendStringInfo(ctx->out, "preparing streamed transaction");

I think we should include 'gid' as well in the above messages.

Updated.

gid needs to be included in the case of 'include_xids' as well.

Updated.

7.
@@ -221,12 +235,26 @@ StartupDecodingContext(List *output_plugin_options,
ctx->streaming = (ctx->callbacks.stream_start_cb != NULL) ||
(ctx->callbacks.stream_stop_cb != NULL) ||
(ctx->callbacks.stream_abort_cb != NULL) ||
+ (ctx->callbacks.stream_prepare_cb != NULL) ||
(ctx->callbacks.stream_commit_cb != NULL) ||
(ctx->callbacks.stream_change_cb != NULL) ||
(ctx->callbacks.stream_message_cb != NULL) ||
(ctx->callbacks.stream_truncate_cb != NULL);
/*
+ * To support two-phase logical decoding, we require
+ * prepare/commit-prepare/abort-prepare callbacks. The filter-prepare
+ * callback is optional. We however enable two-phase logical decoding
+ * when at least one of the methods is enabled so that we can easily
+ * identify missing methods.
+ *
+ * We decide it here, but only check it later in the wrappers.
+ */
+ ctx->twophase = (ctx->callbacks.prepare_cb != NULL) ||
+ (ctx->callbacks.commit_prepared_cb != NULL) ||
+ (ctx->callbacks.rollback_prepared_cb != NULL) ||
+ (ctx->callbacks.filter_prepare_cb != NULL);
+

I think stream_prepare_cb should be checked for the 'twophase' flag
because we won't use this unless two-phase is enabled. Am I missing
something?

Was fixed in v14.

But you still have it in the streaming check. I don't think we need
that for the streaming case.

Updated.

Few other comments on v15-0002-Support-2PC-txn-backend-and-tests:
======================================================================
1. The functions DecodeCommitPrepared and DecodeAbortPrepared have a
lot of code similar to DecodeCommit/Abort. Can we merge these
functions?

Merged the two functions into DecodeCommit and DecodeAbort.

2.
DecodeCommitPrepared()
{
..
+ * If filter check present and this needs to be skipped, do a regular commit.
+ */
+ if (ctx->callbacks.filter_prepare_cb &&
+ ReorderBufferPrepareNeedSkip(ctx->reorder, xid, parsed->twophase_gid))
+ {
+ ReorderBufferCommit(ctx->reorder, xid, buf->origptr, buf->endptr,
+ commit_time, origin_id, origin_lsn);
+ }
+ else
+ {
+ ReorderBufferFinishPrepared(ctx->reorder, xid, buf->origptr, buf->endptr,
+ commit_time, origin_id, origin_lsn,
+ parsed->twophase_gid, true);
+ }
+
+}

Can we expand the comment here to say why we need to do ReorderBufferCommit?

Updated.

3. There are a lot of test cases in this patch which is a good thing
but can we split them into a separate patch for the time being as I
would like to focus on the core logic of the patch first. We can later
see if we need to retain all or part of those tests.

Split the patch and created a new patch for test_decoding tests.

4. Please run pgindent on your patches.

Have not done this. Will do this after unifying the patchset.

regards,
Ajin Cherian
Fujitsu Australia

#85 Ajin Cherian <itsajin@gmail.com>
In reply to: Amit Kapila (#81)
3 attachment(s)

On Mon, Nov 2, 2020 at 9:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Wed, Oct 28, 2020 at 10:50 AM Peter Smith <smithpb2250@gmail.com> wrote:

Hi Ajin.

I have re-checked the v13 patches for how my remaining review comments
have been addressed.

On Tue, Oct 27, 2020 at 8:55 PM Ajin Cherian <itsajin@gmail.com> wrote:

====================
v12-0002. File: src/backend/replication/logical/reorderbuffer.c
====================

COMMENT
Line 2401
/*
* We are here due to one of the 3 scenarios:
* 1. As part of streaming in-progress transactions
* 2. Prepare of a two-phase commit
* 3. Commit of a transaction.
*
* If we are streaming the in-progress transaction then discard the
* changes that we just streamed, and mark the transactions as
* streamed (if they contained changes), set prepared flag as false.
* If part of a prepare of a two-phase commit set the prepared flag
* as true so that we can discard changes and cleanup tuplecids.
* Otherwise, remove all the
* changes and deallocate the ReorderBufferTXN.
*/
~
The above comment is beyond my understanding. Anything you could do to
simplify it would be good.

For example, when viewing this function in isolation I have never
understood why the streaming flag and rbtxn_prepared(txn) flag are not
possible to be set at the same time?

Perhaps the code is relying on just internal knowledge of how this
helper function gets called? And if it is just that, then IMO there
really should be some Asserts in the code to give more assurance about
that. (Or maybe use completely different flags to represent those 3
scenarios instead of bending the meanings of the existing flags)

Left this for now, probably re-look at this at a later review.
But just to explain; this function is what does the main decoding of
changes of a transaction.
At what point this decoding happens is what this feature and the
streaming in-progress feature is about. As of PG13, this decoding only
happens at commit time. With the streaming of in-progress txn feature,
this began to happen (if streaming enabled) at the time when the
memory limit for decoding transactions was crossed. This 2PC feature
is supporting decoding at the time of a PREPARE transaction.
Now, if streaming is enabled and streaming has started as a result of
crossing the memory threshold, then there is no need to
again begin streaming at a PREPARE transaction as the transaction that
is being prepared has already been streamed.

I don't think this is true, think of a case where we need to send the
last set of changes along with PREPARE. In that case we need to stream
those changes at the time of PREPARE. If I am correct then as pointed
by Peter you need to change some comments and some of the assumptions
related to this you have in the patch.

I have changed the asserts and the comments to reflect this.

Few more comments on the latest patch
(v15-0002-Support-2PC-txn-backend-and-tests)
=========================================================================
1.
@@ -274,6 +296,23 @@ DecodeXactOp(LogicalDecodingContext *ctx,
XLogRecordBuffer *buf)
DecodeAbort(ctx, buf, &parsed, xid);
break;
}
+ case XLOG_XACT_ABORT_PREPARED:
+ {

..
+
+ if (!TransactionIdIsValid(parsed.twophase_xid))
+ xid = XLogRecGetXid(r);
+ else
+ xid = parsed.twophase_xid;

I think we don't need this 'if' check here because you must have a
valid value of parsed.twophase_xid;. But, I think this will be moot if
you address the review comment in my previous email such that the
handling of XLOG_XACT_ABORT_PREPARED and XLOG_XACT_ABORT will be
combined as it is there without the patch.

2.
+DecodePrepare(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
+   xl_xact_parsed_prepare * parsed)
+{
..
+ if (SnapBuildXactNeedsSkip(ctx->snapshot_builder, buf->origptr) ||
+ (parsed->dbId != InvalidOid && parsed->dbId != ctx->slot->data.database) ||
+ ctx->fast_forward || FilterByOrigin(ctx, origin_id))
+ return;
+

I think this check is the same as the check in DecodeCommit, so you
can write some comments to indicate the same and also why we don't
need to call ReorderBufferForget here. One more thing is to note is
even if we don't need to call ReorderBufferForget here but still we
need to execute invalidations (which are present in top-level txn) for
the reasons mentioned in ReorderBufferForget. Also, if we do this,
don't forget to update the comment atop
ReorderBufferImmediateInvalidation.

I have updated the comments. I wasn't sure of when to execute
invalidations. Should I only
execute invalidations if this was for another database than what was
being decoded or should
I execute invalidation every time we skip? I will also have to create
a new function in reorderbuffer.c similar to ReorderBufferForget
as the txn is not available in decode.c.

3.
+ /* This is a PREPARED transaction, part of a two-phase commit.
+ * The full cleanup will happen as part of the COMMIT PREPAREDs, so now
+ * just truncate txn by removing changes and tuple_cids
+ */
+ ReorderBufferTruncateTXN(rb, txn, true);

The first line in the multi-line comment should be empty.

Updated.

regards,
Ajin Cherian
Fujitsu Australia

Attachments:

v16-0002-Support-2PC-txn-backend.patch (application/octet-stream)
v16-0001-Support-2PC-txn-base.patch (application/octet-stream)
v16-0003-Support-2PC-test-cases-for-test_decoding.patch (application/octet-stream)
#86 Amit Kapila <amit.kapila16@gmail.com>
In reply to: Ajin Cherian (#85)

On Wed, Nov 4, 2020 at 3:01 PM Ajin Cherian <itsajin@gmail.com> wrote:

On Mon, Nov 2, 2020 at 9:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Wed, Oct 28, 2020 at 10:50 AM Peter Smith <smithpb2250@gmail.com> wrote:
2.
+DecodePrepare(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
+   xl_xact_parsed_prepare * parsed)
+{
..
+ if (SnapBuildXactNeedsSkip(ctx->snapshot_builder, buf->origptr) ||
+ (parsed->dbId != InvalidOid && parsed->dbId != ctx->slot->data.database) ||
+ ctx->fast_forward || FilterByOrigin(ctx, origin_id))
+ return;
+

I think this check is the same as the check in DecodeCommit, so you
can write some comments to indicate the same and also why we don't
need to call ReorderBufferForget here. One more thing is to note is
even if we don't need to call ReorderBufferForget here but still we
need to execute invalidations (which are present in top-level txn) for
the reasons mentioned in ReorderBufferForget. Also, if we do this,
don't forget to update the comment atop
ReorderBufferImmediateInvalidation.

I have updated the comments. I wasn't sure of when to execute
invalidations. Should I only
execute invalidations if this was for another database than what was
being decoded or should
I execute invalidation every time we skip?

I think so. Did there exist any such special condition in DecodeCommit
or do you have any other reason in your mind for not doing it every
time we skip? We probably might not need to execute when the database
is different (at least I can't think of a reason for the same) but I
guess this doesn't make much difference and it will keep the code
consistent with what we do in DecodeCommit.

--
With Regards,
Amit Kapila.

#87 Ajin Cherian <itsajin@gmail.com>
In reply to: Amit Kapila (#86)

On Wed, Nov 4, 2020 at 9:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Wed, Nov 4, 2020 at 3:01 PM Ajin Cherian <itsajin@gmail.com> wrote:

On Mon, Nov 2, 2020 at 9:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Wed, Oct 28, 2020 at 10:50 AM Peter Smith <smithpb2250@gmail.com> wrote:
2.
+DecodePrepare(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
+   xl_xact_parsed_prepare * parsed)
+{
..
+ if (SnapBuildXactNeedsSkip(ctx->snapshot_builder, buf->origptr) ||
+ (parsed->dbId != InvalidOid && parsed->dbId != ctx->slot->data.database) ||
+ ctx->fast_forward || FilterByOrigin(ctx, origin_id))
+ return;
+

I think this check is the same as the check in DecodeCommit, so you
can write some comments to indicate the same and also why we don't
need to call ReorderBufferForget here. One more thing is to note is
even if we don't need to call ReorderBufferForget here but still we
need to execute invalidations (which are present in top-level txn) for
the reasons mentioned in ReorderBufferForget. Also, if we do this,
don't forget to update the comment atop
ReorderBufferImmediateInvalidation.

I have updated the comments. I wasn't sure of when to execute
invalidations. Should I only
execute invalidations if this was for another database than what was
being decoded or should
I execute invalidation every time we skip?

I think so. Did there exist any such special condition in DecodeCommit
or do you have any other reason in your mind for not doing it every
time we skip? We probably might not need to execute when the database
is different (at least I can't think of a reason for the same) but I
guess this doesn't make much difference and it will keep the code
consistent with what we do in DecodeCommit.

I was just basing it on the comments in the DecodeCommit:

* We can't just use ReorderBufferAbort() here, because we need to execute
* the transaction's invalidations. This currently won't be needed if
* we're just skipping over the transaction because currently we only do
* so during startup, to get to the first transaction the client needs. As
* we have reset the catalog caches before starting to read WAL, and we
* haven't yet touched any catalogs, there can't be anything to invalidate.
* But if we're "forgetting" this commit because it happened in
* another database, the invalidations might be important, because they
* could be for shared catalogs and we might have loaded data into the
* relevant syscaches.

regards,
Ajin Cherian
Fujitsu Australia

#88 Amit Kapila <amit.kapila16@gmail.com>
In reply to: Ajin Cherian (#87)

On Wed, Nov 4, 2020 at 3:46 PM Ajin Cherian <itsajin@gmail.com> wrote:

On Wed, Nov 4, 2020 at 9:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Wed, Nov 4, 2020 at 3:01 PM Ajin Cherian <itsajin@gmail.com> wrote:

On Mon, Nov 2, 2020 at 9:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Wed, Oct 28, 2020 at 10:50 AM Peter Smith <smithpb2250@gmail.com> wrote:
2.
+DecodePrepare(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
+   xl_xact_parsed_prepare * parsed)
+{
..
+ if (SnapBuildXactNeedsSkip(ctx->snapshot_builder, buf->origptr) ||
+ (parsed->dbId != InvalidOid && parsed->dbId != ctx->slot->data.database) ||
+ ctx->fast_forward || FilterByOrigin(ctx, origin_id))
+ return;
+

I think this check is the same as the check in DecodeCommit, so you
can write some comments to indicate the same and also why we don't
need to call ReorderBufferForget here. One more thing is to note is
even if we don't need to call ReorderBufferForget here but still we
need to execute invalidations (which are present in top-level txn) for
the reasons mentioned in ReorderBufferForget. Also, if we do this,
don't forget to update the comment atop
ReorderBufferImmediateInvalidation.

I have updated the comments. I wasn't sure of when to execute
invalidations. Should I only
execute invalidations if this was for another database than what was
being decoded or should
I execute invalidation every time we skip?

I think so. Did there exist any such special condition in DecodeCommit
or do you have any other reason in your mind for not doing it every
time we skip? We probably might not need to execute when the database
is different (at least I can't think of a reason for the same) but I
guess this doesn't make much difference and it will keep the code
consistent with what we do in DecodeCommit.

I was just basing it on the comments in the DecodeCommit:

Okay, so it is mentioned in the comment why we need to execute
invalidations even when the database is not the same. So, we are
probably good here if we are executing the invalidations whenever we
skip to decode the prepared xact.

--
With Regards,
Amit Kapila.

#89 Ajin Cherian <itsajin@gmail.com>
In reply to: Amit Kapila (#88)
3 attachment(s)

On Wed, Nov 4, 2020 at 9:31 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

Okay, so it is mentioned in the comment why we need to execute
invalidations even when the database is not the same. So, we are
probably good here if we are executing the invalidations whenever we
skip decoding the prepared xact.

Updated to execute invalidations while skipping prepared transactions.
Also ran pgindent on the
source files with updated typedefs.
Attaching v17 with 1,2 and 3.

regards,
Ajin Cherian
Fujitsu Australia

Attachments:

v17-0001-Support-2PC-txn-base.patchapplication/octet-stream; name=v17-0001-Support-2PC-txn-base.patch
v17-0002-Support-2PC-txn-backend.patchapplication/octet-stream; name=v17-0002-Support-2PC-txn-backend.patch
v17-0003-Support-2PC-test-cases-for-test_decoding.patchapplication/octet-stream; name=v17-0003-Support-2PC-test-cases-for-test_decoding.patch
#90Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Peter Smith (#83)
6 attachment(s)
4.
+static void
+apply_handle_prepare_txn(LogicalRepPrepareData * prepare_data)
+{
+ Assert(prepare_data->prepare_lsn == remote_final_lsn);
+
+ /* The synchronization worker runs in single transaction. */
+ if (IsTransactionState() && !am_tablesync_worker())
+ {
+ /* End the earlier transaction and start a new one */
+ BeginTransactionBlock();
+ CommitTransactionCommand();
+ StartTransactionCommand();

There is no explanation as to why you want to end the previous
transaction and start a new one. Even if we have to do so, we first
need to call BeginTransactionBlock before CommitTransactionCommand.

Done

---

Also...

pgindent has been run for all patches now.

The latest versions of all six patches are again bundled under a
common v18 version number.

PSA

Kind Regards,
Peter Smith.
Fujitsu Australia.

Attachments:

v18-0001-Support-2PC-txn-base.patchapplication/octet-stream; name=v18-0001-Support-2PC-txn-base.patch
v18-0003-Support-2PC-test-cases-for-test_decoding.patchapplication/octet-stream; name=v18-0003-Support-2PC-test-cases-for-test_decoding.patch
v18-0004-Support-2PC-txn-spoolfile.patchapplication/octet-stream; name=v18-0004-Support-2PC-txn-spoolfile.patch
v18-0005-Support-2PC-txn-pgoutput.patchapplication/octet-stream; name=v18-0005-Support-2PC-txn-pgoutput.patch
v18-0002-Support-2PC-txn-backend.patchapplication/octet-stream; name=v18-0002-Support-2PC-txn-backend.patch
v18-0006-Support-2PC-txn-subscriber-tests.patchapplication/octet-stream; name=v18-0006-Support-2PC-txn-subscriber-tests.patch
#91Masahiko Sawada
Masahiko Sawada
sawada.mshk@gmail.com
In reply to: Peter Smith (#90)

On Mon, Nov 9, 2020 at 3:23 PM Peter Smith <smithpb2250@gmail.com> wrote:

4.
+static void
+apply_handle_prepare_txn(LogicalRepPrepareData * prepare_data)
+{
+ Assert(prepare_data->prepare_lsn == remote_final_lsn);
+
+ /* The synchronization worker runs in single transaction. */
+ if (IsTransactionState() && !am_tablesync_worker())
+ {
+ /* End the earlier transaction and start a new one */
+ BeginTransactionBlock();
+ CommitTransactionCommand();
+ StartTransactionCommand();

There is no explanation as to why you want to end the previous
transaction and start a new one. Even if we have to do so, we first
need to call BeginTransactionBlock before CommitTransactionCommand.

Done

---

Also...

pgindent has been run for all patches now.

The latest of all six patches are again reunited with a common v18
version number.

I've looked at the patches and done some tests. Here is my comment and
question I realized during testing and reviewing.

+static void
+DecodePrepare(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
+             xl_xact_parsed_prepare *parsed)
+{
+   XLogRecPtr  origin_lsn = parsed->origin_lsn;
+   TimestampTz commit_time = parsed->origin_timestamp;
 static void
 DecodeAbort(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
-           xl_xact_parsed_abort *parsed, TransactionId xid)
+           xl_xact_parsed_abort *parsed, TransactionId xid, bool prepared)
 {
    int         i;
+   XLogRecPtr  origin_lsn = InvalidXLogRecPtr;
+   TimestampTz commit_time = 0;
+   XLogRecPtr  origin_id = XLogRecGetOrigin(buf->record);
-   for (i = 0; i < parsed->nsubxacts; i++)
+   if (parsed->xinfo & XACT_XINFO_HAS_ORIGIN)
    {
-       ReorderBufferAbort(ctx->reorder, parsed->subxacts[i],
-                          buf->record->EndRecPtr);
+       origin_lsn = parsed->origin_lsn;
+       commit_time = parsed->origin_timestamp;
    }

In the above two changes, parsed->origin_timestamp is used as
commit_time. But in DecodeCommit() we use parsed->xact_time instead.
Therefore, if a transaction didn't have replorigin_session_origin, the
timestamp in the logical decoding output generated by test_decoding
with the 'include-timestamp' option is invalid. Is it intentional?

---
+   if (is_commit)
+       txn->txn_flags |= RBTXN_COMMIT_PREPARED;
+   else
+       txn->txn_flags |= RBTXN_ROLLBACK_PREPARED;
+
+   if (rbtxn_commit_prepared(txn))
+       rb->commit_prepared(rb, txn, commit_lsn);
+   else if (rbtxn_rollback_prepared(txn))
+       rb->rollback_prepared(rb, txn, commit_lsn);

RBTXN_COMMIT_PREPARED and RBTXN_ROLLBACK_PREPARED are used only here
and it seems to me that it's not necessarily necessary.
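
Sawada-san's observation -- the flags are set and then tested
immediately afterwards, so locally they carry no more information than
the is_commit boolean -- can be seen in a small model. This is a
Python sketch with made-up flag values, not the patch's C code, and
whether the flags can actually be dropped depends on their use in the
other patches:

```python
# Toy illustration: setting a flag from a boolean and branching on the
# flag right away is equivalent to branching on the boolean itself.
RBTXN_COMMIT_PREPARED = 0x01      # hypothetical flag values
RBTXN_ROLLBACK_PREPARED = 0x02

def finish_prepared_with_flags(is_commit):
    # Mirrors the quoted code: set a flag, then immediately test it.
    txn_flags = RBTXN_COMMIT_PREPARED if is_commit else RBTXN_ROLLBACK_PREPARED
    if txn_flags & RBTXN_COMMIT_PREPARED:
        return "commit_prepared"
    return "rollback_prepared"

def finish_prepared_direct(is_commit):
    # The simpler equivalent: branch on the boolean directly.
    return "commit_prepared" if is_commit else "rollback_prepared"

for b in (True, False):
    assert finish_prepared_with_flags(b) == finish_prepared_direct(b)
```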

---
+               /*
+                * If this is COMMIT_PREPARED and the output plugin supports
+                * two-phase commits then set the prepared flag to true.
+                */
+               prepared = ((info == XLOG_XACT_COMMIT_PREPARED) &&
ctx->twophase) ? true : false;

We can write instead:

prepared = ((info == XLOG_XACT_COMMIT_PREPARED) && ctx->twophase);

+               /*
+                * If this is ABORT_PREPARED and the output plugin supports
+                * two-phase commits then set the prepared flag to true.
+                */
+               prepared = ((info == XLOG_XACT_ABORT_PREPARED) &&
ctx->twophase) ? true : false;

The same is true here.

---
'git show --check' of v18-0002 reports some warnings.

Regards,

--
Masahiko Sawada
EnterpriseDB: https://www.enterprisedb.com/

#92Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Masahiko Sawada (#91)

On Mon, Nov 9, 2020 at 1:38 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

On Mon, Nov 9, 2020 at 3:23 PM Peter Smith <smithpb2250@gmail.com> wrote:

I've looked at the patches and done some tests. Here is my comment and
question I realized during testing and reviewing.

+static void
+DecodePrepare(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
+             xl_xact_parsed_prepare *parsed)
+{
+   XLogRecPtr  origin_lsn = parsed->origin_lsn;
+   TimestampTz commit_time = parsed->origin_timestamp;
static void
DecodeAbort(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
-           xl_xact_parsed_abort *parsed, TransactionId xid)
+           xl_xact_parsed_abort *parsed, TransactionId xid, bool prepared)
{
int         i;
+   XLogRecPtr  origin_lsn = InvalidXLogRecPtr;
+   TimestampTz commit_time = 0;
+   XLogRecPtr  origin_id = XLogRecGetOrigin(buf->record);
-   for (i = 0; i < parsed->nsubxacts; i++)
+   if (parsed->xinfo & XACT_XINFO_HAS_ORIGIN)
{
-       ReorderBufferAbort(ctx->reorder, parsed->subxacts[i],
-                          buf->record->EndRecPtr);
+       origin_lsn = parsed->origin_lsn;
+       commit_time = parsed->origin_timestamp;
}

In the above two changes, parsed->origin_timestamp is used as
commit_time. But in DecodeCommit() we use parsed->xact_time instead.
Therefore, if a transaction didn't have replorigin_session_origin, the
timestamp in the logical decoding output generated by test_decoding
with the 'include-timestamp' option is invalid. Is it intentional?

I think all three of DecodePrepare/DecodeAbort/DecodeCommit should
have the same handling for this, with the exception that at
DecodePrepare time we can't rely on XACT_XINFO_HAS_ORIGIN; instead we
need to check whether origin_timestamp is non-zero and, if so,
overwrite commit_time with it. Does that make sense to you?
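
The fallback being proposed reduces to a tiny pure function. Sketched
in Python for illustration; the field names mirror the patch's parsed
record, but the function itself is hypothetical:

```python
# Proposed rule at DecodePrepare time: since XACT_XINFO_HAS_ORIGIN
# cannot be relied on there, treat a non-zero origin_timestamp as
# "origin information present" and prefer it; otherwise fall back to
# the transaction's own timestamp (xact_time), as DecodeCommit does.

def prepare_commit_time(xact_time, origin_timestamp):
    return origin_timestamp if origin_timestamp != 0 else xact_time

assert prepare_commit_time(1000, 0) == 1000      # no origin: use xact_time
assert prepare_commit_time(1000, 2000) == 2000   # origin set: overwrite
```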

---
+   if (is_commit)
+       txn->txn_flags |= RBTXN_COMMIT_PREPARED;
+   else
+       txn->txn_flags |= RBTXN_ROLLBACK_PREPARED;
+
+   if (rbtxn_commit_prepared(txn))
+       rb->commit_prepared(rb, txn, commit_lsn);
+   else if (rbtxn_rollback_prepared(txn))
+       rb->rollback_prepared(rb, txn, commit_lsn);

RBTXN_COMMIT_PREPARED and RBTXN_ROLLBACK_PREPARED are used only here
and it seems to me that it's not necessarily necessary.

+1.

---
+               /*
+                * If this is COMMIT_PREPARED and the output plugin supports
+                * two-phase commits then set the prepared flag to true.
+                */
+               prepared = ((info == XLOG_XACT_COMMIT_PREPARED) &&
ctx->twophase) ? true : false;

We can write instead:

prepared = ((info == XLOG_XACT_COMMIT_PREPARED) && ctx->twophase);

+               /*
+                * If this is ABORT_PREPARED and the output plugin supports
+                * two-phase commits then set the prepared flag to true.
+                */
+               prepared = ((info == XLOG_XACT_ABORT_PREPARED) &&
ctx->twophase) ? true : false;

The same is true here.

+1.

---
'git show --check' of v18-0002 reports some warnings.

I have also noticed this. Actually, I have already started making some
changes to these patches apart from what you have reported so I'll
take care of these things as well.

--
With Regards,
Amit Kapila.

#93Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Amit Kapila (#92)
2 attachment(s)

Hi.

I have re-generated new coverage reports using the current (v18) source. PSA

Note: This is the coverage reported after running only the following tests:

1. make check

2. cd contrib/test_decoding; make check

3. cd src/test/subscription; make check

---

Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

v18_coverage_test_decoding.tar.gzapplication/gzip; name=v18_coverage_test_decoding.tar.gz
v18_coverage_replication.tar.gzapplication/gzip; name=v18_coverage_replication.tar.gz
#94Masahiko Sawada
Masahiko Sawada
sawada.mshk@gmail.com
In reply to: Amit Kapila (#92)

On Mon, Nov 9, 2020 at 8:21 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Mon, Nov 9, 2020 at 1:38 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

On Mon, Nov 9, 2020 at 3:23 PM Peter Smith <smithpb2250@gmail.com> wrote:

I've looked at the patches and done some tests. Here is my comment and
question I realized during testing and reviewing.

+static void
+DecodePrepare(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
+             xl_xact_parsed_prepare *parsed)
+{
+   XLogRecPtr  origin_lsn = parsed->origin_lsn;
+   TimestampTz commit_time = parsed->origin_timestamp;
static void
DecodeAbort(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
-           xl_xact_parsed_abort *parsed, TransactionId xid)
+           xl_xact_parsed_abort *parsed, TransactionId xid, bool prepared)
{
int         i;
+   XLogRecPtr  origin_lsn = InvalidXLogRecPtr;
+   TimestampTz commit_time = 0;
+   XLogRecPtr  origin_id = XLogRecGetOrigin(buf->record);
-   for (i = 0; i < parsed->nsubxacts; i++)
+   if (parsed->xinfo & XACT_XINFO_HAS_ORIGIN)
{
-       ReorderBufferAbort(ctx->reorder, parsed->subxacts[i],
-                          buf->record->EndRecPtr);
+       origin_lsn = parsed->origin_lsn;
+       commit_time = parsed->origin_timestamp;
}

In the above two changes, parsed->origin_timestamp is used as
commit_time. But in DecodeCommit() we use parsed->xact_time instead.
Therefore, if a transaction didn't have replorigin_session_origin, the
timestamp in the logical decoding output generated by test_decoding
with the 'include-timestamp' option is invalid. Is it intentional?

I think all three of DecodePrepare/DecodeAbort/DecodeCommit should
have the same handling for this, with the exception that at
DecodePrepare time we can't rely on XACT_XINFO_HAS_ORIGIN; instead we
need to check whether origin_timestamp is non-zero and, if so,
overwrite commit_time with it. Does that make sense to you?

Yeah, that makes sense to me.

'git show --check' of v18-0002 reports some warnings.

I have also noticed this. Actually, I have already started making some
changes to these patches apart from what you have reported so I'll
take care of these things as well.

Ok.

Regards,

--
Masahiko Sawada
EnterpriseDB: https://www.enterprisedb.com/

#95Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Peter Smith (#93)
1 attachment(s)

FYI - I have cross-checked all the v18 patch code against the v18 code
coverage [1] resulting from running the tests.

The purpose of this study was to identify where there may be any gaps
in the testing of this patch - e.g is there some v18 code not
currently getting executed by the tests?

I found almost all of the normal (not error) code paths are getting executed.

For details please see attached the study results. (MS Excel file)

===

[1]: /messages/by-id/CAHut+Pu4BpUr0GfCLqJjXc=DcaKSvjDarSN89-4W2nxBeae9hQ@mail.gmail.com

Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

v18-patch-test-coverage-20201110.xlsxapplication/vnd.openxmlformats-officedocument.spreadsheetml.sheet; name=v18-patch-test-coverage-20201110.xlsx
#96Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Peter Smith (#95)

I was doing some testing, and I found two issues. The first one seems
to be behaviour that might be acceptable; the second one, not so much.
I was using test_decoding; I am not sure how this might behave with
the pgoutput plugin.

Test 1:
A transaction that is rolled back immediately after the prepare.

SET synchronous_commit = on;
SELECT 'init' FROM
pg_create_logical_replication_slot('regression_slot',
'test_decoding');
CREATE TABLE stream_test(data text);
-- consume DDL
SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL,
NULL, 'include-xids', '0', 'skip-empty-xacts', '1');

BEGIN;
INSERT INTO stream_test SELECT repeat('a', 10) || g.i FROM
generate_series(1, 20) g(i);
PREPARE TRANSACTION 'test1';
ROLLBACK PREPARED 'test1';
SELECT data FROM pg_logical_slot_get_changes('regression_slot',
NULL,NULL, 'two-phase-commit', '1', 'include-xids', '0',
'skip-empty-xacts', '1', 'stream-changes', '1');
==================

Here, what is seen is that while the transaction was not decoded at
all, since it was rolled back before it could get decoded, the ROLLBACK
PREPARED is actually decoded.
The result is that the standby could get a spurious ROLLBACK
PREPARED. The current code in worker.c does handle this silently, so
this might not be an issue.

Test 2:
A transaction that is partially streamed, then prepared.
BEGIN;
INSERT INTO stream_test SELECT repeat('a', 10) || g.i FROM
generate_series(1,800) g(i);
SELECT data FROM pg_logical_slot_get_changes('regression_slot',
NULL,NULL, 'two-phase-commit', '1', 'include-xids', '0',
'skip-empty-xacts', '1', 'stream-changes', '1');
SELECT data FROM pg_logical_slot_get_changes('regression_slot',
NULL,NULL, 'two-phase-commit', '1', 'include-xids', '0',
'skip-empty-xacts', '1', 'stream-changes', '1');
PREPARE TRANSACTION 'test1';
SELECT data FROM pg_logical_slot_get_changes('regression_slot',
NULL,NULL, 'two-phase-commit', '1', 'include-xids', '0',
'skip-empty-xacts', '1', 'stream-changes', '1');
ROLLBACK PREPARED 'test1';
==========================

Here, what is seen is that the transaction is streamed twice, first
when it crosses the memory threshold and is streamed (usually only in
the 2nd pg_logical_slot_get_changes call)
and then the same transaction is streamed again after the prepare.
This cannot be right, as it would result in duplication of data on the
standby.

I will be debugging the second issue and try to arrive at a fix.

regards,
Ajin Cherian
Fujitsu Australia.


On Tue, Nov 10, 2020 at 4:47 PM Peter Smith <smithpb2250@gmail.com> wrote:

FYI - I have cross-checked all the v18 patch code against the v18 code
coverage [1] resulting from running the tests.

The purpose of this study was to identify where there may be any gaps
in the testing of this patch - e.g is there some v18 code not
currently getting executed by the tests?

I found almost all of the normal (not error) code paths are getting executed.

For details please see attached the study results. (MS Excel file)

===

[1] /messages/by-id/CAHut+Pu4BpUr0GfCLqJjXc=DcaKSvjDarSN89-4W2nxBeae9hQ@mail.gmail.com

Kind Regards,
Peter Smith.
Fujitsu Australia

#97Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Masahiko Sawada (#91)
6 attachment(s)

On Mon, Nov 9, 2020 at 1:38 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

I've looked at the patches and done some tests. Here is my comment and
question I realized during testing and reviewing.

+static void
+DecodePrepare(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
+             xl_xact_parsed_prepare *parsed)
+{
+   XLogRecPtr  origin_lsn = parsed->origin_lsn;
+   TimestampTz commit_time = parsed->origin_timestamp;
static void
DecodeAbort(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
-           xl_xact_parsed_abort *parsed, TransactionId xid)
+           xl_xact_parsed_abort *parsed, TransactionId xid, bool prepared)
{
int         i;
+   XLogRecPtr  origin_lsn = InvalidXLogRecPtr;
+   TimestampTz commit_time = 0;
+   XLogRecPtr  origin_id = XLogRecGetOrigin(buf->record);
-   for (i = 0; i < parsed->nsubxacts; i++)
+   if (parsed->xinfo & XACT_XINFO_HAS_ORIGIN)
{
-       ReorderBufferAbort(ctx->reorder, parsed->subxacts[i],
-                          buf->record->EndRecPtr);
+       origin_lsn = parsed->origin_lsn;
+       commit_time = parsed->origin_timestamp;
}

In the above two changes, parsed->origin_timestamp is used as
commit_time. But in DecodeCommit() we use parsed->xact_time instead.
Therefore, if a transaction didn't have replorigin_session_origin, the
timestamp in the logical decoding output generated by test_decoding
with the 'include-timestamp' option is invalid. Is it intentional?

Changed as discussed.

---
+   if (is_commit)
+       txn->txn_flags |= RBTXN_COMMIT_PREPARED;
+   else
+       txn->txn_flags |= RBTXN_ROLLBACK_PREPARED;
+
+   if (rbtxn_commit_prepared(txn))
+       rb->commit_prepared(rb, txn, commit_lsn);
+   else if (rbtxn_rollback_prepared(txn))
+       rb->rollback_prepared(rb, txn, commit_lsn);

RBTXN_COMMIT_PREPARED and RBTXN_ROLLBACK_PREPARED are used only here
and it seems to me that it's not necessarily necessary.

These are used in v18-0005-Support-2PC-txn-pgoutput. So, I don't think
we can directly remove them.

---
+               /*
+                * If this is COMMIT_PREPARED and the output plugin supports
+                * two-phase commits then set the prepared flag to true.
+                */
+               prepared = ((info == XLOG_XACT_COMMIT_PREPARED) &&
ctx->twophase) ? true : false;

We can write instead:

prepared = ((info == XLOG_XACT_COMMIT_PREPARED) && ctx->twophase);

+               /*
+                * If this is ABORT_PREPARED and the output plugin supports
+                * two-phase commits then set the prepared flag to true.
+                */
+               prepared = ((info == XLOG_XACT_ABORT_PREPARED) &&
ctx->twophase) ? true : false;

The same is true here.

I have changed this code so that we can determine if the transaction
is already decoded at prepare time before calling
DecodeCommit/DecodeAbort, so these checks are gone now and I think
that makes the code look a bit cleaner.

Apart from this, I have changed v19-0001-Support-2PC-txn-base such
that it displays xid and gid consistently in all APIs. In
v19-0002-Support-2PC-txn-backend, apart from fixing the above
comments, I have rearranged the code in DecodeCommit/Abort/Prepare so
that it does only the required things (e.g., DecodeCommit was still
processing subtxns even when it only had to perform FinishPrepared,
and the stats were not updated properly, which I have fixed) and
added/edited the comments. Apart from 0001 and 0002, I have not
changed anything in the remaining patches.

--
With Regards,
Amit Kapila.

Attachments:

v19-0001-Support-2PC-txn-base.patchapplication/octet-stream; name=v19-0001-Support-2PC-txn-base.patch
v19-0002-Support-2PC-txn-backend.patchapplication/octet-stream; name=v19-0002-Support-2PC-txn-backend.patch
v19-0003-Support-2PC-test-cases-for-test_decoding.patchapplication/octet-stream; name=v19-0003-Support-2PC-test-cases-for-test_decoding.patch
v19-0004-Support-2PC-txn-spoolfile.patchapplication/octet-stream; name=v19-0004-Support-2PC-txn-spoolfile.patch
v19-0005-Support-2PC-txn-pgoutput.patchapplication/octet-stream; name=v19-0005-Support-2PC-txn-pgoutput.patch
v19-0006-Support-2PC-txn-subscriber-tests.patchapplication/octet-stream; name=v19-0006-Support-2PC-txn-subscriber-tests.patch
#98Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Ajin Cherian (#96)

I did some further tests on the problem I saw, and I see that it does
not have anything to do with this patch. I picked code from the top of
HEAD.
If I have enough changes in a transaction to initiate streaming, then
it will also stream the same changes after a commit.

BEGIN;
INSERT INTO stream_test SELECT repeat('a', 10) || g.i FROM
generate_series(1,800) g(i);
SELECT data FROM pg_logical_slot_get_changes('regression_slot',
NULL,NULL, 'two-phase-commit', '1', 'include-xids', '0',
'skip-empty-xacts', '1', 'stream-changes', '1');
** see streamed output here **
END;
SELECT data FROM pg_logical_slot_get_changes('regression_slot',
NULL,NULL, 'two-phase-commit', '1', 'include-xids', '0',
'skip-empty-xacts', '1', 'stream-changes', '1');
** see the same streamed output here **

I think this is because, since the transaction has not been committed,
SnapBuildCommitTxn is not called, which is what moves
"builder->start_decoding_at"; as a result, later calls to
pg_logical_slot_get_changes will start from the previous LSN. I did a
quick test in pgoutput using pub/sub and I don't see duplication of
data there, but I haven't verified exactly what happens.

regards,
Ajin Cherian
Fujitsu Australia

#99Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Ajin Cherian (#98)
1 attachment(s)

The subscriber tests are updated to include test cases for "cascading"
pub/sub scenarios.

i.e. NODE_A publisher => subscriber NODE_B publisher => subscriber NODE_C

PSA only the modified v20-0006 patch (the other 5 patches remain unchanged)

Kind Regards,
Peter Smith.
Fujitsu Australia.

Attachments:

v20-0006-Support-2PC-txn-subscriber-tests.patchapplication/octet-stream; name=v20-0006-Support-2PC-txn-subscriber-tests.patch
#100Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Amit Kapila (#97)

On Wed, Nov 11, 2020 at 12:35 AM Amit Kapila <amit.kapila16@gmail.com> wrote:
I have rearranged the code in DecodeCommit/Abort/Prepare so

that it does only the required things (like in DecodeCommit was still
processing subtxns even when it has to just perform FinishPrepared,
also the stats were not updated properly which I have fixed.) and
added/edited the comments. Apart from 0001 and 0002, I have not
changed anything in the remaining patches.

One small comment on the patch:

- DecodeCommit(ctx, buf, &parsed, xid);
+ /*
+ * If we have already decoded this transaction data then
+ * DecodeCommit doesn't need to decode it again. This is
+ * possible iff output plugin supports two-phase commits and
+ * doesn't skip the transaction at prepare time.
+ */
+ if (info == XLOG_XACT_COMMIT_PREPARED && ctx->twophase)
+ {
+ already_decoded = !(ctx->callbacks.filter_prepare_cb &&
+ ReorderBufferPrepareNeedSkip(ctx->reorder, xid, parsed.twophase_gid));
+ }
+

Just a small nitpick, but the way already_decoded is assigned here is
a bit misleading. It appears that the callbacks determine whether the
transaction is already decoded, when in reality the callbacks only
decide whether the transaction should skip two-phase commit. I think
it's better to either move it into the if condition or, if that is too
long, have one more variable, skip_twophase.

if (info == XLOG_XACT_COMMIT_PREPARED && ctx->twophase &&
!(ctx->callbacks.filter_prepare_cb &&
ReorderBufferPrepareNeedSkip(ctx->reorder, xid, parsed.twophase_gid)))
already_decoded = true;

OR
bool skip_twophase = false;
skip_twophase = !(ctx->callbacks.filter_prepare_cb &&
ReorderBufferPrepareNeedSkip(ctx->reorder, xid, parsed.twophase_gid));
if (info == XLOG_XACT_COMMIT_PREPARED && ctx->twophase && skip_twophase)
already_decoded = true;

regards,
Ajin Cherian
Fujitsu Australia

#101Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Ajin Cherian (#100)

On Thu, Nov 12, 2020 at 2:28 PM Ajin Cherian <itsajin@gmail.com> wrote:

On Wed, Nov 11, 2020 at 12:35 AM Amit Kapila <amit.kapila16@gmail.com> wrote:
I have rearranged the code in DecodeCommit/Abort/Prepare so

that it does only the required things (like in DecodeCommit was still
processing subtxns even when it has to just perform FinishPrepared,
also the stats were not updated properly which I have fixed.) and
added/edited the comments. Apart from 0001 and 0002, I have not
changed anything in the remaining patches.

One small comment on the patch:

- DecodeCommit(ctx, buf, &parsed, xid);
+ /*
+ * If we have already decoded this transaction data then
+ * DecodeCommit doesn't need to decode it again. This is
+ * possible iff output plugin supports two-phase commits and
+ * doesn't skip the transaction at prepare time.
+ */
+ if (info == XLOG_XACT_COMMIT_PREPARED && ctx->twophase)
+ {
+ already_decoded = !(ctx->callbacks.filter_prepare_cb &&
+ ReorderBufferPrepareNeedSkip(ctx->reorder, xid, parsed.twophase_gid));
+ }
+

Just a small nitpick but the way already_decoded is assigned here is a
bit misleading. It appears that the callbacks determine if the
transaction is already decoded when in
reality the callbacks only decide if the transaction should skip two
phase commits. I think it's better to either move it to the if
condition or if that is too long then have one more variable
skip_twophase.

if (info == XLOG_XACT_COMMIT_PREPARED && ctx->twophase &&
!(ctx->callbacks.filter_prepare_cb &&
ReorderBufferPrepareNeedSkip(ctx->reorder, xid, parsed.twophase_gid)))
already_decoded = true;

OR
bool skip_twophase = false;
skip_twophase = !(ctx->callbacks.filter_prepare_cb &&
ReorderBufferPrepareNeedSkip(ctx->reorder, xid, parsed.twophase_gid));
if (info == XLOG_XACT_COMMIT_PREPARED && ctx->twophase && skip_twophase)
already_decoded = true;

Hmm, introducing an additional boolean variable for this doesn't seem
like a good idea, nor does the other alternative you suggested. How
about changing the comment to make it clear? Say: "If the output
plugin supports two-phase commits and doesn't skip the transaction at
prepare time, then we don't need to decode the transaction data at
commit prepared time as it would have already been decoded at prepare
time."?

--
With Regards,
Amit Kapila.

#102Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Ajin Cherian (#96)

On Tue, Nov 10, 2020 at 4:19 PM Ajin Cherian <itsajin@gmail.com> wrote:

I was doing some testing, and I found two issues. The first one seems
to be behaviour that might be acceptable; the second one, not so much.
I was using test_decoding; I am not sure how this might behave with
the pgoutput plugin.

Test 1:
A transaction that is rolled back immediately after the prepare.

SET synchronous_commit = on;
SELECT 'init' FROM
pg_create_logical_replication_slot('regression_slot',
'test_decoding');
CREATE TABLE stream_test(data text);
-- consume DDL
SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL,
NULL, 'include-xids', '0', 'skip-empty-xacts', '1');

BEGIN;
INSERT INTO stream_test SELECT repeat('a', 10) || g.i FROM
generate_series(1, 20) g(i);
PREPARE TRANSACTION 'test1';
ROLLBACK PREPARED 'test1';
SELECT data FROM pg_logical_slot_get_changes('regression_slot',
NULL,NULL, 'two-phase-commit', '1', 'include-xids', '0',
'skip-empty-xacts', '1', 'stream-changes', '1');
==================

Here, what is seen is that while the transaction was not decoded at
all, since it was rolled back before it could get decoded, the ROLLBACK
PREPARED is actually decoded.
The result is that the standby could get a spurious ROLLBACK
PREPARED. The current code in worker.c does handle this silently, so
this might not be an issue.

Yeah, this seems okay because it is quite possible that such a
rollback would have been encountered after processing a few records,
in which case sending the rollback is required. This can happen when a
rollback is issued concurrently while we are decoding the prepare. If
the output plugin wants, it can detect that the transaction has not
written any data and ignore the rollback; we already do something
similar in test_decoding. So, I think this should be fine.
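
The option described here -- an output plugin noticing that a
prepared transaction produced no decoded data and suppressing the
spurious rollback -- might look like the following toy callback. This
is a Python sketch with an invented transaction structure, not the
real output-plugin API:

```python
# Toy rollback_prepared callback: only emit ROLLBACK PREPARED if the
# transaction actually produced decoded output, mirroring what an
# output plugin can do for a concurrently aborted prepared txn.

def rollback_prepared_cb(txn, out):
    if txn["nchanges_emitted"] == 0:
        return                      # nothing was sent; skip the rollback
    out.append("ROLLBACK PREPARED '%s'" % txn["gid"])

out = []
# Txn aborted before any change was decoded: rollback is suppressed.
rollback_prepared_cb({"gid": "test1", "nchanges_emitted": 0}, out)
assert out == []

# Txn whose changes were already sent downstream: rollback is emitted.
rollback_prepared_cb({"gid": "test1", "nchanges_emitted": 5}, out)
assert out == ["ROLLBACK PREPARED 'test1'"]
```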

--
With Regards,
Amit Kapila.

#103Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Amit Kapila (#101)

On Fri, Nov 13, 2020 at 9:44 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Thu, Nov 12, 2020 at 2:28 PM Ajin Cherian <itsajin@gmail.com> wrote:

Hmm, introducing an additional boolean variable for this doesn't seem
like a good idea, nor does the other alternative you suggested. How
about changing the comment to make it clear? Say: "If the output
plugin supports two-phase commits and doesn't skip the transaction at
prepare time, then we don't need to decode the transaction data at
commit prepared time as it would have already been decoded at prepare
time."?

Yes, that works for me.

regards,
Ajin Cherian
Fujitsu Australia

#104Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Ajin Cherian (#98)

On Wed, Nov 11, 2020 at 4:30 PM Ajin Cherian <itsajin@gmail.com> wrote:

Did some further tests on the problem I saw and I see that it does not
have anything to do with this patch. I picked code from top of head.
If I have enough changes in a transaction to initiate streaming, then
it will also stream the same changes after a commit.

BEGIN;
INSERT INTO stream_test SELECT repeat('a', 10) || g.i FROM
generate_series(1,800) g(i);
SELECT data FROM pg_logical_slot_get_changes('regression_slot',
NULL,NULL, 'two-phase-commit', '1', 'include-xids', '0',
'skip-empty-xacts', '1', 'stream-changes', '1');
** see streamed output here **
END;
SELECT data FROM pg_logical_slot_get_changes('regression_slot',
NULL,NULL, 'two-phase-commit', '1', 'include-xids', '0',
'skip-empty-xacts', '1', 'stream-changes', '1');
** see the same streamed output here **

I think this is because, since the transaction has not been committed,
SnapBuildCommitTxn is not called, which is what moves
"builder->start_decoding_at"; as a result,
later calls to pg_logical_slot_get_changes will start from the
previous LSN.

No, we always move start_decoding_at after streaming changes. It is
moved because we advance the confirmed_flush location after streaming
all the changes (via LogicalConfirmReceivedLocation()), which is used
to set 'start_decoding_at' when we create the decoding context
(CreateDecodingContext) next time. However, we don't advance
'restart_lsn', due to which decoding starts from the same point and
accumulates all the changes for the transaction each time. Now, after
Commit we get an extra record which is ahead of 'start_decoding_at',
and when we try to decode it, it will get all the changes of the
transaction. We might update the documentation for
pg_logical_slot_get_changes() to indicate this, but I don't think this
is a problem.
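
The interplay of the two slot positions can be modeled with a toy
slot. This is a deliberately simplified Python sketch (real slots
track actual LSNs and a reorder buffer); it only shows why the same
changes come out again once the commit record is decoded:

```python
# Toy model: streaming advances confirmed_flush (start_decoding_at),
# but restart_lsn stays put until commit, so each get_changes call
# re-reads and re-accumulates the transaction from the same point.

class ToySlot:
    def __init__(self):
        self.restart_lsn = 0        # where reading WAL restarts
        self.confirmed_flush = 0    # start_decoding_at for output

def get_changes(slot, wal):
    # Decoding always re-reads from restart_lsn, re-accumulating the
    # transaction's changes in the (toy) reorder buffer.
    accumulated = [rec for lsn, rec in wal if lsn >= slot.restart_lsn]
    if any(rec == "COMMIT" for rec in accumulated):
        # The commit record is past start_decoding_at, so the whole
        # accumulated transaction is emitted -- including changes that
        # were already streamed out earlier.
        emitted = accumulated
    else:
        emitted = [rec for lsn, rec in wal if lsn >= slot.confirmed_flush]
    if wal:
        slot.confirmed_flush = wal[-1][0] + 1   # advance start_decoding_at
    return emitted

slot = ToySlot()
wal = [(1, "INSERT a"), (2, "INSERT b")]
assert get_changes(slot, wal) == ["INSERT a", "INSERT b"]  # streamed once
assert slot.restart_lsn == 0                               # not advanced

wal.append((3, "COMMIT"))
# The commit record triggers a replay of the whole transaction.
assert get_changes(slot, wal) == ["INSERT a", "INSERT b", "COMMIT"]
```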

I did do a quick test in pgoutput using pub/sub and I
dont see duplication of data there but I haven't
verified exactly what happens.

Yeah, because there we always move the WAL locations ahead, unless the
subscriber/publisher is restarted, in which case it should start from
the required location. But still, we can try to see if there is any
bug.

--
With Regards,
Amit Kapila.

#105Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Amit Kapila (#104)
6 attachment(s)

Updated with a new test case
(contrib/test_decoding/t/002_twophase-streaming.pl) that tests
concurrent aborts during a streaming prepare. I had to make a few
changes to the test_decoding stream_start callbacks to handle
"check-xid-aborted" the same way it was handled in the non-stream
callbacks. Merged Peter's v20-0006 as well.

regards,
Ajin Cherian
Fujitsu Australia

Attachments:

v20-0001-Support-2PC-txn-base.patch (application/octet-stream)
v20-0004-Support-2PC-txn-spoolfile.patch (application/octet-stream)
v20-0005-Support-2PC-txn-pgoutput.patch (application/octet-stream)
v20-0003-Support-2PC-test-cases-for-test_decoding.patch (application/octet-stream)
v20-0002-Support-2PC-txn-backend.patch (application/octet-stream)
v20-0006-Support-2PC-txn-subscriber-tests.patch (application/octet-stream)
#106Masahiko Sawada
Masahiko Sawada
sawada.mshk@gmail.com
In reply to: Ajin Cherian (#105)

On Mon, Nov 16, 2020 at 4:25 PM Ajin Cherian <itsajin@gmail.com> wrote:

Updated with a new test case
(contrib/test_decoding/t/002_twophase-streaming.pl) that tests
concurrent aborts during streaming prepare. Had to make a few changes
to the test_decoding stream_start callbacks to handle
"check-xid-aborted"
the same way it was handled in the non stream callbacks. Merged
Peter's v20-0006 as well.

Thank you for updating the patch.

I have a question about the timestamp of PREPARE on a subscriber node,
although this may have already been discussed.

With the current patch, the timestamps of PREPARE are different
between the publisher and the subscriber, but the timestamps of their
commits are the same. For example,

-- There is 1 prepared transaction on a publisher node.
=# select * from pg_prepared_xacts;

 transaction | gid |           prepared            |  owner   | database
-------------+-----+-------------------------------+----------+----------
         510 | h1  | 2020-11-16 16:57:13.438633+09 | masahiko | postgres
(1 row)

-- This prepared transaction is replicated to a subscriber.
=# select * from pg_prepared_xacts;

 transaction | gid |           prepared            |  owner   | database
-------------+-----+-------------------------------+----------+----------
         514 | h1  | 2020-11-16 16:57:13.440593+09 | masahiko | postgres
(1 row)

These timestamps are different. Let's commit the prepared transaction
'h1' on the publisher and check the commit timestamps on both nodes.

-- On the publisher node.
=# select pg_xact_commit_timestamp('510'::xid);

pg_xact_commit_timestamp
-------------------------------
2020-11-16 16:57:13.474275+09
(1 row)

-- Commit prepared is also replicated to the subscriber node.
=# select pg_xact_commit_timestamp('514'::xid);

pg_xact_commit_timestamp
-------------------------------
2020-11-16 16:57:13.474275+09
(1 row)

The commit timestamps are the same. At PREPARE we use the local
timestamp of when PREPARE is executed as the 'prepared' time, while at
COMMIT PREPARED we use the origin's commit timestamp as the commit
timestamp if the commit WAL record has one.

This behaviour made me think of the possibility that, if the clock of
the publisher is behind, then on the subscriber node the timestamp of
COMMIT PREPARED (i.e., the return value of pg_xact_commit_timestamp())
could be smaller than the timestamp of PREPARE (i.e., 'prepared' in
pg_prepared_xacts). I think it would not be a critical issue, but it
might be worth discussing the behaviour.
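The clock-skew scenario can be made concrete with a minimal C sketch
(the helper names are hypothetical, and plain microsecond counts stand
in for TimestampTz): 'prepared' is taken from the subscriber's local
clock at apply time, while the later commit timestamp is taken from the
origin, so a publisher clock running behind yields a commit timestamp
earlier than the prepare timestamp.

```c
#include <assert.h>
#include <stdint.h>

typedef int64_t ToyTimestamp;	/* microseconds, standing in for TimestampTz */

/* PREPARE on the subscriber records the subscriber's local clock. */
static ToyTimestamp
toy_prepared_at(ToyTimestamp subscriber_clock_now)
{
	return subscriber_clock_now;
}

/*
 * COMMIT PREPARED on the subscriber takes the origin's commit time from
 * the replicated WAL record (replorigin_session_origin_timestamp).
 */
static ToyTimestamp
toy_commit_ts(ToyTimestamp origin_commit_time)
{
	return origin_commit_time;
}
```

For example, with the publisher's clock 500us behind the subscriber's,
a PREPARE applied at subscriber time 10000 and a COMMIT PREPARED issued
100us later on the publisher's own clock yields a commit timestamp of
9600, earlier than the 'prepared' time of 10000.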

Regards,

--
Masahiko Sawada
EnterpriseDB: https://www.enterprisedb.com/

#107Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Masahiko Sawada (#106)

On Mon, Nov 16, 2020 at 3:20 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

On Mon, Nov 16, 2020 at 4:25 PM Ajin Cherian <itsajin@gmail.com> wrote:

Updated with a new test case
(contrib/test_decoding/t/002_twophase-streaming.pl) that tests
concurrent aborts during streaming prepare. Had to make a few changes
to the test_decoding stream_start callbacks to handle
"check-xid-aborted"
the same way it was handled in the non stream callbacks. Merged
Peter's v20-0006 as well.

Thank you for updating the patch.

I have a question about the timestamp of PREPARE on a subscriber node,
although this may have already been discussed.

With the current patch, the timestamps of PREPARE are different
between the publisher and the subscriber but the timestamp of their
commits are the same. For example,

-- There is 1 prepared transaction on a publisher node.
=# select * from pg_prepared_xacts;

 transaction | gid |           prepared            |  owner   | database
-------------+-----+-------------------------------+----------+----------
         510 | h1  | 2020-11-16 16:57:13.438633+09 | masahiko | postgres
(1 row)

-- This prepared transaction is replicated to a subscriber.
=# select * from pg_prepared_xacts;

 transaction | gid |           prepared            |  owner   | database
-------------+-----+-------------------------------+----------+----------
         514 | h1  | 2020-11-16 16:57:13.440593+09 | masahiko | postgres
(1 row)

These timestamps are different. Let's commit the prepared transaction
'h1' on the publisher and check the commit timestamps on both nodes.

-- On the publisher node.
=# select pg_xact_commit_timestamp('510'::xid);

pg_xact_commit_timestamp
-------------------------------
2020-11-16 16:57:13.474275+09
(1 row)

-- Commit prepared is also replicated to the subscriber node.
=# select pg_xact_commit_timestamp('514'::xid);

pg_xact_commit_timestamp
-------------------------------
2020-11-16 16:57:13.474275+09
(1 row)

The commit timestamps are the same. At PREPARE we use the local
timestamp of when PREPARE is executed as the 'prepared' time, while at
COMMIT PREPARED we use the origin's commit timestamp as the commit
timestamp if the commit WAL record has one.

Doesn't this happen only if you set replication origins? Because
otherwise both PrepareTransaction() and
RecordTransactionCommitPrepared() use the current timestamp.

--
With Regards,
Amit Kapila.

#108Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Amit Kapila (#107)

On Tue, Nov 17, 2020 at 10:14 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

Doesn't this happen only if you set replication origins? Because
otherwise both PrepareTransaction() and
RecordTransactionCommitPrepared() used the current timestamp.

I was also checking this: even if you set replication origins, the
prepared transaction will reflect the local prepare time in
pg_prepared_xacts. pg_prepared_xacts fetches this information
from the GlobalTransaction data, which does not store the
origin_timestamp; it only stores prepared_at, which is the local
timestamp. The WAL record does have the origin_timestamp, but that is
not updated in the GlobalTransaction data structure:
typedef struct xl_xact_prepare
{
	uint32		magic;			/* format identifier */
	uint32		total_len;		/* actual file length */
	TransactionId xid;			/* original transaction XID */
	Oid			database;		/* OID of database it was in */
	TimestampTz prepared_at;	/* time of preparation */
								/* <=== local time; updated in GlobalTransaction */
	Oid			owner;			/* user running the transaction */
	int32		nsubxacts;		/* number of following subxact XIDs */
	int32		ncommitrels;	/* number of delete-on-commit rels */
	int32		nabortrels;		/* number of delete-on-abort rels */
	int32		ninvalmsgs;		/* number of cache invalidation messages */
	bool		initfileinval;	/* does relcache init file need invalidation? */
	uint16		gidlen;			/* length of the GID - GID follows the header */
	XLogRecPtr	origin_lsn;		/* lsn of this record at origin node */
	TimestampTz origin_timestamp;	/* time of prepare at origin node */
								/* <=== origin time; not updated in GlobalTransaction */
} xl_xact_prepare;
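In other words (a greatly simplified, hypothetical model of the two
structures involved, not the real definitions): when the prepare record
is replayed, only prepared_at survives into the in-memory state that
backs pg_prepared_xacts, and origin_timestamp is dropped.

```c
#include <assert.h>
#include <stdint.h>

/* Toy stand-ins for xl_xact_prepare and GlobalTransaction. */
typedef struct
{
	int64_t		prepared_at;		/* local prepare time */
	int64_t		origin_timestamp;	/* prepare time at origin node */
} ToyPrepareRecord;

typedef struct
{
	int64_t		prepared_at;		/* the only timestamp kept */
} ToyGlobalTransaction;

/* Sketch of filling the in-memory state from the WAL record. */
static ToyGlobalTransaction
toy_mark_prepared(const ToyPrepareRecord *rec)
{
	ToyGlobalTransaction gxact;

	gxact.prepared_at = rec->prepared_at;	/* origin_timestamp not stored */
	return gxact;
}
```

So even when the record carries the origin's time, the view can only
ever show the local one.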

regards,
Ajin Cherian
Fujitsu Australia

#109Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Ajin Cherian (#108)

On Tue, Nov 17, 2020 at 5:02 PM Ajin Cherian <itsajin@gmail.com> wrote:

On Tue, Nov 17, 2020 at 10:14 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

Doesn't this happen only if you set replication origins? Because
otherwise both PrepareTransaction() and
RecordTransactionCommitPrepared() used the current timestamp.

I was also checking this, even if you set replicating origins, the
preparedTransaction will reflect the local prepare time in
pg_prepared_xacts. pg_prepared_xacts fetches this information
from GlobalTransaction data which does not store the origin_timestamp;
it only stores the prepared_at which is the local timestamp.

Sure, but my question was: does this difference in behavior happen
without replication origins in any way? The reason is that if it
occurs only with replication origins, I don't think we need to bother
about it, because that feature is not properly implemented and not
used as-is. See the discussion [1] [2]. OTOH, if this behavior can
happen without replication origins, then we might want to consider
changing it.

[1]: /messages/by-id/064fab0c-915e-aede-c02e-bd4ec1f59732@2ndquadrant.com
[2]: /messages/by-id/188d15be-8699-c045-486a-f0439c9c2b7d@2ndquadrant.com

--
With Regards,
Amit Kapila.

#110Masahiko Sawada
Masahiko Sawada
sawada.mshk@gmail.com
In reply to: Amit Kapila (#109)

On Tue, Nov 17, 2020 at 9:05 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Nov 17, 2020 at 5:02 PM Ajin Cherian <itsajin@gmail.com> wrote:

On Tue, Nov 17, 2020 at 10:14 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

Doesn't this happen only if you set replication origins? Because
otherwise both PrepareTransaction() and
RecordTransactionCommitPrepared() used the current timestamp.

I was also checking this, even if you set replicating origins, the
preparedTransaction will reflect the local prepare time in
pg_prepared_xacts. pg_prepared_xacts fetches this information
from GlobalTransaction data which does not store the origin_timestamp;
it only stores the prepared_at which is the local timestamp.

Sure, but my question was does this difference in behavior happens
without replication origins in any way? The reason is that if it
occurs only with replication origins, I don't think we need to bother
about the same because that feature is not properly implemented and
not used as-is. See the discussion [1] [2]. OTOH, if this behavior can
happen without replication origins then we might want to consider
changing it.

Logical replication workers always have replication origins, right? Is
that what you meant by 'with replication origins'?

IIUC, logical replication workers always set the origin's commit
timestamp as the commit timestamp of the replicated transaction. OTOH,
the timestamp of PREPARE, 'prepared' in pg_prepared_xacts, always uses
the local timestamp even if the caller of PrepareTransaction() sets
replorigin_session_origin_timestamp. In terms of user-visible
timestamps of transaction operations, I think users might expect these
timestamps to match between the origin and its subscribers. But
pg_xact_commit_timestamp() is a function of the commit timestamp
feature, whereas 'prepared' is simply the timestamp at which the
transaction was prepared. So I'm not sure these timestamps really need
to match, though.

Regards,

--
Masahiko Sawada
EnterpriseDB: https://www.enterprisedb.com/

#111Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Masahiko Sawada (#110)

On Wed, Nov 18, 2020 at 7:54 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

On Tue, Nov 17, 2020 at 9:05 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Nov 17, 2020 at 5:02 PM Ajin Cherian <itsajin@gmail.com> wrote:

On Tue, Nov 17, 2020 at 10:14 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

Doesn't this happen only if you set replication origins? Because
otherwise both PrepareTransaction() and
RecordTransactionCommitPrepared() used the current timestamp.

I was also checking this, even if you set replicating origins, the
preparedTransaction will reflect the local prepare time in
pg_prepared_xacts. pg_prepared_xacts fetches this information
from GlobalTransaction data which does not store the origin_timestamp;
it only stores the prepared_at which is the local timestamp.

Sure, but my question was does this difference in behavior happens
without replication origins in any way? The reason is that if it
occurs only with replication origins, I don't think we need to bother
about the same because that feature is not properly implemented and
not used as-is. See the discussion [1] [2]. OTOH, if this behavior can
happen without replication origins then we might want to consider
changing it.

Logical replication workers always have replication origins, right? Is
that what you meant 'with replication origins'?

I was thinking with respect to the publisher side, but you are right
that logical apply workers always have replication origins, so the
effect will be visible. However, I think the same should be true on
the publisher without this patch as well: say the user has set up a
replication origin via pg_replication_origin_xact_setup() and provided
a timestamp value, then the same behavior will be there too.

IIUC logical replication workers always set the origin's commit
timestamp as the commit timestamp of the replicated transaction. OTOH,
the timestamp of PREPARE, ‘prepare’ of pg_prepared_xacts, always uses
the local timestamp even if the caller of PrepareTransaction() sets
replorigin_session_origin_timestamp. In terms of user-visible
timestamps of transaction operations, I think users might expect these
timestamps are matched between the origin and its subscribers. But the
pg_xact_commit_timestamp() is a function of the commit timestamp
feature whereas ‘prepare’ is a pure timestamp when the transaction is
prepared. So I’m not sure these timestamps really need to be matched,
though.

Yeah, I am not sure if it is a good idea for users to rely on this,
especially if the same behavior is visible on the publisher as well.
We might want to think separately about whether there is value in
making the prepare time also rely on
replorigin_session_origin_timestamp, and if so, that can be done as a
separate patch. What do you think?

--
With Regards,
Amit Kapila.

#112Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Ajin Cherian (#105)
6 attachment(s)

On Mon, Nov 16, 2020 at 12:55 PM Ajin Cherian <itsajin@gmail.com> wrote:

Updated with a new test case
(contrib/test_decoding/t/002_twophase-streaming.pl) that tests
concurrent aborts during streaming prepare. Had to make a few changes
to the test_decoding stream_start callbacks to handle
"check-xid-aborted"
the same way it was handled in the non stream callbacks.

Why did you make a change in the stream_start API? I think it should be
the *_change and *_truncate APIs, because a concurrent abort can happen
while decoding any intermediate change. If you agree, then you can
probably take that code into a separate function and call it from the
respective APIs.

In 0003,
contrib/test_decoding/t/002_twophase-streaming.pl | 102 +++++++++

The naming of the file seems to be inconsistent with other files. It
should be 002_twophase_streaming.pl

Other than this, please find attached the rebased patch set. It needed
a rebase after the latest commit 9653f24ad8307f393de51e0a64d9b10a49efa6e3.

--
With Regards,
Amit Kapila.

Attachments:

v21-0001-Support-2PC-txn-base.patch (application/octet-stream)
v21-0002-Support-2PC-txn-backend.patch (application/octet-stream)
v21-0003-Support-2PC-test-cases-for-test_decoding.patch (application/octet-stream)
v21-0004-Support-2PC-txn-spoolfile.patch (application/octet-stream)
v21-0005-Support-2PC-txn-pgoutput.patch (application/octet-stream)
v21-0006-Support-2PC-txn-subscriber-tests.patch (application/octet-stream)
#113Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Amit Kapila (#109)

Hi.

Using a tablesync debugging technique as described in another mail
thread [1][2], I have caused the tablesync worker to handle (e.g.
apply_dispatch) a 2PC PREPARE.

This exposes a problem with the current 2PC logic, because if/when the
PREPARE is processed by the tablesync worker then the txn will end up
being COMMITTED, even though the 2PC PREPARE has not yet been COMMIT
PREPARED on the publisher.

For example, below is some logging (using my patch [2]) which shows
this occurring:

---

[postgres@CentOS7-x64 ~]$ psql -d test_sub -p 54321 -c "CREATE
SUBSCRIPTION tap_sub CONNECTION 'host=localhost dbname=test_pub
application_name=tap_sub' PUBLICATION tap_pub;"
2020-11-18 17:00:37.394 AEDT [15885] LOG: logical decoding found
consistent point at 0/16EF840
2020-11-18 17:00:37.394 AEDT [15885] DETAIL: There are no running transactions.
2020-11-18 17:00:37.394 AEDT [15885] STATEMENT:
CREATE_REPLICATION_SLOT "tap_sub" LOGICAL pgoutput NOEXPORT_SNAPSHOT
NOTICE: created replication slot "tap_sub" on publisher
CREATE SUBSCRIPTION
2020-11-18 17:00:37.407 AEDT [15886] LOG: logical replication apply
worker for subscription "tap_sub" has started
2020-11-18 17:00:37.407 AEDT [15886] LOG: !!>> The apply worker
process has PID = 15886
2020-11-18 17:00:37.415 AEDT [15887] LOG: starting logical decoding
for slot "tap_sub"
2020-11-18 17:00:37.415 AEDT [15887] DETAIL: Streaming transactions
committing after 0/16EF878, reading WAL from 0/16EF840.
2020-11-18 17:00:37.415 AEDT [15887] STATEMENT: START_REPLICATION
SLOT "tap_sub" LOGICAL 0/0 (proto_version '2', publication_names
'"tap_pub"')
2020-11-18 17:00:37.415 AEDT [15887] LOG: logical decoding found
consistent point at 0/16EF840
2020-11-18 17:00:37.415 AEDT [15887] DETAIL: There are no running transactions.
2020-11-18 17:00:37.415 AEDT [15887] STATEMENT: START_REPLICATION
SLOT "tap_sub" LOGICAL 0/0 (proto_version '2', publication_names
'"tap_pub"')
2020-11-18 17:00:37.415 AEDT [15886] LOG: !!>> apply worker:
LogicalRepApplyLoop
2020-11-18 17:00:37.415 AEDT [15886] LOG: !!>> apply worker: called
process_syncing_tables
2020-11-18 17:00:37.421 AEDT [15889] LOG: logical replication table
synchronization worker for subscription "tap_sub", table "test_tab"
has started
2020-11-18 17:00:37.421 AEDT [15889] LOG: !!>> The tablesync worker
process has PID = 15889
2020-11-18 17:00:37.421 AEDT [15889] LOG: !!>>

Sleeping 30 secs. For debugging, attach to process 15889 now!

[postgres@CentOS7-x64 ~]$ 2020-11-18 17:00:38.431 AEDT [15886] LOG:
!!>> apply worker: LogicalRepApplyLoop
2020-11-18 17:00:38.431 AEDT [15886] LOG: !!>> apply worker: called
process_syncing_tables
2020-11-18 17:00:39.433 AEDT [15886] LOG: !!>> apply worker:
LogicalRepApplyLoop
2020-11-18 17:00:39.433 AEDT [15886] LOG: !!>> apply worker: called
process_syncing_tables
2020-11-18 17:00:40.437 AEDT [15886] LOG: !!>> apply worker:
LogicalRepApplyLoop
2020-11-18 17:00:40.437 AEDT [15886] LOG: !!>> apply worker: called
process_syncing_tables
2020-11-18 17:00:41.439 AEDT [15886] LOG: !!>> apply worker:
LogicalRepApplyLoop
2020-11-18 17:00:41.439 AEDT [15886] LOG: !!>> apply worker: called
process_syncing_tables
2020-11-18 17:00:42.441 AEDT [15886] LOG: !!>> apply worker:
LogicalRepApplyLoop
2020-11-18 17:00:42.441 AEDT [15886] LOG: !!>> apply worker: called
process_syncing_tables
2020-11-18 17:00:43.442 AEDT [15886] LOG: !!>> apply worker:
LogicalRepApplyLoop
2020-11-18 17:00:43.442 AEDT [15886] LOG: !!>> apply worker: called
process_syncing_tables
-- etc.
2020-11-18 17:01:03.520 AEDT [15886] LOG: !!>> apply worker:
LogicalRepApplyLoop
2020-11-18 17:01:03.520 AEDT [15886] LOG: !!>> apply worker: called
process_syncing_tables
2020-11-18 17:01:04.521 AEDT [15886] LOG: !!>> apply worker:
LogicalRepApplyLoop
2020-11-18 17:01:04.521 AEDT [15886] LOG: !!>> apply worker: called
process_syncing_tables
2020-11-18 17:01:05.523 AEDT [15886] LOG: !!>> apply worker:
LogicalRepApplyLoop
2020-11-18 17:01:05.523 AEDT [15886] LOG: !!>> apply worker: called
process_syncing_tables
2020-11-18 17:01:06.532 AEDT [15886] LOG: !!>> apply worker:
LogicalRepApplyLoop
2020-11-18 17:01:06.532 AEDT [15886] LOG: !!>> apply worker: called
process_syncing_tables
2020-11-18 17:01:07.426 AEDT [15889] LOG: !!>> tablesync worker:
About to call LogicalRepSyncTableStart to do initial syncing
2020-11-18 17:01:07.536 AEDT [15886] LOG: !!>> apply worker:
LogicalRepApplyLoop
2020-11-18 17:01:07.536 AEDT [15886] LOG: !!>> apply worker: called
process_syncing_tables
2020-11-18 17:01:07.536 AEDT [15886] LOG: !!>> apply worker:
LogicalRepApplyLoop
2020-11-18 17:01:07.536 AEDT [15886] LOG: !!>> apply worker: called
process_syncing_tables
2020-11-18 17:01:08.539 AEDT [15886] LOG: !!>> apply worker:
LogicalRepApplyLoop
2020-11-18 17:01:08.539 AEDT [15886] LOG: !!>> apply worker: called
process_syncing_tables
2020-11-18 17:01:09.541 AEDT [15886] LOG: !!>> apply worker:
LogicalRepApplyLoop
2020-11-18 17:01:09.541 AEDT [15886] LOG: !!>> apply worker: called
process_syncing_tables
-- etc.
2020-11-18 17:01:23.583 AEDT [15886] LOG: !!>> apply worker:
LogicalRepApplyLoop
2020-11-18 17:01:23.583 AEDT [15886] LOG: !!>> apply worker: called
process_syncing_tables
2020-11-18 17:01:24.584 AEDT [15886] LOG: !!>> apply worker:
LogicalRepApplyLoop
2020-11-18 17:01:24.584 AEDT [15886] LOG: !!>> apply worker: called
process_syncing_tables
2020-11-18 17:01:25.586 AEDT [15886] LOG: !!>> apply worker:
LogicalRepApplyLoop
2020-11-18 17:01:25.586 AEDT [15886] LOG: !!>> apply worker: called
process_syncing_tables
2020-11-18 17:01:26.586 AEDT [15886] LOG: !!>> apply worker:
LogicalRepApplyLoop
2020-11-18 17:01:26.586 AEDT [15886] LOG: !!>> apply worker: called
process_syncing_tables
2020-11-18 17:01:27.454 AEDT [17456] LOG: logical decoding found
consistent point at 0/16EF878
2020-11-18 17:01:27.454 AEDT [17456] DETAIL: There are no running transactions.
2020-11-18 17:01:27.454 AEDT [17456] STATEMENT:
CREATE_REPLICATION_SLOT "tap_sub_24582_sync_16385" TEMPORARY LOGICAL
pgoutput USE_SNAPSHOT
2020-11-18 17:01:27.456 AEDT [15886] LOG: !!>> apply worker:
LogicalRepApplyLoop
2020-11-18 17:01:27.457 AEDT [15886] LOG: !!>> apply worker: called
process_syncing_tables
2020-11-18 17:01:27.465 AEDT [15889] LOG: !!>> tablesync worker: wait
for CATCHUP state notification
2020-11-18 17:01:27.465 AEDT [15886] LOG: !!>> apply worker:
LogicalRepApplyLoop
2020-11-18 17:01:27.465 AEDT [15886] LOG: !!>> apply worker: called
process_syncing_tables

#### Here, while the tablesync worker is paused in the debugger I
execute the PREPARE txn on publisher

psql -d test_pub -c "BEGIN;INSERT INTO test_tab VALUES(1,
'foo');PREPARE TRANSACTION 'test_prepared_tab';"
PREPARE TRANSACTION

2020-11-18 17:01:54.732 AEDT [15887] LOG: !!>>
pgoutput_begin_txn
2020-11-18 17:01:54.732 AEDT [15887] CONTEXT: slot "tap_sub", output
plugin "pgoutput", in the begin callback, associated LSN 0/16EF8B0
2020-11-18 17:01:54.732 AEDT [15887] STATEMENT: START_REPLICATION
SLOT "tap_sub" LOGICAL 0/0 (proto_version '2', publication_names
'"tap_pub"')

#### And then in the debugger I let the tablesync worker continue...

2020-11-18 17:02:02.788 AEDT [15889] LOG: !!>> tablesync worker:
received CATCHUP state notification
2020-11-18 17:02:07.729 AEDT [15889] LOG: !!>> tablesync worker:
Returned from LogicalRepSyncTableStart
2020-11-18 17:02:16.284 AEDT [17456] LOG: starting logical decoding
for slot "tap_sub_24582_sync_16385"
2020-11-18 17:02:16.284 AEDT [17456] DETAIL: Streaming transactions
committing after 0/16EF8B0, reading WAL from 0/16EF878.
2020-11-18 17:02:16.284 AEDT [17456] STATEMENT: START_REPLICATION
SLOT "tap_sub_24582_sync_16385" LOGICAL 0/16EF8B0 (proto_version '2',
publication_names '"tap_pub"')
2020-11-18 17:02:16.284 AEDT [17456] LOG: logical decoding found
consistent point at 0/16EF878
2020-11-18 17:02:16.284 AEDT [17456] DETAIL: There are no running transactions.
2020-11-18 17:02:16.284 AEDT [17456] STATEMENT: START_REPLICATION
SLOT "tap_sub_24582_sync_16385" LOGICAL 0/16EF8B0 (proto_version '2',
publication_names '"tap_pub"')
2020-11-18 17:02:16.284 AEDT [17456] LOG: !!>>
pgoutput_begin_txn
2020-11-18 17:02:16.284 AEDT [17456] CONTEXT: slot
"tap_sub_24582_sync_16385", output plugin "pgoutput", in the begin
callback, associated LSN 0/16EF8B0
2020-11-18 17:02:16.284 AEDT [17456] STATEMENT: START_REPLICATION
SLOT "tap_sub_24582_sync_16385" LOGICAL 0/16EF8B0 (proto_version '2',
publication_names '"tap_pub"')
2020-11-18 17:02:40.346 AEDT [15889] LOG: !!>> tablesync worker:
LogicalRepApplyLoop

#### The tablesync worker processes the replication messages....

2020-11-18 17:02:47.992 AEDT [15889] LOG: !!>> tablesync worker:
apply_dispatch for message kind 'B'
2020-11-18 17:02:54.858 AEDT [15889] LOG: !!>> tablesync worker:
apply_dispatch for message kind 'R'
2020-11-18 17:02:56.082 AEDT [15889] LOG: !!>> tablesync worker:
apply_dispatch for message kind 'I'
2020-11-18 17:02:56.082 AEDT [15889] LOG: !!>> tablesync worker:
should_apply_changes_for_rel: true
2020-11-18 17:02:57.354 AEDT [15889] LOG: !!>> tablesync worker:
apply_dispatch for message kind 'P'
2020-11-18 17:02:57.354 AEDT [15889] LOG: !!>> tablesync worker:
called process_syncing_tables
2020-11-18 17:02:59.011 AEDT [15889] LOG: logical replication table
synchronization worker for subscription "tap_sub", table "test_tab"
has finished

#### Since the tablesync was "ahead", the apply worker now needs to
#### skip those same messages.
#### Notice should_apply_changes_for_rel() is false.
#### Then the apply worker just waits for the next messages....

2020-11-18 17:02:59.064 AEDT [15886] LOG: !!>> apply worker:
LogicalRepApplyLoop
2020-11-18 17:02:59.064 AEDT [15886] LOG: !!>> apply worker:
apply_dispatch for message kind 'B'
2020-11-18 17:02:59.064 AEDT [15886] LOG: !!>> apply worker:
apply_dispatch for message kind 'R'
2020-11-18 17:02:59.064 AEDT [15886] LOG: !!>> apply worker:
apply_dispatch for message kind 'I'
2020-11-18 17:02:59.065 AEDT [15886] LOG: !!>> apply worker:
should_apply_changes_for_rel: false
2020-11-18 17:02:59.065 AEDT [15886] LOG: !!>> apply worker:
apply_dispatch for message kind 'P'
2020-11-18 17:02:59.067 AEDT [15886] LOG: !!>> apply worker: called
process_syncing_tables
2020-11-18 17:02:59.067 AEDT [15886] LOG: !!>> apply worker: called
process_syncing_tables
2020-11-18 17:03:00.071 AEDT [15886] LOG: !!>> apply worker:
LogicalRepApplyLoop
2020-11-18 17:03:00.071 AEDT [15886] LOG: !!>> apply worker: called
process_syncing_tables
2020-11-18 17:03:01.073 AEDT [15886] LOG: !!>> apply worker:
LogicalRepApplyLoop
2020-11-18 17:03:01.073 AEDT [15886] LOG: !!>> apply worker: called
process_syncing_tables
2020-11-18 17:03:02.075 AEDT [15886] LOG: !!>> apply worker:
LogicalRepApplyLoop
2020-11-18 17:03:02.075 AEDT [15886] LOG: !!>> apply worker: called
process_syncing_tables
2020-11-18 17:03:03.080 AEDT [15886] LOG: !!>> apply worker:
LogicalRepApplyLoop
2020-11-18 17:03:03.080 AEDT [15886] LOG: !!>> apply worker: called
process_syncing_tables
2020-11-18 17:03:04.081 AEDT [15886] LOG: !!>> apply worker:
LogicalRepApplyLoop
2020-11-18 17:03:04.082 AEDT [15886] LOG: !!>> apply worker: called
process_syncing_tables
2020-11-18 17:03:05.103 AEDT [15886] LOG: !!>> apply worker:
LogicalRepApplyLoop
2020-11-18 17:03:05.103 AEDT [15886] LOG: !!>> apply worker: called
process_syncing_tables
etc ...

#### At this point there is a problem because the tablesync worker has
COMMITTED that PREPARED INSERT.
#### See the subscriber node has ONE record but the publisher node has NONE!

[postgres@CentOS7-x64 ~]$ psql -d test_pub -c "SELECT count(*) FROM test_tab;"
count
-------
0
(1 row)

[postgres@CentOS7-x64 ~]$
[postgres@CentOS7-x64 ~]$ psql -d test_sub -p 54321 -c "SELECT
count(*) FROM test_tab;"
count
-------
1
(1 row)

[postgres@CentOS7-x64 ~]$

-----
[1]: /messages/by-id/CAHut+Psprtsa4o89wtNnKLxxwXeDKAX9nNsdghT1Pv63siz+AA@mail.gmail.com
[2]: /messages/by-id/CAHut+Pt4PyKQCwqzQ=EFF=bpKKJD7XKt_S23F6L20ayQNxg77A@mail.gmail.com

Kind Regards,
Peter Smith.
Fujitsu Australia

#114Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Peter Smith (#113)

On Wed, Nov 18, 2020 at 1:18 PM Peter Smith <smithpb2250@gmail.com> wrote:

Hi.

Using a tablesync debugging technique as described in another mail
thread [1][2] I have caused the tablesync worker to handle (e.g.
apply_dispatch) a 2PC PREPARE

This exposes a problem with the current 2PC logic because if/when the
PREPARE is processed by the tablesync worker then the txn will end up
being COMMITTED, even though the 2PC PREPARE has not yet been COMMIT
PREPARED by the publisher.

IIUC, this is the problem with the patch being discussed here, right?
Because before this patch we wouldn't decode at PREPARE time.

--
With Regards,
Amit Kapila.

#115Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Amit Kapila (#114)

On Wed, Nov 18, 2020 at 7:37 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Wed, Nov 18, 2020 at 1:18 PM Peter Smith <smithpb2250@gmail.com> wrote:

Hi.

Using a tablesync debugging technique as described in another mail
thread [1][2] I have caused the tablesync worker to handle (e.g.
apply_dispatch) a 2PC PREPARE

This exposes a problem with the current 2PC logic because if/when the
PREPARE is processed by the tablesync worker then the txn will end up
being COMMITTED, even though the 2PC PREPARE has not yet been COMMIT
PREPARED by the publisher.

IIUC, this is the problem with the patch being discussed here, right?
Because before this we won't decode at Prepare time.

Correct. This is new.

Kind Regards,
Peter Smith.
Fujitsu Australia.

#116Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Amit Kapila (#112)
6 attachment(s)

Why did you make a change in stream_start API? I think it should be
*_change and *_truncate APIs because the concurrent abort can happen
while decoding any intermediate change. If you agree then you can
probably take that code into a separate function and call it from the
respective APIs.

Patch 0001:
Updated this from stream_start to stream_change. I haven't updated
*_truncate, as the test case written for this does not include a
truncate.
Also created a new function for this: test_concurrent_aborts().
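For reference, the shape of such a helper can be sketched in a
standalone way as below (the names and the abort probe are invented
here; in the real patch the callback checks the 'check-xid-aborted' xid
against the server's transaction status rather than this toy flag): the
change callback loops on the abort check, so a concurrent ROLLBACK
PREPARED from another backend is detected deterministically, with no
sleep-based timing assumptions.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy stand-in for asking the server whether the xid has aborted. */
static int	toy_checks_until_abort;		/* flipped by the "other backend" */

static bool
toy_xid_did_abort(void)
{
	return --toy_checks_until_abort <= 0;
}

/*
 * Sketch of a test_concurrent_aborts()-style helper called from the
 * change/truncate callbacks: keep probing the xid under test until it
 * is aborted, then return so the caller can run into the expected
 * error.  Returns the number of probes made, or -1 if it gave up.
 */
static int
toy_wait_for_concurrent_abort(int max_probes)
{
	for (int i = 0; i < max_probes; i++)
	{
		if (toy_xid_did_abort())
			return i + 1;
	}
	return -1;
}
```

The point of the design is that the test's progress is gated on the
abort itself, not on wall-clock time.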

In 0003,
contrib/test_decoding/t/002_twophase-streaming.pl | 102 +++++++++

The naming of the file seems to be inconsistent with other files. It
should be 002_twophase_streaming.pl

Patch 0003:
Changed accordingly.

Patch 0002:
I've updated a comment that got muddled up while applying pgindent in
reorderbuffer.c.

regards,
Ajin Cherian
Fujitsu Australia

Attachments:

v22-0001-Support-2PC-txn-base.patch (application/octet-stream)
v22-0002-Support-2PC-txn-backend.patch (application/octet-stream)
v22-0004-Support-2PC-txn-spoolfile.patch (application/octet-stream)
v22-0005-Support-2PC-txn-pgoutput.patch (application/octet-stream)
v22-0003-Support-2PC-test-cases-for-test_decoding.patch (application/octet-stream)
v22-0006-Support-2PC-txn-subscriber-tests.patch (application/octet-stream)
#117Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Ajin Cherian (#116)

On Thu, Nov 19, 2020 at 11:27 AM Ajin Cherian <itsajin@gmail.com> wrote:

Why did you make a change in stream_start API? I think it should be
*_change and *_truncate APIs because the concurrent abort can happen
while decoding any intermediate change. If you agree then you can
probably take that code into a separate function and call it from the
respective APIs.

Patch 0001:
Updated this from stream_start to stream_change. I haven't updated
*_truncate as the test case written for this does not include a
truncate.

I think the same check should be there in truncate as well to make the
APIs consistent and also one can use it for writing another test that
has a truncate operation.

--
With Regards,
Amit Kapila.

#118Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Amit Kapila (#117)
6 attachment(s)

On Thu, Nov 19, 2020 at 5:06 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

I think the same check should be there in truncate as well to make the
APIs consistent and also one can use it for writing another test that
has a truncate operation.

Updated the checks in both truncate callbacks (stream and non-stream).
Also added a test case for testing concurrent aborts while decoding
streaming TRUNCATE.

regards,
Ajin Cherian
Fujitsu Australia

Attachments:

v23-0004-Support-2PC-txn-spoolfile.patchapplication/octet-stream; name=v23-0004-Support-2PC-txn-spoolfile.patch
v23-0003-Support-2PC-test-cases-for-test_decoding.patchapplication/octet-stream; name=v23-0003-Support-2PC-test-cases-for-test_decoding.patch
v23-0001-Support-2PC-txn-base.patchapplication/octet-stream; name=v23-0001-Support-2PC-txn-base.patch
v23-0002-Support-2PC-txn-backend.patchapplication/octet-stream; name=v23-0002-Support-2PC-txn-backend.patch
v23-0006-Support-2PC-txn-subscriber-tests.patchapplication/octet-stream; name=v23-0006-Support-2PC-txn-subscriber-tests.patch
v23-0005-Support-2PC-txn-pgoutput.patchapplication/octet-stream; name=v23-0005-Support-2PC-txn-pgoutput.patch
#119Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Ajin Cherian (#118)

On Thu, Nov 19, 2020 at 2:52 PM Ajin Cherian <itsajin@gmail.com> wrote:

On Thu, Nov 19, 2020 at 5:06 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

I think the same check should be there in truncate as well to make the
APIs consistent and also one can use it for writing another test that
has a truncate operation.

Updated the checks in both truncate callbacks (stream and non-stream).
Also added a test case for testing concurrent aborts while decoding
streaming TRUNCATE.

While reviewing/editing the code in 0002-Support-2PC-txn-backend, I
came across the following code which seems dubious to me.

1.
+ /*
+ * If streaming, reset the TXN so that it is allowed to stream
+ * remaining data. Streaming can also be on a prepared txn, handle
+ * it the same way.
+ */
+ if (streaming)
+ {
+ elog(LOG, "stopping decoding of %u",txn->xid);
+ ReorderBufferResetTXN(rb, txn, snapshot_now,
+   command_id, prev_lsn,
+   specinsert);
+ }
+ else
+ {
+ elog(LOG, "stopping decoding of %s (%u)",
+ txn->gid != NULL ? txn->gid : "", txn->xid);
+ ReorderBufferTruncateTXN(rb, txn, true);
+ }

Why do we need to handle the prepared txn case differently here? I
think for both cases we can call ReorderBufferResetTXN as it frees the
memory we should free in exceptions. Sure, there is some code (like
stream_stop and saving the snapshot for next run) in
ReorderBufferResetTXN which needs to be only called when we are
streaming the txn but otherwise, it seems it can be used here. We can
easily identify if the transaction is streamed to differentiate that
code path. Can you think of any other reason for not doing so?

2.
+void
+ReorderBufferFinishPrepared(ReorderBuffer *rb, TransactionId xid,
+ XLogRecPtr commit_lsn, XLogRecPtr end_lsn,
+ TimestampTz commit_time,
+ RepOriginId origin_id, XLogRecPtr origin_lsn,
+ char *gid, bool is_commit)
+{
+ ReorderBufferTXN *txn;
+
+ /*
+ * The transaction may or may not exist (during restarts for example).
+ * Anyway, two-phase transactions do not contain any reorderbuffers. So
+ * allow it to be created below.
+ */
+ txn = ReorderBufferTXNByXid(rb, xid, true, NULL, commit_lsn,
+ true);

Why should we allow creating a new transaction here, or in other
words, in which cases won't the txn be present? I guess this should be the case
with the earlier version of the patch where at prepare time we were
cleaning the ReorderBufferTxn.

--
With Regards,
Amit Kapila.

#120Masahiko Sawada
Masahiko Sawada
sawada.mshk@gmail.com
In reply to: Amit Kapila (#111)

On Wed, Nov 18, 2020 at 12:42 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Wed, Nov 18, 2020 at 7:54 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

On Tue, Nov 17, 2020 at 9:05 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Nov 17, 2020 at 5:02 PM Ajin Cherian <itsajin@gmail.com> wrote:

On Tue, Nov 17, 2020 at 10:14 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

Doesn't this happen only if you set replication origins? Because
otherwise both PrepareTransaction() and
RecordTransactionCommitPrepared() used the current timestamp.

I was also checking this, even if you set replicating origins, the
preparedTransaction will reflect the local prepare time in
pg_prepared_xacts. pg_prepared_xacts fetches this information
from GlobalTransaction data which does not store the origin_timestamp;
it only stores the prepared_at which is the local timestamp.

Sure, but my question was does this difference in behavior happens
without replication origins in any way? The reason is that if it
occurs only with replication origins, I don't think we need to bother
about the same because that feature is not properly implemented and
not used as-is. See the discussion [1] [2]. OTOH, if this behavior can
happen without replication origins then we might want to consider
changing it.

Logical replication workers always have replication origins, right? Is
that what you meant 'with replication origins'?

I was thinking with respect to the publisher-side but you are right
that logical apply workers always have replication origins so the
effect will be visible but I think the same should be true on
publisher without this patch as well. Say the user has set up a
replication origin via pg_replication_origin_xact_setup and provided
a timestamp value; the same behavior will occur then as well.

Right.

IIUC logical replication workers always set the origin's commit
timestamp as the commit timestamp of the replicated transaction. OTOH,
the timestamp of PREPARE, ‘prepare’ of pg_prepared_xacts, always uses
the local timestamp even if the caller of PrepareTransaction() sets
replorigin_session_origin_timestamp. In terms of user-visible
timestamps of transaction operations, I think users might expect these
timestamps are matched between the origin and its subscribers. But the
pg_xact_commit_timestamp() is a function of the commit timestamp
feature whereas ‘prepare’ is a pure timestamp when the transaction is
prepared. So I’m not sure these timestamps really need to be matched,
though.

Yeah, I am not sure if it is a good idea for users to rely on this
especially if the same behavior is visible on the publisher as well.
We might want to think separately if there is a value in making
prepare-time to also rely on replorigin_session_origin_timestamp and
if so, that can be done as a separate patch. What do you think?

I agree that we can think about it separately. If it's necessary we
can make a patch later.

Regards,

--
Masahiko Sawada
EnterpriseDB: https://www.enterprisedb.com/

#121Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Amit Kapila (#119)

On Fri, Nov 20, 2020 at 12:23 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Thu, Nov 19, 2020 at 2:52 PM Ajin Cherian <itsajin@gmail.com> wrote:

On Thu, Nov 19, 2020 at 5:06 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

I think the same check should be there in truncate as well to make the
APIs consistent and also one can use it for writing another test that
has a truncate operation.

Updated the checks in both truncate callbacks (stream and non-stream).
Also added a test case for testing concurrent aborts while decoding
streaming TRUNCATE.

While reviewing/editing the code in 0002-Support-2PC-txn-backend, I
came across the following code which seems dubious to me.

1.
+ /*
+ * If streaming, reset the TXN so that it is allowed to stream
+ * remaining data. Streaming can also be on a prepared txn, handle
+ * it the same way.
+ */
+ if (streaming)
+ {
+ elog(LOG, "stopping decoding of %u",txn->xid);
+ ReorderBufferResetTXN(rb, txn, snapshot_now,
+   command_id, prev_lsn,
+   specinsert);
+ }
+ else
+ {
+ elog(LOG, "stopping decoding of %s (%u)",
+ txn->gid != NULL ? txn->gid : "", txn->xid);
+ ReorderBufferTruncateTXN(rb, txn, true);
+ }

Why do we need to handle the prepared txn case differently here? I
think for both cases we can call ReorderBufferResetTXN as it frees the
memory we should free in exceptions. Sure, there is some code (like
stream_stop and saving the snapshot for next run) in
ReorderBufferResetTXN which needs to be only called when we are
streaming the txn but otherwise, it seems it can be used here. We can
easily identify if the transaction is streamed to differentiate that
code path. Can you think of any other reason for not doing so?

Yes, I agree with this that ReorderBufferResetTXN needs to be called
to free up memory after an exception.
Will change ReorderBufferResetTXN so that it now has an extra
parameter that indicates streaming; so that the stream_stop and saving
of the snapshot is only done if streaming.

2.
+void
+ReorderBufferFinishPrepared(ReorderBuffer *rb, TransactionId xid,
+ XLogRecPtr commit_lsn, XLogRecPtr end_lsn,
+ TimestampTz commit_time,
+ RepOriginId origin_id, XLogRecPtr origin_lsn,
+ char *gid, bool is_commit)
+{
+ ReorderBufferTXN *txn;
+
+ /*
+ * The transaction may or may not exist (during restarts for example).
+ * Anyway, two-phase transactions do not contain any reorderbuffers. So
+ * allow it to be created below.
+ */
+ txn = ReorderBufferTXNByXid(rb, xid, true, NULL, commit_lsn,
+ true);

Why should we allow to create a new transaction here or in other words
in which cases txn won't be present? I guess this should be the case
with the earlier version of the patch where at prepare time we were
cleaning the ReorderBufferTxn.

Just confirmed this; yes, you are right. Even after a restart, the
transaction does get created again prior to this, so we need not
create it here. I will change this as well.

regards,
Ajin Cherian
Fujitsu Australia

#122Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Masahiko Sawada (#120)

On Fri, Nov 20, 2020 at 7:54 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

On Wed, Nov 18, 2020 at 12:42 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

IIUC logical replication workers always set the origin's commit
timestamp as the commit timestamp of the replicated transaction. OTOH,
the timestamp of PREPARE, ‘prepare’ of pg_prepared_xacts, always uses
the local timestamp even if the caller of PrepareTransaction() sets
replorigin_session_origin_timestamp. In terms of user-visible
timestamps of transaction operations, I think users might expect these
timestamps are matched between the origin and its subscribers. But the
pg_xact_commit_timestamp() is a function of the commit timestamp
feature whereas ‘prepare’ is a pure timestamp when the transaction is
prepared. So I’m not sure these timestamps really need to be matched,
though.

Yeah, I am not sure if it is a good idea for users to rely on this
especially if the same behavior is visible on the publisher as well.
We might want to think separately if there is a value in making
prepare-time to also rely on replorigin_session_origin_timestamp and
if so, that can be done as a separate patch. What do you think?

I agree that we can think about it separately. If it's necessary we
can make a patch later.

Thanks for the confirmation. Your review and suggestions are quite helpful.

--
With Regards,
Amit Kapila.

#123Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Ajin Cherian (#121)

On Fri, Nov 20, 2020 at 9:12 AM Ajin Cherian <itsajin@gmail.com> wrote:

On Fri, Nov 20, 2020 at 12:23 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Thu, Nov 19, 2020 at 2:52 PM Ajin Cherian <itsajin@gmail.com> wrote:

On Thu, Nov 19, 2020 at 5:06 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

I think the same check should be there in truncate as well to make the
APIs consistent and also one can use it for writing another test that
has a truncate operation.

Updated the checks in both truncate callbacks (stream and non-stream).
Also added a test case for testing concurrent aborts while decoding
streaming TRUNCATE.

While reviewing/editing the code in 0002-Support-2PC-txn-backend, I
came across the following code which seems dubious to me.

1.
+ /*
+ * If streaming, reset the TXN so that it is allowed to stream
+ * remaining data. Streaming can also be on a prepared txn, handle
+ * it the same way.
+ */
+ if (streaming)
+ {
+ elog(LOG, "stopping decoding of %u",txn->xid);
+ ReorderBufferResetTXN(rb, txn, snapshot_now,
+   command_id, prev_lsn,
+   specinsert);
+ }
+ else
+ {
+ elog(LOG, "stopping decoding of %s (%u)",
+ txn->gid != NULL ? txn->gid : "", txn->xid);
+ ReorderBufferTruncateTXN(rb, txn, true);
+ }

Why do we need to handle the prepared txn case differently here? I
think for both cases we can call ReorderBufferResetTXN as it frees the
memory we should free in exceptions. Sure, there is some code (like
stream_stop and saving the snapshot for next run) in
ReorderBufferResetTXN which needs to be only called when we are
streaming the txn but otherwise, it seems it can be used here. We can
easily identify if the transaction is streamed to differentiate that
code path. Can you think of any other reason for not doing so?

Yes, I agree with this that ReorderBufferResetTXN needs to be called
to free up memory after an exception.
Will change ReorderBufferResetTXN so that it now has an extra
parameter that indicates streaming; so that the stream_stop and saving
of the snapshot is only done if streaming.

I've already made the changes for this in the patch; you can verify
them when I share the new version. We don't need to pass an extra
parameter: rbtxn_prepared()/rbtxn_is_streamed() should serve the
need.

2.
+void
+ReorderBufferFinishPrepared(ReorderBuffer *rb, TransactionId xid,
+ XLogRecPtr commit_lsn, XLogRecPtr end_lsn,
+ TimestampTz commit_time,
+ RepOriginId origin_id, XLogRecPtr origin_lsn,
+ char *gid, bool is_commit)
+{
+ ReorderBufferTXN *txn;
+
+ /*
+ * The transaction may or may not exist (during restarts for example).
+ * Anyway, two-phase transactions do not contain any reorderbuffers. So
+ * allow it to be created below.
+ */
+ txn = ReorderBufferTXNByXid(rb, xid, true, NULL, commit_lsn,
+ true);

Why should we allow to create a new transaction here or in other words
in which cases txn won't be present? I guess this should be the case
with the earlier version of the patch where at prepare time we were
cleaning the ReorderBufferTxn.

Just confirmed this; yes, you are right. Even after a restart, the
transaction does get created again prior to this, so we need not
create it here. I will change this as well.

I'll take care of it along with other changes.

Thanks for the confirmation.

--
With Regards,
Amit Kapila.

#124Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Amit Kapila (#123)
7 attachment(s)

On Fri, Nov 20, 2020 at 2:48 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

I'll take care of it along with other changes.

Thanks for the confirmation.

Ok, meanwhile I've just split the patches to move out the
check_xid_aborted test cases as well as the support in the code for
this into a separate patch. New 0007 patch for this.

regards,
Ajin

Attachments:

v24-0001-Support-2PC-txn-base.patchapplication/octet-stream; name=v24-0001-Support-2PC-txn-base.patch
v24-0004-Support-2PC-txn-spoolfile.patchapplication/octet-stream; name=v24-0004-Support-2PC-txn-spoolfile.patch
v24-0005-Support-2PC-txn-pgoutput.patchapplication/octet-stream; name=v24-0005-Support-2PC-txn-pgoutput.patch
v24-0002-Support-2PC-txn-backend.patchapplication/octet-stream; name=v24-0002-Support-2PC-txn-backend.patch
v24-0003-Support-2PC-test-cases-for-test_decoding.patchapplication/octet-stream; name=v24-0003-Support-2PC-test-cases-for-test_decoding.patch
v24-0007-2pc-test-cases-for-testing-concurrent-aborts.patchapplication/octet-stream; name=v24-0007-2pc-test-cases-for-testing-concurrent-aborts.patch
v24-0006-Support-2PC-txn-subscriber-tests.patchapplication/octet-stream; name=v24-0006-Support-2PC-txn-subscriber-tests.patch
#125Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Ajin Cherian (#124)
7 attachment(s)

On Fri, Nov 20, 2020 at 4:54 PM Ajin Cherian <itsajin@gmail.com> wrote:

On Fri, Nov 20, 2020 at 2:48 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

I'll take care of it along with other changes.

Thanks for the confirmation.

Ok, meanwhile I've just split the patches to move out the
check_xid_aborted test cases as well as the support in the code for
this into a separate patch. New 0007 patch for this.

This makes sense to me but it should have been 0004 in the series. I
have changed the order in the attached. I have updated
0002-Support-2PC-txn-backend and
0007-2pc-test-cases-for-testing-concurrent-aborts. The changes are:
1. As mentioned previously, used ReorderBufferResetTxn to deal with
concurrent aborts both in case of streamed and prepared txns.
2. There was no clear explanation as to why we are not skipping
DecodePrepare in the presence of concurrent aborts. I have added an
explanation atop DecodePrepare() and at various other places.
3. Added/Edited comments at various places in the code and made some
other changes like simplified the code at a few places.
4. Changed the function name ReorderBufferCommitInternal to
ReorderBufferReplay as that seems more appropriate.
5. In ReorderBufferReplay() (which was previously
ReorderBufferCommitInternal), the patch was doing cleanup of the TXN
even for prepared transactions, which is not consistent with what we
do at other places in the patch, so I changed that.
6. In 2pc-test-cases-for-testing-concurrent-aborts, changed one of the
log messages based on the changes in patch Support-2PC-txn-backend.

I am planning to continue review of these patches but I thought it is
better to check about the above changes before proceeding further. Let
me know what you think?

--
With Regards,
Amit Kapila.

Attachments:

v25-0001-Support-2PC-txn-base.patchapplication/octet-stream; name=v25-0001-Support-2PC-txn-base.patch
v25-0002-Support-2PC-txn-backend.patchapplication/octet-stream; name=v25-0002-Support-2PC-txn-backend.patch
v25-0003-Support-2PC-test-cases-for-test_decoding.patchapplication/octet-stream; name=v25-0003-Support-2PC-test-cases-for-test_decoding.patch
v25-0004-2pc-test-cases-for-testing-concurrent-aborts.patchapplication/octet-stream; name=v25-0004-2pc-test-cases-for-testing-concurrent-aborts.patch
v25-0005-Support-2PC-txn-spoolfile.patchapplication/octet-stream; name=v25-0005-Support-2PC-txn-spoolfile.patch
v25-0006-Support-2PC-txn-pgoutput.patchapplication/octet-stream; name=v25-0006-Support-2PC-txn-pgoutput.patch
v25-0007-Support-2PC-txn-subscriber-tests.patchapplication/octet-stream; name=v25-0007-Support-2PC-txn-subscriber-tests.patch
#126Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Amit Kapila (#125)

On Sun, Nov 22, 2020 at 12:31 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

I am planning to continue review of these patches but I thought it is
better to check about the above changes before proceeding further. Let
me know what you think?

I've had a look at the changes and done a few tests, and have no
comments. However, I did see that the test 002_twophase_streaming.pl
failed once. I've run it at least 30 times after that but haven't seen
it fail again.
Unfortunately my ulimit was not set up to create core dumps, so I
don't have a dump from when the test case failed. I will continue
testing and reviewing the changes.

regards,
Ajin Cherian
Fujitsu Australia

#127Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Ajin Cherian (#126)

On Mon, Nov 23, 2020 at 3:41 PM Ajin Cherian <itsajin@gmail.com> wrote:

On Sun, Nov 22, 2020 at 12:31 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

I am planning to continue review of these patches but I thought it is
better to check about the above changes before proceeding further. Let
me know what you think?

I've had a look at the changes and done a few tests, and have no
comments.

Okay, thanks. Additionally, I have analyzed whether we need to call
SnapbuildCommittedTxn in DecodePrepare as was raised earlier for this
patch [1]. As mentioned in that thread SnapbuildCommittedTxn primarily
does three things (a) Decide whether we are interested in tracking the
current txn effects and if we are, mark it as committed. (b) Build and
distribute snapshot to all RBTXNs, if it is important. (c) Set base
snap of our xact if it did DDL, to execute invalidations during
replay.

For the first two, as the xact is still not visible to others so we
don't need to make it behave like a committed txn. To make the (DDL)
changes visible to the current txn, the message
REORDER_BUFFER_CHANGE_INTERNAL_COMMAND_ID copies the snapshot which
fills the subxip array. This will be sufficient to make the changes
visible to the current txn. For the third, I have checked the code
that whenever we have any change message the base snapshot gets set
via SnapBuildProcessChange. It is possible that I have missed
something, but I don't want to call SnapbuildCommittedTxn in
DecodePrepare unless we have a clear reason for it, so I am leaving it
for now. Can you or someone see any reason for the same?

However, I did see that the test 002_twophase_streaming.pl
failed once. I've run it at least 30 times after that but haven't seen
it fail again.

This test is based on waiting to see some message in the log. It is
possible it failed due to a timeout, which should happen only rarely.
You can check for failure logs in the test_decoding folder (probably
in the tmp_check folder). Even a server or test log from the failure
would help us diagnose the problem.

[1]: /messages/by-id/87zhxrwgvh.fsf@ars-thinkpad

--
With Regards,
Amit Kapila.

#128Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Amit Kapila (#127)
7 attachment(s)

FYI - I have regenerated a new v26 set of patches.

PSA

v26-0001 - no change
v26-0002 - no change
v26-0003 - only filename changed (for consistency)
v26-0004 - only filename changed (for consistency)
v26-0005 - no change
v26-0006 - minor code change to have more consistently located calls
to process_syncing_tables
v26-0007 - no change

---
Kind Regards
Peter Smith.
Fujitsu Australia.

Attachments:

v26-0001-Support-2PC-txn-base.patchapplication/octet-stream; name=v26-0001-Support-2PC-txn-base.patch
v26-0002-Support-2PC-txn-backend.patchapplication/octet-stream; name=v26-0002-Support-2PC-txn-backend.patch
v26-0004-Support-2PC-txn-tests-for-concurrent-aborts.patchapplication/octet-stream; name=v26-0004-Support-2PC-txn-tests-for-concurrent-aborts.patch
v26-0003-Support-2PC-txn-tests-for-test_decoding.patchapplication/octet-stream; name=v26-0003-Support-2PC-txn-tests-for-test_decoding.patch
v26-0005-Support-2PC-txn-spoolfile.patchapplication/octet-stream; name=v26-0005-Support-2PC-txn-spoolfile.patch
v26-0006-Support-2PC-txn-pgoutput.patchapplication/octet-stream; name=v26-0006-Support-2PC-txn-pgoutput.patch
v26-0007-Support-2PC-txn-subscriber-tests.patchapplication/octet-stream; name=v26-0007-Support-2PC-txn-subscriber-tests.patch
#129Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Amit Kapila (#127)

On Mon, Nov 23, 2020 at 10:35 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

For the first two, as the xact is still not visible to others so we
don't need to make it behave like a committed txn. To make the (DDL)
changes visible to the current txn, the message
REORDER_BUFFER_CHANGE_INTERNAL_COMMAND_ID copies the snapshot which
fills the subxip array. This will be sufficient to make the changes
visible to the current txn. For the third, I have checked the code
that whenever we have any change message the base snapshot gets set
via SnapBuildProcessChange. It is possible that I have missed
something but I don't want to call SnapbuildCommittedTxn in
DecodePrepare unless we have a clear reason for the same so leaving it
for now. Can you or someone see any reason for the same?

I reviewed and tested this and like you said, SnapBuildProcessChange
sets the base snapshot for every change.
I did various tests using DDL updates and haven't seen any issues so
far. I agree with your analysis.

regards,
Ajin

#130Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Ajin Cherian (#129)

Hi Amit.

IIUC the tablesync worker runs in a single transaction.

Last week I discovered and described [1] a problem where, by unlucky
timing, the tablesync worker gets to handle the 2PC PREPARE
TRANSACTION, and then that whole single transaction gets committed
even though a COMMIT PREPARED has not been executed yet. That means if
the publisher subsequently does a ROLLBACK PREPARED, the table records
on the Pub/Sub nodes will no longer match.

AFAIK this is a new problem for the current WIP patch because prior to
this the PREPARE had no decoding.

Please let me know if this issue description is still not clear.

Did you have any thoughts how we might address this issue?

---

[1]: /messages/by-id/CAHut+PuEMk4SO8oGzxc_ftzPkGA8uC-y5qi-KRqHSy_P0i30DA@mail.gmail.com

Kind Regards,
Peter Smith.
Fujitsu Australia

#131Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Peter Smith (#130)

On Wed, Nov 25, 2020 at 12:54 PM Peter Smith <smithpb2250@gmail.com> wrote:

Hi Amit.

IIUC the tablesync worker runs in a single transaction.

Last week I discovered and described [1] a problem where, by unlucky
timing, the tablesync worker gets to handle the 2PC PREPARE
TRANSACTION, and then that whole single transaction gets committed
even though a COMMIT PREPARED has not been executed yet. That means if
the publisher subsequently does a ROLLBACK PREPARED, the table records
on the Pub/Sub nodes will no longer match.

AFAIK this is a new problem for the current WIP patch because prior to
this the PREPARE had no decoding.

Please let me know if this issue description is still not clear.

Did you have any thoughts how we might address this issue?

I think we need to disable two_phase_commit for tablesync workers. We
anyway wanted to expose a parameter via subscription for that, and we
can use it to do so. Also, there were some other comments [1] related
to the tablesync worker w.r.t. prepared transactions which would
possibly be addressed by doing it. Kindly check those comments [1] and
let me know if anything additional is required.

[1]: /messages/by-id/87zhxrwgvh.fsf@ars-thinkpad

--
With Regards,
Amit Kapila.

#132Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Ajin Cherian (#129)
7 attachment(s)

On Tue, Nov 24, 2020 at 3:29 PM Ajin Cherian <itsajin@gmail.com> wrote:

On Mon, Nov 23, 2020 at 10:35 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

For the first two, as the xact is still not visible to others so we
don't need to make it behave like a committed txn. To make the (DDL)
changes visible to the current txn, the message
REORDER_BUFFER_CHANGE_INTERNAL_COMMAND_ID copies the snapshot which
fills the subxip array. This will be sufficient to make the changes
visible to the current txn. For the third, I have checked the code
that whenever we have any change message the base snapshot gets set
via SnapBuildProcessChange. It is possible that I have missed
something but I don't want to call SnapbuildCommittedTxn in
DecodePrepare unless we have a clear reason for the same so leaving it
for now. Can you or someone see any reason for the same?

I reviewed and tested this and like you said, SnapBuildProcessChange
sets the base snapshot for every change.
I did various tests using DDL updates and haven't seen any issues so
far. I agree with your analysis.

Thanks, attached is a further revised version of the patch series.

Changes in v27-0001-Extend-the-output-plugin-API-to-allow-decoding-p
a. Removed the includes which are not required by this patch.
b. Moved the 'check_xid_aborted' parameter to 0004.
c. Added Assert(!ctx->fast_forward); in callback wrappers, because we
won't load the output plugin when fast_forward is set so there is no
chance that we call output plugin APIs. This is why we have this
Assert in all the existing APIs.
d. Adjusted the order of various callback APIs to make the code look consistent.
e. Added/Edited comments and doc updates at various places. Changed
error messages to make them consistent with other similar messages.
f. Some other cosmetic changes like the removal of spurious new lines
and fixed white-space issues.
g. Updated commit message.

Changes in v27-0002-Allow-decoding-at-prepare-time-in-ReorderBuffer
a. Moved the check of whether a particular txn can be skipped into a
separate function, as the same code was repeated at three different
places.
b. ReorderBufferPrepare had a parameter named commit_lsn whereas it
should be prepare_lsn. Similar changes have been made at various
places in the patch.
c. filter_prepare_cb callback existence is checked in both decode.c
and in filter_prepare_cb_wrapper. Fixed by removing it from decode.c.
d. Fixed miscellaneous comments and some cosmetic changes.
e. Moved the special elog in ReorderBufferProcessTxn to test
concurrent aborts in 0004 patch.
f. Moved the changes related to flags RBTXN_COMMIT_PREPARED and
RBTXN_ROLLBACK_PREPARED to patch 0006 as those are used only in that
patch.
g. Updated commit message.

One problem with this patch: what if we have assembled a consistent
snapshot after the prepare and before the commit prepared? In that
case, it will currently just send the commit prepared record, which
would be a bad idea. It should decode the entire transaction in such
cases at commit prepared time. The same problem was noticed by Arseny
Sher; see problem-2 in email [1].

One idea to fix this could be to check whether the snapshot is
consistent to decide whether to skip the prepare; if we skip for that
reason, then during commit we need to decode the entire transaction.
We can do that by setting a flag in txn->txn_flags when we skip the
prepare because the snapshot is still not consistent, and then using
that flag during commit to see if we need to decode the entire
transaction. But here we need to think about what would happen after a
restart: if it is possible that after a restart the snapshot is
consistent for the same transaction at prepare time but the prepare
got skipped due to start_decoding_at (which moved ahead after the
restart), then such a solution won't work. Any thoughts on this?

v27-0004-Support-2PC-txn-tests-for-concurrent-aborts
a. Moved the changes related to testing of concurrent aborts in this
patch from other patches.

v27-0006-Support-2PC-txn-pgoutput
a. Moved the changes related to flags RBTXN_COMMIT_PREPARED and
RBTXN_ROLLBACK_PREPARED from other patch.
b. Included headers required by this patch, previously it seems to be
dependent on other patches for this.

The other patches remain unchanged.

Let me know what you think about these changes?

[1]: /messages/by-id/877el38j56.fsf@ars-thinkpad

--
With Regards,
Amit Kapila.

Attachments:

v27-0001-Extend-the-output-plugin-API-to-allow-decoding-p.patch
v27-0002-Allow-decoding-at-prepare-time-in-ReorderBuffer.patch
v27-0003-Support-2PC-txn-tests-for-test_decoding.patch
v27-0004-Support-2PC-txn-tests-for-concurrent-aborts.patch
v27-0005-Support-2PC-txn-spoolfile.patch
v27-0006-Support-2PC-txn-pgoutput.patch
v27-0007-Support-2PC-txn-subscriber-tests.patch
#133Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Amit Kapila (#132)
8 attachment(s)

On Wed, Nov 25, 2020 at 11:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

The other patches remain unchanged.

Let me know what you think about these changes?

Thanks, I will look at the patch and let you know my thoughts on it.
Before that, sharing a new patchset with an additional patch that
includes documentation changes for
two-phase commit support in Logical decoding. I have also updated the
example section of Logical Decoding with examples that use two-phase
commits.
I have just added the documentation patch as the 8th one and renamed
the other patches without changing anything in them.

regards,
Ajin Cherian
Fujitsu Australia

Attachments:

v28-0001-Extend-the-output-plugin-API-to-allow-decoding-p.patch
v28-0002-Allow-decoding-at-prepare-time-in-ReorderBuffer.patch
v28-0003-Support-2PC-txn-tests-for-test_decoding.patch
v28-0004-Support-2PC-txn-tests-for-concurrent-aborts.patch
v28-0005-Support-2PC-txn-spoolfile.patch
v28-0006-Support-2PC-txn-pgoutput.patch
v28-0007-Support-2PC-txn-subscriber-tests.patch
v28-0008-Support-2PC-documentation.patch
#134Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Amit Kapila (#132)

On Wed, Nov 25, 2020 at 11:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

One problem with this patch is: What if we have assembled a consistent
snapshot after prepare and before commit prepared. In that case, it
will currently just send commit prepared record which would be a bad
idea. It should decode the entire transaction for such cases at commit
prepared time. This same problem is noticed by Arseny Sher, see
problem-2 in email [1].

I'm not sure I understand how you could assemble a consistent snapshot
after prepare but before commit prepared?
Doesn't a consistent snapshot require that all in-progress
transactions commit? I've tried starting a new subscription after
a prepare on the publisher and I see that the create subscription just
hangs till the transaction on the publisher is either committed or
rolled back.
Even if I try to create a replication slot using
pg_create_logical_replication_slot when a transaction has been
prepared but not yet committed, it just hangs till the transaction is
committed/aborted.

regards,
Ajin Cherian
Fujitsu Australia

#135Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Ajin Cherian (#134)

On Thu, Nov 26, 2020 at 4:24 PM Ajin Cherian <itsajin@gmail.com> wrote:

On Wed, Nov 25, 2020 at 11:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

One problem with this patch is: What if we have assembled a consistent
snapshot after prepare and before commit prepared. In that case, it
will currently just send commit prepared record which would be a bad
idea. It should decode the entire transaction for such cases at commit
prepared time. This same problem is noticed by Arseny Sher, see
problem-2 in email [1].

I'm not sure I understand how you could assemble a consistent snapshot
after prepare but before commit prepared?
Doesn't a consistent snapshot require that all in-progress
transactions commit?

By above, I don't mean that the transaction is not committed. I am
talking about the timing of WAL. It is possible that between WAL of
Prepare and Commit Prepared, we reach a consistent state.

I've tried starting a new subscription after
a prepare on the publisher and I see that the create subscription just
hangs till the transaction on the publisher is either committed or
rolled back.

I think what you need to do to reproduce this is to follow the
snapshot machinery in SnapBuildFindSnapshot. Basically, first, start a
transaction (say transaction-id is 500) and do some operations but
don't commit. Here, if you create a slot (via subscription or
otherwise), it will wait for 500 to complete and make the state as
SNAPBUILD_BUILDING_SNAPSHOT. Here, you can commit 500 and then, with a
debugger paused in that state, start another transaction (say 501), do some
operations but don't commit. Next time when you reach this function,
it will change the state to SNAPBUILD_FULL_SNAPSHOT and wait for 501,
now you can start another transaction (say 502) which you can prepare
but don't commit. Again start one more transaction 503, do some ops,
commit both 501 and 503. At this stage somehow we need to ensure that
an XLOG_RUNNING_XACTS record gets written. Then commit prepared 502.
Now, I think you
should notice that the consistent point is reached after 502's prepare
and before its commit. Now, this is just a theoretical scenario, you
need something on these lines and probably a way to force
XLOG_RUNNING_XACTS WAL (probably via debugger or some other way) at
the right times to reproduce it.

Thanks for trying to build a test case for this, it is really helpful.

--
With Regards,
Amit Kapila.

#136Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Amit Kapila (#135)

On Thu, Nov 26, 2020 at 10:43 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

I think what you need to do to reproduce this is to follow the
snapshot machinery in SnapBuildFindSnapshot. Basically, first, start a
transaction (say transaction-id is 500) and do some operations but
don't commit. Here, if you create a slot (via subscription or
otherwise), it will wait for 500 to complete and make the state as
SNAPBUILD_BUILDING_SNAPSHOT. Here, you can commit 500 and then having
debugger in that state, start another transaction (say 501), do some
operations but don't commit. Next time when you reach this function,
it will change the state to SNAPBUILD_FULL_SNAPSHOT and wait for 501,
now you can start another transaction (say 502) which you can prepare
but don't commit. Again start one more transaction 503, do some ops,
commit both 501 and 503. At this stage somehow we need to ensure that
XLOG_RUNNING_XACTS record. Then commit prepared 502. Now, I think you
should notice that the consistent point is reached after 502's prepare
and before its commit. Now, this is just a theoretical scenario, you
need something on these lines and probably a way to force
XLOG_RUNNING_XACTS WAL (probably via debugger or some other way) at
the right times to reproduce it.

Thanks for trying to build a test case for this, it is really helpful.

I tried the above steps; I was able to get the builder state to
SNAPBUILD_BUILDING_SNAPSHOT but was not able to get into the
SNAPBUILD_FULL_SNAPSHOT state.
Instead, the code moves straight from SNAPBUILD_BUILDING_SNAPSHOT to
the SNAPBUILD_CONSISTENT state.

In the function SnapBuildFindSnapshot, either the following check fails:

1327: TransactionIdPrecedesOrEquals(SnapBuildNextPhaseAt(builder),
running->oldestRunningXid))

because SnapBuildNextPhaseAt (which is the same as running->nextXid)
is higher than oldestRunningXid; or, when both are the same, it falls
through into the following condition, which appears earlier in the
code

1247: if (running->oldestRunningXid == running->nextXid)

and then the builder moves straight into the SNAPBUILD_CONSISTENT
state. At no point will the nextXid be less than oldestRunningXid. In
my sessions, I commit multiple txns, hoping to bump up
oldestRunningXid; I do checkpoints and have made sure that
XLOG_RUNNING_XACTS records are being inserted. But on each iteration
into SnapBuildFindSnapshot with a new XLOG_RUNNING_XACTS record,
oldestRunningXid is incremented one xid at a time, which eventually
makes it catch up with running->nextXid and reach the
SNAPBUILD_CONSISTENT state without ever entering the
SNAPBUILD_FULL_SNAPSHOT state.
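
For reference, the transition logic being discussed can be modeled as
a tiny standalone state machine. This is a simplified sketch of the
SnapBuildFindSnapshot() behaviour described above, not the real
snapbuild.c code; the function and enum names are illustrative:

```c
#include <assert.h>
#include <stdint.h>

/* Reduced model of the snapshot-builder states discussed above. */
typedef enum
{
    SKETCH_BUILDING_SNAPSHOT,
    SKETCH_FULL_SNAPSHOT,
    SKETCH_CONSISTENT
} SketchState;

static SketchState
sketch_next_state(SketchState cur, uint32_t oldest_running_xid,
                  uint32_t next_xid, uint32_t next_phase_at)
{
    /* No transactions running: jump straight to consistent. This is
     * the shortcut that keeps skipping SNAPBUILD_FULL_SNAPSHOT. */
    if (oldest_running_xid == next_xid)
        return SKETCH_CONSISTENT;

    /* Otherwise advance one phase once every xact that was running at
     * the previous phase boundary has finished. */
    if (next_phase_at <= oldest_running_xid)
        return (cur == SKETCH_BUILDING_SNAPSHOT)
            ? SKETCH_FULL_SNAPSHOT
            : SKETCH_CONSISTENT;

    return cur;                 /* still waiting */
}
```

The model makes it visible why oldestRunningXid creeping up one xid at
a time can hit the oldest == next shortcut before the phased path ever
fires.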

regards,
Ajin Cherian
Fujitsu Australia

#137Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Ajin Cherian (#136)

On Fri, Nov 27, 2020 at 6:35 PM Ajin Cherian <itsajin@gmail.com> wrote:

On Thu, Nov 26, 2020 at 10:43 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

I think what you need to do to reproduce this is to follow the
snapshot machinery in SnapBuildFindSnapshot. Basically, first, start a
transaction (say transaction-id is 500) and do some operations but
don't commit. Here, if you create a slot (via subscription or
otherwise), it will wait for 500 to complete and make the state as
SNAPBUILD_BUILDING_SNAPSHOT. Here, you can commit 500 and then having
debugger in that state, start another transaction (say 501), do some
operations but don't commit. Next time when you reach this function,
it will change the state to SNAPBUILD_FULL_SNAPSHOT and wait for 501,
now you can start another transaction (say 502) which you can prepare
but don't commit. Again start one more transaction 503, do some ops,
commit both 501 and 503. At this stage somehow we need to ensure that
XLOG_RUNNING_XACTS record. Then commit prepared 502. Now, I think you
should notice that the consistent point is reached after 502's prepare
and before its commit. Now, this is just a theoretical scenario, you
need something on these lines and probably a way to force
XLOG_RUNNING_XACTS WAL (probably via debugger or some other way) at
the right times to reproduce it.

Thanks for trying to build a test case for this, it is really helpful.

I tried the above steps, I was able to get the builder state to
SNAPBUILD_BUILDING_SNAPSHOT but was not able to get into the
SNAPBUILD_FULL_SNAPSHOT state.
Instead the code moves straight from SNAPBUILD_BUILDING_SNAPSHOT to
SNAPBUILD_CONSISTENT state.

I see the code coverage report and it appears that the part of the
code (getting the snapshot machinery into the SNAPBUILD_FULL_SNAPSHOT
state) is covered by existing tests [1]. So, another idea you can try
is to put
a break (say while (1)) in that part of code and run regression tests
(most probably the test_decoding or subscription tests should be
sufficient to hit). Then once you found which existing test covers
that, you can try to generate prepared transaction behavior as
mentioned above.

[1]: https://coverage.postgresql.org/src/backend/replication/logical/snapbuild.c.gcov.html

--
With Regards,
Amit Kapila.

#138Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Amit Kapila (#137)

On Sun, Nov 29, 2020 at 1:07 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

Then once you found which existing test covers
that, you can try to generate prepared transaction behavior as
mentioned above.

I was able to find out the test case that exercises that code: it is
the ondisk_startup spec in test_decoding. Using that, I was able to
create the problem with the following setup:
Using 4 sessions (this could be optimized to 3, but just sharing what
I've tested):

s1(session 1):
begin;
postgres=# begin;
BEGIN
postgres=*# SELECT pg_current_xact_id();
pg_current_xact_id
--------------------
546
(1 row)
--------------------the above commands leave a transaction running
s2:
CREATE TABLE do_write(id serial primary key);
SELECT 'init' FROM
pg_create_logical_replication_slot('isolation_slot', 'test_decoding');

---------------------this will hang because of 546 txn is pending

s3:
postgres=# begin;
BEGIN
postgres=*# SELECT pg_current_xact_id();
pg_current_xact_id
--------------------
547
(1 row)
-------------------------------- leave another txn pending---

s1:
postgres=*# ALTER TABLE do_write ADD COLUMN addedbys2 int;
ALTER TABLE
postgres=*# commit;
------------------------------commit the first txn; this will cause
state to move to SNAPBUILD_FULL_SNAPSHOT state
2020-11-30 03:31:07.354 EST [16312] LOG: logical decoding found
initial consistent point at 0/1730A18
2020-11-30 03:31:07.354 EST [16312] DETAIL: Waiting for transactions
(approximately 1) older than 553 to end.

s4:
postgres=# begin;
BEGIN
postgres=*# INSERT INTO do_write DEFAULT VALUES;
INSERT 0 1
postgres=*# prepare transaction 'test1';
PREPARE TRANSACTION
-------------- leave this transaction prepared

s3:
postgres=*# commit;
COMMIT
----------------- this will cause s2's call to return and a consistent
point has been reached.
2020-11-30 03:31:34.200 EST [16312] LOG: logical decoding found
consistent point at 0/1730D58

s4:
commit prepared 'test1';

s2:
postgres=# SELECT * FROM pg_logical_slot_get_changes('isolation_slot',
NULL, NULL, 'two-phase-commit', '1', 'include-xids', '0',
'skip-empty-xacts', '1');
lsn | xid | data
-----------+-----+-------------------------
0/1730FC8 | 553 | COMMIT PREPARED 'test1'
(1 row)

In pg_logical_slot_get_changes() we see only the COMMIT PREPARED but
no insert and no prepare command. I debugged this and I see that in
DecodePrepare the prepare is skipped, because the prepare lsn is prior
to the start_decoding_at point and SnapBuildXactNeedsSkip therefore
skips it. So, the reason for skipping the PREPARE is similar to the
reason why it would have been skipped on a restart after a previous
decode run.

One possible fix would be similar to what you suggested: in
DecodePrepare, add the check DecodingContextReady(ctx), which if false
would indicate that the PREPARE was prior to a consistent snapshot,
and if so, set a flag value in txn accordingly (say
RBTXN_PREPARE_NOT_DECODED?). If this flag is detected while handling
the COMMIT PREPARED, then handle it like you would handle a COMMIT.
This would ensure that all the changes of the transaction are sent out
and, at the same time, the subscriber side does not need to try to
handle a prepared transaction that does not exist on its side.

Let me know what you think of this?

regards,
Ajin Cherian
Fujitsu Australia

#139Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Ajin Cherian (#138)

On Mon, Nov 30, 2020 at 2:36 PM Ajin Cherian <itsajin@gmail.com> wrote:

On Sun, Nov 29, 2020 at 1:07 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

Then once you found which existing test covers
that, you can try to generate prepared transaction behavior as
mentioned above.

I was able to find out the test case that exercises that code, it is
the ondisk_startup spec in test_decoding. Using that, I was able to
create the problem with the following setup:
Using 4 sessions (this could be optimized to 3, but just sharing what
I've tested):

s1(session 1):
begin;
postgres=# begin;
BEGIN
postgres=*# SELECT pg_current_xact_id();
pg_current_xact_id
--------------------
546
(1 row)
--------------------the above commands leave a transaction running
s2:
CREATE TABLE do_write(id serial primary key);
SELECT 'init' FROM
pg_create_logical_replication_slot('isolation_slot', 'test_decoding');

---------------------this will hang because of 546 txn is pending

s3:
postgres=# begin;
BEGIN
postgres=*# SELECT pg_current_xact_id();
pg_current_xact_id
--------------------
547
(1 row)
-------------------------------- leave another txn pending---

s1:
postgres=*# ALTER TABLE do_write ADD COLUMN addedbys2 int;
ALTER TABLE
postgres=*# commit;
------------------------------commit the first txn; this will cause
state to move to SNAPBUILD_FULL_SNAPSHOT state
2020-11-30 03:31:07.354 EST [16312] LOG: logical decoding found
initial consistent point at 0/1730A18
2020-11-30 03:31:07.354 EST [16312] DETAIL: Waiting for transactions
(approximately 1) older than 553 to end.

s4:
postgres=# begin;
BEGIN
postgres=*# INSERT INTO do_write DEFAULT VALUES;
INSERT 0 1
postgres=*# prepare transaction 'test1';
PREPARE TRANSACTION
-------------- leave this transaction prepared

s3:
postgres=*# commit;
COMMIT
----------------- this will cause s2 call to return and a consistent
point has been reached.
2020-11-30 03:31:34.200 EST [16312] LOG: logical decoding found
consistent point at 0/1730D58

s4:
commit prepared 'test1';

s2:
postgres=# SELECT * FROM pg_logical_slot_get_changes('isolation_slot',
NULL, NULL, 'two-phase-commit', '1', 'include-xids', '0',
'skip-empty-xacts', '1');
lsn | xid | data
-----------+-----+-------------------------
0/1730FC8 | 553 | COMMIT PREPARED 'test1'
(1 row)

In pg_logical_slot_get_changes() we see only the Commit Prepared but
no insert and no prepare command. I debugged this and I see that in
DecodePrepare, the
prepare is skipped because the prepare lsn is prior to the
start_decoding_at point and is skipped in SnapBuildXactNeedsSkip.

So what caused it to skip due to start_decoding_at? Because the commit
where the snapshot became consistent is after Prepare. Does it happen
due to the below code in SnapBuildFindSnapshot() where we bump
start_decoding_at?

{
...
if (running->oldestRunningXid == running->nextXid)
{
if (builder->start_decoding_at == InvalidXLogRecPtr ||
builder->start_decoding_at <= lsn)
/* can decode everything after this */
builder->start_decoding_at = lsn + 1;

So,
the reason for skipping
the PREPARE is similar to the reason why it would have been skipped on
a restart after a previous decode run.

One possible fix would be similar to what you suggested, in
DecodePrepare , add the check DecodingContextReady(ctx), which if
false would indicate that the
PREPARE was prior to a consistent snapshot and if so, set a flag value
in txn accordingly

Sure, but you can see in your example above it got skipped due to
start_decoding_at not due to DecodingContextReady. So, the problem as
mentioned by me previously was how we distinguish those cases because
it can skip due to start_decoding_at during restart as well when we
would have already sent the prepare to the subscriber.

One idea could be that the subscriber skips the transaction if it sees
the transaction is already prepared. We already skip changes in apply
worker (subscriber) if they are performed via tablesync worker, see
should_apply_changes_for_rel. This will be a different thing but I am
trying to indicate that something similar is already done in
subscriber. I am not sure if we can detect this in the publisher; if
so, that would also be worth considering and might be better.

Thoughts?

--
With Regards,
Amit Kapila.

#140Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Amit Kapila (#139)

On Tue, Dec 1, 2020 at 12:46 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

So what caused it to skip due to start_decoding_at? Because the commit
where the snapshot became consistent is after Prepare. Does it happen
due to the below code in SnapBuildFindSnapshot() where we bump
start_decoding_at.

{
...
if (running->oldestRunningXid == running->nextXid)
{
if (builder->start_decoding_at == InvalidXLogRecPtr ||
builder->start_decoding_at <= lsn)
/* can decode everything after this */
builder->start_decoding_at = lsn + 1;

I think the reason is that in the function
DecodingContextFindStartpoint(), the code loops till it finds a
consistent snapshot. Once the consistent snapshot is found, it sets
slot->data.confirmed_flush = ctx->reader->EndRecPtr; this will be used
as start_decoding_at when the slot is restarted for decoding.

Sure, but you can see in your example above it got skipped due to
start_decoding_at not due to DecodingContextReady. So, the problem as
mentioned by me previously was how we distinguish those cases because
it can skip due to start_decoding_at during restart as well when we
would have already sent the prepare to the subscriber.

The distinguishing factor is that at restart, the Prepare does
satisfy DecodingContextReady (because the snapshot is consistent
then). In both cases, the prepare is prior to start_decoding_at, but
when the prepare is before a consistent point, it does not satisfy
DecodingContextReady, which is why I suggested using the
DecodingContextReady check to mark the prepare as 'not decoded'.

regards,
Ajin Cherian
Fujitsu Australia

#141Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Ajin Cherian (#140)

On Tue, Dec 1, 2020 at 7:55 AM Ajin Cherian <itsajin@gmail.com> wrote:

On Tue, Dec 1, 2020 at 12:46 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

Sure, but you can see in your example above it got skipped due to
start_decoding_at not due to DecodingContextReady. So, the problem as
mentioned by me previously was how we distinguish those cases because
it can skip due to start_decoding_at during restart as well when we
would have already sent the prepare to the subscriber.

The distinguishing factor is that at restart, the Prepare does satisfy
DecodingContextReady (because the snapshot is consistent then).
In both cases, the prepare is prior to start_decoding_at, but when the
prepare is before a consistent point,
it does not satisfy DecodingContextReady.

I think it won't be true when we reuse some already serialized
snapshot from some other slot. It is possible that we wouldn't have
encountered such a serialized snapshot while creating a slot but later
during replication, we might use it because by that time some other
slot has serialized the one at that point.

--
With Regards,
Amit Kapila.

#142Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#139)

On Mon, Nov 30, 2020 at 7:17 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Mon, Nov 30, 2020 at 2:36 PM Ajin Cherian <itsajin@gmail.com> wrote:

Sure, but you can see in your example above it got skipped due to
start_decoding_at not due to DecodingContextReady. So, the problem as
mentioned by me previously was how we distinguish those cases because
it can skip due to start_decoding_at during restart as well when we
would have already sent the prepare to the subscriber.

One idea could be that the subscriber skips the transaction if it sees
the transaction is already prepared.

To skip it, we need to send GID in begin message and then on
subscriber-side, check if the prepared xact already exists, if so then
set a flag. The flag needs to be set in begin/start_stream and reset
in stop_stream/commit/abort. Using the flag, we can skip the entire
contents of the prepared xact. On the ReorderBuffer side also, we need to
get and set GID in txn even when we skip it because we need to send
the same at commit time. In this solution, we won't be able to send it
during normal start_stream because by that time we won't know GID and
I think that won't be required. Note that this is only required when
we skipped sending prepare, otherwise, we just need to send
Commit-Prepared at commit time.
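
A minimal standalone sketch of the subscriber-side skip flag described
above. The GID lookup is a stand-in (a real implementation would check
the server's prepared-transaction state, e.g. what pg_prepared_xacts
exposes), and all names here are illustrative:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Stand-in registry of GIDs already prepared on the subscriber. */
static const char *already_prepared_gids[8];
static int n_prepared = 0;

static bool
gid_is_already_prepared(const char *gid)
{
    for (int i = 0; i < n_prepared; i++)
        if (strcmp(already_prepared_gids[i], gid) == 0)
            return true;
    return false;
}

typedef struct ApplyState
{
    bool skip_changes;   /* set at begin/start_stream, reset at
                          * stop_stream/commit/abort */
} ApplyState;

/* On begin of a prepared txn: skip if this GID was already prepared. */
static void
sketch_begin_prepare(ApplyState *st, const char *gid)
{
    st->skip_changes = gid_is_already_prepared(gid);
}

/* Each change is dropped while the skip flag is set. */
static void
sketch_apply_change(ApplyState *st, int *applied_counter)
{
    if (st->skip_changes)
        return;
    (*applied_counter)++;
}

static void
sketch_commit(ApplyState *st)
{
    st->skip_changes = false;   /* reset at commit/abort */
}
```

The point of the sketch is just the set/reset discipline of the flag
across begin-prepare, change, and commit callbacks.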

Another way to solve this problem via the publisher side is to
maintain, in some file at the slot level, whether we have sent the
prepare for a particular txn. Basically, after sending the prepare,
we need to update the slot
information on disk to indicate that the particular GID is sent (we
can probably store GID and LSN of Prepare). Then next time whenever we
have to skip prepare due to whatever reason, we can check the
existence of persistent information on disk for that GID, if it exists
then we need to send just Commit Prepared, otherwise, the entire
transaction. We can remove this information during or after
CheckPointSnapBuild, basically, we can remove the information of all
GID's that are after cutoff LSN computed via
ReplicationSlotsComputeLogicalRestartLSN. Now, we can even think of
removing this information after Commit Prepared, but I am not sure if
that is correct because we can't lose this information unless
start_decoding_at (or restart_lsn) is moved past the commit lsn.

Now, to persist this information, there could be multiple
possibilities (a) maintain the flexible array for GID's at the end of
ReplicationSlotPersistentData, (b) have a separate state file per-slot
for prepared xacts, (c) have a separate state file for each prepared
xact per-slot.

With (a), during an upgrade from the previous version there could be
a problem because the previous data won't match the new data, but I am
not sure if we maintain slot info intact after an upgrade. I think (c)
would
be simplest but OTOH, having many such files (in case there are more
prepared xacts) per-slot might not be a good idea.

One more thing that needs to be thought about: when we are sending
the entire xact at commit time, will we send the prepare separately?
Because if we don't send it separately, then later allowing the
PREPARE on the master to wait for the prepare via subscribers won't be
possible.

Thoughts?

--
With Regards,
Amit Kapila.

#143Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Amit Kapila (#142)

On Tue, Dec 1, 2020 at 6:26 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

One idea could be that the subscriber skips the transaction if it sees
the transaction is already prepared.

To skip it, we need to send GID in begin message and then on
subscriber-side, check if the prepared xact already exists, if so then
set a flag. The flag needs to be set in begin/start_stream and reset
in stop_stream/commit/abort. Using the flag, we can skip the entire
contents of the prepared xact. On the ReorderBuffer side also, we need to
get and set GID in txn even when we skip it because we need to send
the same at commit time. In this solution, we won't be able to send it
during normal start_stream because by that time we won't know GID and
I think that won't be required. Note that this is only required when
we skipped sending prepare, otherwise, we just need to send
Commit-Prepared at commit time.

After going through both the solutions, I think the above one is a better idea.
I also think, rather than changing the protocol for the regular
begin, we could have a special begin_prepare for prepared txns
specifically. This way we won't affect
non-prepared transactions. We will need to add in a begin_prepare callback
as well, which has the gid as one of the parameters. Other than this,
in ReorderBufferFinishPrepared, if the txn hasn't already been
prepared (because it was skipped in DecodePrepare), then we set the
prepared flag and call ReorderBufferReplay before calling the
commit-prepared callback.
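
That ReorderBufferFinishPrepared handling could look roughly like the
standalone sketch below; the RBTXN_PREPARED value and the replay/emit
helpers are illustrative stand-ins, not the real PostgreSQL symbols:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define RBTXN_PREPARED 0x0040   /* illustrative flag bit */

typedef struct SketchTxn
{
    uint32_t txn_flags;
    int      replayed;             /* times the txn was decoded */
    bool     commit_prepared_sent;
} SketchTxn;

static void
replay_transaction(SketchTxn *txn)
{
    txn->replayed++;               /* stand-in for ReorderBufferReplay */
}

static void
emit_commit_prepared(SketchTxn *txn)
{
    txn->commit_prepared_sent = true;
}

static void
sketch_finish_prepared(SketchTxn *txn)
{
    /* If the prepare was skipped earlier (flag never set), decode the
     * whole transaction now before sending commit prepared. */
    if (!(txn->txn_flags & RBTXN_PREPARED))
    {
        txn->txn_flags |= RBTXN_PREPARED;
        replay_transaction(txn);
    }
    emit_commit_prepared(txn);
}
```

Whether a transaction already sent its prepare or not, the subscriber
ends up receiving exactly one full copy of the changes plus the commit
prepared.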

At the subscriber side, on receipt of the special begin-prepare, we
first check if the gid is of an already prepared txn; if yes, then we
set a flag such that the rest of the transaction's changes are skipped
rather than applied. If it's not a gid that has already been prepared,
then we continue to apply changes as we would otherwise. So, this is
the approach I'd pick. The drawback is probably that we send extra
prepares after a restart, which might be quite common while using
test_decoding but not so common when using pgoutput and real-world
pub/sub scenarios.

The second approach is a bit more involved, requiring file creation
and manipulation, as well as the overhead of having to write to a file
on every prepare, which might be a performance bottleneck.

Let me know what you think.

regards,
Ajin Cherian
Fujitsu Australia

#144Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Ajin Cherian (#143)

On Wed, Dec 2, 2020 at 12:47 PM Ajin Cherian <itsajin@gmail.com> wrote:

On Tue, Dec 1, 2020 at 6:26 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

One idea could be that the subscriber skips the transaction if it sees
the transaction is already prepared.

To skip it, we need to send GID in begin message and then on
subscriber-side, check if the prepared xact already exists, if so then
set a flag. The flag needs to be set in begin/start_stream and reset
in stop_stream/commit/abort. Using the flag, we can skip the entire
contents of the prepared xact. On the ReorderBuffer side also, we need to
get and set GID in txn even when we skip it because we need to send
the same at commit time. In this solution, we won't be able to send it
during normal start_stream because by that time we won't know GID and
I think that won't be required. Note that this is only required when
we skipped sending prepare, otherwise, we just need to send
Commit-Prepared at commit time.

After going through both the solutions, I think the above one is a better idea.
I also think, rather than change the protocol for the regular begin,
we could have
a special begin_prepare for prepared txns specifically. This way we won't affect
non-prepared transactions. We will need to add in a begin_prepare callback
as well, which has the gid as one of the parameters. Other than this,
in ReorderBufferFinishPrepared, if the txn hasn't already been
prepared (because it was skipped in DecodePrepare), then we set
prepared flag and call
ReorderBufferReplay before calling commit-prepared callback.

At the subscriber side, on receipt of the special begin-prepare, we
first check if the gid is of an already
prepared txn, if yes, then we set a flag such that the rest of the
transaction are not applied but skipped, If it's not
a gid that has already been prepared, then continue to apply changes
as you would otherwise.

The above sketch sounds good to me and additionally you might want to
add Asserts in the streaming APIs on the subscriber side to ensure
that we never reach the already-prepared case there. We should never
need to stream the changes when we are skipping prepare either because
the snapshot was not consistent by that time or we have already sent
those changes before restart.

So, this is the
approach I'd pick. The drawback is probably that we send extra
prepares after a restart, which might be quite common
while using test_decoding but not so common when using the pgoutput
and real world scenarios of pub/sub.

The restarts would be rare. It depends on how one uses the
test_decoding module; it is primarily for testing, and if you write a
test in such a way that it performs WAL decoding again and again for
the same WAL (aka simulating restarts) then you would probably see it,
but otherwise one shouldn't see it.

--
With Regards,
Amit Kapila.

#145Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Amit Kapila (#144)
9 attachment(s)

I have rebased the v28 patch set (made necessary due to recent commit [1]).
[1]: https://github.com/postgres/postgres/commit/0926e96c493443644ba8e96b5d96d013a9ffaf64

And at the same time I have added patch 0009 to this set - this is
for the new SUBSCRIPTION option "two_phase" (0009 is still WIP but
stable).

PSA new patch set with version bumped to v29.

---

Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

v29-0001-Extend-the-output-plugin-API-to-allow-decoding-p.patch
v29-0002-Allow-decoding-at-prepare-time-in-ReorderBuffer.patch
v29-0003-Support-2PC-txn-tests-for-test_decoding.patch
v29-0004-Support-2PC-txn-tests-for-concurrent-aborts.patch
v29-0005-Support-2PC-txn-spoolfile.patch
v29-0006-Support-2PC-txn-pgoutput.patch
v29-0007-Support-2PC-txn-subscriber-tests.patch
v29-0008-Support-2PC-documentation.patch
v29-0009-Support-2PC-txn-WIP-Subscription-option.patch
#146Masahiko Sawada
Masahiko Sawada
sawada.mshk@gmail.com
In reply to: Peter Smith (#145)

On Wed, Dec 2, 2020 at 8:24 PM Peter Smith <smithpb2250@gmail.com> wrote:

I have rebased the v28 patch set (made necessary due to recent commit [1])
[1] https://github.com/postgres/postgres/commit/0926e96c493443644ba8e96b5d96d013a9ffaf64

And at the same time I have added patch 0009 to this set - This is for
the new SUBSCRIPTION option "two_phase" (0009 is still WIP but
stable).

PSA new patch set with version bumped to v29.

Thank you for updating the patch!

While looking at the patch set I found that the tests in
src/test/subscription don't work with this patch. I got the following
error:

2020-12-03 15:18:12.666 JST [44771] tap_sub ERROR: unrecognized
pgoutput option: two_phase
2020-12-03 15:18:12.666 JST [44771] tap_sub CONTEXT: slot "tap_sub",
output plugin "pgoutput", in the startup callback
2020-12-03 15:18:12.666 JST [44771] tap_sub STATEMENT:
START_REPLICATION SLOT "tap_sub" LOGICAL 0/0 (proto_version '2',
two_phase 'on', publication_names '"tap_pub","tap_pub_ins_only"')

In v29-0009 patch "two_phase" option is added on the subscription side
(i.g., libpqwalreceiver) but it seems not on the publisher side
(pgoutput).

Regards,

--
Masahiko Sawada
EnterpriseDB: https://www.enterprisedb.com/

#147Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Masahiko Sawada (#146)

On Thu, Dec 3, 2020 at 5:34 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

While looking at the patch set I found that the tests in
src/test/subscription don't work with this patch. I got the following
error:

2020-12-03 15:18:12.666 JST [44771] tap_sub ERROR: unrecognized
pgoutput option: two_phase
2020-12-03 15:18:12.666 JST [44771] tap_sub CONTEXT: slot "tap_sub",
output plugin "pgoutput", in the startup callback
2020-12-03 15:18:12.666 JST [44771] tap_sub STATEMENT:
START_REPLICATION SLOT "tap_sub" LOGICAL 0/0 (proto_version '2',
two_phase 'on', publication_names '"tap_pub","tap_pub_ins_only"')

In v29-0009 patch "two_phase" option is added on the subscription side
(i.g., libpqwalreceiver) but it seems not on the publisher side
(pgoutput).

The v29-0009 patch is still a WIP for a new SUBSCRIPTION "two_phase"
option so it is not yet fully implemented. I did run following prior
to upload but somehow did not see those failures yesterday:
cd src/test/subscription
make check

Anyway, as 0009 is the last of the set please just git apply
--reverse that one if it is causing a problem.

Sorry for any inconvenience. I will add the missing functionality to
0009 as soon as I can.

Kind Regards,
Peter Smith.
Fujitsu Australia

#148Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Peter Smith (#147)
1 attachment(s)

On Thu, Dec 3, 2020 at 6:21 PM Peter Smith <smithpb2250@gmail.com> wrote:

Sorry for any inconvenience. I will add the missing functionality to
0009 as soon as I can.

PSA a **replacement** patch for the previous v29-0009.

This should correct the recently reported trouble [1].
[1]: /messages/by-id/CAD21AoBnZ6dYffVjOCdSvSohR_1ZNedqmb=6P9w_H6W0bK1s6g@mail.gmail.com

I observed after this patch:
make check is all OK.
cd src/test/subscription, then make check is all OK.

~

Note that the tablesync worker's (temporary) slot always uses
two_phase *off*, regardless of the user setting.

e.g.

CREATE SUBSCRIPTION tap_sub CONNECTION 'host=localhost dbname=test_pub
application_name=tap_sub' PUBLICATION tap_pub WITH (streaming = on,
two_phase = on);

will show in the logs that only the apply worker slot enabled the two_phase.

STATEMENT: START_REPLICATION SLOT "tap_sub" LOGICAL 0/0
(proto_version '2', streaming 'on', two_phase 'on', publication_names
'"tap_pub"')
STATEMENT: START_REPLICATION SLOT "tap_sub_16395_sync_16385" LOGICAL
0/16076D8 (proto_version '2', streaming 'on', publication_names
'"tap_pub"')

---

Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

v29-0009-Support-2PC-txn-Subscription-option.patchapplication/octet-stream; name=v29-0009-Support-2PC-txn-Subscription-option.patch
#149Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Amit Kapila (#142)
9 attachment(s)

On Tue, Dec 1, 2020 at 6:26 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

To skip it, we need to send GID in begin message and then on
subscriber-side, check if the prepared xact already exists, if so then
set a flag. The flag needs to be set in begin/start_stream and reset
in stop_stream/commit/abort. Using the flag, we can skip the entire
contents of the prepared xact. In ReorderBuffer-side also, we need to
get and set GID in txn even when we skip it because we need to send
the same at commit time. In this solution, we won't be able to send it
during normal start_stream because by that time we won't know GID and
I think that won't be required. Note that this is only required when
we skipped sending prepare, otherwise, we just need to send
Commit-Prepared at commit time.

I have implemented these changes and tested the fix using the test
setup I had shared above and it seems to be working fine.
I have also tested restarts that simulate duplicate prepares being
sent by the publisher and verified that it is handled correctly by the
subscriber.
Do have a look at the changes and let me know if you have any comments.
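The skip-on-duplicate behaviour described above can be sketched as a small toy model (Python as pseudocode; all names here are illustrative, not the actual apply-worker symbols):

```python
# Toy sketch of the subscriber-side skip logic: if the GID from the begin
# message is already prepared locally, skip the whole transaction contents.

class ApplyWorker:
    def __init__(self, prepared_gids):
        self.prepared_gids = set(prepared_gids)  # GIDs already prepared locally
        self.skipping = False

    def begin_prepare(self, gid):
        # GID arrives in the begin message; set the skip flag when the
        # prepared xact already exists (e.g. resent after a restart).
        self.skipping = gid in self.prepared_gids

    def apply_change(self, change, applied):
        if self.skipping:
            return  # skip the entire contents of the prepared xact
        applied.append(change)

    def prepare(self):
        # Reset the flag at prepare/commit/abort.
        self.skipping = False
```

This only models the flag lifecycle (set in begin, reset at end-of-txn); the real patch also has to retain the GID in the ReorderBuffer so it can be sent again at commit time.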

regards,
Ajin Cherian
Fujitsu Australia

Attachments:

v30-0003-Support-2PC-txn-tests-for-test_decoding.patchapplication/octet-stream; name=v30-0003-Support-2PC-txn-tests-for-test_decoding.patch
v30-0005-Support-2PC-txn-spoolfile.patchapplication/octet-stream; name=v30-0005-Support-2PC-txn-spoolfile.patch
v30-0004-Support-2PC-txn-tests-for-concurrent-aborts.patchapplication/octet-stream; name=v30-0004-Support-2PC-txn-tests-for-concurrent-aborts.patch
v30-0001-Extend-the-output-plugin-API-to-allow-decoding-p.patchapplication/octet-stream; name=v30-0001-Extend-the-output-plugin-API-to-allow-decoding-p.patch
v30-0002-Allow-decoding-at-prepare-time-in-ReorderBuffer.patchapplication/octet-stream; name=v30-0002-Allow-decoding-at-prepare-time-in-ReorderBuffer.patch
v30-0006-Support-2PC-txn-pgoutput.patchapplication/octet-stream; name=v30-0006-Support-2PC-txn-pgoutput.patch
v30-0008-Support-2PC-documentation.patchapplication/octet-stream; name=v30-0008-Support-2PC-documentation.patch
v30-0009-Support-2PC-txn-Subscription-option.patchapplication/octet-stream; name=v30-0009-Support-2PC-txn-Subscription-option.patch
v30-0007-Support-2PC-txn-subscriber-tests.patchapplication/octet-stream; name=v30-0007-Support-2PC-txn-subscriber-tests.patch
#150Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Ajin Cherian (#149)
9 attachment(s)

On Tue, Dec 8, 2020 at 2:01 PM Ajin Cherian <itsajin@gmail.com> wrote:

On Tue, Dec 1, 2020 at 6:26 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

To skip it, we need to send GID in begin message and then on
subscriber-side, check if the prepared xact already exists, if so then
set a flag. The flag needs to be set in begin/start_stream and reset
in stop_stream/commit/abort. Using the flag, we can skip the entire
contents of the prepared xact. In ReorderBuffer-side also, we need to
get and set GID in txn even when we skip it because we need to send
the same at commit time. In this solution, we won't be able to send it
during normal start_stream because by that time we won't know GID and
I think that won't be required. Note that this is only required when
we skipped sending prepare, otherwise, we just need to send
Commit-Prepared at commit time.

I have implemented these changes and tested the fix using the test
setup I had shared above and it seems to be working fine.
I have also tested restarts that simulate duplicate prepares being
sent by the publisher and verified that it is handled correctly by the
subscriber.

This implementation has a flaw in that it has used commit_lsn for the
prepare when we are sending prepare just before commit prepared. We
can't send the commit LSN with prepare because if the subscriber
crashes after prepare then it would update
replorigin_session_origin_lsn with that commit_lsn. Now, after the
restart, because we will use that LSN to start decoding, the Commit
Prepared will get skipped. To fix this, we need to remember the
prepare LSN and other information even when we skip prepare and then
use it while sending the prepare during commit prepared.

Now, after fixing this, I discovered another issue which is that we
allow adding a new snapshot to prepared transactions via
SnapBuildDistributeNewCatalogSnapshot. We can only allow it to get
added to in-progress transactions. If you comment out the changes
added in SnapBuildDistributeNewCatalogSnapshot then you will notice
one test failure which indicates this problem. This problem was not
evident before the bug-fix in the previous paragraph because you were
using commit-lsn even for the prepare and newly added snapshot change
appears to be before the end_lsn.
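The restart hazard in the first paragraph can be illustrated with a toy model (LSNs as plain integers; names and values are made up):

```python
# Toy model of the replication-origin restart hazard: if PREPARE advances
# the origin to the commit LSN, decoding restarts past COMMIT PREPARED.

PREPARE_LSN = 100
COMMIT_PREPARED_LSN = 200

def records_replayed_after_restart(origin_lsn):
    # After a restart the subscriber resumes from origin_lsn, so only
    # records strictly beyond it are decoded again.
    wal = [("PREPARE", PREPARE_LSN),
           ("COMMIT PREPARED", COMMIT_PREPARED_LSN)]
    return [kind for kind, lsn in wal if lsn > origin_lsn]

# Buggy: PREPARE recorded the commit LSN as the origin LSN, so after a
# crash between PREPARE and COMMIT PREPARED nothing is replayed.
assert records_replayed_after_restart(COMMIT_PREPARED_LSN) == []

# Fixed: remember and use the real prepare LSN, so COMMIT PREPARED is
# still decoded after the restart.
assert records_replayed_after_restart(PREPARE_LSN) == ["COMMIT PREPARED"]
```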

Some other assorted changes in various patches:
v31-0001-Extend-the-output-plugin-API-to-allow-decoding-o
1. I have changed the filter_prepare API to match the signature with
FilterByOrigin. I don't see the need for ReorderBufferTxn or xid in
the API.
2. I have expanded the documentation of 'Begin Prepare Callback' to
explain how a user can use it to detect already prepared transactions
and in which scenarios that can happen.
3. Added a few comments in the code atop begin_prepare_cb_wrapper to
explain why we are adding this new API.
4. Move the check whether the filter_prepare callback is defined from
filter_prepare_cb_wrapper to caller. This is similar to how
FilterByOrigin works.
5. Fixed various whitespace and cosmetic issues.
6. Update commit message to include two of the newly added APIs

v31-0002-Allow-decoding-at-prepare-time-in-ReorderBuffer
1. Changed the variable names and comments in DecodeXactOp.
2. A new API for FilterPrepare similar to FilterByOrigin and use that
instead of ReorderBufferPrepareNeedSkip.
3. In DecodeCommit, we need to update the reorderbuffer about the
surviving subtransactions for both ReorderBufferFinishPrepared and
ReorderBufferCommit because now both can process the transaction.
4. Because, now we need to remember the prepare info even when we skip
it, I have simplified ReorderBufferPrepare API by removing the extra
parameters as that information will be now available via
ReorderBufferTxn.
5. Updated comments at various places.

v31-0006-Support-2PC-txn-pgoutput
1. Added Asserts in streaming APIs on the subscriber-side to ensure
that we should never reach there for the already prepared transaction
case. We never need to stream the changes when we are skipping prepare
either because the snapshot was not consistent by that time or we have
already sent those changes before restart. Added the same Assert in
Begin and Commit routines because while skipping prepared txn, we must
not receive the changes of any other xact.
2.
+ /*
+ * Flags are determined from the state of the transaction. We know we
+ * always get PREPARE first and then [COMMIT|ROLLBACK] PREPARED, so if
+ * it's already marked as committed then it has to be COMMIT PREPARED (and
+ * likewise for abort / ROLLBACK PREPARED).
+ */
+ if (rbtxn_commit_prepared(txn))
+ flags = LOGICALREP_IS_COMMIT_PREPARED;
+ else if (rbtxn_rollback_prepared(txn))
+ flags = LOGICALREP_IS_ROLLBACK_PREPARED;
+ else
+ flags = LOGICALREP_IS_PREPARE;

I don't like clubbing three different operations under one message
LOGICAL_REP_MSG_PREPARE. It looks awkward to use new flags
RBTXN_COMMIT_PREPARED and RBTXN_ROLLBACK_PREPARED in ReorderBuffer so
that we can recognize these operations in corresponding callbacks. I
think setting any flag in ReorderBuffer should not dictate the
behavior in callbacks. Then also there are few things that are not
common to those APIs like the patch has an Assert to say that the txn
is marked with prepare flag for all three operations which I think is
not true for Rollback Prepared after the restart. We don't ensure to
set the Prepare flag if the Rollback Prepare happens after the
restart. Then, we have to introduce separate flags to distinguish
prepare/commit prepared/rollback prepared to distinguish multiple
operations sent as protocol messages. Also, all these operations are
mutually exclusive so it will be better to send separate messages for
each of these and I have changed it accordingly in the attached patch.

3. The patch has a duplicate code to send replication origins. I have
moved the common code to a separate function.

v31-0009-Support-2PC-txn-Subscription-option
1.
--- a/src/include/catalog/catversion.h
+++ b/src/include/catalog/catversion.h
@@ -53,6 +53,6 @@
  */
 /* yyyymmddN */
-#define CATALOG_VERSION_NO 202011251
+#define CATALOG_VERSION_NO 202011271

No need to change catversion as this gets changed frequently and that
leads to conflict in the patch. We can change it either in the final
version or normally committers take care of this. If you want to
remember it, maybe adding a line for it in the commit message should
be okay. For now, I have removed this from the patch.

--
With Regards,
Amit Kapila.

Attachments:

v31-0009-Support-2PC-txn-Subscription-option.patchapplication/octet-stream; name=v31-0009-Support-2PC-txn-Subscription-option.patch
v31-0001-Extend-the-output-plugin-API-to-allow-decoding-o.patchapplication/octet-stream; name=v31-0001-Extend-the-output-plugin-API-to-allow-decoding-o.patch
v31-0002-Allow-decoding-at-prepare-time-in-ReorderBuffer.patchapplication/octet-stream; name=v31-0002-Allow-decoding-at-prepare-time-in-ReorderBuffer.patch
v31-0003-Support-2PC-txn-tests-for-test_decoding.patchapplication/octet-stream; name=v31-0003-Support-2PC-txn-tests-for-test_decoding.patch
v31-0004-Support-2PC-txn-tests-for-concurrent-aborts.patchapplication/octet-stream; name=v31-0004-Support-2PC-txn-tests-for-concurrent-aborts.patch
v31-0005-Support-2PC-txn-spoolfile.patchapplication/octet-stream; name=v31-0005-Support-2PC-txn-spoolfile.patch
v31-0006-Support-2PC-txn-pgoutput.patchapplication/octet-stream; name=v31-0006-Support-2PC-txn-pgoutput.patch
v31-0007-Support-2PC-txn-subscriber-tests.patchapplication/octet-stream; name=v31-0007-Support-2PC-txn-subscriber-tests.patch
v31-0008-Support-2PC-documentation.patchapplication/octet-stream; name=v31-0008-Support-2PC-documentation.patch
#151Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#150)

On Mon, Dec 14, 2020 at 2:59 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

Today, I looked at one of the issues discussed earlier in this thread
[1], which arises when the
user explicitly locks the catalog relation (like Lock pg_class) or
performs Cluster on non-relmapped catalog relations (like Cluster
pg_trigger using pg_class_oid_index; and the user_table on which we
have performed any operation has a trigger) in the prepared xact. As
discussed previously, we don't have a problem when user tables are
exclusively locked because during decoding we don't acquire any lock
on those and in fact, we have a test case for the same in the patch.

In the previous discussion, most people seem to be of opinion that we
should document it in a category "don't do that", or prohibit to
prepare transactions that lock system tables in the exclusive mode as
any way that can block the entire system. The other possibility could
be that the plugin can allow enabling lock_timeout when it wants to
allow decoding of two-phase xacts and if the timeout occurs it tries
to fetch by disabling two-phase option provided by the patch.
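The lock_timeout fallback mentioned above could look roughly like this (a hedged sketch only; the function and exception names are hypothetical stand-ins for the plugin's real fetch path):

```python
# Sketch of the fallback idea: try decoding at prepare time, and if a
# lock timeout fires (e.g. the prepared xact holds an exclusive lock on
# a catalog), retry with the two-phase option disabled.

class LockTimeout(Exception):
    pass

def fetch_changes(two_phase):
    # Stand-in for the real decoding call; here we pretend prepare-time
    # decoding always blocks on a locked catalog and times out.
    if two_phase:
        raise LockTimeout
    return ["changes decoded at commit time"]

def fetch_with_fallback():
    try:
        return fetch_changes(two_phase=True)
    except LockTimeout:
        # Fall back to commit-time decoding, as the patch's option allows.
        return fetch_changes(two_phase=False)
```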

I think it is better to document this as there is no realistic
scenario where it can happen. I also think separately (not as part of
this patch) we can investigate whether it is a good idea to prohibit
prepare for transactions that acquire exclusive locks on catalog
relations.

Thoughts?

[1]: /messages/by-id/CAMGcDxf83P5SGnGH52=_0wRP9pO6uRWCMRwAA0nxKtZvir2_vQ@mail.gmail.com

--
With Regards,
Amit Kapila.

#152Masahiko Sawada
Masahiko Sawada
sawada.mshk@gmail.com
In reply to: Amit Kapila (#150)

On Mon, Dec 14, 2020 at 6:27 PM Amit Kapila <amit.kapila16@gmail.com> wrote:


Thank you for updating the patch. I have two questions:

-----
@@ -239,6 +239,19 @@ CREATE SUBSCRIPTION <replaceable
class="parameter">subscription_name</replaceabl
          </para>
         </listitem>
        </varlistentry>
+       <varlistentry>
+        <term><literal>two_phase</literal> (<type>boolean</type>)</term>
+        <listitem>
+         <para>
+          Specifies whether two-phase commit is enabled for this subscription.
+          The default is <literal>false</literal>.
+          When two-phase commit is enabled then the decoded
transactions are sent
+          to the subscriber on the PREPARE TRANSACTION. When
two-phase commit is not
+          enabled then PREPARE TRANSACTION and COMMIT/ROLLBACK PREPARED are not
+          decoded on the publisher.
+         </para>
+        </listitem>
+       </varlistentry>

The user will need to specify the 'two_phase’ option on CREATE
SUBSCRIPTION. It would mean the user will need to control what data is
streamed both on publication side for INSERT/UPDATE/DELETE/TRUNCATE
and on subscriber side for PREPARE. Looking at the implementation of
the ’two_phase’ option of CREATE SUBSCRIPTION, it ultimately just
passes the ‘two_phase' option to the publisher. Why don’t we set it on
the publisher side? Also, I guess we can improve the description of
’two_phase’ option of CREATE SUBSCRIPTION in the doc by adding the
fact that when this option is not enabled the transaction prepared on
the publisher is decoded as a normal transaction:

------
+   if (LookupGXact(begin_data.gid))
+   {
+       /*
+        * If this gid has already been prepared then we dont want to apply
+        * this txn again. This can happen after restart where upstream can
+        * send the prepared transaction again. See
+        * ReorderBufferFinishPrepared. Don't update remote_final_lsn.
+        */
+       skip_prepared_txn = true;
+       return;
+   }

When PREPARE arrives at the subscriber node but there is the prepared
transaction with the same transaction identifier, the apply worker
skips the whole transaction. So if the users prepared a transaction
with the same identifier on the subscriber, the prepared transaction
that came from the publisher would be ignored without any messages. On
the other hand, if applying other operations such as HEAP_INSERT
conflicts (such as when violating the unique constraint) the apply
worker raises an ERROR and stops logical replication until the
conflict is resolved. IIUC since we can know that the prepared
transaction came from the same publisher again by checking origin_lsn
in TwoPhaseFileHeader I guess we can skip the PREPARE message only
when the existing prepared transaction has the same LSN and the same
identifier. To be exact, it’s still possible that the subscriber gets
two PREPARE messages having the same LSN and name from two different
publishers but it’s unlikely happen in practice.
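The stricter duplicate check suggested above can be sketched as follows (the dict is an illustrative stand-in for LookupGXact plus the origin_lsn stored in TwoPhaseFileHeader; values are made up):

```python
# Sketch: skip a resent PREPARE only when an existing prepared xact
# matches on both GID and origin LSN, so a locally prepared xact that
# merely shares the name is not silently swallowed.

prepared_xacts = {
    "gid1": {"origin_lsn": 100},   # prepared earlier from this publisher
    "gid2": {"origin_lsn": None},  # prepared locally by a user, same name
}

def should_skip_prepare(gid, incoming_origin_lsn):
    entry = prepared_xacts.get(gid)
    return entry is not None and entry["origin_lsn"] == incoming_origin_lsn

# Resent after restart: same GID and same origin LSN, so skip silently.
assert should_skip_prepare("gid1", 100)

# Name collision with a locally prepared xact: do not skip; let the
# apply worker raise an error instead of dropping the transaction.
assert not should_skip_prepare("gid2", 100)
```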

Regards,

--
Masahiko Sawada
EnterpriseDB: https://www.enterprisedb.com/

#153Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Masahiko Sawada (#152)

On Wed, Dec 16, 2020 at 1:04 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

Thank you for updating the patch. I have two questions:

-----
@@ -239,6 +239,19 @@ CREATE SUBSCRIPTION <replaceable
class="parameter">subscription_name</replaceabl
</para>
</listitem>
</varlistentry>
+       <varlistentry>
+        <term><literal>two_phase</literal> (<type>boolean</type>)</term>
+        <listitem>
+         <para>
+          Specifies whether two-phase commit is enabled for this subscription.
+          The default is <literal>false</literal>.
+          When two-phase commit is enabled then the decoded
transactions are sent
+          to the subscriber on the PREPARE TRANSACTION. When
two-phase commit is not
+          enabled then PREPARE TRANSACTION and COMMIT/ROLLBACK PREPARED are not
+          decoded on the publisher.
+         </para>
+        </listitem>
+       </varlistentry>

The user will need to specify the 'two_phase’ option on CREATE
SUBSCRIPTION. It would mean the user will need to control what data is
streamed both on publication side for INSERT/UPDATE/DELETE/TRUNCATE
and on subscriber side for PREPARE. Looking at the implementation of
the ’two_phase’ option of CREATE SUBSCRIPTION, it ultimately just
passes the ‘two_phase' option to the publisher. Why don’t we set it on
the publisher side?

There could be multiple subscriptions for the same publication, some
want to decode the transaction at prepare time and others might want
to decode at commit time only. Also, one subscription could subscribe
to multiple publications, so not sure if it is even feasible to set at
publication level (consider one txn has changes belonging to multiple
publications). This option controls how the data is streamed from a
publication similar to other options like 'streaming'. Why do you
think this should be any different?

Also, I guess we can improve the description of
’two_phase’ option of CREATE SUBSCRIPTION in the doc by adding the
fact that when this option is not enabled the transaction prepared on
the publisher is decoded as a normal transaction:

Sounds reasonable.

------
+   if (LookupGXact(begin_data.gid))
+   {
+       /*
+        * If this gid has already been prepared then we dont want to apply
+        * this txn again. This can happen after restart where upstream can
+        * send the prepared transaction again. See
+        * ReorderBufferFinishPrepared. Don't update remote_final_lsn.
+        */
+       skip_prepared_txn = true;
+       return;
+   }

When PREPARE arrives at the subscriber node but there is the prepared
transaction with the same transaction identifier, the apply worker
skips the whole transaction. So if the users prepared a transaction
with the same identifier on the subscriber, the prepared transaction
that came from the publisher would be ignored without any messages. On
the other hand, if applying other operations such as HEAP_INSERT
conflicts (such as when violating the unique constraint) the apply
worker raises an ERROR and stops logical replication until the
conflict is resolved. IIUC since we can know that the prepared
transaction came from the same publisher again by checking origin_lsn
in TwoPhaseFileHeader I guess we can skip the PREPARE message only
when the existing prepared transaction has the same LSN and the same
identifier. To be exact, it’s still possible that the subscriber gets
two PREPARE messages having the same LSN and name from two different
publishers but it’s unlikely happen in practice.

The idea sounds reasonable. I'll try and see if this works.

Thanks.

--
With Regards,
Amit Kapila.

#154Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Amit Kapila (#150)

I have reviewed the changes, did not have any new comments.
While testing, I found an issue in this patch. During initialisation,
pgoutput is not fully initialised and the subscription parameters
are not all read. As a result, ctx->twophase could be
set to true, even if the subscription does not specify so. For this,
we need to make the following change in pgoutput.c:
pgoutput_startup(), similar to how streaming is handled.

/*
* This is replication start and not slot initialization.
*
* Parse and validate options passed by the client.
*/
if (!is_init)
{
:
:
}
else
{
/* Disable the streaming during the slot initialization mode. */
ctx->streaming = false;
+ ctx->twophase = false;
}

regards,
Ajin

#155Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Ajin Cherian (#154)

On Thu, Dec 17, 2020 at 7:02 AM Ajin Cherian <itsajin@gmail.com> wrote:


I have reviewed the changes, did not have any new comments.
While testing, I found an issue in this patch. During initialisation,
pgoutput is not fully initialised and the subscription parameters
are not all read. As a result, ctx->twophase could be
set to true, even if the subscription does not specify so. For this,
we need to make the following change in pgoutput.c:
pgoutput_startup(), similar to how streaming is handled.

/*
* This is replication start and not slot initialization.
*
* Parse and validate options passed by the client.
*/
if (!is_init)
{
:
:
}
else
{
/* Disable the streaming during the slot initialization mode. */
ctx->streaming = false;
+ ctx->twophase = false;
}

Makes sense. I can take care of this in the next version, where I am
planning to address Sawada-San's comments and do a few other cleanups.

--
With Regards,
Amit Kapila.

#156Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#155)

On Thu, Dec 17, 2020 at 9:02 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Thu, Dec 17, 2020 at 7:02 AM Ajin Cherian <itsajin@gmail.com> wrote:

I have reviewed the changes, did not have any new comments.
While testing, I found an issue in this patch. During initialisation,
the pg_output is not initialised fully and the subscription parameters
are not all read. As a result, ctx->twophase could be
set to true, even if the subscription does not specify so. For this,
we need to make the following change in pgoutput.c:
pgoutput_startup(), similar to how streaming is handled.

/*
* This is replication start and not slot initialization.
*
* Parse and validate options passed by the client.
*/
if (!is_init)
{
:
:
}
else
{
/* Disable the streaming during the slot initialization mode. */
ctx->streaming = false;
+ ctx->twophase = false;
}

makes sense.

On again thinking about this, I think it is good to disable it during
slot initialization but will it create any problem because during slot
initialization we don't stream any xact and stop processing WAL as
soon as we reach CONSISTENT_STATE? Did you observe any problem with
this?

--
With Regards,
Amit Kapila.

#157Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Amit Kapila (#156)

On Thu, Dec 17, 2020 at 2:41 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On again thinking about this, I think it is good to disable it during
slot initialization but will it create any problem because during slot
initialization we don't stream any xact and stop processing WAL as
soon as we reach CONSISTENT_STATE? Did you observe any problem with
this?

Yes, it did not stream any xact during initialization but I was
surprised that the DecodePrepare code was invoked even though
I hadn't created the subscription with twophase enabled. No problem
was observed.

regards,
Ajin Cherian
Fujitsu Australia

#158Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Ajin Cherian (#157)
1 attachment(s)

Adding a test case for the scenario where a consistent snapshot is
formed after a transaction has been prepared but before it has been
committed with COMMIT PREPARED.
This test makes sure that in this case, the entire transaction is
decoded on a COMMIT PREPARED. This patch applies on top of v31.

regards,
Ajin Cherian
Fujitsu Australia

Attachments:

v31-0010-Support-2PC-consistent-snapshot-isolation-tests.patchapplication/octet-stream; name=v31-0010-Support-2PC-consistent-snapshot-isolation-tests.patch
#159Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Amit Kapila (#151)

On Tue, Dec 15, 2020 at 11:42 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Mon, Dec 14, 2020 at 2:59 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

Today, I looked at one of the issues discussed earlier in this thread
[1] which is that decoding can block (or deadlock can happen) when the
user explicitly locks the catalog relation (like Lock pg_class) or
perform Cluster on non-relmapped catalog relations (like Cluster
pg_trigger using pg_class_oid_index; and the user_table on which we
have performed any operation has a trigger) in the prepared xact. As
discussed previously, we don't have a problem when user tables are
exclusively locked because during decoding we don't acquire any lock
on those and in fact, we have a test case for the same in the patch.

Yes, and as described in that mail, the current code explicitly denies
preparation of a 2PC transaction under some circumstances:

postgres=# BEGIN;
postgres=# CLUSTER pg_class using pg_class_oid_index ;
postgres=# PREPARE TRANSACTION 'test_prepared_lock';
ERROR: cannot PREPARE a transaction that modified relation mapping

In the previous discussion, most people seemed to be of the opinion
that we should document it in a "don't do that" category, or prohibit
preparing transactions that lock system tables in exclusive mode, since
that can block the entire system anyway. The other possibility could be
that the plugin enables lock_timeout when it wants to allow decoding of
two-phase xacts, and if the timeout occurs it retries fetching with the
two-phase option provided by the patch disabled.

I think it is better to document this as there is no realistic
scenario where it can happen. I also think separately (not as part of
this patch) we can investigate whether it is a good idea to prohibit
prepare for transactions that acquire exclusive locks on catalog
relations.

Thoughts?

I agree with the documentation option. If we choose to disable
two-phase on timeout, we still need to decide what to
do with already prepared transactions.

regards,
Ajin Cherian
Fujitsu Australia

#160Masahiko Sawada
Masahiko Sawada
sawada.mshk@gmail.com
In reply to: Amit Kapila (#153)

On Wed, Dec 16, 2020 at 6:22 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Wed, Dec 16, 2020 at 1:04 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

Thank you for updating the patch. I have two questions:

-----
@@ -239,6 +239,19 @@ CREATE SUBSCRIPTION <replaceable
class="parameter">subscription_name</replaceabl
</para>
</listitem>
</varlistentry>
+       <varlistentry>
+        <term><literal>two_phase</literal> (<type>boolean</type>)</term>
+        <listitem>
+         <para>
+          Specifies whether two-phase commit is enabled for this subscription.
+          The default is <literal>false</literal>.
+          When two-phase commit is enabled then the decoded
transactions are sent
+          to the subscriber on the PREPARE TRANSACTION. When
two-phase commit is not
+          enabled then PREPARE TRANSACTION and COMMIT/ROLLBACK PREPARED are not
+          decoded on the publisher.
+         </para>
+        </listitem>
+       </varlistentry>

The user will need to specify the 'two_phase’ option on CREATE
SUBSCRIPTION. It would mean the user will need to control what data is
streamed both on publication side for INSERT/UPDATE/DELETE/TRUNCATE
and on subscriber side for PREPARE. Looking at the implementation of
the ’two_phase’ option of CREATE SUBSCRIPTION, it ultimately just
passes the ‘two_phase' option to the publisher. Why don’t we set it on
the publisher side?

There could be multiple subscriptions for the same publication, some
want to decode the transaction at prepare time and others might want
to decode at commit time only. Also, one subscription could subscribe
to multiple publications, so not sure if it is even feasible to set at
publication level (consider one txn has changes belonging to multiple
publications). This option controls how the data is streamed from a
publication similar to other options like 'streaming'. Why do you
think this should be any different?

Oh, I was thinking that the option controls what data is streamed,
similar to the 'publish' option. But I agree with you. As you
mentioned, it might be a problem if a subscription subscribes to
multiple publications that set different 'two_phase' options. Also, in
terms of changing this option while streaming changes, it's better to
control it on the subscriber side.

Regards,

--
Masahiko Sawada
EnterpriseDB: https://www.enterprisedb.com/

#161Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#153)
9 attachment(s)

On Wed, Dec 16, 2020 at 2:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Wed, Dec 16, 2020 at 1:04 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

Also, I guess we can improve the description of
’two_phase’ option of CREATE SUBSCRIPTION in the doc by adding the
fact that when this option is not enabled the transaction prepared on
the publisher is decoded as a normal transaction:

Sounds reasonable.

Fixed in the attached.

------
+   if (LookupGXact(begin_data.gid))
+   {
+       /*
+        * If this gid has already been prepared then we don't want to apply
+        * this txn again. This can happen after restart where upstream can
+        * send the prepared transaction again. See
+        * ReorderBufferFinishPrepared. Don't update remote_final_lsn.
+        */
+       skip_prepared_txn = true;
+       return;
+   }

When PREPARE arrives at the subscriber node but there is the prepared
transaction with the same transaction identifier, the apply worker
skips the whole transaction. So if the users prepared a transaction
with the same identifier on the subscriber, the prepared transaction
that came from the publisher would be ignored without any messages. On
the other hand, if applying other operations such as HEAP_INSERT
conflicts (such as when violating the unique constraint) the apply
worker raises an ERROR and stops logical replication until the
conflict is resolved. IIUC since we can know that the prepared
transaction came from the same publisher again by checking origin_lsn
in TwoPhaseFileHeader I guess we can skip the PREPARE message only
when the existing prepared transaction has the same LSN and the same
identifier. To be exact, it’s still possible that the subscriber gets
two PREPARE messages having the same LSN and name from two different
publishers, but it’s unlikely to happen in practice.

The idea sounds reasonable. I'll try and see if this works.

I went ahead and used both origin_lsn and origin_timestamp to avoid
the possibility of a match of prepared xact from two different nodes.
We can handle this at begin_prepare and prepare time but we don't have
prepare_lsn and prepare_timestamp at rollback_prepared time, so what
should we do about that? As of now, I am using just the GID at rollback_prepare time
and that would have been sufficient if we always receive prepare
before rollback because at prepare time we would have checked
origin_lsn and origin_timestamp. But it is possible that we get
rollback prepared without prepare in case prepare happened before
consistent_snapshot is reached and rollback happens after that. For
commit-case, we do send prepare and all the data at commit time in
such a case but doing so for rollback case doesn't sound to be a good
idea. Another possibility is that we send prepare_lsn and prepare_time
in rollback_prepared API to deal with this. I am not sure if it is a
good idea to just rely on GID in rollback_prepare. What do you think?
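The matching rule under discussion can be sketched as follows (the types and the function here are simplified illustrations for the idea, not the actual LookupGXact signature):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

typedef uint64_t XLogRecPtr;     /* simplified stand-in for the real type */
typedef int64_t TimestampTz;     /* simplified stand-in for the real type */

/* Hypothetical, simplified record of a locally prepared transaction. */
typedef struct PreparedXact
{
    const char *gid;
    XLogRecPtr  origin_lsn;
    TimestampTz origin_timestamp;
} PreparedXact;

/*
 * Sketch of the matching rule: a PREPARE arriving from the publisher is
 * treated as a duplicate of an existing prepared transaction only if the
 * GID, the origin LSN, and the origin timestamp all match. This way a
 * reused GID from another node (or from an earlier, already-finished
 * transaction) is not mistaken for the same xact.
 */
static bool
is_same_prepared_xact(const PreparedXact *local, const char *gid,
                      XLogRecPtr prepare_lsn, TimestampTz prepare_time)
{
    return strcmp(local->gid, gid) == 0 &&
        local->origin_lsn == prepare_lsn &&
        local->origin_timestamp == prepare_time;
}
```

Sending prepare_lsn and prepare_time in the rollback_prepared message would let the subscriber apply this same three-way check at rollback time as well.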

I have done some additional changes in the patch-series.
1. Removed some declarations from
0001-Extend-the-output-plugin-API-to-allow-decoding-o which were not
required.
2. In 0002-Allow-decoding-at-prepare-time-in-ReorderBuffer,
+       txn->txn_flags |= RBTXN_PREPARE;
+       txn->gid = palloc(strlen(gid) + 1); /* trailing '\0' */
+       strcpy(txn->gid, gid);

Changed the above code to use pstrdup.

3. Merged the test-code from 0003 to 0002. I have yet to merge the
latest test case posted by Ajin [1].
4. Removed the test for Rollback Prepared from two_phase_streaming.sql
because I think a similar test exists for non-streaming case in
two_phase.sql and it doesn't make sense to repeat it.
5. Comments update and minor cosmetic changes for test cases merged
from 0003 to 0002.

[1]: /messages/by-id/CAFPTHDYWj99+ysDjCH_z8BfN8hG2FoxtJg+EU8_MpJe5owXg4A@mail.gmail.com

--
With Regards,
Amit Kapila.

Attachments:

v32-0001-Extend-the-output-plugin-API-to-allow-decoding-o.patchapplication/octet-stream; name=v32-0001-Extend-the-output-plugin-API-to-allow-decoding-o.patch
v32-0002-Allow-decoding-at-prepare-time-in-ReorderBuffer.patchapplication/octet-stream; name=v32-0002-Allow-decoding-at-prepare-time-in-ReorderBuffer.patch
v32-0003-Refactor-spool-file-logic-in-worker.c.patchapplication/octet-stream; name=v32-0003-Refactor-spool-file-logic-in-worker.c.patch
v32-0004-Add-support-for-apply-at-prepare-time-to-built-i.patchapplication/octet-stream; name=v32-0004-Add-support-for-apply-at-prepare-time-to-built-i.patch
v32-0005-Support-2PC-txn-subscriber-tests.patchapplication/octet-stream; name=v32-0005-Support-2PC-txn-subscriber-tests.patch
v32-0006-Support-2PC-documentation.patchapplication/octet-stream; name=v32-0006-Support-2PC-documentation.patch
v32-0007-Support-2PC-txn-Subscription-option.patchapplication/octet-stream; name=v32-0007-Support-2PC-txn-Subscription-option.patch
v32-0008-Support-2PC-consistent-snapshot-isolation-tests.patchapplication/octet-stream; name=v32-0008-Support-2PC-consistent-snapshot-isolation-tests.patch
v32-0009-Support-2PC-txn-tests-for-concurrent-aborts.patchapplication/octet-stream; name=v32-0009-Support-2PC-txn-tests-for-concurrent-aborts.patch
#162Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#161)

On Thu, Dec 17, 2020 at 6:19 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Wed, Dec 16, 2020 at 2:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Wed, Dec 16, 2020 at 1:04 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

Also, I guess we can improve the description of
’two_phase’ option of CREATE SUBSCRIPTION in the doc by adding the
fact that when this option is not enabled the transaction prepared on
the publisher is decoded as a normal transaction:

Sounds reasonable.

Fixed in the attached.

------
+   if (LookupGXact(begin_data.gid))
+   {
+       /*
+        * If this gid has already been prepared then we don't want to apply
+        * this txn again. This can happen after restart where upstream can
+        * send the prepared transaction again. See
+        * ReorderBufferFinishPrepared. Don't update remote_final_lsn.
+        */
+       skip_prepared_txn = true;
+       return;
+   }

When PREPARE arrives at the subscriber node but there is the prepared
transaction with the same transaction identifier, the apply worker
skips the whole transaction. So if the users prepared a transaction
with the same identifier on the subscriber, the prepared transaction
that came from the publisher would be ignored without any messages. On
the other hand, if applying other operations such as HEAP_INSERT
conflicts (such as when violating the unique constraint) the apply
worker raises an ERROR and stops logical replication until the
conflict is resolved. IIUC since we can know that the prepared
transaction came from the same publisher again by checking origin_lsn
in TwoPhaseFileHeader I guess we can skip the PREPARE message only
when the existing prepared transaction has the same LSN and the same
identifier. To be exact, it’s still possible that the subscriber gets
two PREPARE messages having the same LSN and name from two different
publishers, but it’s unlikely to happen in practice.

The idea sounds reasonable. I'll try and see if this works.

I went ahead and used both origin_lsn and origin_timestamp to avoid
the possibility of a match of prepared xact from two different nodes.
We can handle this at begin_prepare and prepare time but we don't have
prepare_lsn and prepare_timestamp at rollback_prepared time, so what
should we do about that? As of now, I am using just the GID at rollback_prepare time
and that would have been sufficient if we always receive prepare
before rollback because at prepare time we would have checked
origin_lsn and origin_timestamp. But it is possible that we get
rollback prepared without prepare in case prepare happened before
consistent_snapshot is reached and rollback happens after that.

Note that it is not easy to detect this case, otherwise, we would have
avoided sending rollback_prepared. See comments in
ReorderBufferFinishPrepared in patch
v32-0002-Allow-decoding-at-prepare-time-in-ReorderBuffer.

--
With Regards,
Amit Kapila.

#163Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Ajin Cherian (#157)

On Thu, Dec 17, 2020 at 9:30 AM Ajin Cherian <itsajin@gmail.com> wrote:

On Thu, Dec 17, 2020 at 2:41 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On again thinking about this, I think it is good to disable it during
slot initialization but will it create any problem because during slot
initialization we don't stream any xact and stop processing WAL as
soon as we reach CONSISTENT_STATE? Did you observe any problem with
this?

Yes, it did not stream any xact during initialization but I was
surprised that the DecodePrepare code was invoked even though
I hadn't created the subscription with twophase enabled. No problem
was observed.

Fair enough, I have fixed this in the patch-series posted sometime back.

--
With Regards,
Amit Kapila.

#164Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Amit Kapila (#161)

On Thu, Dec 17, 2020 at 11:47 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

I went ahead and used both origin_lsn and origin_timestamp to avoid
the possibility of a match of prepared xact from two different nodes.
We can handle this at begin_prepare and prepare time but we don't have
prepare_lsn and prepare_timestamp at rollback_prepared time, so what
should we do about that? As of now, I am using just the GID at rollback_prepare time
and that would have been sufficient if we always receive prepare
before rollback because at prepare time we would have checked
origin_lsn and origin_timestamp. But it is possible that we get
rollback prepared without prepare in case prepare happened before
consistent_snapshot is reached and rollback happens after that. For
commit-case, we do send prepare and all the data at commit time in
such a case but doing so for rollback case doesn't sound to be a good
idea. Another possibility is that we send prepare_lsn and prepare_time
in rollback_prepared API to deal with this. I am not sure if it is a
good idea to just rely on GID in rollback_prepare. What do you think?

Thinking about it for some time, my initial reaction was that the
distributed servers should maintain uniqueness of GIDs and re-checking
with LSNs is just overkill. But thinking some more, I realise that
since we allow reuse of GIDs, there could be a race condition where a
previously aborted/committed txn's GID was reused
which could lead to this. Yes, I think we could change
rollback_prepare to send out prepare_lsn and prepare_time as well,
just to be safe.

regards,
Ajin Cherian
Fujitsu Australia.

#165Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Ajin Cherian (#164)
10 attachment(s)

On Fri, Dec 18, 2020 at 11:23 AM Ajin Cherian <itsajin@gmail.com> wrote:

On Thu, Dec 17, 2020 at 11:47 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

I went ahead and used both origin_lsn and origin_timestamp to avoid
the possibility of a match of prepared xact from two different nodes.
We can handle this at begin_prepare and prepare time but we don't have
prepare_lsn and prepare_timestamp at rollback_prepared time, so what
should we do about that? As of now, I am using just the GID at rollback_prepare time
and that would have been sufficient if we always receive prepare
before rollback because at prepare time we would have checked
origin_lsn and origin_timestamp. But it is possible that we get
rollback prepared without prepare in case prepare happened before
consistent_snapshot is reached and rollback happens after that. For
commit-case, we do send prepare and all the data at commit time in
such a case but doing so for rollback case doesn't sound to be a good
idea. Another possibility is that we send prepare_lsn and prepare_time
in rollback_prepared API to deal with this. I am not sure if it is a
good idea to just rely on GID in rollback_prepare. What do you think?

Thinking about it for some time, my initial reaction was that the
distributed servers should maintain uniqueness of GIDs and re-checking
with LSNs is just overkill. But thinking some more, I realise that
since we allow reuse of GIDs, there could be a race condition where a
previously aborted/committed txn's GID was reused
which could lead to this. Yes, I think we could change
rollback_prepare to send out prepare_lsn and prepare_time as well,
just to be safe.

Okay, I have changed the rollback_prepare API as discussed above and
accordingly handle the case where rollback is received without prepare
in apply_handle_rollback_prepared.

While testing this case, I noticed that the tracking of replication
progress for aborts is incomplete, so after a restart we can again ask
for the rollback LSN. This shouldn't be a
problem with the latest code because we will simply skip it when there
is no corresponding prepare but this is far from ideal because that is
the sole purpose of tracking via replication origins. This was due to
the incomplete handling of aborts in the original commit 1eb6d6527a. I
have fixed this now in a separate patch
v33-0004-Track-replication-origin-progress-for-rollbacks. If you want
to see the problem then change the below code and don't apply
v33-0004-Track-replication-origin-progress-for-rollbacks, the
regression failure is due to the reason that we are not tracking
progress for aborts:

apply_handle_rollback_prepared
{
..
if (LookupGXact(rollback_data.gid, rollback_data.prepare_end_lsn,
rollback_data.preparetime))
..
}

to
apply_handle_rollback_prepared
{
..
Assert (LookupGXact(rollback_data.gid, rollback_data.prepare_end_lsn,
rollback_data.preparetime));

--
With Regards,
Amit Kapila.

Attachments:

v33-0005-Add-support-for-apply-at-prepare-time-to-built-i.patchapplication/octet-stream; name=v33-0005-Add-support-for-apply-at-prepare-time-to-built-i.patch
v33-0006-Support-2PC-documentation.patchapplication/octet-stream; name=v33-0006-Support-2PC-documentation.patch
v33-0007-Support-2PC-txn-subscriber-tests.patchapplication/octet-stream; name=v33-0007-Support-2PC-txn-subscriber-tests.patch
v33-0008-Support-2PC-txn-Subscription-option.patchapplication/octet-stream; name=v33-0008-Support-2PC-txn-Subscription-option.patch
v33-0009-Support-2PC-consistent-snapshot-isolation-tests.patchapplication/octet-stream; name=v33-0009-Support-2PC-consistent-snapshot-isolation-tests.patch
v33-0010-Support-2PC-txn-tests-for-concurrent-aborts.patchapplication/octet-stream; name=v33-0010-Support-2PC-txn-tests-for-concurrent-aborts.patch
v33-0001-Extend-the-output-plugin-API-to-allow-decoding-o.patchapplication/octet-stream; name=v33-0001-Extend-the-output-plugin-API-to-allow-decoding-o.patch
v33-0002-Allow-decoding-at-prepare-time-in-ReorderBuffer.patchapplication/octet-stream; name=v33-0002-Allow-decoding-at-prepare-time-in-ReorderBuffer.patch
v33-0003-Refactor-spool-file-logic-in-worker.c.patchapplication/octet-stream; name=v33-0003-Refactor-spool-file-logic-in-worker.c.patch
v33-0004-Track-replication-origin-progress-for-rollbacks.patchapplication/octet-stream; name=v33-0004-Track-replication-origin-progress-for-rollbacks.patch
#166Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Amit Kapila (#165)

On Sat, Dec 19, 2020 at 2:13 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

Okay, I have changed the rollback_prepare API as discussed above and
accordingly handle the case where rollback is received without prepare
in apply_handle_rollback_prepared.

I have reviewed and tested your new patchset, I agree with all the
changes that you have made and have tested quite a few scenarios and
they seem to be working as expected.
No major comments but some minor observations:

Patch 1:
logical.c: 984
Comment should be "rollback prepared" rather than "abort prepared".

Patch 2:
decode.c: 737: The comments in the header of DecodePrepare seem out of
place; I think they should describe what the function does rather than
what it does not.
reorderbuffer.c: 2422: It looks like pg_indent has mangled the
comments, the numbering is no longer aligned.

Patch 5:
worker.c: 753: Typo: change "dont" to "don't"

Patch 6: logicaldecoding.sgml
logicaldecoding example is no longer correct. This was true prior to
the changes done to replay prepared transactions after a restart. Now
the whole transaction will get decoded again after the commit
prepared.

postgres=# COMMIT PREPARED 'test_prepared1';
COMMIT PREPARED
postgres=# select * from
pg_logical_slot_get_changes('regression_slot', NULL, NULL,
'two-phase-commit', '1');
lsn | xid | data
-----------+-----+--------------------------------------------
0/168A060 | 529 | COMMIT PREPARED 'test_prepared1', txid 529
(1 row)

Patch 8:
worker.c: 2798:
worker.c: 3445: disabling two-phase in the tablesync worker.
Considering the new design of multiple commits in tablesync, do we need
to disable two-phase in tablesync?

Other than this I've noticed a few typos that are not in the patch but
in the surrounding code.
logical.c: 1383: Comment should mention stream_commit_cb not stream_abort_cb.
decode.c: 686 - Extra "it's" here: "because it's it happened"

regards,
Ajin Cherian
Fujitsu Australia

#167Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Ajin Cherian (#166)

On Tue, Dec 22, 2020 at 2:51 PM Ajin Cherian <itsajin@gmail.com> wrote:

On Sat, Dec 19, 2020 at 2:13 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

Okay, I have changed the rollback_prepare API as discussed above and
accordingly handle the case where rollback is received without prepare
in apply_handle_rollback_prepared.

I have reviewed and tested your new patchset, I agree with all the
changes that you have made and have tested quite a few scenarios and
they seem to be working as expected.
No major comments but some minor observations:

Patch 1:
logical.c: 984
Comment should be "rollback prepared" rather than "abort prepared".

Agreed.

Patch 2:
decode.c: 737: The comments in the header of DecodePrepare seem out of
place, I think here it should describe what the function does rather
than what it does not.

Hmm, I have written it because it is important to explain the theory
of concurrent aborts as that is not quite obvious. Also, the
functionality is quite similar to DecodeCommit and the comments inside
the function explain clearly if there is any difference so not sure
what additional we can write, do you have any suggestions?

reorderbuffer.c: 2422: It looks like pg_indent has mangled the
comments, the numbering is no longer aligned.

Yeah, I had also noticed that but not sure if there is a better
alternative because we don't want to change it after each pgindent
run. We might want to use (a), (b) .. notation instead but otherwise,
there is no big problem with how it is.

Patch 5:
worker.c: 753: Typo: change "dont" to "don't"

Okay.

Patch 6: logicaldecoding.sgml
logicaldecoding example is no longer correct. This was true prior to
the changes done to replay prepared transactions after a restart. Now
the whole transaction will get decoded again after the commit
prepared.

postgres=# COMMIT PREPARED 'test_prepared1';
COMMIT PREPARED
postgres=# select * from
pg_logical_slot_get_changes('regression_slot', NULL, NULL,
'two-phase-commit', '1');
lsn | xid | data
-----------+-----+--------------------------------------------
0/168A060 | 529 | COMMIT PREPARED 'test_prepared1', txid 529
(1 row)

Agreed.

Patch 8:
worker.c: 2798 :
worker.c: 3445 : disabling two-phase in tablesync worker.
considering new design of multiple commits in tablesync, do we need
to disable two-phase in tablesync?

No, but let Peter's patch get committed then we can change it.

Other than this I've noticed a few typos that are not in the patch but
in the surrounding code.
logical.c: 1383: Comment should mention stream_commit_cb not stream_abort_cb.
decode.c: 686 - Extra "it's" here: "because it's it happened"

Anything not related to this patch, please post in a separate email.

Can you please update the patch for the points we agreed upon?

--
With Regards,
Amit Kapila.

#168Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Amit Kapila (#167)
10 attachment(s)

On Tue, Dec 22, 2020 at 8:59 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Dec 22, 2020 at 2:51 PM Ajin Cherian <itsajin@gmail.com> wrote:

On Sat, Dec 19, 2020 at 2:13 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

Okay, I have changed the rollback_prepare API as discussed above and
accordingly handle the case where rollback is received without prepare
in apply_handle_rollback_prepared.

I have reviewed and tested your new patchset, I agree with all the
changes that you have made and have tested quite a few scenarios and
they seem to be working as expected.
No major comments but some minor observations:

Patch 1:
logical.c: 984
Comment should be "rollback prepared" rather than "abort prepared".

Agreed.

Changed.

Patch 2:
decode.c: 737: The comments in the header of DecodePrepare seem out of
place, I think here it should describe what the function does rather
than what it does not.

Hmm, I have written it because it is important to explain the theory
of concurrent aborts as that is not quite obvious. Also, the
functionality is quite similar to DecodeCommit and the comments inside
the function explain clearly if there is any difference so not sure
what additional we can write, do you have any suggestions?

I have slightly re-worded it. Have a look.

reorderbuffer.c: 2422: It looks like pg_indent has mangled the
comments, the numbering is no longer aligned.

Yeah, I had also noticed that but not sure if there is a better
alternative because we don't want to change it after each pgindent
run. We might want to use (a), (b) .. notation instead but otherwise,
there is no big problem with how it is.

Leaving this as is.

Patch 5:
worker.c: 753: Typo: change "dont" to "don't"

Okay.

Changed.

Patch 6: logicaldecoding.sgml
logicaldecoding example is no longer correct. This was true prior to
the changes done to replay prepared transactions after a restart. Now
the whole transaction will get decoded again after the commit
prepared.

postgres=# COMMIT PREPARED 'test_prepared1';
COMMIT PREPARED
postgres=# select * from
pg_logical_slot_get_changes('regression_slot', NULL, NULL,
'two-phase-commit', '1');
lsn | xid | data
-----------+-----+--------------------------------------------
0/168A060 | 529 | COMMIT PREPARED 'test_prepared1', txid 529
(1 row)

Agreed.

Changed.

Patch 8:
worker.c: 2798 :
worker.c: 3445 : disabling two-phase in tablesync worker.
considering new design of multiple commits in tablesync, do we need
to disable two-phase in tablesync?

No, but let Peter's patch get committed then we can change it.

OK, leaving it.

Can you please update the patch for the points we agreed upon?

Changed and attached.
regards,
Ajin Cherian
Fujitsu Australia

Attachments:

v34-0005-Add-support-for-apply-at-prepare-time-to-built-i.patchapplication/octet-stream; name=v34-0005-Add-support-for-apply-at-prepare-time-to-built-i.patch
v34-0004-Track-replication-origin-progress-for-rollbacks.patchapplication/octet-stream; name=v34-0004-Track-replication-origin-progress-for-rollbacks.patch
v34-0001-Extend-the-output-plugin-API-to-allow-decoding-o.patchapplication/octet-stream; name=v34-0001-Extend-the-output-plugin-API-to-allow-decoding-o.patch
v34-0006-Support-2PC-documentation.patchapplication/octet-stream; name=v34-0006-Support-2PC-documentation.patch
v34-0002-Allow-decoding-at-prepare-time-in-ReorderBuffer.patchapplication/octet-stream; name=v34-0002-Allow-decoding-at-prepare-time-in-ReorderBuffer.patch
v34-0003-Refactor-spool-file-logic-in-worker.c.patchapplication/octet-stream; name=v34-0003-Refactor-spool-file-logic-in-worker.c.patch
v34-0007-Support-2PC-txn-subscriber-tests.patchapplication/octet-stream; name=v34-0007-Support-2PC-txn-subscriber-tests.patch
v34-0008-Support-2PC-txn-Subscription-option.patchapplication/octet-stream; name=v34-0008-Support-2PC-txn-Subscription-option.patch
v34-0009-Support-2PC-consistent-snapshot-isolation-tests.patchapplication/octet-stream; name=v34-0009-Support-2PC-consistent-snapshot-isolation-tests.patch
v34-0010-Support-2PC-txn-tests-for-concurrent-aborts.patchapplication/octet-stream; name=v34-0010-Support-2PC-txn-tests-for-concurrent-aborts.patch
#169Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Ajin Cherian (#168)

On Wed, Dec 23, 2020 at 3:08 PM Ajin Cherian <itsajin@gmail.com> wrote:

Can you please update the patch for the points we agreed upon?

Changed and attached.

Thanks, I have looked at these patches again and it seems patches 0001
to 0004 are in good shape, and among those
v33-0001-Extend-the-output-plugin-API-to-allow-decoding-o is good to
go. So, I am planning to push the first patch (0001*) in next week
sometime unless you or someone else has any comments on it.

--
With Regards,
Amit Kapila.

#170osumi.takamichi@fujitsu.com
osumi.takamichi@fujitsu.com
osumi.takamichi@fujitsu.com
In reply to: Amit Kapila (#169)
RE: [HACKERS] logical decoding of two-phase transactions

Hi, Amit-San

On Thursday, Dec 24, 2020 2:35 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Wed, Dec 23, 2020 at 3:08 PM Ajin Cherian <itsajin@gmail.com> wrote:

Can you please update the patch for the points we agreed upon?

Changed and attached.

Thanks, I have looked at these patches again and it seems patches 0001 to
0004 are in good shape, and among those
v33-0001-Extend-the-output-plugin-API-to-allow-decoding-o is good to go.
So, I am planning to push the first patch (0001*) in next week sometime
unless you or someone else has any comments on it.

I agree with this from the perspective of good code quality for memory management.

I reviewed the v33 patchset using valgrind and
concluded that version 33 of the patchset has no problems in terms of memory management.
This also applies to v34, because the difference between the two versions is really small.

I compared valgrind logfiles between master and master with the v33 patchset applied.
I checked both the contrib/test_decoding and src/test/subscription tests under valgrind, of course.

The first reason I reached this conclusion is that
I didn't find any memcheck error descriptions in the log files.
I picked up the error message expressions from the valgrind documentation [1]
and grepped for them, but there were no matches.

Secondly, I surveyed the function stacks of valgrind's three types of memory leak,
"definitely lost", "indirectly lost", and "possibly lost", and
it turned out that the patchset didn't add any new causes of memory leaks.

[1]: https://valgrind.org/docs/manual/mc-manual.html#mc-manual.errormsgs
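The grep step described above can be sketched roughly like this (the sample logfile below is fabricated purely for illustration; the real check ran over the logfiles produced by the valgrind test-suite runs, and the patterns are the memcheck error phrases from the valgrind manual):

```shell
# Illustrative sketch: scan a valgrind memcheck logfile for the error
# phrases listed in the valgrind manual.  The sample log is fabricated.
cat > /tmp/vg_sample.log <<'EOF'
==1234== Memcheck, a memory error detector
==1234== HEAP SUMMARY:
==1234==     in use at exit: 0 bytes in 0 blocks
==1234== All heap blocks were freed -- no leaks are possible
EOF

grep -E 'Invalid (read|write)|(definitely|indirectly|possibly) lost|Conditional jump|uninitialised value' \
    /tmp/vg_sample.log || echo "no memcheck errors found"
```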

Best Regards,
Takamichi Osumi

#171Masahiko Sawada
Masahiko Sawada
sawada.mshk@gmail.com
In reply to: Ajin Cherian (#168)

Hi Ajin,

On Wed, Dec 23, 2020 at 6:38 PM Ajin Cherian <itsajin@gmail.com> wrote:

On Tue, Dec 22, 2020 at 8:59 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Dec 22, 2020 at 2:51 PM Ajin Cherian <itsajin@gmail.com> wrote:

On Sat, Dec 19, 2020 at 2:13 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

Okay, I have changed the rollback_prepare API as discussed above and
accordingly handle the case where rollback is received without prepare
in apply_handle_rollback_prepared.

I have reviewed and tested your new patchset, I agree with all the
changes that you have made and have tested quite a few scenarios and
they seem to be working as expected.
No major comments but some minor observations:

Patch 1:
logical.c: 984
Comment should be "rollback prepared" rather than "abort prepared".

Agreed.

Changed.

Patch 2:
decode.c: 737: The comments in the header of DecodePrepare seem out of
place, I think here it should describe what the function does rather
than what it does not.

Hmm, I have written it because it is important to explain the theory
of concurrent aborts as that is not quite obvious. Also, the
functionality is quite similar to DecodeCommit and the comments inside
the function explain clearly if there is any difference so not sure
what additional we can write, do you have any suggestions?

I have slightly re-worded it. Have a look.

reorderbuffer.c: 2422: It looks like pg_indent has mangled the
comments, the numbering is no longer aligned.

Yeah, I had also noticed that but not sure if there is a better
alternative because we don't want to change it after each pgindent
run. We might want to use (a), (b) .. notation instead but otherwise,
there is no big problem with how it is.

Leaving this as is.

Patch 5:
worker.c: 753: Type: change "dont" to "don't"

Okay.

Changed.

Patch 6: logicaldecoding.sgml
logicaldecoding example is no longer correct. This was true prior to
the changes done to replay prepared transactions after a restart. Now
the whole transaction will get decoded again after the commit
prepared.

postgres=# COMMIT PREPARED 'test_prepared1';
COMMIT PREPARED
postgres=# select * from
pg_logical_slot_get_changes('regression_slot', NULL, NULL,
'two-phase-commit', '1');
lsn | xid | data
-----------+-----+--------------------------------------------
0/168A060 | 529 | COMMIT PREPARED 'test_prepared1', txid 529
(1 row)

Agreed.

Changed.

Patch 8:
worker.c: 2798 :
worker.c: 3445 : disabling two-phase in tablesync worker.
considering new design of multiple commits in tablesync, do we need
to disable two-phase in tablesync?

No, but let Peter's patch get committed then we can change it.

OK, leaving it.

Can you please update the patch for the points we agreed upon?

Changed and attached.

Thank you for updating the patches!

I realized that this patch is not yet registered for the next
CommitFest [1], which starts in a couple of days. I found the old entry
for this patch [2], but it's marked as "Returned with feedback". Although
this patch is being reviewed actively, I suggest adding it before
2021-01-01 AoE [3] so that cfbot can also test your patch.

Regards,

[1]: https://commitfest.postgresql.org/31/
[2]: https://commitfest.postgresql.org/22/944/
[3]: https://en.wikipedia.org/wiki/Anywhere_on_Earth

--
Masahiko Sawada
EnterpriseDB: https://www.enterprisedb.com/

#172Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Masahiko Sawada (#171)
1 attachment(s)

Hi Sawada-san,

I think Amit has a plan to commit this patch-set in phases. I will
leave it to him to decide because I think he has a plan.
I took time to refactor the test_decoding isolation test for
consistent snapshot so that it uses just 3 sessions rather than 4.
Posting an updated patch-0009

regards,
Ajin Cherian
Fujitsu Australia

Attachments:

v34-0009-Support-2PC-consistent-snapshot-isolation-tests.patchapplication/octet-stream; name=v34-0009-Support-2PC-consistent-snapshot-isolation-tests.patch
#173Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Ajin Cherian (#172)

On Tue, Dec 29, 2020 at 3:15 PM Ajin Cherian <itsajin@gmail.com> wrote:

Hi Sawada-san,

I think Amit has a plan to commit this patch-set in phases.

I have pushed the first patch and I would like to make a few changes
in the second patch after which I will post the new version. I'll try
to do that tomorrow if possible and register the patch.

I will
leave it to him to decide because I think he has a plan.
I took time to refactor the test_decoding isolation test for
consistent snapshot so that it uses just 3 sessions rather than 4.
Posting an updated patch-0009

Thanks, I will look into this.

--
With Regards,
Amit Kapila.

#174Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#173)
8 attachment(s)

On Wed, Dec 30, 2020 at 6:49 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Dec 29, 2020 at 3:15 PM Ajin Cherian <itsajin@gmail.com> wrote:

Hi Sawada-san,

I think Amit has a plan to commit this patch-set in phases.

I have pushed the first patch and I would like to make a few changes
in the second patch after which I will post the new version. I'll try
to do that tomorrow if possible and register the patch.

Please find attached a rebased version of this patch-set. I have made
a number of changes in the
v35-0001-Allow-decoding-at-prepare-time-in-ReorderBuffer.

1. Centralize the logic to decide whether to perform decoding at
prepare time in FilterPrepare function.
2. Changed comments atop DecodePrepare. I didn't like much the
comments changed by Ajin in the last patch.
3. Merged the doc changes patch after some changes mostly cosmetic.

I am planning to commit the first patch in this series early next week
after reading it once more.

--
With Regards,
Amit Kapila.

Attachments:

v35-0001-Allow-decoding-at-prepare-time-in-ReorderBuffer.patchapplication/octet-stream; name=v35-0001-Allow-decoding-at-prepare-time-in-ReorderBuffer.patch
v35-0002-Refactor-spool-file-logic-in-worker.c.patchapplication/octet-stream; name=v35-0002-Refactor-spool-file-logic-in-worker.c.patch
v35-0003-Track-replication-origin-progress-for-rollbacks.patchapplication/octet-stream; name=v35-0003-Track-replication-origin-progress-for-rollbacks.patch
v35-0004-Add-support-for-apply-at-prepare-time-to-built-i.patchapplication/octet-stream; name=v35-0004-Add-support-for-apply-at-prepare-time-to-built-i.patch
v35-0005-Support-2PC-txn-subscriber-tests.patchapplication/octet-stream; name=v35-0005-Support-2PC-txn-subscriber-tests.patch
v35-0006-Support-2PC-txn-Subscription-option.patchapplication/octet-stream; name=v35-0006-Support-2PC-txn-Subscription-option.patch
v35-0007-Support-2PC-consistent-snapshot-isolation-tests.patchapplication/octet-stream; name=v35-0007-Support-2PC-consistent-snapshot-isolation-tests.patch
v35-0008-Support-2PC-txn-tests-for-concurrent-aborts.patchapplication/octet-stream; name=v35-0008-Support-2PC-txn-tests-for-concurrent-aborts.patch
#175Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#174)

On Thu, Dec 31, 2020 at 10:48 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Wed, Dec 30, 2020 at 6:49 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Dec 29, 2020 at 3:15 PM Ajin Cherian <itsajin@gmail.com> wrote:

Hi Sawada-san,

I think Amit has a plan to commit this patch-set in phases.

I have pushed the first patch and I would like to make a few changes
in the second patch after which I will post the new version. I'll try
to do that tomorrow if possible and register the patch.

Please find attached a rebased version of this patch-set.

Registered in CF (https://commitfest.postgresql.org/31/2914/).

--
With Regards,
Amit Kapila.

#176Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Amit Kapila (#174)

On Thu, Dec 31, 2020 at 4:16 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

3. Merged the doc changes patch after some changes mostly cosmetic.

Some minor comments here:

v35-0001 - logicaldecoding.sgml

In the example section:
Change "The following example shows SQL interface can be used to
decode prepared transactions."
to "The following example shows the SQL interface that can be used to
decode prepared transactions."

Then in "Two-phase commit support for Logical Decoding" page:
Change "To support streaming of two-phase commands, an output plugin
needs to provide the additional callbacks."
to "To support streaming of two-phase commands, an output plugin needs
to provide additional callbacks."

Other than that, I have no more comments.

regards,
Ajin Cherian
Fujitsu Australia

#177Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Ajin Cherian (#176)

On Thu, Dec 31, 2020 at 12:31 PM Ajin Cherian <itsajin@gmail.com> wrote:

On Thu, Dec 31, 2020 at 4:16 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

3. Merged the doc changes patch after some changes mostly cosmetic.

Some minor comments here:

v35-0001 - logicaldecoding.sgml

In the example section:
Change "The following example shows SQL interface can be used to
decode prepared transactions."
to "The following example shows the SQL interface that can be used to
decode prepared transactions."

Then in "Two-phase commit support for Logical Decoding" page:
Change "To support streaming of two-phase commands, an output plugin
needs to provide the additional callbacks."
to "To support streaming of two-phase commands, an output plugin needs
to provide additional callbacks."

Other than that, I have no more comments.

Thanks, I have pushed the 0001* patch after making the above and a few
other cosmetic modifications.

--
With Regards,
Amit Kapila.

#178Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Ajin Cherian (#172)

On Tue, Dec 29, 2020 at 3:15 PM Ajin Cherian <itsajin@gmail.com> wrote:

Hi Sawada-san,

I think Amit has a plan to commit this patch-set in phases. I will
leave it to him to decide because I think he has a plan.
I took time to refactor the test_decoding isolation test for
consistent snapshot so that it uses just 3 sessions rather than 4.
Posting an updated patch-0009

I have reviewed this test case patch and have the below comments:

1.
+step "s1checkpoint" { CHECKPOINT; }
...
+step "s2alter" { ALTER TABLE do_write ADD COLUMN addedbys2 int; }

I don't see the need for the above steps and we should be able to
generate the required scenario without these as well. Is there any
reason to keep those?

2.
"s3c""s1insert"

space is missing between these two.

3.
+# Force building of a consistent snapshot between a PREPARE and
COMMIT PREPARED.
+# Ensure that the whole transaction is decoded fresh at the time of
COMMIT PREPARED.
+permutation "s2b" "s2txid" "s1init" "s3b" "s3txid" "s2alter" "s2c"
"s2b" "s2insert" "s2prepare" "s3c""s1insert" "s1checkpoint" "s1start"
"s2commit" "s1start"

I think we can update the above comments to indicate how and which
important steps help us to realize the required scenario. See
subxact_without_top.spec for reference.

4.
+step "s2c" { COMMIT; }
...
+step "s2prepare" { PREPARE TRANSACTION 'test1'; }
+step "s2commit" { COMMIT PREPARED 'test1'; }

s2c and s2commit seem to be confusing names as both sounds like doing
the same thing. How about changing s2commit to s2cp and s2prepare to
s2p?

--
With Regards,
Amit Kapila.

#179Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Amit Kapila (#178)
1 attachment(s)

On Tue, Jan 5, 2021 at 5:19 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

I have reviewed this test case patch and have the below comments:

1.
+step "s1checkpoint" { CHECKPOINT; }
...
+step "s2alter" { ALTER TABLE do_write ADD COLUMN addedbys2 int; }

I don't see the need for the above steps and we should be able to
generate the required scenario without these as well. Is there any
reason to keep those?

Removed.

2.
"s3c""s1insert"

space is missing between these two.

Updated.

3.
+# Force building of a consistent snapshot between a PREPARE and
COMMIT PREPARED.
+# Ensure that the whole transaction is decoded fresh at the time of
COMMIT PREPARED.
+permutation "s2b" "s2txid" "s1init" "s3b" "s3txid" "s2alter" "s2c"
"s2b" "s2insert" "s2prepare" "s3c""s1insert" "s1checkpoint" "s1start"
"s2commit" "s1start"

I think we can update the above comments to indicate how and which
important steps help us to realize the required scenario. See
subxact_without_top.spec for reference.

Added more comments to explain the state change of logical decoding.

4.
+step "s2c" { COMMIT; }
...
+step "s2prepare" { PREPARE TRANSACTION 'test1'; }
+step "s2commit" { COMMIT PREPARED 'test1'; }

s2c and s2commit seem to be confusing names as both sounds like doing
the same thing. How about changing s2commit to s2cp and s2prepare to
s2p?

Updated.

I've addressed the above comments and the patch is attached. I've
called it v36-0007.

regards,
Ajin Cherian
Fujitsu Australia

Attachments:

v36-0007-Support-2PC-consistent-snapshot-isolation-tests.patchapplication/octet-stream; name=v36-0007-Support-2PC-consistent-snapshot-isolation-tests.patch
#180Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Ajin Cherian (#179)

On Tue, Jan 5, 2021 at 2:11 PM Ajin Cherian <itsajin@gmail.com> wrote:

I've addressed the above comments and the patch is attached. I've
called it v36-0007.

Thanks, I have pushed this after minor wordsmithing.

--
With Regards,
Amit Kapila.

#181Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#167)

On Tue, Dec 22, 2020 at 3:31 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Dec 22, 2020 at 2:51 PM Ajin Cherian <itsajin@gmail.com> wrote:

Other than this I've noticed a few typos that are not in the patch but
in the surrounding code.
logical.c: 1383: Comment should mention stream_commit_cb not stream_abort_cb.
decode.c: 686 - Extra "it's" here: "because it's it happened"

Anything not related to this patch, please post in a separate email.

Pushed the fix for above reported typos.

--
With Regards,
Amit Kapila.

#182Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#180)

On Tue, Jan 5, 2021 at 4:26 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Jan 5, 2021 at 2:11 PM Ajin Cherian <itsajin@gmail.com> wrote:

I've addressed the above comments and the patch is attached. I've
called it v36-0007.

Thanks, I have pushed this after minor wordsmithing.

The test case is failing on one of the build farm machines. See email
from Tom Lane [1]. The symptom clearly shows that we are decoding
empty xacts which can happen due to background activity by autovacuum.
I think we need a fix similar to what we have done in
https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=82a0ba7707e010a29f5fe1a0020d963c82b8f1cb.

I'll try to reproduce and provide a fix for this later today or tomorrow.

[1]: /messages/by-id/363512.1610171267@sss.pgh.pa.us

--
With Regards,
Amit Kapila.

#183Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#182)

On Sat, Jan 9, 2021 at 12:57 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Jan 5, 2021 at 4:26 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Jan 5, 2021 at 2:11 PM Ajin Cherian <itsajin@gmail.com> wrote:

I've addressed the above comments and the patch is attached. I've
called it v36-0007.

Thanks, I have pushed this after minor wordsmithing.

The test case is failing on one of the build farm machines. See email
from Tom Lane [1]. The symptom clearly shows that we are decoding
empty xacts which can happen due to background activity by autovacuum.
I think we need a fix similar to what we have done in
https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=82a0ba7707e010a29f5fe1a0020d963c82b8f1cb.

I'll try to reproduce and provide a fix for this later today or tomorrow.

I have pushed the fix.

--
With Regards,
Amit Kapila.

#184Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Amit Kapila (#183)
7 attachment(s)

Please find attached the new patch set v37.

This patch set v37* is now rebased to use the most recent tablesync
patch from the other thread [1],
i.e. notice that v37-0001 is an exact copy of
v17-0001-tablesync-Solution1.patch.

Details of how the v37* patches relate to earlier patches are shown below:

======
v35-0001 -> committed -> NA
v17-0001-Tablesync-Solution1 -> (copy from [1]) -> v37-0001
v35-0002 -> (unchanged) -> v37-0002
v35-0003 -> (unchanged) -> v37-0003
v35-0004 -> (modify code, apply_handle_prepare changed for tablesync
worker) -> v37-0004
v35-0005 -> (unchanged) --> v37-0005
v35-0006 -> (modify code, twophase mode is now same for
tablesync/apply slots) -> v37-0006
v35-0007 -> v36-0007 -> committed -> NA
v35-0008 -> (unchanged) -> v37-0007
======

----
[1]: /messages/by-id/CAA4eK1KHJxaZS-fod-0fey=0tq3=Gkn4ho=8N4-5HWiCfu0H1A@mail.gmail.com

Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

v37-0001-Tablesync-Solution1.patchapplication/octet-stream; name=v37-0001-Tablesync-Solution1.patch
v37-0003-Track-replication-origin-progress-for-rollbacks.patchapplication/octet-stream; name=v37-0003-Track-replication-origin-progress-for-rollbacks.patch
v37-0002-Refactor-spool-file-logic-in-worker.c.patchapplication/octet-stream; name=v37-0002-Refactor-spool-file-logic-in-worker.c.patch
v37-0004-Add-support-for-apply-at-prepare-time-to-built-i.patchapplication/octet-stream; name=v37-0004-Add-support-for-apply-at-prepare-time-to-built-i.patch
v37-0005-Support-2PC-txn-subscriber-tests.patchapplication/octet-stream; name=v37-0005-Support-2PC-txn-subscriber-tests.patch
v37-0006-Support-2PC-txn-Subscription-option.patchapplication/octet-stream; name=v37-0006-Support-2PC-txn-Subscription-option.patch
v37-0007-Support-2PC-txn-tests-for-concurrent-aborts.patchapplication/octet-stream; name=v37-0007-Support-2PC-txn-tests-for-concurrent-aborts.patch
#185Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Peter Smith (#184)
7 attachment(s)

PSA the new patch set v38*.

This patch set has been rebased to use the most recent tablesync patch
from the other thread [1]
(i.e. notice that v38-0001 is an exact copy of that thread's tablesync
patch v31)

----
[1]: /messages/by-id/CAA4eK1KHJxaZS-fod-0fey=0tq3=Gkn4ho=8N4-5HWiCfu0H1A@mail.gmail.com

Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

v38-0002-Refactor-spool-file-logic-in-worker.c.patchapplication/octet-stream; name=v38-0002-Refactor-spool-file-logic-in-worker.c.patch
v38-0004-Add-support-for-apply-at-prepare-time-to-built-i.patchapplication/octet-stream; name=v38-0004-Add-support-for-apply-at-prepare-time-to-built-i.patch
v38-0005-Support-2PC-txn-subscriber-tests.patchapplication/octet-stream; name=v38-0005-Support-2PC-txn-subscriber-tests.patch
v38-0001-Tablesync-V31.patchapplication/octet-stream; name=v38-0001-Tablesync-V31.patch
v38-0003-Track-replication-origin-progress-for-rollbacks.patchapplication/octet-stream; name=v38-0003-Track-replication-origin-progress-for-rollbacks.patch
v38-0006-Support-2PC-txn-Subscription-option.patchapplication/octet-stream; name=v38-0006-Support-2PC-txn-Subscription-option.patch
v38-0007-Support-2PC-txn-tests-for-concurrent-aborts.patchapplication/octet-stream; name=v38-0007-Support-2PC-txn-tests-for-concurrent-aborts.patch
#186Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Peter Smith (#185)

On Wed, Feb 10, 2021 at 3:59 PM Peter Smith <smithpb2250@gmail.com> wrote:

PSA the new patch set v38*.

This patch set has been rebased to use the most recent tablesync patch
from other thread [1]
(i.e. notice that v38-0001 is an exact copy of that thread's tablesync
patch v31)

I see one problem which might lead to skipping prepared xacts for
some of the subscriptions. The problem is that we skip prepared
xacts based on GID, and the same prepared transaction arrives on the
subscriber for different subscriptions. And even if we hadn't
skipped the prepared xact, it would have led to the error "transaction
identifier "p1" is already in use". See the scenario below:

On Publisher:
===========
CREATE TABLE mytbl1(id SERIAL PRIMARY KEY, somedata int, text varchar(120));
CREATE TABLE mytbl2(id SERIAL PRIMARY KEY, somedata int, text varchar(120));
postgres=# BEGIN;
BEGIN
postgres=*# INSERT INTO mytbl1(somedata, text) VALUES (1, 1);
INSERT 0 1
postgres=*# INSERT INTO mytbl1(somedata, text) VALUES (1, 2);
INSERT 0 1
postgres=*# COMMIT;
COMMIT
postgres=# BEGIN;
BEGIN
postgres=*# INSERT INTO mytbl2(somedata, text) VALUES (1, 1);
INSERT 0 1
postgres=*# INSERT INTO mytbl2(somedata, text) VALUES (1, 2);
INSERT 0 1
postgres=*# Commit;
COMMIT
postgres=# CREATE PUBLICATION mypub1 FOR TABLE mytbl1;
CREATE PUBLICATION
postgres=# CREATE PUBLICATION mypub2 FOR TABLE mytbl2;
CREATE PUBLICATION

On Subscriber:
============
CREATE TABLE mytbl1(id SERIAL PRIMARY KEY, somedata int, text varchar(120));
CREATE TABLE mytbl2(id SERIAL PRIMARY KEY, somedata int, text varchar(120));
postgres=# CREATE SUBSCRIPTION mysub1
postgres-# CONNECTION 'host=localhost port=5432 dbname=postgres'
postgres-# PUBLICATION mypub1;
NOTICE: created replication slot "mysub1" on publisher
CREATE SUBSCRIPTION
postgres=# CREATE SUBSCRIPTION mysub2
postgres-# CONNECTION 'host=localhost port=5432 dbname=postgres'
postgres-# PUBLICATION mypub2;
NOTICE: created replication slot "mysub2" on publisher
CREATE SUBSCRIPTION

On Publisher:
============
postgres=# Begin;
BEGIN
postgres=*# INSERT INTO mytbl1(somedata, text) VALUES (1, 3);
INSERT 0 1
postgres=*# INSERT INTO mytbl2(somedata, text) VALUES (1, 3);
INSERT 0 1
postgres=*# Prepare Transaction 'myprep1';

After this step, wait for a few seconds and then perform Commit Prepared
'myprep1'; on the publisher, and you will notice the following error in the
subscriber log: "ERROR: prepared transaction with identifier
"myprep1" does not exist"

One idea to avoid this is to use the subscription id along with the GID
for prepared xacts on the subscriber. Let me know if you have any better
ideas to handle this?

Few other minor comments on
v38-0004-Add-support-for-apply-at-prepare-time-to-built-i:
======================================================================
1.
- * Mark the prepared transaction as valid.  As soon as xact.c marks
- * MyProc as not running our XID (which it will do immediately after
- * this function returns), others can commit/rollback the xact.
+ * Mark the prepared transaction as valid.  As soon as xact.c marks MyProc
+ * as not running our XID (which it will do immediately after this
+ * function returns), others can commit/rollback the xact.

Why this change in this patch? Is it due to pgindent? If so, you need
to exclude this change?

2.
@@ -78,7 +78,7 @@ logicalrep_write_commit(StringInfo out, ReorderBufferTXN *txn,

pq_sendbyte(out, LOGICAL_REP_MSG_COMMIT);

- /* send the flags field (unused for now) */
+ /* send the flags field */
  pq_sendbyte(out, flags);

Is there a reason to change the above comment?

--
With Regards,
Amit Kapila.

#187Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Amit Kapila (#186)
7 attachment(s)

On Thu, Feb 11, 2021 at 12:46 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

Few other minor comments on
v38-0004-Add-support-for-apply-at-prepare-time-to-built-i:
======================================================================
1.
- * Mark the prepared transaction as valid.  As soon as xact.c marks
- * MyProc as not running our XID (which it will do immediately after
- * this function returns), others can commit/rollback the xact.
+ * Mark the prepared transaction as valid.  As soon as xact.c marks MyProc
+ * as not running our XID (which it will do immediately after this
+ * function returns), others can commit/rollback the xact.

Why this change in this patch? Is it due to pgindent? If so, you need
to exclude this change?

Fixed in V39.

2.
@@ -78,7 +78,7 @@ logicalrep_write_commit(StringInfo out, ReorderBufferTXN *txn,

pq_sendbyte(out, LOGICAL_REP_MSG_COMMIT);

- /* send the flags field (unused for now) */
+ /* send the flags field */
pq_sendbyte(out, flags);

Is there a reason to change the above comment?

Fixed in V39.

----------

Please find attached the new 2PC patch set v39*

This fixes some recent feedback comments (see above).

----
Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

v39-0005-Support-2PC-txn-subscriber-tests.patchapplication/octet-stream; name=v39-0005-Support-2PC-txn-subscriber-tests.patch
v39-0001-Tablesync-V31.patchapplication/octet-stream; name=v39-0001-Tablesync-V31.patch
v39-0006-Support-2PC-txn-Subscription-option.patchapplication/octet-stream; name=v39-0006-Support-2PC-txn-Subscription-option.patch
v39-0007-Support-2PC-txn-tests-for-concurrent-aborts.patchapplication/octet-stream; name=v39-0007-Support-2PC-txn-tests-for-concurrent-aborts.patch
v39-0002-Refactor-spool-file-logic-in-worker.c.patchapplication/octet-stream; name=v39-0002-Refactor-spool-file-logic-in-worker.c.patch
v39-0003-Track-replication-origin-progress-for-rollbacks.patchapplication/octet-stream; name=v39-0003-Track-replication-origin-progress-for-rollbacks.patch
v39-0004-Add-support-for-apply-at-prepare-time-to-built-i.patchapplication/octet-stream; name=v39-0004-Add-support-for-apply-at-prepare-time-to-built-i.patch
#188osumi.takamichi@fujitsu.com
osumi.takamichi@fujitsu.com
osumi.takamichi@fujitsu.com
In reply to: Peter Smith (#187)
RE: [HACKERS] logical decoding of two-phase transactions

Hi

On Thursday, February 11, 2021 5:10 PM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the new 2PC patch set v39*

I started to review the patchset,
so let me give some comments I have at this moment.

(1)

File : v39-0007-Support-2PC-txn-tests-for-concurrent-aborts.patch
Modification :

@@ -620,6 +666,9 @@ pg_decode_change(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
}
txndata->xact_wrote_changes = true;

+       /* For testing concurrent  aborts */
+       test_concurrent_aborts(data);
+
        class_form = RelationGetForm(relation);
        tupdesc = RelationGetDescr(relation);

Comment: There are unnecessary whitespaces in comments like the one above in v39-0007.
Please check pg_decode_change(), pg_decode_truncate(), and pg_decode_stream_truncate() as well.
I suggest you align the code formatting with pgindent.

(2)

File : v39-0006-Support-2PC-txn-Subscription-option.patch

@@ -213,6 +219,15 @@ parse_subscription_options(List *options,
                        *streaming_given = true;
                        *streaming = defGetBoolean(defel);
                }
+               else if (strcmp(defel->defname, "two_phase") == 0 && twophase)
+               {
+                       if (*twophase_given)
+                               ereport(ERROR,
+                                               (errcode(ERRCODE_SYNTAX_ERROR),
+                                                errmsg("conflicting or redundant options")));
+                       *twophase_given = true;
+                       *twophase = defGetBoolean(defel);
+               }

You can easily add a test for this in subscription.sql with duplicate two_phase options.
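A minimal regression-test fragment along those lines might look like this (the subscription and publication names are illustrative; the expected error text matches the ereport in the excerpt above):

```sql
-- fail - redundant two_phase options
CREATE SUBSCRIPTION regress_testsub CONNECTION 'dbname=regress_doesnotexist'
    PUBLICATION testpub
    WITH (connect = false, two_phase = on, two_phase = off);
-- expected: ERROR:  conflicting or redundant options
```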

When I find something else, I'll let you know.

Best Regards,
Takamichi Osumi

#189Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: osumi.takamichi@fujitsu.com (#188)

On Fri, Feb 12, 2021 at 12:29 PM osumi.takamichi@fujitsu.com
<osumi.takamichi@fujitsu.com> wrote:

On Thursday, February 11, 2021 5:10 PM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the new 2PC patch set v39*

I started to review the patchset
so, let me give some comments I have at this moment.

(1)

File : v39-0007-Support-2PC-txn-tests-for-concurrent-aborts.patch
Modification :

@@ -620,6 +666,9 @@ pg_decode_change(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
}
txndata->xact_wrote_changes = true;

+       /* For testing concurrent  aborts */
+       test_concurrent_aborts(data);
+
class_form = RelationGetForm(relation);
tupdesc = RelationGetDescr(relation);

Comment : There are unnecessary whitespaces in comments like above in v37-007
Please check such as pg_decode_change(), pg_decode_truncate(), pg_decode_stream_truncate() as well.
I suggest you align the code formats by pgindent.

This patch (v39-0007-Support-2PC-txn-tests-for-concurrent-aborts.patch)
is mostly for dev-testing purposes. We don't intend to commit it, as it
has a lot of timing-dependent tests and I am not sure if it is
valuable enough at this stage. So, we can ignore cosmetic comments on
this patch for now.

--
With Regards,
Amit Kapila.

#190Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Peter Smith (#187)
6 attachment(s)

Please find attached the new patch set v40*

The tablesync patch [1] was already committed [2], so the v39-0001
patch is no longer required.

v40* has been rebased to HEAD.

----
[1]: /messages/by-id/CAA4eK1KHJxaZS-fod-0fey=0tq3=Gkn4ho=8N4-5HWiCfu0H1A@mail.gmail.com
[2]: https://github.com/postgres/postgres/commit/ce0fdbfe9722867b7fad4d3ede9b6a6bfc51fb4e

Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

v40-0005-Support-2PC-txn-Subscription-option.patchapplication/octet-stream; name=v40-0005-Support-2PC-txn-Subscription-option.patch
v40-0001-Refactor-spool-file-logic-in-worker.c.patchapplication/octet-stream; name=v40-0001-Refactor-spool-file-logic-in-worker.c.patch
v40-0002-Track-replication-origin-progress-for-rollbacks.patchapplication/octet-stream; name=v40-0002-Track-replication-origin-progress-for-rollbacks.patch
v40-0003-Add-support-for-apply-at-prepare-time-to-built-i.patchapplication/octet-stream; name=v40-0003-Add-support-for-apply-at-prepare-time-to-built-i.patch
v40-0004-Support-2PC-txn-subscriber-tests.patchapplication/octet-stream; name=v40-0004-Support-2PC-txn-subscriber-tests.patch
v40-0006-Support-2PC-txn-tests-for-concurrent-aborts.patchapplication/octet-stream; name=v40-0006-Support-2PC-txn-tests-for-concurrent-aborts.patch
#191Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: osumi.takamichi@fujitsu.com (#188)

On Fri, Feb 12, 2021 at 5:59 PM osumi.takamichi@fujitsu.com
<osumi.takamichi@fujitsu.com> wrote:

(2)

File : v39-0006-Support-2PC-txn-Subscription-option.patch

@@ -213,6 +219,15 @@ parse_subscription_options(List *options,
*streaming_given = true;
*streaming = defGetBoolean(defel);
}
+               else if (strcmp(defel->defname, "two_phase") == 0 && twophase)
+               {
+                       if (*twophase_given)
+                               ereport(ERROR,
+                                               (errcode(ERRCODE_SYNTAX_ERROR),
+                                                errmsg("conflicting or redundant options")));
+                       *twophase_given = true;
+                       *twophase = defGetBoolean(defel);
+               }

You can add this test in subscription.sql easily with double twophase options.

Thanks for the feedback. You are right.

But in the pgoutput.c there are several other potential syntax errors
"conflicting or redundant options" which are just like this
"two_phase" one.
e.g. there is the same error for options "proto_version",
"publication_names", "binary", "streaming".

AFAIK none of those other syntax errors had any regression tests. That
is the reason why I did not include any new test for the "two_phase"
option.

So:
a) should I add a new test per your feedback comment, or
b) should I be consistent with the other similar errors, and not add the test?

Of course it is easy to add a new test if you think option (a) is best.

Thoughts?

-----
Kind Regards,
Peter Smith.
Fujitsu Australia

#192osumi.takamichi@fujitsu.com
osumi.takamichi@fujitsu.com
osumi.takamichi@fujitsu.com
In reply to: Peter Smith (#191)
RE: [HACKERS] logical decoding of two-phase transactions

Hi

On Tuesday, February 16, 2021 8:33 AM Peter Smith <smithpb2250@gmail.com>

On Fri, Feb 12, 2021 at 5:59 PM osumi.takamichi@fujitsu.com
<osumi.takamichi@fujitsu.com> wrote:

(2)

File : v39-0006-Support-2PC-txn-Subscription-option.patch

@@ -213,6 +219,15 @@ parse_subscription_options(List *options,
*streaming_given = true;
*streaming = defGetBoolean(defel);
}
+               else if (strcmp(defel->defname, "two_phase") == 0 && twophase)
+               {
+                       if (*twophase_given)
+                               ereport(ERROR,
+                                               (errcode(ERRCODE_SYNTAX_ERROR),
+                                                errmsg("conflicting or redundant options")));
+                       *twophase_given = true;
+                       *twophase = defGetBoolean(defel);
+               }

You can add this test in subscription.sql easily with double twophase options.

Thanks for the feedback. You are right.

But in the pgoutput.c there are several other potential syntax errors
"conflicting or redundant options" which are just like this "two_phase" one.
e.g. there is the same error for options "proto_version", "publication_names",
"binary", "streaming".

AFAIK none of those other syntax errors had any regression tests. That is the
reason why I did not include any new test for the "two_phase"
option.

So:
a) should I add a new test per your feedback comment, or
b) should I be consistent with the other similar errors, and not add the test?

Of course it is easy to add a new test if you think option (a) is best.

Thoughts?

OK. Then we can conclude that, previously, such tests for other options
were regarded as needless because the results are too obvious.
Let's choose (b) to keep the patch set aligned with other similar past code.
Thanks.

Best Regards,
Takamichi Osumi

#193Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Peter Smith (#190)
6 attachment(s)

Please find attached the new patch set v41*

(v40* needed to be rebased to current HEAD)

----
Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

v41-0001-Refactor-spool-file-logic-in-worker.c.patchapplication/octet-stream; name=v41-0001-Refactor-spool-file-logic-in-worker.c.patch
v41-0005-Support-2PC-txn-Subscription-option.patchapplication/octet-stream; name=v41-0005-Support-2PC-txn-Subscription-option.patch
v41-0003-Add-support-for-apply-at-prepare-time-to-built-i.patchapplication/octet-stream; name=v41-0003-Add-support-for-apply-at-prepare-time-to-built-i.patch
v41-0002-Track-replication-origin-progress-for-rollbacks.patchapplication/octet-stream; name=v41-0002-Track-replication-origin-progress-for-rollbacks.patch
v41-0004-Support-2PC-txn-subscriber-tests.patchapplication/octet-stream; name=v41-0004-Support-2PC-txn-subscriber-tests.patch
v41-0006-Support-2PC-txn-tests-for-concurrent-aborts.patchapplication/octet-stream; name=v41-0006-Support-2PC-txn-tests-for-concurrent-aborts.patch
#194Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Peter Smith (#193)

On Thu, Feb 18, 2021 at 5:48 AM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the new patch set v41*

I see one issue here. Currently, when we create a subscription, we
first launch apply-worker and create the main apply worker slot and
then launch table sync workers as required. Now, assume, the apply
worker slot is created and after that, we launch tablesync worker,
which will initiate its slot (sync_slot) creation. Then, on the
publisher-side, the situation is such that there is a prepared
transaction that happens before we reach a consistent snapshot. We can
assume the exact scenario as we have in twophase_snapshot.spec where
we skip prepared xact due to this reason.

Because the WALSender corresponding to apply worker is already running
so it will be in consistent state, for it, such a prepared xact can be
decoded and it will send the same to the subscriber. On the
subscriber-side, it can skip applying the data-modification operations
because the corresponding rel is still not in a ready state (see
should_apply_changes_for_rel and its callers) simply because the
corresponding table sync worker is not finished yet. But prepare will
occur and it will lead to a prepared transaction on the subscriber.

In this situation, tablesync worker has skipped prepare because the
snapshot was not consistent and then it exited because it is in sync
with the apply worker. And apply worker has skipped because tablesync
was in-progress. Later when Commit prepared will come, the
apply-worker will simply commit the previously prepared transaction
and we will never see the prepared transaction data.

So, the basic premise is that we can't allow tablesync workers to skip
prepared transactions (which can be processed by apply worker) and
process later commits.

I have one idea to address this. When we get the first begin_prepare
in the apply-worker, we can check if there are any relations in
"not_ready" state and if so then just wait till all the relations
become in sync with the apply worker. This is to avoid that any of the
tablesync workers might skip prepared xact and we don't want apply
worker to also skip the same.

Now, it is possible that some tablesync worker has copied the data and
moved the sync position ahead of where the current apply worker's
position is. In such a case, we need to process transactions in apply
worker such that we can process commits if any, and write prepared
transactions to file. For prepared transactions, we can take decisions
only once the commit prepared for them has arrived.

--
With Regards,
Amit Kapila.

#195Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Peter Smith (#193)
5 attachment(s)

Please find attached the new patch set v42*

This removes the (development only) patch v41-0006 which was causing
some random cfbot fails.

----
Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

v42-0001-Refactor-spool-file-logic-in-worker.c.patchapplication/octet-stream; name=v42-0001-Refactor-spool-file-logic-in-worker.c.patch
v42-0002-Track-replication-origin-progress-for-rollbacks.patchapplication/octet-stream; name=v42-0002-Track-replication-origin-progress-for-rollbacks.patch
v42-0005-Support-2PC-txn-Subscription-option.patchapplication/octet-stream; name=v42-0005-Support-2PC-txn-Subscription-option.patch
v42-0003-Add-support-for-apply-at-prepare-time-to-built-i.patchapplication/octet-stream; name=v42-0003-Add-support-for-apply-at-prepare-time-to-built-i.patch
v42-0004-Support-2PC-txn-subscriber-tests.patchapplication/octet-stream; name=v42-0004-Support-2PC-txn-subscriber-tests.patch
#196Markus Wanner
Markus Wanner
markus@bluegap.ch
In reply to: Amit Kapila (#177)

Hello Amit,

On 04.01.21 09:18, Amit Kapila wrote:

Thanks, I have pushed the 0001* patch after making the above and a few
other cosmetic modifications.

That commit added the following snippet to the top of
ReorderBufferFinishPrepared:

txn = ReorderBufferTXNByXid(rb, xid, true, NULL, commit_lsn, false);

/* unknown transaction, nothing to do */
if (txn == NULL)
return;

Passing true for the create argument seems like an oversight. I think
this should pass false and not ever (have to) create a ReorderBufferTXN
entry.

Regards

Markus

#197Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Markus Wanner (#196)

On Mon, Feb 22, 2021 at 11:04 PM Markus Wanner <markus@bluegap.ch> wrote:

On 04.01.21 09:18, Amit Kapila wrote:

Thanks, I have pushed the 0001* patch after making the above and a few
other cosmetic modifications.

That commit added the following snippet to the top of
ReorderBufferFinishPrepared:

txn = ReorderBufferTXNByXid(rb, xid, true, NULL, commit_lsn, false);

/* unknown transaction, nothing to do */
if (txn == NULL)
return;

Passing true for the create argument seems like an oversight. I think
this should pass false and not ever (have to) create a ReorderBufferTXN
entry.

Right, I'll push a fix for this. Thanks for the report!

--
With Regards,
Amit Kapila.

#198Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#197)

On Tue, Feb 23, 2021 at 7:43 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Mon, Feb 22, 2021 at 11:04 PM Markus Wanner <markus@bluegap.ch> wrote:

On 04.01.21 09:18, Amit Kapila wrote:

Thanks, I have pushed the 0001* patch after making the above and a few
other cosmetic modifications.

That commit added the following snippet to the top of
ReorderBufferFinishPrepared:

txn = ReorderBufferTXNByXid(rb, xid, true, NULL, commit_lsn, false);

/* unknown transaction, nothing to do */
if (txn == NULL)
return;

Passing true for the create argument seems like an oversight. I think
this should pass false and not ever (have to) create a ReorderBufferTXN
entry.

Right, I'll push a fix for this.

Pushed!

--
With Regards,
Amit Kapila.

#199Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Peter Smith (#195)
8 attachment(s)

Please find attached the latest patch set v43*

Differences from v42*

- Rebased to HEAD as @ today

- Added new patch 0006 "Tablesync early exit" as discussed here [1]

- Added new patch 0007 "Fix apply worker prepare" as discussed here [2]

- Added new patch 0008 "Fix apply worker prepare (dev logs)" (to aid
testing of patch 0007)

~~

(The 0006 patch has a known whitespace problem. I will fix that next time)

----
[1]: /messages/by-id/CAHut+Ptjk-Qgd3R1a1_tr62CmiswcYphuv0pLmVA-+2s8r0Bkw@mail.gmail.com
[2]: /messages/by-id/CAA4eK1L=dhuCRvyDvrXX5wZgc7s1hLRD29CKCK6oaHtVCPgiFA@mail.gmail.com

Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

v43-0001-Refactor-spool-file-logic-in-worker.c.patchapplication/octet-stream; name=v43-0001-Refactor-spool-file-logic-in-worker.c.patch
v43-0002-Track-replication-origin-progress-for-rollbacks.patchapplication/octet-stream; name=v43-0002-Track-replication-origin-progress-for-rollbacks.patch
v43-0003-Add-support-for-apply-at-prepare-time-to-built-i.patchapplication/octet-stream; name=v43-0003-Add-support-for-apply-at-prepare-time-to-built-i.patch
v43-0005-Support-2PC-txn-Subscription-option.patchapplication/octet-stream; name=v43-0005-Support-2PC-txn-Subscription-option.patch
v43-0004-Support-2PC-txn-subscriber-tests.patchapplication/octet-stream; name=v43-0004-Support-2PC-txn-subscriber-tests.patch
v43-0006-Tablesync-early-exit.patchapplication/octet-stream; name=v43-0006-Tablesync-early-exit.patch
v43-0007-Fix-apply-worker-empty-prepare.patchapplication/octet-stream; name=v43-0007-Fix-apply-worker-empty-prepare.patch
v43-0008-Fix-apply-worker-empty-prepare-dev-logs.patchapplication/octet-stream; name=v43-0008-Fix-apply-worker-empty-prepare-dev-logs.patch
#200onlinebusinessindia
onlinebusinessindia
businessgrowthnamanverma@gmail.com
In reply to: Peter Smith (#199)
Re: logical decoding of two-phase transactions

That's where you've misunderstood - it isn't committed yet. The point of
this change is to allow us to do logical decoding at the PREPARE
TRANSACTION
point. The xact is not yet committed or rolled back.

Yes, I got that. I was looking for a why or an actual use-case.

Stas wants this for a conflict-free logical semi-synchronous replication
multi master solution.

This sentence is hard to decrypt, at least without "multi master", as the
concept applies basically to only one master node.

At PREPARE TRANSACTION time we replay the xact to
other nodes, each of which applies it and PREPARE TRANSACTION, then
replies
to confirm it has successfully prepared the xact. When all nodes confirm
the
xact is prepared it is safe for the origin node to COMMIT PREPARED. The
other nodes then see that the first node has committed and they commit too.

OK, this is the argument I was looking for. So in your schema the
origin node, the one generating the changes, is itself in charge of
deciding if the 2PC should work or not. There are two channels between
the origin node and the replicas replaying the logical changes, one is
for the logical decoder with a receiver, the second one is used to
communicate the WAL apply status. I thought about something like
postgres_fdw doing this job with a transaction that does writes across
several nodes, that's why I got confused about this feature.
Everything goes through one channel, so the failure handling is just
simplified.

Alternately if any node replies "could not replay xact" or "could not
prepare xact" the origin node knows to ROLLBACK PREPARED. All the other
nodes see that and rollback too.

The origin node could just issue the ROLLBACK or COMMIT and the
logical replicas would just apply this change.

To really make it rock solid you also have to send the old and new values
of
a row, or have row versions, or send old row hashes. Something I also want
to have, but we can mostly get that already with REPLICA IDENTITY FULL.

On a primary key (or a unique index), the default replica identity is
enough I think.

It is of interest to me because schema changes in MM logical replication
are
more challenging awkward and restrictive without it. Optimistic conflict
resolution doesn't work well for schema changes and once the conflicting
schema changes are committed on different nodes there is no going back. So
you need your async system to have a global locking model for schema
changes
to stop conflicts arising. Or expect the user not to do anything silly /
misunderstand anything and know all the relevant system limitations and
requirements... which we all know works just great in practice. You also
need a way to ensure that schema changes don't render
committed-but-not-yet-replayed row changes from other peers nonsensical.
The
safest way is a barrier where all row changes committed on any node before
committing the schema change on the origin node must be fully replayed on
every other node, making an async MM system temporarily sync single master
(and requiring all nodes to be up and reachable). Otherwise you need a way
to figure out how to conflict-resolve incoming rows with missing columns /
added columns / changed types / renamed tables etc which is no fun and
nearly impossible in the general case.


That's one vision of things, FDW-like approaches would be a second,
but those are not able to pass down utility statements natively,
though this stuff can be done with the utility hook.

I think the purpose of having the GID available to the decoding output
plugin at PREPARE TRANSACTION time is that it can co-operate with a global
transaction manager that way. Each node can tell the GTM "I'm ready to
commit [X]". It is IMO not crucial since you can otherwise use a (node-id,
xid) tuple, but it'd be nice for coordinating with external systems,
simplifying inter node chatter, integrating logical deocding into bigger
systems with external transaction coordinators/arbitrators etc. It seems
pretty silly _not_ to have it really.

Well, Postgres-XC/XL save the 2PC GID for this purpose in the GTM,
this way the COMMIT/ABORT PREPARED can be issued from any nodes, and
there is a centralized conflict resolution, the latter being done with
a huge cost, causing much bottleneck in scaling performance.

Personally I don't think lack of access to the GID justifies blocking 2PC
logical decoding. It can be added separately. But it'd be nice to have
especially if it's cheap.

I think it should be added reading this thread.
--
Naman


#201Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Peter Smith (#199)

On Thu, Feb 25, 2021 at 12:32 PM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v43*

Differences from v42*

- Rebased to HEAD as @ today

- Added new patch 0006 "Tablesync early exit" as discussed here [1]

I feel we can start a separate thread for this as it can be done
independently unless there are reasons for not doing so.

- Added new patch 0007 "Fix apply worker prepare" as discussed here [2]

Few comments on v43-0007-Fix-apply-worker-empty-prepare:
================================================
1. The patch v43-0007-Fix-apply-worker-empty-prepare should be fourth
patch in series, immediately after the main-apply worker patch.
2.
apply_handle_begin_prepare
{
..
+#if 0
+ || true /* XXX - Add this line to force psf (for easier debugging) */
+#endif

Please remove such debugging hacks.

3.
+ * [Note: this is mostly copied code from apply_spooled_messages function]
+ */
+static int
+prepare_spoolfile_replay_messages(char *path, XLogRecPtr lsn)

I think we can try to unify the code in this function and
apply_spooled_messages. Basically, if we pass the sharedfileset handle
to apply_spooled_messages, then it should be possible to unify these
two functions.

4.
@@ -788,6 +897,27 @@ apply_handle_prepare(StringInfo s)
return;
}

+ if (psf_fd)
+ {
+ /*
+ * The psf_fd is meaningful only between begin_prepare and prepared.
+ * So close it now. If we had been writing any messages to the psf_fd
+ * (the spoolfile) then those will be applied later during
+ * handle_commit_prepared.
+ */
+ prepare_spoolfile_close();
+
+ /*
+ * And end the transaction that was created by begin_prepare for
+ * working with the psf buffiles.
+ */
+ Assert(IsTransactionState());
+ CommitTransactionCommand();
+
+ in_remote_transaction = false;
+ return;
+ }

Don't we need to write prepare to the spool file as well? Because, if
we do that then I think you don't need special handling in
apply_handle_commit_prepared where you are preparing the transaction
after replaying the messages from the spool file. I think in
apply_handle_commit_prepared while doing prepare, you have used
commit's lsn which is wrong and that will also be solved if you do
what I am suggesting.

5. You need to write/sync the spool file at prepare time because after
restart between prepare and commit prepared the changes can be lost
and won't be resent by the publisher assuming there are commits of
other transactions between prepare and commit prepared. For the same
reason, I am not sure if we can just rely on the in-memory hash table
for it (prepare_spoolfile_exists). Sure, if it exists and there is no
restart then it would be cheap to check in the hash table but I don't
think it is guaranteed.

--
With Regards,
Amit Kapila.

#202Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Peter Smith (#199)
8 attachment(s)

Please find attached the latest patch set v44*

Differences from v43*

* Rebased to HEAD as @ today

* Patch 0003 "Add support for apply at prepare time"
- minor code refactor
- minor comment changes

* Patch 0006 "Tablesync early exit"
- minor comment changes
- fix whitespace

* Patch 0007 "Fix apply worker empty prepare"
- minor comment changes
- pgindent changed lots of code formatting

-----
Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

v44-0002-Track-replication-origin-progress-for-rollbacks.patchapplication/octet-stream; name=v44-0002-Track-replication-origin-progress-for-rollbacks.patch
v44-0001-Refactor-spool-file-logic-in-worker.c.patchapplication/octet-stream; name=v44-0001-Refactor-spool-file-logic-in-worker.c.patch
v44-0003-Add-support-for-apply-at-prepare-time-to-built-i.patchapplication/octet-stream; name=v44-0003-Add-support-for-apply-at-prepare-time-to-built-i.patch
v44-0005-Support-2PC-txn-Subscription-option.patchapplication/octet-stream; name=v44-0005-Support-2PC-txn-Subscription-option.patch
v44-0004-Support-2PC-txn-subscriber-tests.patchapplication/octet-stream; name=v44-0004-Support-2PC-txn-subscriber-tests.patch
v44-0006-Tablesync-early-exit.patchapplication/octet-stream; name=v44-0006-Tablesync-early-exit.patch
v44-0007-Fix-apply-worker-empty-prepare.patchapplication/octet-stream; name=v44-0007-Fix-apply-worker-empty-prepare.patch
v44-0008-Fix-apply-worker-empty-prepare-dev-logs.patchapplication/octet-stream; name=v44-0008-Fix-apply-worker-empty-prepare-dev-logs.patch
#203Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#201)

On Fri, Feb 26, 2021 at 9:56 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Thu, Feb 25, 2021 at 12:32 PM Peter Smith <smithpb2250@gmail.com> wrote:

5. You need to write/sync the spool file at prepare time because after
restart between prepare and commit prepared the changes can be lost
and won't be resent by the publisher assuming there are commits of
other transactions between prepare and commit prepared. For the same
reason, I am not sure if we can just rely on the in-memory hash table
for it (prepare_spoolfile_exists). Sure, if it exists and there is no
restart then it would be cheap to check in the hash table but I don't
think it is guaranteed.

As we can't rely on the hash table, I think we can get rid of it and
always check if the corresponding file exists.

Few more comments on v43-0007-Fix-apply-worker-empty-prepare
====================================================
1.
+ * So the "table_states_not_ready" list might end up having a READY
+ * state it it even though

The above sentence doesn't sound correct to me.

2.
@@ -759,6 +798,79 @@ apply_handle_begin_prepare(StringInfo s)
{
..
+ */
+ if (!am_tablesync_worker())
+ {

I think here we should have an Assert for tablesync worker because it
should never receive prepare.

3.
+ while (BusyTablesyncs())
+ {
+ elog(DEBUG1, "apply_handle_begin_prepare - waiting for all sync
workers to be DONE/READY");
+
+ process_syncing_tables(begin_data.end_lsn);

..
+ if (begin_data.end_lsn < BiggestTablesyncLSN()

In both the above places, you need to use begin_data.final_lsn because
the prepare is yet not replayed so we can't use its end_lsn for
syncup.

4.
+/*
+ * Are there any tablesyncs which have still not yet reached
SYNCDONE/READY state?
+ */
+bool
+BusyTablesyncs()

The function name is not clear enough. Can we change it to something
like AnyTableSyncInProgress?

5.
+/*
+ * Are there any tablesyncs which have still not yet reached
SYNCDONE/READY state?
+ */
+bool
+BusyTablesyncs()
{
..
+ /*
+ * XXX - When the process_syncing_tables_for_sync changes the state
+ * from SYNCDONE to READY, that change is actually written directly

In the above comment, do you mean to process_syncing_tables_for_apply
because that is where we change state to READY? And, I don't think we
need to mark this comment as XXX.

6.
+ * XXX - Is there a potential timing problem here - e.g. if signal arrives
+ * while executing this then maybe we will set table_states_valid without
+ * refetching them?
+ */
+static void
+FetchTableStates(bool *started_tx)
..

Can you explain which race condition you are worried about here which
is not possible earlier but can happen after this patch?

7.
@@ -941,6 +1162,26 @@ apply_handle_stream_prepare(StringInfo s)
elog(DEBUG1, "received prepare for streamed transaction %u", xid);

  /*
+ * Wait for all the sync workers to reach the SYNCDONE/READY state.
+ *
+ * This is same waiting logic as in appy_handle_begin_prepare function
+ * (see that function for more details about this).
+ */
+ if (!am_tablesync_worker())
+ {
+ while (BusyTablesyncs())
+ {
+ process_syncing_tables(prepare_data.end_lsn);
+
+ /* This latch is to prevent 100% CPU looping. */
+ (void) WaitLatch(MyLatch,
+ WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,
+ 1000L, WAIT_EVENT_LOGICAL_SYNC_STATE_CHANGE);
+ ResetLatch(MyLatch);
+ }
+ }

I think we need similar handling in stream_prepare as in begin_prepare
for writing to spool file because this has the same danger. But here
we need to write it xid spool file in StreamXidHash. Another thing we
need to ensure is to sync that file in stream prepare so that it can
survive restarts. Then in apply_handle_commit_prepared, after checking
for prepared spool file, we need to check the existence of xid spool
file, and if the same exists then apply messages from that file.

Again, like begin_prepare, in apply_handle_stream_prepare also we
should have an Assert for table sync worker.

I feel that the 2PC and streaming case is a bit complicated to deal with.
How about, for now, we don't allow users to enable streaming if the 2PC
option is enabled for a Subscription? This requires some change (error
out if both streaming and 2PC options are enabled) in both
CREATE SUBSCRIPTION and ALTER SUBSCRIPTION, but that change should be
fairly small. If we follow this, then in apply_dispatch (for case
LOGICAL_REP_MSG_STREAM_PREPARE), we should report an ERROR "invalid
logical replication message type".

--
With Regards,
Amit Kapila.

#204osumi.takamichi@fujitsu.com
osumi.takamichi@fujitsu.com
osumi.takamichi@fujitsu.com
In reply to: Peter Smith (#199)
RE: [HACKERS] logical decoding of two-phase transactions

Hi

On Thursday, February 25, 2021 4:02 PM Peter Smith <smithpb2250@gmail.com>

Please find attached the latest patch set v43*

- Added new patch 0007 "Fix apply worker prepare" as discussed here [2]

[2] /messages/by-id/CAA4eK1L=dhuCRvyDvrXX5wZgc7s1hLRD29CKCK6oaHtVCPgiFA%40mail.gmail.com

I tested the scenario that
we resulted in skipping prepared transaction data and
the replica became out of sync, which was described in [2].
And, as you said, the problem is addressed in v43.

I used twophase_snapshot.spec as a reference
for the flow (e.g. how to make a consistent snapshot
between prepare and commit prepared) and this time,
as an alternative of the SQL API(pg_create_logical_replication_slot),
I issued CREATE SUBSCRIPTION, and other than that,
I followed other flows in the spec file mainly.

I checked that the replica has the same data at the end of this test,
which means the mechanism of spoolfile works.

Best Regards,
Takamichi Osumi

#205Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Amit Kapila (#203)

On Fri, Feb 26, 2021 at 9:58 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

6.
+ * XXX - Is there a potential timing problem here - e.g. if signal arrives
+ * while executing this then maybe we will set table_states_valid without
+ * refetching them?
+ */
+static void
+FetchTableStates(bool *started_tx)
..

Can you explain which race condition you are worried about here which
is not possible earlier but can happen after this patch?

Yes, my question (in that XXX comment) was not about anything new for
the current patch, because this FetchTableStates function has exactly
the same logic as the HEAD code.

I was only wondering if there is any possibility that one of the
function calls (inside the if block) can end up calling
CHECK_INTERRUPTS. If that could happen, then perhaps the
table_states_valid flag could be assigned false (by the
invalidate_syncing_table_states signal handler) only to be
immediately/wrongly overwritten as table_states_valid = true in this
FetchTableStates code.

------
Kind Regards,
Peter Smith.
Fujitsu Australia

#206Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Peter Smith (#205)

On Sat, Feb 27, 2021 at 7:31 AM Peter Smith <smithpb2250@gmail.com> wrote:

On Fri, Feb 26, 2021 at 9:58 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

6.
+ * XXX - Is there a potential timing problem here - e.g. if signal arrives
+ * while executing this then maybe we will set table_states_valid without
+ * refetching them?
+ */
+static void
+FetchTableStates(bool *started_tx)
..

Can you explain which race condition you are worried about here which
is not possible earlier but can happen after this patch?

Yes, my question (in that XXX comment) was not about anything new for
the current patch, because this FetchTableStates function has exactly
the same logic as the HEAD code.

I was only wondering if there is any possibility that one of the
function calls (inside the if block) can end up calling
CHECK_INTERRUPTS. If that could happen, then perhaps the
table_states_valid flag could be assigned false (by the
invalidate_syncing_table_states signal handler) only to be
immediately/wrongly overwritten as table_states_valid = true in this
FetchTableStates code.

This is not related to CHECK_FOR_INTERRUPTS. The
invalidate_syncing_table_states() can be called only when we process
invalidation messages which we do while locking the relation via
GetSubscriptionRelations->table_open->relation_open->LockRelationOid.
After that, it won't be done in that part of the code. So, I think we
don't need this comment.

--
With Regards,
Amit Kapila.

#207Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Peter Smith (#202)
8 attachment(s)

Please find attached the latest patch set v45*

Differences from v44*:

* Rebased to HEAD

* Addressed some feedback comments for the 0007 ("empty prepare") patch.

[ak1] #1 - TODO
[ak1] #2 - Fixed. Removed #if 0 debugging
[ak1] #3 - TODO
[ak1] #4 - Fixed. Now BEGIN_PREPARE and PREPARE msgs are spooled. The
lsns are obtained from them.
[ak1] #5 - TODO

[ak2] #1 - Fixed. Bad comment text
[ak2] #2 - Fixed. Added Assert that tablesync should never receive prepares
[ak2] #3 - Fixed. Use correct lsns for sync wait loop, and BiggestLSN checks
[ak2] #4 - Fixed. Rename Busytablesyncs to AnyTablesyncInProgress
[ak2] #5 - Fixed. Typo in comment. Removed XXX
[ak2] #6 - Fixed. Remove unwarranted XXX comment for FetchTableStates
[ak2] #7 - TODO

-----
[ak1] /messages/by-id/CAA4eK1JWNitcTrcD51vLrh2GxKxVau0EU-5UCg6K9ZNQzPcz+Q@mail.gmail.com
[ak2] /messages/by-id/CAA4eK1LodEqax+xYOYdqgY5oEM54TdjagA0zT7QjKiC0NRNv=g@mail.gmail.com

Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

v45-0004-Support-2PC-txn-subscriber-tests.patchapplication/octet-stream; name=v45-0004-Support-2PC-txn-subscriber-tests.patch
v45-0001-Refactor-spool-file-logic-in-worker.c.patchapplication/octet-stream; name=v45-0001-Refactor-spool-file-logic-in-worker.c.patch
v45-0002-Track-replication-origin-progress-for-rollbacks.patchapplication/octet-stream; name=v45-0002-Track-replication-origin-progress-for-rollbacks.patch
v45-0003-Add-support-for-apply-at-prepare-time-to-built-i.patchapplication/octet-stream; name=v45-0003-Add-support-for-apply-at-prepare-time-to-built-i.patch
v45-0005-Support-2PC-txn-Subscription-option.patchapplication/octet-stream; name=v45-0005-Support-2PC-txn-Subscription-option.patch
v45-0006-Tablesync-early-exit.patchapplication/octet-stream; name=v45-0006-Tablesync-early-exit.patch
v45-0007-Fix-apply-worker-empty-prepare.patchapplication/octet-stream; name=v45-0007-Fix-apply-worker-empty-prepare.patch
v45-0008-Fix-apply-worker-empty-prepare-dev-logs.patchapplication/octet-stream; name=v45-0008-Fix-apply-worker-empty-prepare-dev-logs.patch
#208Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Peter Smith (#207)
8 attachment(s)

Please find attached the latest patch set v46*

Differences from v45*

* Rebased to HEAD

* Patch v46-0003 is modified to be compatible with a recent push for
"avoiding repeated decoding of prepare" [1].

-----
[1]: https://github.com/postgres/postgres/commit/8bdb1332eb51837c15a10a972c179b84f654279e

Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

v46-0001-Refactor-spool-file-logic-in-worker.c.patchapplication/octet-stream; name=v46-0001-Refactor-spool-file-logic-in-worker.c.patch
v46-0005-Support-2PC-txn-Subscription-option.patchapplication/octet-stream; name=v46-0005-Support-2PC-txn-Subscription-option.patch
v46-0002-Track-replication-origin-progress-for-rollbacks.patchapplication/octet-stream; name=v46-0002-Track-replication-origin-progress-for-rollbacks.patch
v46-0003-Add-support-for-apply-at-prepare-time-to-built-i.patchapplication/octet-stream; name=v46-0003-Add-support-for-apply-at-prepare-time-to-built-i.patch
v46-0004-Support-2PC-txn-subscriber-tests.patchapplication/octet-stream; name=v46-0004-Support-2PC-txn-subscriber-tests.patch
v46-0006-Tablesync-early-exit.patchapplication/octet-stream; name=v46-0006-Tablesync-early-exit.patch
v46-0007-Fix-apply-worker-empty-prepare.patchapplication/octet-stream; name=v46-0007-Fix-apply-worker-empty-prepare.patch
v46-0008-Fix-apply-worker-empty-prepare-dev-logs.patchapplication/octet-stream; name=v46-0008-Fix-apply-worker-empty-prepare-dev-logs.patch
#209Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#203)

On Fri, Feb 26, 2021 at 4:28 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Fri, Feb 26, 2021 at 9:56 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Thu, Feb 25, 2021 at 12:32 PM Peter Smith <smithpb2250@gmail.com> wrote:

5. You need to write/sync the spool file at prepare time because after
restart between prepare and commit prepared the changes can be lost
and won't be resent by the publisher assuming there are commits of
other transactions between prepare and commit prepared. For the same
reason, I am not sure if we can just rely on the in-memory hash table
for it (prepare_spoolfile_exists). Sure, if it exists and there is no
restart then it would be cheap to check in the hash table but I don't
think it is guaranteed.

As we can't rely on the hash table, I think we can get rid of it and
always check if the corresponding file exists.

Few more related points:
====================
1. Currently, the patch will always clean up the files if there is an
error because SharedFileSetInit registers the cleanup function.
However, we want the files to be removed only if any error happens
before flushing prepare. Once prepare is flushed, we expect the file
will be cleaned up by commit prepared. So, we need to probably call
SharedFileSetUnregister after prepare has been flushed to file.

2. The other point is that I think we need to drop these files (if
any) on Drop Subscription. Investigate if any variant of Alter needs
similar handling.

--
With Regards,
Amit Kapila.

#210Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Peter Smith (#208)
9 attachment(s)

Please find attached the latest patch set v47

Differences from v46

* Rebased to HEAD

* New patch v47-0004 incorporates a change to command
CREATE_REPLICATION_SLOT to now have an option to specify if two-phase
is to be enabled.
This patch enables two-phase by default while creating logical
replication slots.

* patch v47-0006 (prev. v46-0005) modified to enable two-phase only
when the subscription is created using that option.

regards,
Ajin Cherian
Fujitsu Australia

Attachments:

v47-0001-Refactor-spool-file-logic-in-worker.c.patchapplication/octet-stream; name=v47-0001-Refactor-spool-file-logic-in-worker.c.patch
v47-0002-Track-replication-origin-progress-for-rollbacks.patchapplication/octet-stream; name=v47-0002-Track-replication-origin-progress-for-rollbacks.patch
v47-0004-Add-two_phase-option-to-CREATE-REPLICATION-SLOT.patchapplication/octet-stream; name=v47-0004-Add-two_phase-option-to-CREATE-REPLICATION-SLOT.patch
v47-0003-Add-support-for-apply-at-prepare-time-to-built-i.patchapplication/octet-stream; name=v47-0003-Add-support-for-apply-at-prepare-time-to-built-i.patch
v47-0005-Support-2PC-txn-subscriber-tests.patchapplication/octet-stream; name=v47-0005-Support-2PC-txn-subscriber-tests.patch
v47-0007-Tablesync-early-exit.patchapplication/octet-stream; name=v47-0007-Tablesync-early-exit.patch
v47-0006-Support-2PC-txn-Subscription-option.patchapplication/octet-stream; name=v47-0006-Support-2PC-txn-Subscription-option.patch
v47-0008-Fix-apply-worker-empty-prepare.patchapplication/octet-stream; name=v47-0008-Fix-apply-worker-empty-prepare.patch
v47-0009-Fix-apply-worker-empty-prepare-dev-logs.patchapplication/octet-stream; name=v47-0009-Fix-apply-worker-empty-prepare-dev-logs.patch
#211Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Peter Smith (#207)

On Sat, Feb 27, 2021 at 8:10 PM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v45*

Differences from v44*:

* Rebased to HEAD

* Addressed some feedback comments for the 0007 ("empty prepare") patch.

[ak1] #1 - TODO
[ak1] #2 - Fixed. Removed #if 0 debugging
[ak1] #3 - TODO
[ak1] #4 - Fixed. Now BEGIN_PREPARE and PREPARE msgs are spooled. The
lsns are obtained from them.

@@ -774,6 +891,38 @@ apply_handle_prepare(StringInfo s)
{
LogicalRepPreparedTxnData prepare_data;

+ /*
+ * If we were using a psf spoolfile, then write the PREPARE as the final
+ * message. This prepare information will be used at commit_prepared time.
+ */
+ if (psf_fd)
+ {
+ /* Write the PREPARE info to the psf file. */
+ Assert(prepare_spoolfile_handler(LOGICAL_REP_MSG_PREPARE, s));

Why writing prepare is under Assert?

Similarly, the commit_prepared code as below still does prepare:
+ /*
+ * 2. mark as PREPARED (use prepare_data info from the psf file)
+ */
+
+ /*
+ * BeginTransactionBlock is necessary to balance the
+ * EndTransactionBlock called within the PrepareTransactionBlock
+ * below.
+ */
+ BeginTransactionBlock();
+ CommitTransactionCommand();
+
+ /*
+ * Update origin state so we can restart streaming from correct
+ * position in case of crash.
+ */
+ replorigin_session_origin_lsn = pdata.end_lsn;
+ replorigin_session_origin_timestamp = pdata.preparetime;
+
+ PrepareTransactionBlock(pdata.gid);
+ CommitTransactionCommand();
+ pgstat_report_stat(false);
+
+ store_flush_position(pdata.end_lsn);

This should automatically happen via apply_handle_prepare if we write
it to spool file.

* prepare_spoolfile_replay_messages() shouldn't handle special cases
for BEGIN_PREPARE and PREPARE messages. Those should be handled by
their corresponding apply_handle_* functions. Before processing the
messages, remote_final_lsn needs to be set to commit_prepared's
commit_lsn (aka prepare_data.prepare_lsn)

--
With Regards,
Amit Kapila.

#212Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Ajin Cherian (#210)
9 attachment(s)

Please find attached the latest patch set v48*

Differences from v47* are:

* Rebased to HEAD @ today

* Patch v46-0008 "empty prepare" updated
Modified code to use File API instead of BufFile API for prepare spoolfile (psf)
Various other feedback items also addressed:
[05a] Now syncing the psf file at prepare time
[05e] Now spooling psf files should delete on error, or if already
prepared then delete only when they are committed/rollbacked
[06]: Now checking existence of psf file on disk if not in memory (in
case HTAB lost after restart)
[16]: Fixed. Remove unnecessary Assert with spooled PREPARE message
[20]: Fixed. Typo "it it" in comment.

KNOWN ISSUES
* Patch 0008 has more feedback comments to be addressed

-----

Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

v48-0001-Refactor-spool-file-logic-in-worker.c.patchapplication/octet-stream; name=v48-0001-Refactor-spool-file-logic-in-worker.c.patch
v48-0002-Track-replication-origin-progress-for-rollbacks.patchapplication/octet-stream; name=v48-0002-Track-replication-origin-progress-for-rollbacks.patch
v48-0004-Add-two_phase-option-to-CREATE-REPLICATION-SLOT.patchapplication/octet-stream; name=v48-0004-Add-two_phase-option-to-CREATE-REPLICATION-SLOT.patch
v48-0003-Add-support-for-apply-at-prepare-time-to-built-i.patchapplication/octet-stream; name=v48-0003-Add-support-for-apply-at-prepare-time-to-built-i.patch
v48-0005-Support-2PC-txn-subscriber-tests.patchapplication/octet-stream; name=v48-0005-Support-2PC-txn-subscriber-tests.patch
v48-0007-Tablesync-early-exit.patchapplication/octet-stream; name=v48-0007-Tablesync-early-exit.patch
v48-0006-Support-2PC-txn-Subscription-option.patchapplication/octet-stream; name=v48-0006-Support-2PC-txn-Subscription-option.patch
v48-0008-Fix-apply-worker-empty-prepare.patchapplication/octet-stream; name=v48-0008-Fix-apply-worker-empty-prepare.patch
v48-0009-Fix-apply-worker-empty-prepare-dev-logs.patchapplication/octet-stream; name=v48-0009-Fix-apply-worker-empty-prepare-dev-logs.patch
#213Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Peter Smith (#212)
9 attachment(s)

On Thu, Mar 4, 2021 at 9:53 PM Peter Smith <smithpb2250@gmail.com> wrote:

[05a] Now syncing the psf file at prepare time

The patch v46-0008 does not handle spooling of streaming prepare if
the Subscription is configured for both two-phase and streaming.
I feel that it would be best if we don't support both two-phase and
streaming together in a subscription in this release.
Probably a future release could handle this. So, changing the patch to
not allow streaming and two-phase together.
This new patch v49 has the following changes.

* Don't support creating a subscription with both streaming and
two-phase enabled.
* Don't support altering a subscription enabling streaming if it was
created with two-phase enabled.
* Remove stream_prepare callback as a "required" callback, make it an
optional callback and remove all code related to stream_prepare in the
pgoutput plugin as well as in worker.c

Also fixed
* Don't support the alter of subscription setting two-phase. Toggling
of two-phase mode using the alter command on the subscription can
cause transactions to be missed and result in an inconsistent replica.

regards,
Ajin Cherian
Fujitsu Australia

Attachments:

v49-0001-Refactor-spool-file-logic-in-worker.c.patchapplication/octet-stream; name=v49-0001-Refactor-spool-file-logic-in-worker.c.patch
v49-0005-Support-2PC-txn-subscriber-tests.patchapplication/octet-stream; name=v49-0005-Support-2PC-txn-subscriber-tests.patch
v49-0004-Add-two_phase-option-to-CREATE-REPLICATION-SLOT.patchapplication/octet-stream; name=v49-0004-Add-two_phase-option-to-CREATE-REPLICATION-SLOT.patch
v49-0002-Track-replication-origin-progress-for-rollbacks.patchapplication/octet-stream; name=v49-0002-Track-replication-origin-progress-for-rollbacks.patch
v49-0003-Add-support-for-apply-at-prepare-time-to-built-i.patchapplication/octet-stream; name=v49-0003-Add-support-for-apply-at-prepare-time-to-built-i.patch
v49-0006-Support-2PC-txn-Subscription-option.patchapplication/octet-stream; name=v49-0006-Support-2PC-txn-Subscription-option.patch
v49-0007-Tablesync-early-exit.patchapplication/octet-stream; name=v49-0007-Tablesync-early-exit.patch
v49-0008-Fix-apply-worker-empty-prepare.patchapplication/octet-stream; name=v49-0008-Fix-apply-worker-empty-prepare.patch
v49-0009-Fix-apply-worker-empty-prepare-dev-logs.patchapplication/octet-stream; name=v49-0009-Fix-apply-worker-empty-prepare-dev-logs.patch
#214vignesh C
vignesh C
vignesh21@gmail.com
In reply to: Ajin Cherian (#213)

On Fri, Mar 5, 2021 at 12:21 PM Ajin Cherian <itsajin@gmail.com> wrote:

On Thu, Mar 4, 2021 at 9:53 PM Peter Smith <smithpb2250@gmail.com> wrote:

[05a] Now syncing the psf file at prepare time

The patch v46-0008 does not handle spooling of streaming prepare if
the Subscription is configured for both two-phase and streaming.
I feel that it would be best if we don't support both two-phase and
streaming together in a subscription in this release.
Probably a future release could handle this. So, changing the patch to
not allow streaming and two-phase together.
This new patch v49 has the following changes.

* Don't support creating a subscription with both streaming and
two-phase enabled.
* Don't support altering a subscription enabling streaming if it was
created with two-phase enabled.
* Remove stream_prepare callback as a "required" callback, make it an
optional callback and remove all code related to stream_prepare in the
pgoutput plugin as well as in worker.c

Also fixed
* Don't support the alter of subscription setting two-phase. Toggling
of two-phase mode using the alter command on the subscription can
cause transactions to be missed and result in an inconsistent replica.

Thanks for the updated patch.
Few minor comments:

I'm not sure if we plan to change this workaround. If we are not
planning to change it, we can reword the comments suitably; we
generally don't use "workaround" in our comments.
+               /*
+                * Workaround Part 1 of 2:
+                *
+                * Make sure every tablesync has reached at least SYNCDONE state
+                * before letting the apply worker proceed.
+                */
+               elog(DEBUG1,
+                        "apply_handle_begin_prepare, end_lsn = %X/%X,
final_lsn = %X/%X, lstate_lsn = %X/%X",
+                        LSN_FORMAT_ARGS(begin_data.end_lsn),
+                        LSN_FORMAT_ARGS(begin_data.final_lsn),
+                        LSN_FORMAT_ARGS(MyLogicalRepWorker->relstate_lsn));
+

We should include two_phase in tab completion (tab-complete.c file
psql_completion(const char *text, int start, int end) function) :
postgres=# create subscription sub1 connection 'port=5441
dbname=postgres' publication pub1 with (
CONNECT COPY_DATA CREATE_SLOT ENABLED
SLOT_NAME SYNCHRONOUS_COMMIT

+
+         <para>
+          It is not allowed to combine <literal>streaming</literal> set to
+          <literal>true</literal> and <literal>two_phase</literal> set to
+          <literal>true</literal>.
+         </para>
+
+        </listitem>
+       </varlistentry>
+       <varlistentry>
+        <term><literal>two_phase</literal> (<type>boolean</type>)</term>
+        <listitem>
+         <para>
+          Specifies whether two-phase commit is enabled for this subscription.
+          The default is <literal>false</literal>.
+         </para>
+
+         <para>
+          When two-phase commit is enabled then the decoded
transactions are sent
+          to the subscriber on the PREPARE TRANSACTION. By default,
the transaction
+          preapred on publisher is decoded as normal transaction at commit.
+         </para>
+
+         <para>
+          It is not allowed to combine <literal>two_phase</literal> set to
+          <literal>true</literal> and <literal>streaming</literal> set to
+          <literal>true</literal>.
+         </para>

It is not allowed to combine streaming set to true and two_phase set to true.
Should this be:
streaming option is not supported along with two_phase option.

Similarly here too:
It is not allowed to combine two_phase set to true and streaming set to true.
Should this be:
two_phase option is not supported along with streaming option.

Few indentation issues are present, we can run pgindent:
+extern void logicalrep_write_prepare(StringInfo out, ReorderBufferTXN *txn,
+
  XLogRecPtr prepare_lsn);
+extern void logicalrep_read_prepare(StringInfo in,
+
 LogicalRepPreparedTxnData *prepare_data);
+extern void logicalrep_write_commit_prepared(StringInfo out,
ReorderBufferTXN* txn,
+
                  XLogRecPtr commit_lsn);

ReorderBufferTXN* should be ReorderBufferTXN *

Line exceeds 80 chars:
+               /*
+                * Now that we replayed the psf it is no longer
needed. Just delete it.
+                */
+               prepare_spoolfile_delete(psfpath);
There is a typo, preapred should be prepared.
+         <para>
+          When two-phase commit is enabled then the decoded
transactions are sent
+          to the subscriber on the PREPARE TRANSACTION. By default,
the transaction
+          preapred on publisher is decoded as normal transaction at commit.
+         </para>

Regards,
Vignesh

#215Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#214)
9 attachment(s)

Please find attached the latest patch set v50*

Differences from v49* are:

* Rebased to HEAD @ today

* Patch 0008 "empty prepare" is updated to address the following
feedback comments:

From Amit @ 2021-03-03 [ak]
- (18) Fixed. Removed special cases in
prepare_spoolfile_replay_messages. Just dispatch all messages.
- (19) Fixed. Before replay the psf remote_final_lsn needs to be set
as commit_prepared's commit_lsn

From Vignesh @ 2021-03-05 [vc]
- (21) Fixed. Reworded comment to not refer to the fix as a "workaround".
- (25) Fixed. A comment line exceeds 80 chars.

-----
[ak] /messages/by-id/CAA4eK1KhfzCYDmv17beC6wOX_5pL-MBNYBpMiLgxrdgF1yBYng@mail.gmail.com
[vc] /messages/by-id/CALDaNm1rRG2EUus+mFrqRzEshZwJZtxja0rn_n3qXGAygODfOA@mail.gmail.com

Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

v50-0001-Refactor-spool-file-logic-in-worker.c.patchapplication/octet-stream; name=v50-0001-Refactor-spool-file-logic-in-worker.c.patch
v50-0002-Track-replication-origin-progress-for-rollbacks.patchapplication/octet-stream; name=v50-0002-Track-replication-origin-progress-for-rollbacks.patch
v50-0004-Add-two_phase-option-to-CREATE-REPLICATION-SLOT.patchapplication/octet-stream; name=v50-0004-Add-two_phase-option-to-CREATE-REPLICATION-SLOT.patch
v50-0003-Add-support-for-apply-at-prepare-time-to-built-i.patchapplication/octet-stream; name=v50-0003-Add-support-for-apply-at-prepare-time-to-built-i.patch
v50-0005-Support-2PC-txn-subscriber-tests.patchapplication/octet-stream; name=v50-0005-Support-2PC-txn-subscriber-tests.patch
v50-0006-Support-2PC-txn-Subscription-option.patchapplication/octet-stream; name=v50-0006-Support-2PC-txn-Subscription-option.patch
v50-0007-Tablesync-early-exit.patchapplication/octet-stream; name=v50-0007-Tablesync-early-exit.patch
v50-0008-Fix-apply-worker-empty-prepare.patchapplication/octet-stream; name=v50-0008-Fix-apply-worker-empty-prepare.patch
v50-0009-Fix-apply-worker-empty-prepare-dev-logs.patchapplication/octet-stream; name=v50-0009-Fix-apply-worker-empty-prepare-dev-logs.patch
#216osumi.takamichi@fujitsu.com
osumi.takamichi@fujitsu.com
osumi.takamichi@fujitsu.com
In reply to: Peter Smith (#215)
RE: [HACKERS] logical decoding of two-phase transactions

Hi

On Saturday, March 6, 2021 10:49 AM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v50*

When I read through the patch set, I found there is a
weird errmsg in apply_handle_begin_prepare(), which seems a mistake.

File : v50-0003-Add-support-for-apply-at-prepare-time-to-built-i.patch

+        * The gid must not already be prepared.
+        */
+       if (LookupGXact(begin_data.gid, begin_data.end_lsn, begin_data.committime))
+               ereport(ERROR,
+                               (errcode(ERRCODE_DUPLICATE_OBJECT),
+                               errmsg("transaction?identifier?\"%s\"?is?already?in?use",
+                                          begin_data.gid)));

Please fix this in a next update.

Best Regards,
Takamichi Osumi

#217Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Peter Smith (#215)

On Sat, Mar 6, 2021 at 7:19 AM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v50*

Few comments on the latest patch series:
=================================
1. I think we can extract the changes to make streaming optional with
2PC and infact you can start a separate thread for it.

2. I think we can get rid of table-sync early exit patch
(v50-0007-Tablesync-early-exit) as we have kept two_phase off from
tablesync worker. I agree that has its own independent value but it is
not required for this patch series.

3. Now, that we are not supporting streaming with two_pc option, do we
really need the first patch
(v50-0001-Refactor-spool-file-logic-in-worker.c)? I suggest to get rid
of the same unless it is really required. If we decide to remove this
patch, then remove the reference to apply_spooled_messages from 0008
patch.

v50-0005-Support-2PC-txn-subscriber-tests
4.
+###############################
+# Test cases involving DDL.
+###############################
+
+# TODO This can be added after we add functionality to replicate DDL
changes to subscriber.

We can remove this from the patch.

v50-0006-Support-2PC-txn-Subscription-option
5.
- /* Binary mode and streaming are only supported in v14 and higher */
+ /* Binary mode and streaming and Two phase commit are only supported
in v14 and higher */

It looks odd that only one of the option starts with capital letter
/Two/two. I suggest to two_phase.

v50-0008-Fix-apply-worker-empty-prepare
6. In 0008, the commit message lines are too long, which makes them
difficult to read. Try to keep them 75 chars long; this is generally
what I use, but you can try something else if you want, just not as
long as you have kept in this patch.

7.
+ /*
+ * A Problem:
+ *
..
Let's call this the "empty prepare" problem.
+ *
+ * The following code has a 2-part fix for that scenario.

No need to describe it in terms of problem and fix. You can say
something like: "This can lead to "empty prepare". We avoid this by
...."

--
With Regards,
Amit Kapila.

#218Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Amit Kapila (#217)
7 attachment(s)

Please find attached the latest patch set v51*

Differences from v50* are:

* Rebased to HEAD @ today

* Addresses following feedback comments:

From Osumi-san @ 2021-03-06 [ot]
- (27) Fixed. Patch 0003. Remove weird chars from the error message.

From Amit @ 2021-03-06 [ak]
- (29) Removed patch 0007 "tablesync early exit" from this patch set.
I started a new thread [early-exit] for this.
- (30) Removed patch 0001 "refactor spool file logic" from this patch set.
- (31) Fixed. Patch 0005 removed TODO from test code.
- (32) Fixed. Patch 0006 comment typo.
- (33) Fixed. Patch 0008 commit message lines were too long
- (34) Fixed. Patch 0008 comment reworded avoiding words like
"problem" and "fix"

-----
[ot] /messages/by-id/OSBPR01MB4888636EB9421C930FB39A19ED959@OSBPR01MB4888.jpnprd01.prod.outlook.com
[ak] /messages/by-id/CAA4eK1Jxu-3qxtkfA_dKoquQgGZVcB+k9_-yT5=9GDEW84TF+A@mail.gmail.com
[early-exit] /messages/by-id/CAHut+Pt39PbQs0SxT9RMM89aYiZoQ0Kw46YZSkKZwK8z5HOr3g@mail.gmail.com

Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

v51-0002-Add-support-for-apply-at-prepare-time-to-built-i.patchapplication/octet-stream; name=v51-0002-Add-support-for-apply-at-prepare-time-to-built-i.patch
v51-0001-Track-replication-origin-progress-for-rollbacks.patchapplication/octet-stream; name=v51-0001-Track-replication-origin-progress-for-rollbacks.patch
v51-0003-Add-two_phase-option-to-CREATE-REPLICATION-SLOT.patchapplication/octet-stream; name=v51-0003-Add-two_phase-option-to-CREATE-REPLICATION-SLOT.patch
v51-0005-Support-2PC-txn-Subscription-option.patchapplication/octet-stream; name=v51-0005-Support-2PC-txn-Subscription-option.patch
v51-0004-Support-2PC-txn-subscriber-tests.patchapplication/octet-stream; name=v51-0004-Support-2PC-txn-subscriber-tests.patch
v51-0006-Fix-apply-worker-empty-prepare.patchapplication/octet-stream; name=v51-0006-Fix-apply-worker-empty-prepare.patch
v51-0007-Fix-apply-worker-empty-prepare-dev-logs.patchapplication/octet-stream; name=v51-0007-Fix-apply-worker-empty-prepare-dev-logs.patch
#219Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Peter Smith (#218)

On Sun, Mar 7, 2021 at 7:35 AM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v51*

Few more comments on v51-0006-Fix-apply-worker-empty-prepare:
======================================================
1.
+/*
+ * A Prepare spoolfile hash entry. We create this entry in the
psf_hash. This is
+ * for maintaining a mapping between the name of the prepared
spoolfile, and the
+ * corresponding fileset handles of same.
+ */
+typedef struct PsfHashEntry
+{
+ char name[MAXPGPATH]; /* Hash key --- must be first */
+ bool allow_delete; /* ok to delete? */
+} PsfHashEntry;
+

IIUC, this hash table is used for two purposes in the patch: (a) to
check for the existence of the prepare spool file, where we anyway
check on disk if it is not found in the hash table, and (b) to allow
the prepare spool file to be removed on proc_exit.

I think we don't need the optimization provided by (a) because it will
be too rare a case to deserve any optimization, we might write a
comment in prepare_spoolfile_exists to indicate such an optimization.
For (b), we can use a simple list to track files to be removed on
proc_exit something like we do in CreateLockFile. I think avoiding
hash table usage will reduce the code and chances of bugs in this
area. It won't be easy to write a lot of automated tests to test this
functionality so it is better to avoid minor optimizations at this
stage.

2.
+ /*
+ * Replay/dispatch the spooled messages (including lastly, the PREPARE
+ * message).
+ */
+
+ ensure_transaction();

The part of the comment: "including lastly, the PREPARE message"
doesn't seem to fit here, because in this part of the code you are not
doing anything special for the PREPARE message. Neither are we in any
way verifying that the prepared message is replayed.

3.
+ /* create or find the prepare spoolfile entry in the psf_hash */
+ hentry = (PsfHashEntry *) hash_search(psf_hash,
+   path,
+   HASH_ENTER | HASH_FIND,
+   &found);
+
+ if (!found)
+ {
+ elog(DEBUG1, "Not found file \"%s\". Create it.", path);
+ psf_cur.vfd = PathNameOpenFile(path, O_RDWR | O_CREAT | O_TRUNC | PG_BINARY);
+ if (psf_cur.vfd < 0)
+ {
+ ereport(ERROR,
+ (errcode_for_file_access(),
+ errmsg("could not create file \"%s\": %m", path)));
+ }
+ memcpy(psf_cur.name, path, sizeof(psf_cur.name));
+ psf_cur.cur_offset = 0;
+ hentry->allow_delete = true;
+ }
+ else
+ {
+ /*
+ * Open the file and seek to the beginning because we always want to
+ * create/overwrite this file.
+ */
+ elog(DEBUG1, "Found file \"%s\". Overwrite it.", path);
+ psf_cur.vfd = PathNameOpenFile(path, O_RDWR | O_CREAT | O_TRUNC | PG_BINARY);
+ if (psf_cur.vfd < 0)
+ {
+ ereport(ERROR,
+ (errcode_for_file_access(),
+ errmsg("could not open file \"%s\": %m", path)));
+ }
+ memcpy(psf_cur.name, path, sizeof(psf_cur.name));
+ psf_cur.cur_offset = 0;
+ hentry->allow_delete = true;
+ }

Is it sufficient to check if the prepare spool file exists in the hash
table? Isn't it possible that, after a restart, the file exists on
disk but is not in the hash table? I guess this will change if you
address the first comment.

4.
@@ -754,9 +889,58 @@ apply_handle_prepare(StringInfo s)
{
LogicalRepPreparedTxnData prepare_data;

+ /*
+ * If we were using a psf spoolfile, then write the PREPARE as the final
+ * message. This prepare information will be used at commit_prepared time.
+ */
+ if (psf_cur.is_spooling)
+ {
+ PsfHashEntry *hentry;
+
+ /* Write the PREPARE info to the psf file. */
+ prepare_spoolfile_handler(LOGICAL_REP_MSG_PREPARE, s);
+
+ /*
+ * Flush the spoolfile, so changes can survive a restart.
+ */
+ FileSync(psf_cur.vfd, WAIT_EVENT_DATA_FILE_SYNC);

I think in an ideal world we only need to flush the spool file(s) when
the replication origin is advanced, because past that point, after a
restart, we won't get this data again. So, now, if the publisher sends
the data again after a restart because the origin on the subscriber was
not moved past this prepare, you need to overwrite the existing file,
which the patch is already doing, but I think it is better to add some
comments explaining this.

5. Can you please test some subtransaction cases (by having savepoints
for the prepared transaction) which pass through the spool file logic?
Something like below with maybe more savepoints.
postgres=# begin;
BEGIN
postgres=*# insert into t1 values(1);
INSERT 0 1
postgres=*# savepoint s1;
SAVEPOINT
postgres=*# insert into t1 values(2);
INSERT 0 1
postgres=*# prepare transaction 'foo';
PREPARE TRANSACTION

I don't see any obvious problem in such cases but it is better to test.

6. Patches 0003 and 0006 can be merged into patch 0002, as that will
enable complete functionality for 0002. I understand that you have
kept them separate for easier review, but I guess at this stage it is
better to merge them so that the complete functionality can be
reviewed.

--
With Regards,
Amit Kapila.

#220Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Peter Smith (#218)
5 attachment(s)

Please find attached the latest patch set v52*

Differences from v51* are:

* Rebased to HEAD @ today

* No code changes; only a merging of the v51 patches as requested [ak].

v52-0001 <== v51-0001 "track replication origin"
v52-0002 <== v51-0002 "add support for apply at prepare time" +
v51-0003 "add two phase option for create slot" + v51-0006 "fix apply
worker empty prepare"
v52-0003 <== v51-0004 "2pc tests"
v52-0004 <== v51-0005 "Subscription option"
v52-0005 <== v51-0007 "empty prepare extra logging"

-----
[ak] /messages/by-id/CAA4eK1+dO07RrQwfHAK5jDP9qiXik4-MVzy+coEG09shWTJFGg@mail.gmail.com

Kind Regards,
Peter Smith.
Fujitsu Australia


Attachments:

v52-0001-Track-replication-origin-progress-for-rollbacks.patch (application/octet-stream)
v52-0003-Support-2PC-txn-subscriber-tests.patch (application/octet-stream)
v52-0002-Add-support-for-apply-at-prepare-time-to-built-i.patch (application/octet-stream)
v52-0005-Fix-apply-worker-empty-prepare-dev-logs.patch (application/octet-stream)
v52-0004-Support-2PC-txn-Subscription-option.patch (application/octet-stream)
#221Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Amit Kapila (#219)

On Sun, Mar 7, 2021 at 3:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Sun, Mar 7, 2021 at 7:35 AM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v51*

Few more comments on v51-0006-Fix-apply-worker-empty-prepare:
======================================================
1.
+/*
+ * A Prepare spoolfile hash entry. We create this entry in the
psf_hash. This is
+ * for maintaining a mapping between the name of the prepared
spoolfile, and the
+ * corresponding fileset handles of same.
+ */
+typedef struct PsfHashEntry
+{
+ char name[MAXPGPATH]; /* Hash key --- must be first */
+ bool allow_delete; /* ok to delete? */
+} PsfHashEntry;
+

IIUC, this hash table is used for two purposes in the patch: (a) to
check for existence of the prepare spool file, where we anyway check it
on disk if not found in the hash table. (b) to allow the prepare spool
file to be removed on proc_exit.

I think we don't need the optimization provided by (a) because it will
be too rare a case to deserve any optimization, we might write a
comment in prepare_spoolfile_exists to indicate such an optimization.
For (b), we can use a simple list to track files to be removed on
proc_exit something like we do in CreateLockFile. I think avoiding
hash table usage will reduce the code and chances of bugs in this
area. It won't be easy to write a lot of automated tests to test this
functionality so it is better to avoid minor optimizations at this
stage.

Our data structure psf_hash also needs to be able to discover the
entry for a specific spool file and remove it. e.g. anything marked as
"allow_delete = false" (during prepare) must be able to be re-found
and removed from that structure at commit_prepared or
rollback_prepared time.

Looking at CreateLockFile code, I don't see that it is ever deleting
entries from its "lock_files" list on-the-fly, so it's not really a
fair comparison to say just use a List like CreateLockFile.

So, even though we (currently) only have a single data member
"allow_delete", I think the requirement to do a key lookup/delete
makes a HTAB a more appropriate data structure than a List.

------
Kind Regards,
Peter Smith.
Fujitsu Australia

#222Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Peter Smith (#221)

On Mon, Mar 8, 2021 at 10:04 AM Peter Smith <smithpb2250@gmail.com> wrote:

On Sun, Mar 7, 2021 at 3:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Sun, Mar 7, 2021 at 7:35 AM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v51*

Few more comments on v51-0006-Fix-apply-worker-empty-prepare:
======================================================
1.
+/*
+ * A Prepare spoolfile hash entry. We create this entry in the
psf_hash. This is
+ * for maintaining a mapping between the name of the prepared
spoolfile, and the
+ * corresponding fileset handles of same.
+ */
+typedef struct PsfHashEntry
+{
+ char name[MAXPGPATH]; /* Hash key --- must be first */
+ bool allow_delete; /* ok to delete? */
+} PsfHashEntry;
+

IIUC, this hash table is used for two purposes in the patch: (a) to
check for existence of the prepare spool file, where we anyway check it
on disk if not found in the hash table. (b) to allow the prepare spool
file to be removed on proc_exit.

I think we don't need the optimization provided by (a) because it will
be too rare a case to deserve any optimization, we might write a
comment in prepare_spoolfile_exists to indicate such an optimization.
For (b), we can use a simple list to track files to be removed on
proc_exit something like we do in CreateLockFile. I think avoiding
hash table usage will reduce the code and chances of bugs in this
area. It won't be easy to write a lot of automated tests to test this
functionality so it is better to avoid minor optimizations at this
stage.

Our data structure psf_hash also needs to be able to discover the
entry for a specific spool file and remove it. e.g. anything marked as
"allow_delete = false" (during prepare) must be able to be re-found
and removed from that structure at commit_prepared or
rollback_prepared time.

But, I think that is not reliable because after restart the entry
might not be present and we anyway need to check the presence of the
file on disk. Actually, you don't need any manipulation with list or
hash at commit_prepared or rollback_prepared, we should just remove
the entry for it at the prepare time and there should be an assert if
we find that entry in the in-memory structure.

Looking at CreateLockFile code, I don't see that it is ever deleting
entries from its "lock_files" list on-the-fly, so it's not really a
fair comparison to say just use a List like CreateLockFile.

Sure, but you can additionally traverse the list and find the required entry.

So, even though we (currently) only have a single data member
"allow_delete", I think the requirement to do a key lookup/delete
makes a HTAB a more appropriate data structure than a List.

Actually, that member is also not required at all because you just
need it till the time of prepare and then remove it.

--
With Regards,
Amit Kapila.

#223vignesh C
vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#220)

On Mon, Mar 8, 2021 at 7:17 AM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v52*

Few comments:

+logicalrep_read_begin_prepare(StringInfo in,
LogicalRepBeginPrepareData *begin_data)
+{
+       /* read fields */
+       begin_data->final_lsn = pq_getmsgint64(in);
+       if (begin_data->final_lsn == InvalidXLogRecPtr)
+               elog(ERROR, "final_lsn not set in begin message");
+       begin_data->end_lsn = pq_getmsgint64(in);
+       if (begin_data->end_lsn == InvalidXLogRecPtr)
+               elog(ERROR, "end_lsn not set in begin message");
+       begin_data->committime = pq_getmsgint64(in);
+       begin_data->xid = pq_getmsgint(in, 4);
+
+       /* read gid (copy it into a pre-allocated buffer) */
+       strcpy(begin_data->gid, pq_getmsgstring(in));
+}
In logicalrep_read_begin_prepare we validate final_lsn & end_lsn. But
this validation is not done in logicalrep_read_commit_prepared and
logicalrep_read_rollback_prepared. Should we keep it consistent?

@@ -170,5 +237,4 @@ extern void
logicalrep_write_stream_abort(StringInfo out, TransactionId xid,

TransactionId subxid);
extern void logicalrep_read_stream_abort(StringInfo in, TransactionId *xid,

TransactionId *subxid);
-
#endif /* LOGICAL_PROTO_H */
This change is not required.

@@ -242,15 +244,16 @@ create_replication_slot:
                                        $$ = (Node *) cmd;
                                }
                        /* CREATE_REPLICATION_SLOT slot TEMPORARY
LOGICAL plugin */
-                       | K_CREATE_REPLICATION_SLOT IDENT
opt_temporary K_LOGICAL IDENT create_slot_opt_list
+                       | K_CREATE_REPLICATION_SLOT IDENT
opt_temporary opt_two_phase K_LOGICAL IDENT create_slot_opt_list
                                {
                                        CreateReplicationSlotCmd *cmd;
                                        cmd =
makeNode(CreateReplicationSlotCmd);
                                        cmd->kind = REPLICATION_KIND_LOGICAL;
                                        cmd->slotname = $2;
                                        cmd->temporary = $3;
-                                       cmd->plugin = $5;
-                                       cmd->options = $6;
+                                       cmd->two_phase = $4;
+                                       cmd->plugin = $6;
+                                       cmd->options = $7;
                                        $$ = (Node *) cmd;
                                }
Should we document two_phase in the below section:
CREATE_REPLICATION_SLOT slot_name [ TEMPORARY ] { PHYSICAL [
RESERVE_WAL ] | LOGICAL output_plugin [ EXPORT_SNAPSHOT |
NOEXPORT_SNAPSHOT | USE_SNAPSHOT ] }
Create a physical or logical replication slot. See Section 27.2.6 for
more about replication slots.
+               while (AnyTablesyncInProgress())
+               {
+                       process_syncing_tables(begin_data.final_lsn);
+
+                       /* This latch is to prevent 100% CPU looping. */
+                       (void) WaitLatch(MyLatch,
+                                                        WL_LATCH_SET
| WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,
+                                                        1000L,
WAIT_EVENT_LOGICAL_SYNC_STATE_CHANGE);
+                       ResetLatch(MyLatch);
+               }
Should we have CHECK_FOR_INTERRUPTS inside the while loop?
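A self-contained sketch of the loop with such a check added (plain C;
check_for_interrupts, process_syncing_tables, and the counters here are
stand-ins for the real CHECK_FOR_INTERRUPTS/latch machinery, not the
patch's actual code):

```c
#include <assert.h>

/* Stand-ins for the real interrupt flag and tablesync state. */
static int interrupt_pending = 0;
static int tables_in_progress = 3;

static int check_for_interrupts(void) { return interrupt_pending; }
static void process_syncing_tables(void) { if (tables_in_progress > 0) tables_in_progress--; }

/*
 * Sketch of the wait loop with a CHECK_FOR_INTERRUPTS-style check at the
 * top of each iteration.  Returns the iteration count, or -1 when an
 * interrupt was pending (the real macro would ereport/abort instead).
 */
static int
wait_for_tablesync(void)
{
    int iterations = 0;

    while (tables_in_progress > 0)
    {
        if (check_for_interrupts())
            return -1;
        process_syncing_tables();
        iterations++;           /* WaitLatch()/ResetLatch() would pace this */
    }
    return iterations;
}
```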
+               if (begin_data.final_lsn < BiggestTablesyncLSN())
+               {
+                       char            psfpath[MAXPGPATH];
+
+                       /*
+                        * Create the spoolfile.
+                        */
+                       prepare_spoolfile_name(psfpath, sizeof(psfpath),
+
MyLogicalRepWorker->subid, begin_data.gid);
+                       prepare_spoolfile_create(psfpath);
We can make this a single-line comment.
+       if (!found)
+       {
+               elog(DEBUG1, "Not found file \"%s\". Create it.", path);
+               psf_cur.vfd = PathNameOpenFile(path, O_RDWR | O_CREAT
| O_TRUNC | PG_BINARY);
+               if (psf_cur.vfd < 0)
+               {
+                       ereport(ERROR,
+                                       (errcode_for_file_access(),
+                                        errmsg("could not create file
\"%s\": %m", path)));
+               }
+               memcpy(psf_cur.name, path, sizeof(psf_cur.name));
+               psf_cur.cur_offset = 0;
+               hentry->allow_delete = true;
+       }
+       else
+       {
+               /*
+                * Open the file and seek to the beginning because we
always want to
+                * create/overwrite this file.
+                */
+               elog(DEBUG1, "Found file \"%s\". Overwrite it.", path);
+               psf_cur.vfd = PathNameOpenFile(path, O_RDWR | O_CREAT
| O_TRUNC | PG_BINARY);
+               if (psf_cur.vfd < 0)
+               {
+                       ereport(ERROR,
+                                       (errcode_for_file_access(),
+                                        errmsg("could not open file
\"%s\": %m", path)));
+               }
+               memcpy(psf_cur.name, path, sizeof(psf_cur.name));
+               psf_cur.cur_offset = 0;
+               hentry->allow_delete = true;
+       }

Except for the elog message, the rest of the code is the same in both
the if and else branches; we can move the common code outside.
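A hedged sketch of the suggested refactoring (plain C with stand-in types;
PathNameOpenFile/ereport are replaced by a plain vfd parameter and a return
code, and the struct fields mirror the psf_cur usage in the quoted hunk):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define MAXPGPATH 1024

typedef struct
{
    int   vfd;
    char  name[MAXPGPATH];
    long  cur_offset;
} PsfFile;

/*
 * Common tail of the create/overwrite branches: only the debug message
 * differs, so it is chosen up front and everything else is shared.
 * 'vfd' stands in for the PathNameOpenFile(...O_RDWR|O_CREAT|O_TRUNC...)
 * result; a negative value simulates the open failing.
 */
static int
psf_setup(PsfFile *psf, const char *path, int found, int vfd)
{
    if (!found)
        fprintf(stderr, "DEBUG: file \"%s\" not found, creating it\n", path);
    else
        fprintf(stderr, "DEBUG: file \"%s\" found, overwriting it\n", path);

    if (vfd < 0)
        return -1;              /* caller would ereport(ERROR) here */

    psf->vfd = vfd;
    strncpy(psf->name, path, sizeof(psf->name) - 1);
    psf->name[sizeof(psf->name) - 1] = '\0';
    psf->cur_offset = 0;
    return 0;
}
```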

        LOGICAL_REP_MSG_TYPE = 'Y',
+       LOGICAL_REP_MSG_BEGIN_PREPARE = 'b',
+       LOGICAL_REP_MSG_PREPARE = 'P',
+       LOGICAL_REP_MSG_COMMIT_PREPARED = 'K',
+       LOGICAL_REP_MSG_ROLLBACK_PREPARED = 'r',
        LOGICAL_REP_MSG_STREAM_START = 'S',
        LOGICAL_REP_MSG_STREAM_END = 'E',
        LOGICAL_REP_MSG_STREAM_COMMIT = 'c',
-       LOGICAL_REP_MSG_STREAM_ABORT = 'A'
+       LOGICAL_REP_MSG_STREAM_ABORT = 'A',
+       LOGICAL_REP_MSG_STREAM_PREPARE = 'p'
 } LogicalRepMsgType;
As we add more features, we will have to add more message types, and
finding meaningful characters might become difficult. Should we start
using numeric values instead for newly added features?

Regards.
Vignesh

#224Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: vignesh C (#214)
4 attachment(s)

On Fri, Mar 5, 2021 at 9:25 PM vignesh C <vignesh21@gmail.com> wrote:

Thanks for the updated patch.
Few minor comments:

We should include two_phase in tab completion (tab-complete.c file
psql_completion(const char *text, int start, int end) function) :
postgres=# create subscription sub1 connection 'port=5441
dbname=postgres' publication pub1 with (
CONNECT COPY_DATA CREATE_SLOT ENABLED
SLOT_NAME SYNCHRONOUS_COMMIT

Updated.

+
+         <para>
+          It is not allowed to combine <literal>streaming</literal> set to
+          <literal>true</literal> and <literal>two_phase</literal> set to
+          <literal>true</literal>.
+         </para>
+
+        </listitem>
+       </varlistentry>
+       <varlistentry>
+        <term><literal>two_phase</literal> (<type>boolean</type>)</term>
+        <listitem>
+         <para>
+          Specifies whether two-phase commit is enabled for this subscription.
+          The default is <literal>false</literal>.
+         </para>
+
+         <para>
+          When two-phase commit is enabled then the decoded
transactions are sent
+          to the subscriber on the PREPARE TRANSACTION. By default,
the transaction
prepared on the publisher is decoded as a normal transaction at commit.
+         </para>
+
+         <para>
+          It is not allowed to combine <literal>two_phase</literal> set to
+          <literal>true</literal> and <literal>streaming</literal> set to
+          <literal>true</literal>.
+         </para>

It is not allowed to combine streaming set to true and two_phase set to true.
Should this be:
streaming option is not supported along with two_phase option.

Similarly here too:
It is not allowed to combine two_phase set to true and streaming set to true.
Should this be:
two_phase option is not supported along with streaming option.

Reworded this with a small change.

Few indentation issues are present, we can run pgindent:
+extern void logicalrep_write_prepare(StringInfo out, ReorderBufferTXN *txn,
+
XLogRecPtr prepare_lsn);
+extern void logicalrep_read_prepare(StringInfo in,
+
LogicalRepPreparedTxnData *prepare_data);
+extern void logicalrep_write_commit_prepared(StringInfo out,
ReorderBufferTXN* txn,
+
XLogRecPtr commit_lsn);

ReorderBufferTXN* should be ReorderBufferTXN *

Changed accordingly.

Created new patch v53:
* Rebased to HEAD (this resulted in removing patch 0001) and reduced
patch-set to 4 patches.
* Removed the changes that make "stream prepare" optional rather than
required. Will create a new patch and thread for this.

regards,
Ajin Cherian
Fujitsu Australia

Attachments:

v53-0001-Add-support-for-apply-at-prepare-time-to-built-i.patch (application/octet-stream)
v53-0004-Fix-apply-worker-empty-prepare-dev-logs.patch (application/octet-stream)
v53-0002-Support-2PC-txn-subscriber-tests.patch (application/octet-stream)
v53-0003-Support-2PC-txn-Subscription-option.patch (application/octet-stream)
#225Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Amit Kapila (#222)

On Mon, Mar 8, 2021 at 4:19 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Mon, Mar 8, 2021 at 10:04 AM Peter Smith <smithpb2250@gmail.com> wrote:

On Sun, Mar 7, 2021 at 3:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Sun, Mar 7, 2021 at 7:35 AM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v51*

Few more comments on v51-0006-Fix-apply-worker-empty-prepare:
======================================================
1.
+/*
+ * A Prepare spoolfile hash entry. We create this entry in the
psf_hash. This is
+ * for maintaining a mapping between the name of the prepared
spoolfile, and the
+ * corresponding fileset handles of same.
+ */
+typedef struct PsfHashEntry
+{
+ char name[MAXPGPATH]; /* Hash key --- must be first */
+ bool allow_delete; /* ok to delete? */
+} PsfHashEntry;
+

IIUC, this hash table is used for two purposes in the patch: (a) to
check for existence of the prepare spool file, where we anyway check it
on disk if not found in the hash table. (b) to allow the prepare spool
file to be removed on proc_exit.

I think we don't need the optimization provided by (a) because it will
be too rare a case to deserve any optimization, we might write a
comment in prepare_spoolfile_exists to indicate such an optimization.
For (b), we can use a simple list to track files to be removed on
proc_exit something like we do in CreateLockFile. I think avoiding
hash table usage will reduce the code and chances of bugs in this
area. It won't be easy to write a lot of automated tests to test this
functionality so it is better to avoid minor optimizations at this
stage.

Our data structure psf_hash also needs to be able to discover the
entry for a specific spool file and remove it. e.g. anything marked as
"allow_delete = false" (during prepare) must be able to be re-found
and removed from that structure at commit_prepared or
rollback_prepared time.

But, I think that is not reliable because after restart the entry
might not be present and we anyway need to check the presence of the
file on disk. Actually, you don't need any manipulation with list or
hash at commit_prepared or rollback_prepared, we should just remove
the entry for it at the prepare time and there should be an assert if
we find that entry in the in-memory structure.

Looking at CreateLockFile code, I don't see that it is ever deleting
entries from its "lock_files" list on-the-fly, so it's not really a
fair comparison to say just use a List like CreateLockFile.

Sure, but you can additionally traverse the list and find the required entry.

So, even though we (currently) only have a single data member
"allow_delete", I think the requirement to do a key lookup/delete
makes a HTAB a more appropriate data structure than a List.

Actually, that member is also not required at all because you just
need it till the time of prepare and then remove it.

OK, I plan to change like this.
- Now the whole hash simply means "delete-on-exit". If the key (aka
filename) exists, delete that file on exit. If not don't
- Remove the "allow_delete" member (as you say it can be redundant
using the new interpretation above)
- the *only* code that CREATES a key will be when
prepare_spoolfile_create is called from begin_prepare.
- at apply_handle_prepare time the key is REMOVED (so that file will
not be deleted in case of a restart / error before commit/rollback)
- at apply_handle_commit_prepared Assert(if key is found) because
prepare should have removed it; the psf file is always deleted.
- at apply_handle_rollback_prepared Assert(if key is found) because
prepare should have removed it; the psf file is always deleted.
- at proc-exit time, iterate and delete all the filenames (aka keys).
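The planned delete-on-exit semantics can be sketched in miniature (plain C;
a fixed-size array stands in for the HTAB, and the psf_* names here are
illustrative, not the patch's actual functions):

```c
#include <assert.h>
#include <string.h>

#define MAX_PSF   16
#define MAXPGPATH 256

/* Registry of spool files to delete on proc_exit: key present => delete. */
static char registry[MAX_PSF][MAXPGPATH];
static int  nregistered = 0;

/* Called from begin_prepare: the file becomes delete-on-exit. */
static void
psf_register(const char *name)
{
    strncpy(registry[nregistered], name, MAXPGPATH - 1);
    registry[nregistered][MAXPGPATH - 1] = '\0';
    nregistered++;
}

static int
psf_find(const char *name)
{
    for (int i = 0; i < nregistered; i++)
        if (strcmp(registry[i], name) == 0)
            return i;
    return -1;
}

/* Called at apply_handle_prepare: the file must now survive restarts. */
static int
psf_unregister(const char *name)
{
    int idx = psf_find(name);

    if (idx < 0)
        return 0;               /* commit/rollback_prepared would Assert here */
    if (idx != nregistered - 1)
        strcpy(registry[idx], registry[nregistered - 1]);
    nregistered--;
    return 1;
}

/* proc_exit hook: every remaining entry would be unlink()ed. */
static int
psf_exit_cleanup(void)
{
    int deleted = nregistered;

    nregistered = 0;
    return deleted;
}
```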

-----
Kind Regards,
Peter Smith.
Fujitsu Australia

#226Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Peter Smith (#225)

On Mon, Mar 8, 2021 at 1:26 PM Peter Smith <smithpb2250@gmail.com> wrote:

On Mon, Mar 8, 2021 at 4:19 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Mon, Mar 8, 2021 at 10:04 AM Peter Smith <smithpb2250@gmail.com> wrote:

On Sun, Mar 7, 2021 at 3:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Sun, Mar 7, 2021 at 7:35 AM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v51*

Few more comments on v51-0006-Fix-apply-worker-empty-prepare:
======================================================
1.
+/*
+ * A Prepare spoolfile hash entry. We create this entry in the
psf_hash. This is
+ * for maintaining a mapping between the name of the prepared
spoolfile, and the
+ * corresponding fileset handles of same.
+ */
+typedef struct PsfHashEntry
+{
+ char name[MAXPGPATH]; /* Hash key --- must be first */
+ bool allow_delete; /* ok to delete? */
+} PsfHashEntry;
+

IIUC, this hash table is used for two purposes in the patch: (a) to
check for existence of the prepare spool file, where we anyway check it
on disk if not found in the hash table. (b) to allow the prepare spool
file to be removed on proc_exit.

I think we don't need the optimization provided by (a) because it will
be too rare a case to deserve any optimization, we might write a
comment in prepare_spoolfile_exists to indicate such an optimization.
For (b), we can use a simple list to track files to be removed on
proc_exit something like we do in CreateLockFile. I think avoiding
hash table usage will reduce the code and chances of bugs in this
area. It won't be easy to write a lot of automated tests to test this
functionality so it is better to avoid minor optimizations at this
stage.

Our data structure psf_hash also needs to be able to discover the
entry for a specific spool file and remove it. e.g. anything marked as
"allow_delete = false" (during prepare) must be able to be re-found
and removed from that structure at commit_prepared or
rollback_prepared time.

But, I think that is not reliable because after restart the entry
might not be present and we anyway need to check the presence of the
file on disk. Actually, you don't need any manipulation with list or
hash at commit_prepared or rollback_prepared, we should just remove
the entry for it at the prepare time and there should be an assert if
we find that entry in the in-memory structure.

Looking at CreateLockFile code, I don't see that it is ever deleting
entries from its "lock_files" list on-the-fly, so it's not really a
fair comparison to say just use a List like CreateLockFile.

Sure, but you can additionally traverse the list and find the required entry.

So, even though we (currently) only have a single data member
"allow_delete", I think the requirement to do a key lookup/delete
makes a HTAB a more appropriate data structure than a List.

Actually, that member is also not required at all because you just
need it till the time of prepare and then remove it.

OK, I plan to change like this.
- Now the whole hash simply means "delete-on-exit". If the key (aka
filename) exists, delete that file on exit. If not don't
- Remove the "allow_delete" member (as you say it can be redundant
using the new interpretation above)
- the *only* code that CREATES a key will be when
prepare_spoolfile_create is called from begin_prepare.
- at apply_handle_prepare time the key is REMOVED (so that file will
not be deleted in case of a restart / error before commit/rollback)

So, the only real place where you need to perform any search is at the
prepare time and I think it should always be the first element if we
use the list here. Am I missing something? If not, I don't see why you
want to prefer HTAB over a simple list? You can remove the first
element and probably have an assert to confirm it is the correct
element (by checking the path) you are removing.

--
With Regards,
Amit Kapila.

#227vignesh C
vignesh C
vignesh21@gmail.com
In reply to: Ajin Cherian (#224)

On Mon, Mar 8, 2021 at 11:30 AM Ajin Cherian <itsajin@gmail.com> wrote:

On Fri, Mar 5, 2021 at 9:25 PM vignesh C <vignesh21@gmail.com> wrote:

Created new patch v53:

Thanks for the updated patch.
I noticed one issue: the publisher does not shut down normally in
the following case:
# Publisher steps
psql -d postgres -c "CREATE TABLE do_write(id serial primary key);"
psql -d postgres -c "INSERT INTO do_write VALUES(generate_series(1,10));"
psql -d postgres -c "CREATE PUBLICATION mypub FOR TABLE do_write;"

# Subscriber steps
psql -d postgres -p 9999 -c "CREATE TABLE do_write(id serial primary key);"
psql -d postgres -p 9999 -c "INSERT INTO do_write VALUES(1);" # to
cause a PK violation
psql -d postgres -p 9999 -c "CREATE SUBSCRIPTION mysub CONNECTION
'host=localhost port=5432 dbname=postgres' PUBLICATION mypub WITH
(two_phase = true);"

# prepare & commit prepared at publisher
psql -d postgres -c \
"begin; insert into do_write values (100); prepare transaction 'test1';"
psql -d postgres -c "commit prepared 'test1';"

Stop publisher:
./pg_ctl -D publisher stop
waiting for server to shut
down...............................................................
failed
pg_ctl: server does not shut down

This is because the following process does not exit:
postgres: walsender vignesh 127.0.0.1(41550) START_REPLICATION

It continuously loops at the below:
#0 0x00007f1c520d3bca in __libc_pread64 (fd=6, buf=0x555b1b3f7870,
count=8192, offset=0) at ../sysdeps/unix/sysv/linux/pread64.c:29
#1 0x0000555b1a8f6d20 in WALRead (state=0x555b1b3f1ce0,
buf=0x555b1b3f7870 "\n\321\002", startptr=16777216, count=8192, tli=1,
errinfo=0x7ffe693b78c0) at xlogreader.c:1116
#2 0x0000555b1ac8ce10 in logical_read_xlog_page
(state=0x555b1b3f1ce0, targetPagePtr=16777216, reqLen=8192,
targetRecPtr=23049936, cur_page=0x555b1b3f7870 "\n\321\002")
at walsender.c:837
#3 0x0000555b1a8f6040 in ReadPageInternal (state=0x555b1b3f1ce0,
pageptr=23044096, reqLen=5864) at xlogreader.c:608
#4 0x0000555b1a8f5849 in XLogReadRecord (state=0x555b1b3f1ce0,
errormsg=0x7ffe693b79c0) at xlogreader.c:329
#5 0x0000555b1ac8ff4a in XLogSendLogical () at walsender.c:2846
#6 0x0000555b1ac8f1e5 in WalSndLoop (send_data=0x555b1ac8ff0e
<XLogSendLogical>) at walsender.c:2289
#7 0x0000555b1ac8db2a in StartLogicalReplication (cmd=0x555b1b3b78b8)
at walsender.c:1206
#8 0x0000555b1ac8e4dd in exec_replication_command (
cmd_string=0x555b1b331670 "START_REPLICATION SLOT \"mysub\"
LOGICAL 0/0 (proto_version '2', two_phase 'on', publication_names
'\"mypub\"')") at walsender.c:1646
#9 0x0000555b1ad04460 in PostgresMain (argc=1, argv=0x7ffe693b7cc0,
dbname=0x555b1b35cc58 "postgres", username=0x555b1b35cc38 "vignesh")
at postgres.c:4323

I feel the publisher should shut down in this case.
Thoughts?

Regards,
Vignesh

#228Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: vignesh C (#227)

On Mon, Mar 8, 2021 at 4:20 PM vignesh C <vignesh21@gmail.com> wrote:

On Mon, Mar 8, 2021 at 11:30 AM Ajin Cherian <itsajin@gmail.com> wrote:

On Fri, Mar 5, 2021 at 9:25 PM vignesh C <vignesh21@gmail.com> wrote:

Created new patch v53:

Thanks for the updated patch.
I had noticed one issue, publisher does not get stopped normally in
the following case:
# Publisher steps
psql -d postgres -c "CREATE TABLE do_write(id serial primary key);"
psql -d postgres -c "INSERT INTO do_write VALUES(generate_series(1,10));"
psql -d postgres -c "CREATE PUBLICATION mypub FOR TABLE do_write;"

# Subscriber steps
psql -d postgres -p 9999 -c "CREATE TABLE do_write(id serial primary key);"
psql -d postgres -p 9999 -c "INSERT INTO do_write VALUES(1);" # to
cause a PK violation
psql -d postgres -p 9999 -c "CREATE SUBSCRIPTION mysub CONNECTION
'host=localhost port=5432 dbname=postgres' PUBLICATION mypub WITH
(two_phase = true);"

# prepare & commit prepared at publisher
psql -d postgres -c \
"begin; insert into do_write values (100); prepare transaction 'test1';"
psql -d postgres -c "commit prepared 'test1';"

Stop publisher:
./pg_ctl -D publisher stop
waiting for server to shut
down...............................................................
failed
pg_ctl: server does not shut down

This is because the following process does not exit:
postgres: walsender vignesh 127.0.0.1(41550) START_REPLICATION

It continuously loops at the below:

What happens if you don't set the two_phase option? If that also leads
to the same error then can you please also check this case on the
HEAD?

--
With Regards,
Amit Kapila.

#229Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Peter Smith (#220)

On Mon, Mar 8, 2021 at 7:17 AM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v52*

Few more comments:
==================
1.
/* CREATE_REPLICATION_SLOT slot TEMPORARY LOGICAL plugin */
- | K_CREATE_REPLICATION_SLOT IDENT opt_temporary K_LOGICAL IDENT
create_slot_opt_list
+ | K_CREATE_REPLICATION_SLOT IDENT opt_temporary opt_two_phase
K_LOGICAL IDENT create_slot_opt_list

I think the comment above can have TWO_PHASE option listed.

2.
+static void
+apply_handle_begin_prepare(StringInfo s)
+{
..
/*
+ * From now, until the handle_prepare we are spooling to the
+ * current psf.
+ */
+ psf_cur.is_spooling = true;
+ }
+ }
+
+ remote_final_lsn = begin_data.final_lsn;
+
+ in_remote_transaction = true;
+
+ pgstat_report_activity(STATE_RUNNING, NULL);

In case you are spooling the changes, you don't need to set
remote_final_lsn and in_remote_transaction. You probably only need to
do pgstat_report_activity.

3.
Similarly, you don't need to set remote_final_lsn as false in
apply_handle_prepare for the spooling case; rather, there should be an
Assert stating that remote_final_lsn is false.

4.
snprintf(path, MAXPGPATH, "pg_twophase/%u-%s.prep_changes", subid, gid);

I feel it is better to create these in pg_logical/twophase as that is
where we store other logical replication related files.

--
With Regards,
Amit Kapila.

#230vignesh C
vignesh C
vignesh21@gmail.com
In reply to: Amit Kapila (#228)

On Mon, Mar 8, 2021 at 6:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Mon, Mar 8, 2021 at 4:20 PM vignesh C <vignesh21@gmail.com> wrote:

On Mon, Mar 8, 2021 at 11:30 AM Ajin Cherian <itsajin@gmail.com> wrote:

On Fri, Mar 5, 2021 at 9:25 PM vignesh C <vignesh21@gmail.com> wrote:

Created new patch v53:

Thanks for the updated patch.
I had noticed one issue, publisher does not get stopped normally in
the following case:
# Publisher steps
psql -d postgres -c "CREATE TABLE do_write(id serial primary key);"
psql -d postgres -c "INSERT INTO do_write VALUES(generate_series(1,10));"
psql -d postgres -c "CREATE PUBLICATION mypub FOR TABLE do_write;"

# Subscriber steps
psql -d postgres -p 9999 -c "CREATE TABLE do_write(id serial primary key);"
psql -d postgres -p 9999 -c "INSERT INTO do_write VALUES(1);" # to
cause a PK violation
psql -d postgres -p 9999 -c "CREATE SUBSCRIPTION mysub CONNECTION
'host=localhost port=5432 dbname=postgres' PUBLICATION mypub WITH
(two_phase = true);"

# prepare & commit prepared at publisher
psql -d postgres -c \
"begin; insert into do_write values (100); prepare transaction 'test1';"
psql -d postgres -c "commit prepared 'test1';"

Stop publisher:
./pg_ctl -D publisher stop
waiting for server to shut
down...............................................................
failed
pg_ctl: server does not shut down

This is because the following process does not exit:
postgres: walsender vignesh 127.0.0.1(41550) START_REPLICATION

It continuously loops at the below:

What happens if you don't set the two_phase option? If that also leads
to the same error then can you please also check this case on the
HEAD?

It succeeds without the two_phase option.
I analyzed this issue further; the details are below.
The following code in the WalSndDone function handles walsender exit:
if (WalSndCaughtUp && sentPtr == replicatedPtr &&
!pq_is_send_pending())
{
QueryCompletion qc;

/* Inform the standby that XLOG streaming is done */
SetQueryCompletion(&qc, CMDTAG_COPY, 0);
EndCommand(&qc, DestRemote, false);
pq_flush();

proc_exit(0);
}

But with the two_phase option, replicatedPtr and sentPtr never
become the same:
(gdb) p /x replicatedPtr
$8 = 0x15faa70
(gdb) p /x sentPtr
$10 = 0x15fac50

Whereas without the two_phase option, replicatedPtr and sentPtr
become the same and the walsender exits:
(gdb) p /x sentPtr
$7 = 0x15fae10
(gdb) p /x replicatedPtr
$8 = 0x15fae10

I think with the two_phase option, replicatedPtr and sentPtr never
become the same, which causes this process to hang.
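The exit condition quoted above reduces to a small predicate; plugging in
the observed values shows why the walsender can never reach proc_exit(0)
in the two_phase case (self-contained illustration, not the actual
WalSndDone code):

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t XLogRecPtr;

/* Mirrors the guard in WalSndDone(): all three must hold to proc_exit(0). */
static int
walsnd_can_exit(int caught_up, XLogRecPtr sent, XLogRecPtr replicated,
                int send_pending)
{
    return caught_up && sent == replicated && !send_pending;
}
```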

Regards,
Vignesh

#231Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#223)

On Mon, Mar 8, 2021 at 4:58 PM vignesh C <vignesh21@gmail.com> wrote:

LOGICAL_REP_MSG_TYPE = 'Y',
+       LOGICAL_REP_MSG_BEGIN_PREPARE = 'b',
+       LOGICAL_REP_MSG_PREPARE = 'P',
+       LOGICAL_REP_MSG_COMMIT_PREPARED = 'K',
+       LOGICAL_REP_MSG_ROLLBACK_PREPARED = 'r',
LOGICAL_REP_MSG_STREAM_START = 'S',
LOGICAL_REP_MSG_STREAM_END = 'E',
LOGICAL_REP_MSG_STREAM_COMMIT = 'c',
-       LOGICAL_REP_MSG_STREAM_ABORT = 'A'
+       LOGICAL_REP_MSG_STREAM_ABORT = 'A',
+       LOGICAL_REP_MSG_STREAM_PREPARE = 'p'
} LogicalRepMsgType;
As we start adding more and more features, we will have to start
adding more message types, using meaningful characters might become
difficult. Should we start using numeric instead for the new feature
getting added?

This may or may not become a problem sometime in the future, but I
think the feedback is not really specific to the current patch set so
I am skipping it at this time.

If you want, maybe create a separate thread for it. Is that OK?

----
Kind Regards,
Peter Smith.
Fujitsu Australia

#232vignesh C
vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#231)

On Tue, Mar 9, 2021 at 9:14 AM Peter Smith <smithpb2250@gmail.com> wrote:

On Mon, Mar 8, 2021 at 4:58 PM vignesh C <vignesh21@gmail.com> wrote:

LOGICAL_REP_MSG_TYPE = 'Y',
+       LOGICAL_REP_MSG_BEGIN_PREPARE = 'b',
+       LOGICAL_REP_MSG_PREPARE = 'P',
+       LOGICAL_REP_MSG_COMMIT_PREPARED = 'K',
+       LOGICAL_REP_MSG_ROLLBACK_PREPARED = 'r',
LOGICAL_REP_MSG_STREAM_START = 'S',
LOGICAL_REP_MSG_STREAM_END = 'E',
LOGICAL_REP_MSG_STREAM_COMMIT = 'c',
-       LOGICAL_REP_MSG_STREAM_ABORT = 'A'
+       LOGICAL_REP_MSG_STREAM_ABORT = 'A',
+       LOGICAL_REP_MSG_STREAM_PREPARE = 'p'
} LogicalRepMsgType;
As we start adding more and more features, we will have to start
adding more message types, using meaningful characters might become
difficult. Should we start using numeric instead for the new feature
getting added?

This may or may not become a problem sometime in the future, but I
think the feedback is not really specific to the current patch set so
I am skipping it at this time.

If you want, maybe create it as a separate thread, Is it OK?

I was thinking of changing the newly added message types to something
like below:

LOGICAL_REP_MSG_TYPE = 'Y',
+       LOGICAL_REP_MSG_BEGIN_PREPARE = 1,
+       LOGICAL_REP_MSG_PREPARE = 2,
+       LOGICAL_REP_MSG_COMMIT_PREPARED = 3,
+       LOGICAL_REP_MSG_ROLLBACK_PREPARED = 4,
LOGICAL_REP_MSG_STREAM_START = 'S',
LOGICAL_REP_MSG_STREAM_END = 'E',
LOGICAL_REP_MSG_STREAM_COMMIT = 'c',
-       LOGICAL_REP_MSG_STREAM_ABORT = 'A'
+       LOGICAL_REP_MSG_STREAM_ABORT = 'A',
+       LOGICAL_REP_MSG_STREAM_PREPARE = 5
} LogicalRepMsgType;

Changing these values at a later time may become difficult as it can
break backward compatibility. But if you feel the existing values are
better we can keep it as it is and think of it later when we add more
message types.

Regards,
Vignesh

#233Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Peter Smith (#231)

On Tue, Mar 9, 2021 at 9:15 AM Peter Smith <smithpb2250@gmail.com> wrote:

On Mon, Mar 8, 2021 at 4:58 PM vignesh C <vignesh21@gmail.com> wrote:

LOGICAL_REP_MSG_TYPE = 'Y',
+       LOGICAL_REP_MSG_BEGIN_PREPARE = 'b',
+       LOGICAL_REP_MSG_PREPARE = 'P',
+       LOGICAL_REP_MSG_COMMIT_PREPARED = 'K',
+       LOGICAL_REP_MSG_ROLLBACK_PREPARED = 'r',
LOGICAL_REP_MSG_STREAM_START = 'S',
LOGICAL_REP_MSG_STREAM_END = 'E',
LOGICAL_REP_MSG_STREAM_COMMIT = 'c',
-       LOGICAL_REP_MSG_STREAM_ABORT = 'A'
+       LOGICAL_REP_MSG_STREAM_ABORT = 'A',
+       LOGICAL_REP_MSG_STREAM_PREPARE = 'p'
} LogicalRepMsgType;
As we start adding more and more features, we will have to start
adding more message types, using meaningful characters might become
difficult. Should we start using numeric instead for the new feature
getting added?

This may or may not become a problem sometime in the future, but I
think the feedback is not really specific to the current patch set so
I am skipping it at this time.

+1.

--
With Regards,
Amit Kapila.

#234Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Peter Smith (#220)
4 attachment(s)

Please find attached the latest patch set v54*

Differences from v53* are:

* Rebased to HEAD @ today

* Addresses some recent feedback issues for patch 0001

Feedback from Amit @ 7/March [ak]
- (36) Fixed. Comment about the psf replay.
- (37) Fixed. prepare_spoolfile_create, check file already exists (on
disk) instead of just checking HTAB.
- (38) Fixed. Added comment about potential overwrite of existing file.

Feedback from Vignesh @ 8/March [vc]
- (45) Fixed. Changed some comment to be single-line comments (e.g. if
they only apply to a single following stmt)
- (46) Fixed. prepare_spoolfile_create, refactored slightly to make
more use of common code in if/else
- (47) Skipped. This was feedback suggesting using ints instead of
character values for message type enum.

-----
[ak] /messages/by-id/CAA4eK1+dO07RrQwfHAK5jDP9qiXik4-MVzy+coEG09shWTJFGg@mail.gmail.com
[vc] /messages/by-id/CALDaNm29gOsCUtNkvHgqbbD1kbM8m67h4AqfmUWG1oTnfuPFxA@mail.gmail.com

Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

v54-0001-Add-support-for-apply-at-prepare-time-to-built-i.patchapplication/octet-stream; name=v54-0001-Add-support-for-apply-at-prepare-time-to-built-i.patch
v54-0002-Support-2PC-txn-subscriber-tests.patchapplication/octet-stream; name=v54-0002-Support-2PC-txn-subscriber-tests.patch
v54-0004-Fix-apply-worker-empty-prepare-dev-logs.patchapplication/octet-stream; name=v54-0004-Fix-apply-worker-empty-prepare-dev-logs.patch
v54-0003-Support-2PC-txn-Subscription-option.patchapplication/octet-stream; name=v54-0003-Support-2PC-txn-Subscription-option.patch
#235Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: vignesh C (#230)

On Mon, Mar 8, 2021 at 8:09 PM vignesh C <vignesh21@gmail.com> wrote:

On Mon, Mar 8, 2021 at 6:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

I think in case of two_phase option, replicatedPtr and sentPtr never
becomes the same which causes this process to hang.

The reason is that on the subscriber you have created a situation
(a PK violation) where the initial tablesync is not able to proceed,
and the apply worker is waiting for tablesync to complete, so it
is not able to process new messages. I think as soon as you remove the
duplicate row from the table it will be able to proceed.

Now, we can see a similar situation even in HEAD without 2PC though it
is a bit tricky to reproduce. Basically, when the tablesync worker is
in SUBREL_STATE_CATCHUP state and it has a lot of WAL to process then
the apply worker is just waiting for it to finish applying all the WAL
and won't process any message. So at that time, if you try to stop the
publisher you will see the same behavior. I have simulated a lot of
WAL processing by manually debugging the tablesync and not proceeding
for some time. You can also try by adding sleep after the tablesync
worker has set the state as SUBREL_STATE_CATCHUP.

So, I feel this is just an expected behavior and users need to
manually fix the situation where tablesync worker is not able to
proceed due to PK violation. Does this make sense?

--
With Regards,
Amit Kapila.

#236vignesh C
vignesh C
vignesh21@gmail.com
In reply to: Amit Kapila (#235)

On Tue, Mar 9, 2021 at 11:01 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Mon, Mar 8, 2021 at 8:09 PM vignesh C <vignesh21@gmail.com> wrote:

On Mon, Mar 8, 2021 at 6:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

I think in case of two_phase option, replicatedPtr and sentPtr never
becomes the same which causes this process to hang.

The reason is that because on subscriber you have created a situation
(PK violation) where it is not able to proceed with initial tablesync
and then the apply worker is waiting for tablesync to complete, so it
is not able to process new messages. I think as soon as you remove the
duplicate row from the table it will be able to proceed.

Now, we can see a similar situation even in HEAD without 2PC though it
is a bit tricky to reproduce. Basically, when the tablesync worker is
in SUBREL_STATE_CATCHUP state and it has a lot of WAL to process then
the apply worker is just waiting for it to finish applying all the WAL
and won't process any message. So at that time, if you try to stop the
publisher you will see the same behavior. I have simulated a lot of
WAL processing by manually debugging the tablesync and not proceeding
for some time. You can also try by adding sleep after the tablesync
worker has set the state as SUBREL_STATE_CATCHUP.

So, I feel this is just an expected behavior and users need to
manually fix the situation where tablesync worker is not able to
proceed due to PK violation. Does this make sense?

Thanks for the detailed explanation. This behavior looks similar to
the issue you described, so we can ignore it, as it is not caused by
this patch. I also noticed that if we resolve the PK violation by
deleting the offending record, the server is able to stop immediately
without any issue.

Regards,
Vignesh

#237vignesh C
vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#234)

On Tue, Mar 9, 2021 at 10:46 AM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v54*

Differences from v53* are:

* Rebased to HEAD @ today

* Addresses some recent feedback issues for patch 0001

Feedback from Amit @ 7/March [ak]
- (36) Fixed. Comment about the psf replay.
- (37) Fixed. prepare_spoolfile_create, check file already exists (on
disk) instead of just checking HTAB.
- (38) Fixed. Added comment about potential overwrite of existing file.

Feedback from Vignesh @ 8/March [vc]
- (45) Fixed. Changed some comment to be single-line comments (e.g. if
they only apply to a single following stmt)
- (46) Fixed. prepare_spoolfile_create, refactored slightly to make
more use of common code in if/else
- (47) Skipped. This was feedback suggesting using ints instead of
character values for message type enum.

Thanks for the updated patch.
Few comments:

+# Setup logical replication
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+       "CREATE PUBLICATION tap_pub");
+$node_publisher->safe_psql('postgres',
+       "ALTER PUBLICATION tap_pub ADD TABLE tab_full");

This can be changed to :
$node_publisher->safe_psql('postgres',
"CREATE PUBLICATION tap_pub FOR TABLE tab_full");

We can make similar changes in:
+# node_A (pub) -> node_B (sub)
+my $node_A_connstr = $node_A->connstr . ' dbname=postgres';
+$node_A->safe_psql('postgres',
+       "CREATE PUBLICATION tap_pub_A");
+$node_A->safe_psql('postgres',
+       "ALTER PUBLICATION tap_pub_A ADD TABLE tab_full");
+my $appname_B = 'tap_sub_B';
+$node_B->safe_psql('postgres', "
+       CREATE SUBSCRIPTION tap_sub_B
+       CONNECTION '$node_A_connstr application_name=$appname_B'
+       PUBLICATION tap_pub_A");
+
+# node_B (pub) -> node_C (sub)
+my $node_B_connstr = $node_B->connstr . ' dbname=postgres';
+$node_B->safe_psql('postgres',
+       "CREATE PUBLICATION tap_pub_B");
+$node_B->safe_psql('postgres',
+       "ALTER PUBLICATION tap_pub_B ADD TABLE tab_full");
+# rollback post the restart
+$node_publisher->safe_psql('postgres',
+       "ROLLBACK PREPARED 'test_prepared_tab';");
+$node_publisher->poll_query_until('postgres', $caughtup_query)
+       or die "Timed out while waiting for subscriber to catch up";
+
+# check inserts are rolled back
+$result = $node_subscriber->safe_psql('postgres',
+       "SELECT count(*) FROM tab_full where a IN (12,13);");
+is($result, qq(0), 'Rows inserted via 2PC are visible on the subscriber');

"Rows inserted via 2PC are visible on the subscriber"
should be something like:
"Rows rolled back are not on the subscriber"

git diff --check
src/backend/replication/logical/worker.c:3704: trailing whitespace.

Regards,
Vignesh

#238Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#223)

On Mon, Mar 8, 2021 at 4:58 PM vignesh C <vignesh21@gmail.com> wrote:

+               while (AnyTablesyncInProgress())
+               {
+                       process_syncing_tables(begin_data.final_lsn);
+
+                       /* This latch is to prevent 100% CPU looping. */
+                       (void) WaitLatch(MyLatch,
+                                        WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,
+                                        1000L,
+                                        WAIT_EVENT_LOGICAL_SYNC_STATE_CHANGE);
+                       ResetLatch(MyLatch);
+               }
Should we have CHECK_FOR_INTERRUPTS inside the while loop?

The process_syncing_tables will end up in the
process_syncing_tables_for_apply() function. And in that function IIUC
the apply worker is spending most of the time waiting for the
tablesync to achieve SYNCDONE state.
See wait_for_relation_state_change(rstate->relid, SUBREL_STATE_SYNCDONE);

Now, notice the wait_for_relation_state_change already has
CHECK_FOR_INTERRUPTS();

So, AFAIK it isn't necessary to put another CHECK_FOR_INTERRUPTS at
the outer loop.

Thoughts?

------
Kind Regards,
Peter Smith.
Fujitsu Australia.

#239Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Peter Smith (#238)

On Tue, Mar 9, 2021 at 3:02 PM Peter Smith <smithpb2250@gmail.com> wrote:

On Mon, Mar 8, 2021 at 4:58 PM vignesh C <vignesh21@gmail.com> wrote:

+               while (AnyTablesyncInProgress())
+               {
+                       process_syncing_tables(begin_data.final_lsn);
+
+                       /* This latch is to prevent 100% CPU looping. */
+                       (void) WaitLatch(MyLatch,
+                                        WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,
+                                        1000L,
+                                        WAIT_EVENT_LOGICAL_SYNC_STATE_CHANGE);
+                       ResetLatch(MyLatch);
+               }
Should we have CHECK_FOR_INTERRUPTS inside the while loop?

The process_syncing_tables will end up in the
process_syncing_tables_for_apply() function. And in that function IIUC
the apply worker is spending most of the time waiting for the
tablesync to achieve SYNCDONE state.
See wait_for_relation_state_change(rstate->relid, SUBREL_STATE_SYNCDONE);

But, I think for large copy, it won't wait in that state because the
tablesync worker will still be in SUBREL_STATE_DATASYNC state and we
wait for SUBREL_STATE_SYNCDONE state only after the initial copy is
finished. So, I think it is a good idea to call CHECK_FOR_INTERRUPTS
in this loop.

--
With Regards,
Amit Kapila.

#240Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: vignesh C (#223)
4 attachment(s)

On Mon, Mar 8, 2021 at 4:59 PM vignesh C <vignesh21@gmail.com> wrote:

On Mon, Mar 8, 2021 at 7:17 AM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v52*

Few comments:

+logicalrep_read_begin_prepare(StringInfo in,
LogicalRepBeginPrepareData *begin_data)
+{
+       /* read fields */
+       begin_data->final_lsn = pq_getmsgint64(in);
+       if (begin_data->final_lsn == InvalidXLogRecPtr)
+               elog(ERROR, "final_lsn not set in begin message");
+       begin_data->end_lsn = pq_getmsgint64(in);
+       if (begin_data->end_lsn == InvalidXLogRecPtr)
+               elog(ERROR, "end_lsn not set in begin message");
+       begin_data->committime = pq_getmsgint64(in);
+       begin_data->xid = pq_getmsgint(in, 4);
+
+       /* read gid (copy it into a pre-allocated buffer) */
+       strcpy(begin_data->gid, pq_getmsgstring(in));
+}
In logicalrep_read_begin_prepare we validate final_lsn & end_lsn. But
this validation is not done in logicalrep_read_commit_prepared and
logicalrep_read_rollback_prepared. Should we keep it consistent?

Updated.

@@ -170,5 +237,4 @@ extern void logicalrep_write_stream_abort(StringInfo out, TransactionId xid,
                                          TransactionId subxid);
 extern void logicalrep_read_stream_abort(StringInfo in, TransactionId *xid,
                                          TransactionId *subxid);
-
 #endif                          /* LOGICAL_PROTO_H */
This change is not required.

Removed.

@@ -242,15 +244,16 @@ create_replication_slot:
                                        $$ = (Node *) cmd;
                                }
                        /* CREATE_REPLICATION_SLOT slot TEMPORARY LOGICAL plugin */
-                       | K_CREATE_REPLICATION_SLOT IDENT opt_temporary K_LOGICAL IDENT create_slot_opt_list
+                       | K_CREATE_REPLICATION_SLOT IDENT opt_temporary opt_two_phase K_LOGICAL IDENT create_slot_opt_list
                                {
                                        CreateReplicationSlotCmd *cmd;
                                        cmd = makeNode(CreateReplicationSlotCmd);
                                        cmd->kind = REPLICATION_KIND_LOGICAL;
                                        cmd->slotname = $2;
                                        cmd->temporary = $3;
-                                       cmd->plugin = $5;
-                                       cmd->options = $6;
+                                       cmd->two_phase = $4;
+                                       cmd->plugin = $6;
+                                       cmd->options = $7;
                                        $$ = (Node *) cmd;
                                }
Should we document two_phase in the below section:
CREATE_REPLICATION_SLOT slot_name [ TEMPORARY ] { PHYSICAL [
RESERVE_WAL ] | LOGICAL output_plugin [ EXPORT_SNAPSHOT |
NOEXPORT_SNAPSHOT | USE_SNAPSHOT ] }
Create a physical or logical replication slot. See Section 27.2.6 for
more about replication slots.

Updated in protocol.sgml as well as the comment above.

regards,
Ajin Cherian
Fujitsu Australia

Attachments:

v55-0001-Add-support-for-apply-at-prepare-time-to-built-i.patchapplication/octet-stream; name=v55-0001-Add-support-for-apply-at-prepare-time-to-built-i.patch
v55-0004-Fix-apply-worker-empty-prepare-dev-logs.patchapplication/octet-stream; name=v55-0004-Fix-apply-worker-empty-prepare-dev-logs.patch
v55-0003-Support-2PC-txn-Subscription-option.patchapplication/octet-stream; name=v55-0003-Support-2PC-txn-Subscription-option.patch
v55-0002-Support-2PC-txn-subscriber-tests.patchapplication/octet-stream; name=v55-0002-Support-2PC-txn-subscriber-tests.patch
#241Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Ajin Cherian (#240)

On Tue, Mar 9, 2021 at 3:22 PM Ajin Cherian <itsajin@gmail.com> wrote:

Few comments:
==================

1.
+/*
+ * Handle the PREPARE spoolfile (if any)
+ *
+ * It can be necessary to redirect the PREPARE messages to a spoolfile (see
+ * apply_handle_begin_prepare) and then replay them back at the COMMIT PREPARED
+ * time. If needed, this is the common function to do that file redirection.
+ *

I think the last sentence ("If needed, this is the ..." in the above
comments is not required.

2.
+prepare_spoolfile_exists(char *path)
+{
+ bool found;
+
+ File fd = PathNameOpenFile(path, O_RDONLY | PG_BINARY);
+
+ found = fd >= 0;
+ if (fd >= 0)
+ FileClose(fd);

Can we avoid using bool variable in the above code with something like below?

File fd = PathNameOpenFile(path, O_RDONLY | PG_BINARY);

if (fd >= 0)
{
FileClose(fd);
return true;
}

return false;

3. In prepare_spoolfile_replay_messages(), it is better to free the
memory allocated for temporary strings buffer and s2.

4.
+ /* check if the file already exists. */
+ file_found = prepare_spoolfile_exists(path);
+
+ if (!file_found)
+ {
+ elog(DEBUG1, "Not found file \"%s\". Create it.", path);
+ psf_cur.vfd = PathNameOpenFile(path, O_RDWR | O_CREAT | O_TRUNC | PG_BINARY);
+ if (psf_cur.vfd < 0)
+ ereport(ERROR,
+ (errcode_for_file_access(),
+ errmsg("could not create file \"%s\": %m", path)));
+ }
+ else
+ {
+ /*
+ * Open the file and seek to the beginning because we always want to
+ * create/overwrite this file.
+ */
+ elog(DEBUG1, "Found file \"%s\". Overwrite it.", path);
+ psf_cur.vfd = PathNameOpenFile(path, O_RDWR | O_CREAT | O_TRUNC | PG_BINARY);
+ if (psf_cur.vfd < 0)
+ ereport(ERROR,
+ (errcode_for_file_access(),
+ errmsg("could not open file \"%s\": %m", path)));
+ }

Here, whether the file exists or not you are using the same flags to
open it, which seems correct to me, but the code looks a bit odd. Why
do we even bother to check whether it exists in this case? Is it just
for the DEBUG message? If so, I am not sure that is worth it. I am
also thinking, why not have a function prepare_spoolfile_open, similar
to *_close, and call it from all the places, with a mode indicating
whether you want to create or open the file.

5. I think prepare_spoolfile_close can be extended to take PsfFile as
input and then it can be also used from
prepare_spoolfile_replay_messages.

6. I think we should also write some commentary about prepared
transactions atop of worker.c as we have done for streamed
transactions.

--
With Regards,
Amit Kapila.

#242Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Amit Kapila (#241)
2 attachment(s)

I ran a 5 cascaded setup of pub-subs on the latest patchset which starts
pgbench on the first server and waits till the data on the fifth server
matches the first.
This is based on a test script created by Erik Rijkers. The tests run fine
and the 5th server achieves data consistency in around a minute.
Attaching the test script and the test run log.

regards,
Ajin Cherian
Fujitsu Australia

Attachments:

erik_5_cascade.shtext/x-sh; charset=US-ASCII; name=erik_5_cascade.sh
erik_5_cascade_run.logapplication/octet-stream; name=erik_5_cascade_run.log
#243Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Amit Kapila (#241)

On Tue, Mar 9, 2021 at 9:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Mar 9, 2021 at 3:22 PM Ajin Cherian <itsajin@gmail.com> wrote:

Few comments:
==================

3. In prepare_spoolfile_replay_messages(), it is better to free the
memory allocated for temporary strings buffer and s2.

I guess this was suggested because it is what the
apply_handle_stream_commit() function does for very similar code.
But the same approach cannot work for the *_replay_messages()
function, because those buffers are allocated in TopTransactionContext
and are already freed as a side-effect when the last psf message (the
LOGICAL_REP_MSG_PREPARE) is replayed/dispatched, ending the
transaction. So attempting to free them again causes a segmentation
fault (I already fixed this exact problem last week, when the pfree
code was still present).

5. I think prepare_spoolfile_close can be extended to take PsfFile as
input and then it can be also used from
prepare_spoolfile_replay_messages.

No, the *_close() is intended only for ending the "current" psf (the
global psf_cur) which was being spooled. The function comment says the
same. The *_close() is paired with the *_create() which created the
psf_cur.

Whereas, the replay fd closed at commit time is just a locally opened
fd unrelated to psf_cur. This close is deliberately self-contained in
the *_replay_messages() function, which is not dissimilar to what the
other streaming spool file code does - e.g. notice that the
apply_handle_stream_commit function simply closes its own fd using
BufFileClose; it doesn't delegate to stream_close_file() to do it.

------
Kind Regards,
Peter Smith.
Fujitsu Australia

#244Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Peter Smith (#234)
4 attachment(s)

Please find attached the latest patch set v56*

Differences from v55* are:

* Rebased to HEAD @ today

* Addresses the following feedback issues:

(35) [ak-0307] Skipped. Suggestion to replace HTAB with List for
tracking which psf files to delete at proc-exit. Although the idea had
merit at the time, it turned out that, due to a separate bugfix from a
colleague, it was necessary that we also know the count of psf files
still yet to be replayed. This count was easy to obtain using the
existing HTAB entries, but the List entry would already have been
deleted at prepare time, so the same could not be done easily if we
changed to a List. So we will keep the HTAB instead of a List for now.

(44) [vc-0308] Fixed. Add CHECK_FOR_INTERRUPTS() to apply worker loop.

(51) [ak-0308] Fixed. New location for psf files "pg_logical/twophase".

(54) [vc-0309] Fixed. Change rollback test description text.

(55) [ak-0309] Fixed. Change to comment text of prepare_spoolfile_handler.

(56) [ak-0309] Fixed. Remove boolean variable from prepare_spoolfile_exists.

(57) [ak-0309] Skipped. Suggestion to pfree memory; it is already freed.

(58) [ak-0309] Fixed. Common code for found/not-found psf at *_create() time.

(59) [ak-0309] Skipped. Suggestion to use *_close() from *_replay();
not compatible with intent.

(60) [ak-0309] Fixed. General comment about PSF added top of worker.c

-----
[vc-0308] /messages/by-id/CALDaNm29gOsCUtNkvHgqbbD1kbM8m67h4AqfmUWG1oTnfuPFxA@mail.gmail.com
[vc-0309] /messages/by-id/CALDaNm0QuncAis5OqtjzOxAPTZRn545JLqfjFEJwyRjUH-XvEw@mail.gmail.com
[ak-0307] /messages/by-id/CAA4eK1+dO07RrQwfHAK5jDP9qiXik4-MVzy+coEG09shWTJFGg@mail.gmail.com
[ak-0308] /messages/by-id/CAA4eK1+oSUU77T92FueDJWsp=FjTroNaNC-K45Dgdr7f18aBFA@mail.gmail.com
[ak-0309] /messages/by-id/CAA4eK1Jra658uuT8zo1DcZLzpNvo4oeorMcCuSeyY2zvr3_KBA@mail.gmail.com

Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

v56-0003-Support-2PC-txn-Subscription-option.patchapplication/octet-stream; name=v56-0003-Support-2PC-txn-Subscription-option.patch
v56-0004-Fix-apply-worker-empty-prepare-dev-logs.patchapplication/octet-stream; name=v56-0004-Fix-apply-worker-empty-prepare-dev-logs.patch
v56-0002-Support-2PC-txn-subscriber-tests.patchapplication/octet-stream; name=v56-0002-Support-2PC-txn-subscriber-tests.patch
v56-0001-Add-support-for-apply-at-prepare-time-to-built-i.patchapplication/octet-stream; name=v56-0001-Add-support-for-apply-at-prepare-time-to-built-i.patch
#245Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Peter Smith (#244)
4 attachment(s)

Please find attached the latest patch set v57*

Differences from v56* are:

* Rebased to HEAD @ today

* Addresses the following feedback issues:

(24) [vc-0305] Done. Ran pgindent for all patch 0001 source files.

(49) [ak-0308] Fixed. In apply_handle_begin_prepare, don't set
in_remote_transaction if psf spooling

(50) [ak-0308] Fixed. In apply_handle_prepare, assert
!in_remote_transaction if psf spooling.

(52) [vc-0309] Done. Patch 0002. Simplify the way test 020 creates the
publication.

(53) [vc-0309] Done. Patch 0002. Simplify the way test 022 creates the
publication.

-----
[vc-0305] /messages/by-id/CALDaNm1rRG2EUus+mFrqRzEshZwJZtxja0rn_n3qXGAygODfOA@mail.gmail.com
[vc-0309] /messages/by-id/CALDaNm0QuncAis5OqtjzOxAPTZRn545JLqfjFEJwyRjUH-XvEw@mail.gmail.com
[ak-0308] /messages/by-id/CAA4eK1+oSUU77T92FueDJWsp=FjTroNaNC-K45Dgdr7f18aBFA@mail.gmail.com

Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

v56-0001-Add-support-for-apply-at-prepare-time-to-built-i.patchapplication/octet-stream; name=v56-0001-Add-support-for-apply-at-prepare-time-to-built-i.patch
v56-0002-Support-2PC-txn-subscriber-tests.patchapplication/octet-stream; name=v56-0002-Support-2PC-txn-subscriber-tests.patch
v56-0004-Fix-apply-worker-empty-prepare-dev-logs.patchapplication/octet-stream; name=v56-0004-Fix-apply-worker-empty-prepare-dev-logs.patch
v56-0003-Support-2PC-txn-Subscription-option.patchapplication/octet-stream; name=v56-0003-Support-2PC-txn-Subscription-option.patch
#246Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Peter Smith (#245)
4 attachment(s)

On Thu, Mar 11, 2021 at 12:46 PM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v57*

Differences from v56* are:

* Rebased to HEAD @ today

* Addresses the following feedback issues:

(24) [vc-0305] Done. Ran pgindent for all patch 0001 source files.

(49) [ak-0308] Fixed. In apply_handle_begin_prepare, don't set
in_remote_transaction if psf spooling

(50) [ak-0308] Fixed. In apply_handle_prepare, assert
!in_remote_transaction if psf spooling.

(52) [vc-0309] Done. Patch 0002. Simplify the way test 020 creates the
publication.

(53) [vc-0309] Done. Patch 0002. Simplify the way test 022 creates the
publication.

-----
[vc-0305] /messages/by-id/CALDaNm1rRG2EUus+mFrqRzEshZwJZtxja0rn_n3qXGAygODfOA@mail.gmail.com
[vc-0309] /messages/by-id/CALDaNm0QuncAis5OqtjzOxAPTZRn545JLqfjFEJwyRjUH-XvEw@mail.gmail.com
[ak-0308] /messages/by-id/CAA4eK1+oSUU77T92FueDJWsp=FjTroNaNC-K45Dgdr7f18aBFA@mail.gmail.com

Kind Regards,
Peter Smith.
Fujitsu Australia

Oops. I posted the wrong patch set in my previous email.

Here are the correct ones for v57*.

Sorry for any confusion.

Attachments:

v57-0002-Support-2PC-txn-subscriber-tests.patchapplication/octet-stream; name=v57-0002-Support-2PC-txn-subscriber-tests.patch
v57-0001-Add-support-for-apply-at-prepare-time-to-built-i.patchapplication/octet-stream; name=v57-0001-Add-support-for-apply-at-prepare-time-to-built-i.patch
v57-0004-Fix-apply-worker-empty-prepare-dev-logs.patchapplication/octet-stream; name=v57-0004-Fix-apply-worker-empty-prepare-dev-logs.patch
v57-0003-Support-2PC-txn-Subscription-option.patchapplication/octet-stream; name=v57-0003-Support-2PC-txn-Subscription-option.patch
#247vignesh C
vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#246)

On Thu, Mar 11, 2021 at 7:20 AM Peter Smith <smithpb2250@gmail.com> wrote:

On Thu, Mar 11, 2021 at 12:46 PM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v57*

Differences from v56* are:

* Rebased to HEAD @ today

* Addresses the following feedback issues:

(24) [vc-0305] Done. Ran pgindent for all patch 0001 source files.

(49) [ak-0308] Fixed. In apply_handle_begion_prepare, don't set
in_remote_transaction if psf spooling

(50) [ak-0308] Fixed. In apply_handle_prepare, assert
!in_remote_transaction if psf spooling.

(52) [vc-0309] Done. Patch 0002. Simplify the way test 020 creates the
publication.

(53) [vc-0309] Done. Patch 0002. Simplify the way test 022 creates the
publication.

-----
[vc-0305] /messages/by-id/CALDaNm1rRG2EUus+mFrqRzEshZwJZtxja0rn_n3qXGAygODfOA@mail.gmail.com
[vc-0309] /messages/by-id/CALDaNm0QuncAis5OqtjzOxAPTZRn545JLqfjFEJwyRjUH-XvEw@mail.gmail.com
[ak-0308] /messages/by-id/CAA4eK1+oSUU77T92FueDJWsp=FjTroNaNC-K45Dgdr7f18aBFA@mail.gmail.com

Kind Regards,
Peter Smith.
Fujitsu Australia

Oops. I posted the wrong patch set in my previous email.

Here are the correct ones for v57*.

Thanks for the updated patch, few comments:
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -67,7 +67,8 @@ parse_subscription_options(List *options,
                                                   char **synchronous_commit,
                                                   bool *refresh,
                                                   bool *binary_given, bool *binary,
-                                                  bool *streaming_given, bool *streaming)
+                                                  bool *streaming_given, bool *streaming,
+                                                  bool *twophase_given, bool *twophase)

I felt twophase_given could be a local variable; it need not be added
as a function parameter, as it is not used outside the function.

The corresponding changes can be done here too:
@@ -358,6 +402,8 @@ CreateSubscription(CreateSubscriptionStmt *stmt, bool isTopLevel)
        bool            copy_data;
        bool            streaming;
        bool            streaming_given;
+       bool            twophase;
+       bool            twophase_given;
        char       *synchronous_commit;
        char       *conninfo;
        char       *slotname;
@@ -382,7 +428,8 @@ CreateSubscription(CreateSubscriptionStmt *stmt, bool isTopLevel)
                                                   &synchronous_commit,
                                                   NULL,        /* no "refresh" */
                                                   &binary_given, &binary,
-                                                  &streaming_given, &streaming);
+                                                  &streaming_given, &streaming,
+                                                  &twophase_given, &twophase);

--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -2930,6 +2930,7 @@ maybe_reread_subscription(void)
                strcmp(newsub->slotname, MySubscription->slotname) != 0 ||
                newsub->binary != MySubscription->binary ||
                newsub->stream != MySubscription->stream ||
+               newsub->twophase != MySubscription->twophase ||
                !equal(newsub->publications, MySubscription->publications))
I think this is not possible; should this be an assert?

@@ -252,6 +254,16 @@ parse_output_parameters(List *options, uint32 *protocol_version,
                        *enable_streaming = defGetBoolean(defel);
                }
+               else if (strcmp(defel->defname, "two_phase") == 0)
+               {
+                       if (twophase_given)
+                               ereport(ERROR,
+                                               (errcode(ERRCODE_SYNTAX_ERROR),
+                                                errmsg("conflicting or redundant options")));
+                       twophase_given = true;
+
+                       *enable_twophase = defGetBoolean(defel);
+               }

We have the following check in parse_subscription_options:

    if (twophase && *twophase_given && *twophase)
    {
        if (streaming && *streaming_given && *streaming)
            ereport(ERROR,
                    (errcode(ERRCODE_SYNTAX_ERROR),
                     errmsg("%s and %s are mutually exclusive options",
                            "two_phase = true", "streaming = true")));
    }

Should we have a similar check in parse_output_parameters?

Regards,
Vignesh

#248Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#247)
4 attachment(s)

Please find attached the latest patch set v58*

Differences from v57* are:

* Rebased to HEAD @ today

* Addresses the following feedback issues:

(15) [ak-0301] Done. DROP SUBSCRIPTION cleans up any psf files related
to the subscription

* Bugs fixed:

- the psf proc-exit handler is now only registered for apply workers

- the apply_handle_type was missing a call to prepare_spoolfile_handler

-----
[ak-0301] /messages/by-id/CAA4eK1J=i16+DVpdkBjzgWQazYwVdcMJWQF0RAeCgLkCxm40=A@mail.gmail.com

Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

v58-0004-Fix-apply-worker-empty-prepare-dev-logs.patch (application/octet-stream)
v58-0002-Support-2PC-txn-subscriber-tests.patch (application/octet-stream)
v58-0003-Support-2PC-txn-Subscription-option.patch (application/octet-stream)
v58-0001-Add-support-for-apply-at-prepare-time-to-built-i.patch (application/octet-stream)
#249Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#247)

On Fri, Mar 12, 2021 at 4:07 PM vignesh C <vignesh21@gmail.com> wrote:

Hi Vignesh,

Thanks for the review comments.

But can you please resend it with each feedback enumerated as 1. 2.
3., or have some other clear separation for each comment.

(Because everything is mushed together I am not 100% sure if your
comment text applies to the code above or below it)

TIA.

----
Kind Regards,
Peter Smith.
Fujitsu Australia

#250vignesh C
vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#249)

On Fri, Mar 12, 2021 at 2:29 PM Peter Smith <smithpb2250@gmail.com> wrote:

On Fri, Mar 12, 2021 at 4:07 PM vignesh C <vignesh21@gmail.com> wrote:

Hi Vignesh,

Thanks for the review comments.

But can you please resend it with each feedback enumerated as 1. 2.
3., or have some other clear separation for each comment.

(Because everything is mushed together I am not 100% sure if your
comment text applies to the code above or below it)

1) I feel twophase_given can be a local variable; it need not be added
as a function parameter since it is not used outside the function.
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -67,7 +67,8 @@ parse_subscription_options(List *options,
                                                   char **synchronous_commit,
                                                   bool *refresh,
                                                   bool *binary_given, bool *binary,
-                                                  bool *streaming_given, bool *streaming)
+                                                  bool *streaming_given, bool *streaming,
+                                                  bool *twophase_given, bool *twophase)

The corresponding changes should be done here too:
@@ -358,6 +402,8 @@ CreateSubscription(CreateSubscriptionStmt *stmt, bool isTopLevel)
     bool        copy_data;
     bool        streaming;
     bool        streaming_given;
+    bool        twophase;
+    bool        twophase_given;
     char       *synchronous_commit;
     char       *conninfo;
     char       *slotname;
@@ -382,7 +428,8 @@ CreateSubscription(CreateSubscriptionStmt *stmt, bool isTopLevel)
                                    &synchronous_commit,
                                    NULL,    /* no "refresh" */
                                    &binary_given, &binary,
-                                   &streaming_given, &streaming);
+                                   &streaming_given, &streaming,
+                                   &twophase_given, &twophase);

2) I think this is not possible, as we don't allow changing the
twophase option; should this be an assert instead?
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -2930,6 +2930,7 @@ maybe_reread_subscription(void)
                strcmp(newsub->slotname, MySubscription->slotname) != 0 ||
                newsub->binary != MySubscription->binary ||
                newsub->stream != MySubscription->stream ||
+               newsub->twophase != MySubscription->twophase ||
                !equal(newsub->publications, MySubscription->publications))

3) We have the following check in parse_subscription_options:
    if (twophase && *twophase_given && *twophase)
    {
        if (streaming && *streaming_given && *streaming)
            ereport(ERROR,
                    (errcode(ERRCODE_SYNTAX_ERROR),
                     errmsg("%s and %s are mutually exclusive options",
                            "two_phase = true", "streaming = true")));
    }

Should we have a similar check in parse_output_parameters?
@@ -252,6 +254,16 @@ parse_output_parameters(List *options, uint32 *protocol_version,

                        *enable_streaming = defGetBoolean(defel);
                }
+               else if (strcmp(defel->defname, "two_phase") == 0)
+               {
+                       if (twophase_given)
+                               ereport(ERROR,
+                                               (errcode(ERRCODE_SYNTAX_ERROR),
+                                                errmsg("conflicting or redundant options")));
+                       twophase_given = true;
+
+                       *enable_twophase = defGetBoolean(defel);
+               }

Regards,
Vignesh

#251osumi.takamichi@fujitsu.com
osumi.takamichi@fujitsu.com
osumi.takamichi@fujitsu.com
In reply to: Peter Smith (#248)
RE: [HACKERS] logical decoding of two-phase transactions

Hi

On Friday, March 12, 2021 5:40 PM Peter Smith <smithpb2250@gmail.com>

Please find attached the latest patch set v58*

Thank you for updating those. I'm testing the patchset,
and I think it's preferable to add two more simple types of tests in
020_twophase.pl, because those aren't checked by v58.

(1) execute a single PREPARE TRANSACTION
which affects several tables (connected to corresponding publications)
at the same time, and confirm they are synced correctly.

(2) execute a single PREPARE TRANSACTION which affects multiple subscribers,
and confirm they are synced correctly.
This doesn't mean cascading standbys like 022_twophase_cascade.pl.
Imagine that there is one publisher and two subscribers to it.

In my env, I checked those and the results were fine, though.

Best Regards,
Takamichi Osumi

#252wangsh.fnst@fujitsu.com
wangsh.fnst@fujitsu.com
wangsh.fnst@fujitsu.com
In reply to: osumi.takamichi@fujitsu.com (#251)
RE: [HACKERS] logical decoding of two-phase transactions

Hi,

I noticed in patch v58-0001-Add-support-for-apply-at-prepare-time-to-built-i.patch

+static void
+prepare_spoolfile_name(char *path, int szpath, Oid subid, char *gid)
+{
+	PsfHashEntry *hentry;
+
+	/*
+	 * This name is used as the key in the psf_hash HTAB. Therefore, the name
+	 * and the key must be exactly same lengths and padded with '\0' so
+	 * garbage does not impact the HTAB lookups.
+	 */
+	Assert(sizeof(hentry->name) == MAXPGPATH);
+	Assert(szpath == MAXPGPATH);
+	memset(path, '\0', MAXPGPATH);
+
+	snprintf(path, MAXPGPATH, "%s/psf_%u_%s.changes", PSF_DIR, subid, gid);
+}

The variable hentry is only used when --enable-cassert is specified; there will be an
unused-variable warning if I don't pass --enable-cassert to configure.

And since the comment says the lengths are the same, I think 'Assert(sizeof(hentry->name) == szpath)' would be better.

Best regards.
Shenhao Wang

#253Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Peter Smith (#248)

On Fri, Mar 12, 2021 at 2:09 PM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v58*

In this patch-series, I see a problem with synchronous replication
when the GUC 'synchronous_standby_names' is configured to use the
subscriber. This makes Prepares and Commits wait for the subscriber to
finish. Before this patch, we never sent a prepare because two-phase
was not enabled by the subscriber, so the publisher didn't wait for it
and instead made progress via keep-alive messages. But after this
patch, it will start waiting for the Prepare to finish. Now, without
the spool-file logic, this works because prepares are decoded on the
subscriber and a corresponding ack is sent to the publisher; but in
the spool-file case, the subscriber waits for the publisher to send
commit prepared, while on the publisher the prepare is not finished
because it is waiting for the subscriber's ack. So, it creates a sort
of deadlock. This is related to the problem mentioned in the below
comments in the patch:
+ * A future release may be able to detect when all tables are READY and set
+ * a flag to indicate this subscription/slot is ready for two_phase
+ * decoding. Then at the publisher-side, we could enable wait-for-prepares
+ * only when all the slots of WALSender have that flag set.

The difference is that it can happen now itself, prepares
automatically wait if 'synchronous_standby_names' is set. Now, we can
imagine a solution where after spooling to file the changes which
can't be applied during syncup phase, we update the flush location so
that publisher can proceed with that prepare. But I think that won't
work because once we have updated the flush location those prepares
won't be sent again and it is quite possible that we don't have
complete relation information as the schema is not sent with each
transaction. Now, we can go one step further and try to remember the
schema information the first time it is sent so that it can be reused
after restart but I think that will complicate the patch and overall
design.

I think there is a simpler solution to these problems. The idea is to
enable two_phase after the initial sync is over (all relations are in
a READY state). If we switch-on the 2PC only after all the relations
come to the READY state then we shouldn't get any prepare before
sync-point. However, it is quite possible that before reaching the
sync-point, the slot corresponding to the apply-worker has skipped
prepares because 2PC was not enabled, and afterward the prepare would
be skipped because by then start_decoding_at might have moved. See the explanation in an
email: /messages/by-id/CAA4eK1LuK4t-ZYYCY7k9nMoYP+dwi-JyqUdtcffQMoB_g5k6Hw@mail.gmail.com.
Now, even the initial_consistent_point won't help because for
apply-worker, it will be different from tablesync slot's
initial_consistent_point and we would have reached initial consistency
earlier for apply-workers.

To solve the main problem (how to detect the prepares that are skipped
when we toggled the two_pc option) in the above idea, we can mark an
LSN position in the slot (two_phase_at, this will be the same as
start_decoding_at point when we receive slot with 2PC option) where we
enable two_pc. If we encounter any commit prepared whose prepare LSN
is less than two_phase_at, then we need to send prepare for the
transaction along with commit prepared.

For this solution on the subscriber-side, I think we need a tri-state
column (two_phase) in pg_subscription. It can have three values
'disable', 'can_enable', 'enable'. By default, it will be 'disable'.
If the user enables 2PC, then we can set it to 'can_enable' and once
we see all relations are in a READY state, restart the apply-worker
and this time while starting the streaming, send the two_pc option and
then we can change the state to 'enable' so that future restarts won't
send this option again. Now on the publisher side, if this option is
present, it will change the value of two_phase_at in the slot to
start_decoding_at. I think something on these lines should be much
easier than the spool-file implementation unless we see any problem
with this idea.

--
With Regards,
Amit Kapila.

#254Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: wangsh.fnst@fujitsu.com (#252)

On Sun, Mar 14, 2021 at 1:52 PM wangsh.fnst@fujitsu.com
<wangsh.fnst@fujitsu.com> wrote:

Hi,

I noticed in patch v58-0001-Add-support-for-apply-at-prepare-time-to-built-i.patch

+static void
+prepare_spoolfile_name(char *path, int szpath, Oid subid, char *gid)
+{
+     PsfHashEntry *hentry;
+
+     /*
+      * This name is used as the key in the psf_hash HTAB. Therefore, the name
+      * and the key must be exactly same lengths and padded with '\0' so
+      * garbage does not impact the HTAB lookups.
+      */
+     Assert(sizeof(hentry->name) == MAXPGPATH);
+     Assert(szpath == MAXPGPATH);
+     memset(path, '\0', MAXPGPATH);
+
+     snprintf(path, MAXPGPATH, "%s/psf_%u_%s.changes", PSF_DIR, subid, gid);
+}

The variable hentry is only used when --enable-cassert is specified; there will be an
unused-variable warning if I don't pass --enable-cassert to configure.

And since the comment says the lengths are the same, I think 'Assert(sizeof(hentry->name) == szpath)' would be better.

Thanks for your feedback comment.

But today Amit suggested [ak0315] that the current psf logic should
all be replaced, after which the function you commented about will no
longer exist.

----
[ak0315] /messages/by-id/CAA4eK1LVEdPYnjdajYzu3k6KEii1+F0jdQ6sWnYugiHcSGZD6Q@mail.gmail.com

Kind Regards,
Peter Smith.
Fujitsu Australia

#255Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Amit Kapila (#253)
3 attachment(s)

On Mon, Mar 15, 2021 at 2:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

I think something on these lines should be much
easier than the spool-file implementation unless we see any problem
with this idea.

Here's a new patch-set that implements this new solution proposed by Amit.
Patchset-v60 implements:
* renamed initial_consistent_point to two_phase_at and set it when a stream
is started with two_phase on or slot is created with two_phase on.
* replication slots are created with two_phase off on start.
* start stream with two_phase on only after all tables are in READY state.
* Initially the two_phase parameter of the subscription defaults to PENDING
and is only enabled once all tables are in READY state.
* restrict REFRESH PUBLICATION with copy = true on subscriptions with
two_phase enabled.
* documentation updates

Pending work:
* add documentation for START REPLICATION syntax change.

regards,
Ajin Cherian
Fujitsu Australia

Attachments:

v60-0002-Support-2PC-txn-subscriber-tests.patch (application/octet-stream)
v60-0003-Fix-apply-worker-dev-logs.patch (application/octet-stream)
v60-0001-Add-support-for-apply-at-prepare-time-to-built-i.patch (application/octet-stream)
#256vignesh C
vignesh C
vignesh21@gmail.com
In reply to: Ajin Cherian (#255)

On Mon, Mar 15, 2021 at 6:14 PM Ajin Cherian <itsajin@gmail.com> wrote:

On Mon, Mar 15, 2021 at 2:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

I think something on these lines should be much
easier than the spool-file implementation unless we see any problem
with this idea.

Here's a new patch-set that implements this new solution proposed by Amit.

Thanks for the updated patch.
Few comments:
1) These are no longer needed, as they have been removed with the new changes:
@@ -1959,6 +1962,8 @@ ProtocolVersion
PrsStorage
PruneState
PruneStepResult
+PsfFile
+PsfHashEntry

2) "Binary mode and streaming and two_phase" should be "Binary mode,
streaming and two_phase" in the below code:
@@ -6097,13 +6097,15 @@ describeSubscriptions(const char *pattern, bool verbose)

        if (verbose)
        {
-               /* Binary mode and streaming are only supported in v14
and higher */
+               /* Binary mode and streaming and two_phase are only
supported in v14 and higher */
                if (pset.sversion >= 140000)
                        appendPQExpBuffer(&buf,
3) We still have some references to the psf spoolfile; these should be
removed. Also check whether the assert should be <= or ==.
+       /*
+        * Normally, prepare_lsn == remote_final_lsn, but if this
prepare message
+        * was dispatched via the psf spoolfile replay then the remote_final_lsn
+        * is set to commit lsn instead. Hence the <= instead of == check below.
+        */
+       Assert(prepare_data.prepare_lsn <= remote_final_lsn);
4) Similarly in the below code:
+       /*
+        * It is possible that we haven't received prepare because it occurred
+        * before walsender reached a consistent point in which case we need to
+        * skip rollback prepared.
+        *
+        * And we also skip the FinishPreparedTransaction if we're using the
+        * Prepare Spoolfile (using_psf) because in that case there is
no matching
+        * PrepareTransactionBlock done yet.
+        */
+       if (LookupGXact(rollback_data.gid, rollback_data.prepare_end_lsn,
+                                       rollback_data.preparetime))
+       {
5) Should this debugging code still be present:
+#if 1
+       /* This is just debugging, for confirmation the update worked. */
+       {
+               Subscription *new_s;
+
+               StartTransactionCommand();
+               new_s = GetSubscription(MySubscription->oid, false);
+               CommitTransactionCommand();
+       }
+#endif

Regards,
Vignesh

#257Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Ajin Cherian (#255)

On Mon, Mar 15, 2021 at 6:14 PM Ajin Cherian <itsajin@gmail.com> wrote:

On Mon, Mar 15, 2021 at 2:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

I think something on these lines should be much
easier than the spool-file implementation unless we see any problem
with this idea.

Here's a new patch-set that implements this new solution proposed by Amit.
Patchset-v60 implements:

I have reviewed the latest patch and below are my comments, some of
these might overlap with Vignesh's as I haven't looked at his comments
in detail.
Review comments
================
1.
+ * And we also skip the FinishPreparedTransaction if we're using the
+ * Prepare Spoolfile (using_psf) because in that case there is no matching
+ * PrepareTransactionBlock done yet.
+ */
+ if (LookupGXact(rollback_data.gid, rollback_data.prepare_end_lsn,
+ rollback_data.preparetime))

The above comment is not required.

2.
While streaming and two_phase can theoretically be supported,
+ * the current implementation has some issues that could lead to a
+ * streaming prepared transaction to be incorrectly missed in the initial
+ * syncing phase. Hence, disabling this combination till that issue can
+ * be addressed.
+ */
+ if (twophase && *twophase_given && *twophase)

I don't think the above statement is correct as per the current patch.
We can say something like: "While streaming and two_phase can
theoretically be supported, it needs more analysis to allow them
together." or something on those lines.

3.
-
- walrcv_create_slot(wrconn, slotname, false,
+ /*
+ * Even if two_phase is set, don't create the slot with two-phase
+ * enabled. Will enable it once all the tables are synced and ready.
+ * This avoids race-conditions that might occur during initial table-sync.
+ */
+ walrcv_create_slot(wrconn, slotname, false, false,
     CRS_NOEXPORT_SNAPSHOT, NULL);

Can we please explain a bit more about race conditions due to which we
can enable two_phase only after initial sync?

4.
@@ -648,7 +703,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data)
  InvalidXLogRecPtr);
  ereport(DEBUG1,
  (errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
- rv->schemaname, rv->relname, sub->name)));
+ rv->schemaname, rv->relname, sub->name)));
..
..
@@ -722,9 +777,9 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data)
  ereport(DEBUG1,
  (errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
- get_namespace_name(get_rel_namespace(relid)),
- get_rel_name(relid),
- sub->name)));
+ get_namespace_name(get_rel_namespace(relid)),
+ get_rel_name(relid),
+ sub->name)));

Is there any reason for the above changes w.r.t this patch?

5.
+
+ /*
+ * The subscription two_phase commit implementation requires
+ * that replication has passed the initial table
+ * synchronization phase before the two_phase becomes properly
+ * enabled.
+ *
+ * But, having reached this two-phase commit "enabled" state we
+ * must not allow any subsequent table initialization to occur.
+ * So the ALTER SUBSCRIPTION ... REFRESH is disallowed when the
+ * the user had requested two_phase = on mode.

I suggest we expand the comments more here to specify what problem can
happen if we allow subsequent table initialization after the two_phase
is enabled for the subscription. Or you can point to comments atop
worker.c.

6.
@@ -526,6 +527,20 @@ CreateDecodingContext(XLogRecPtr start_lsn,
start_lsn = slot->data.confirmed_flush;
}

+ /*
+ * If starting with two_phase enabled then set two_phase_at point.
+ * Also update the slot to be two_phase enabled and save the slot
+ * to disk.
+ */
+ if (two_phase)
+ {
+ slot->data.two_phase_at = start_lsn;
+ slot->data.two_phase = true;
+ ReplicationSlotMarkDirty();
+ ReplicationSlotSave();
+ }

Do we want to Assert that two_phase variables are not already set as
we don't want those to be reset?

7.
/*
- * We allow decoding of prepared transactions iff the two_phase option is
- * enabled at the time of slot creation.
+ * We allow decoding of prepared transactions if the two_phase option is
+ * enabled at the time of slot creation or at restart.
  */

In the above comments, there is no need to change iff to if. iff means
'if and only if' which makes sense in the above comment.

- ctx->twophase &= MyReplicationSlot->data.two_phase;
+ ctx->twophase = slot->data.two_phase || two_phase;

Why have you removed the '&' in the above assignment? It is possible
that the plugin doesn't provide the two_phase APIs, in which case we
can't support two_phase even if asked by the user. I think we probably
need to write it as: ctx->twophase &= (slot->data.two_phase ||
two_phase);

8.
@@ -602,7 +617,7 @@ DecodingContextFindStartpoint(LogicalDecodingContext *ctx)

  SpinLockAcquire(&slot->mutex);
  slot->data.confirmed_flush = ctx->reader->EndRecPtr;
- slot->data.initial_consistent_point = ctx->reader->EndRecPtr;
+ slot->data.two_phase_at = ctx->reader->EndRecPtr;
  SpinLockRelease(&slot->mutex);

I think we need to set two_phase_at only when the slot has two_phase
enabled. Previously, it was fine to set it because it was a generic
initial consistent point for a slot, but after changing the variable
name it doesn't seem to make sense to assign it unless two_phase is
enabled.

9.
* needs to be sent later along with commit prepared and they must be
* before this point.
*/
- XLogRecPtr initial_consistent_point;
+ XLogRecPtr two_phase_at;

I think the explanation of this variable also needs to be updated,
because now it can be used even the first time we enable two_phase
during streaming start.

10.
 ReorderBufferFinishPrepared(ReorderBuffer *rb, TransactionId xid,
  XLogRecPtr commit_lsn, XLogRecPtr end_lsn,
- XLogRecPtr initial_consistent_point,
+ XLogRecPtr two_phase_at,
  TimestampTz commit_time, RepOriginId origin_id,
  XLogRecPtr origin_lsn, char *gid, bool is_commit)
 {
@@ -2703,7 +2703,7 @@ ReorderBufferFinishPrepared(ReorderBuffer *rb,
TransactionId xid,
  * prepare if it was not decoded earlier. We don't need to decode the xact
  * for aborts if it is not done already.
  */
- if ((txn->final_lsn < initial_consistent_point) && is_commit)
+ if ((txn->final_lsn < two_phase_at) && is_commit)

How can this change work? During decode-prepare processing the patch
only remembers the prepare info in DecodePrepare, whereas we would have
skipped the prepare before that via FilterPrepare. I think we need to
remember the prepare info before calling DecodePrepare. If you have
not already tested this scenario, then please test it once before
posting the next version, and also explain how exactly you have tested
it.

11.
+/*
+ * Are all tablesyncs READY?
+ */
+bool
+AllTablesyncsREADY(void)
+{
+ return !AnyTablesyncsNotREADY();
+}
+
+/*
+ * Are there any tablesyncs which are not yet READY?
+ */
+static bool
+AnyTablesyncsNotREADY(void)
+{

I don't think we need separate functions here.

12.
+/*
+ * Update the p_subscription two_phase tri-state of the current subscription.
+ */
+void
+UpdateTwoPhaseTriState(char new_tristate)

I would prefer not to include 'Tri' in the above function or variable
name. We might want to extend the states in future or even without
that it would be better not to include 'tri' here.

13.
+void
+UpdateTwoPhaseTriState(char new_tristate)
{
..
+#if 1
+ /* This is just debugging, for confirmation the update worked. */
+ {
+ Subscription *new_s;
+
+ StartTransactionCommand();
+ new_s = GetSubscription(MySubscription->oid, false);
+ CommitTransactionCommand();
+ }
+#endif
..
}

Let's remove the debugging code from the main patch.

14.
/*
+ * Even when the two_phase mode is requested by the user, it remains as
+ * the tri-state PENDING until all tablesyncs have reached READY state.
+ * Only then, can it become properly ENABLED.
+ */
+ bool all_tables_ready = AllTablesyncsREADY();
+ if (MySubscription->twophase == LOGICALREP_TWOPHASE_STATE_PENDING &&
all_tables_ready)
+ {
+ /* Start streaming with two_phase enabled */
+ walrcv_startstreaming(wrconn, &options, true);
+ UpdateTwoPhaseTriState(LOGICALREP_TWOPHASE_STATE_ENABLED);
+ MySubscription->twophase = LOGICALREP_TWOPHASE_STATE_ENABLED;
+ }
+ else
+ {
+ walrcv_startstreaming(wrconn, &options, false);
+ }
+
+ ereport(LOG,
+ (errmsg("logical replication apply worker for subscription \"%s\" 2PC is %s.",
+ MySubscription->name,
+ MySubscription->twophase == LOGICALREP_TWOPHASE_STATE_DISABLED ? "DISABLED" :
+ MySubscription->twophase == LOGICALREP_TWOPHASE_STATE_PENDING ? "PENDING" :
+ MySubscription->twophase == LOGICALREP_TWOPHASE_STATE_ENABLED ? "ENABLED" :
+ "?")));

I think here two_phase code is relevant only if we are talking with
server version >= 14. You can check that by
"walrcv_server_version(wrconn) >= 140000".

15.
+static void
+FetchTableStates(bool *started_tx)
+{
+ *started_tx = false;
+
+ if (!table_states_valid)
+ {
+ MemoryContext oldctx;
+ List    *rstates;
+ ListCell   *lc;
+ SubscriptionRelState *rstate;
+
+
+ /* Clean the old lists. */
+ list_free_deep(table_states_all);
+ table_states_all = NIL;

The patch doesn't seem to be using table_states_all; it might be a
leftover from the previous version. Also, the logic in this function
can simply use GetSubscriptionNotReadyRelations, as the existing code
does.

16.
+static bool
+AnyTablesyncsNotREADY(void)
+{
+ bool found_busy = false;
+ bool started_tx = false;
+ int count = 0;
+ ListCell   *lc;
+
+ /* We need up-to-date sync state info for subscription tables here. */
+ FetchTableStates(&started_tx);
+
+ /*
+ * Process all not-READY tables to see if any are also not-SYNCDONE
+ */
+ foreach(lc, table_states_not_ready)
+ {
+ SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);
+
+ count++;
+ /*
+ * When the process_syncing_tables_for_apply changes the state from
+ * SYNCDONE to READY, that change is actually written directly into
+ * the list element of table_states_not_ready.
+ *
+ * So the "table_states_not_ready" list might end up having a READY
+ * state in it even though there was none when it was initially
+ * created. This is reason why we need to check for READY below.
+ */
+ if (rstate->state != SUBREL_STATE_READY)
+ {
+ found_busy = true;
+ break;
+ }
+ }

Do we really need to do this recheck in the for loop? How does it
matter? I guess if it is not required, we can simply check whether the
table_states_not_ready list is empty or not.

17.
+ ereport(LOG,
+ (errmsg("logical replication apply worker for subscription \"%s\"
will restart so 2PC can be enabled",

In the above message, I think it is better to write two_phase instead of 2PC.

18.
+/* Has this prepared transaction been committed? */
+#define rbtxn_commit_prepared(txn) \
+( \
+ ((txn)->txn_flags & RBTXN_COMMIT_PREPARED) != 0 \
+)
+
+/* Has this prepared transaction been rollbacked? */
+#define rbtxn_rollback_prepared(txn) \
+( \
+ ((txn)->txn_flags & RBTXN_ROLLBACK_PREPARED) != 0 \
+)
+

Are these macros used anywhere? If not, please remove them.

--
With Regards,
Amit Kapila.

#258vignesh C
vignesh C
vignesh21@gmail.com
In reply to: Ajin Cherian (#255)

On Mon, Mar 15, 2021 at 6:14 PM Ajin Cherian <itsajin@gmail.com> wrote:

On Mon, Mar 15, 2021 at 2:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

I think something on these lines should be much
easier than the spool-file implementation unless we see any problem
with this idea.

Here's a new patch-set that implements this new solution proposed by Amit.

Another couple of comments:
1) Should Assert be changed to the following in the below code:
if (!HeapTupleIsValid(tup))
elog(ERROR, "cache lookup failed for subscription %u", MySubscription->oid);

+       rel = table_open(SubscriptionRelationId, RowExclusiveLock);
+       tup = SearchSysCacheCopy1(SUBSCRIPTIONOID,
ObjectIdGetDatum(MySubscription->oid));
+       Assert(HeapTupleIsValid(tup));
2) The table_states_not_ready global variable is used immediately
after the call to FetchTableStates; we could make FetchTableStates
return the value or pass it out through a function argument, and the
global variables could then be removed.
+static List *table_states_not_ready = NIL;
+static List *table_states_all = NIL;

Regards,
Vignesh

#259Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: vignesh C (#258)

On Tue, Mar 16, 2021 at 6:22 PM vignesh C <vignesh21@gmail.com> wrote:

On Mon, Mar 15, 2021 at 6:14 PM Ajin Cherian <itsajin@gmail.com> wrote:

On Mon, Mar 15, 2021 at 2:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

2) table_states_not_ready global variable is used immediately after
call to FetchTableStates, we can make FetchTableStates return the
value or get it as an argument to the function and the global
variables can be removed.
+static List *table_states_not_ready = NIL;

But we do update the states in the list table_states_not_ready in
function process_syncing_tables_for_apply. So, the current arrangement
looks good to me.

--
With Regards,
Amit Kapila.

#260vignesh C
vignesh C
vignesh21@gmail.com
In reply to: Amit Kapila (#259)

On Tue, Mar 16, 2021 at 7:22 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Mar 16, 2021 at 6:22 PM vignesh C <vignesh21@gmail.com> wrote:

On Mon, Mar 15, 2021 at 6:14 PM Ajin Cherian <itsajin@gmail.com> wrote:

On Mon, Mar 15, 2021 at 2:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

2) table_states_not_ready global variable is used immediately after
call to FetchTableStates, we can make FetchTableStates return the
value or get it as an argument to the function and the global
variables can be removed.
+static List *table_states_not_ready = NIL;

But we do update the states in the list table_states_not_ready in
function process_syncing_tables_for_apply. So, the current arrangement
looks good to me.

But I felt we can do this without using global variables.
table_states_not_ready is used immediately after calling
FetchTableStates in the AnyTablesyncsNotREADY and
process_syncing_tables_for_apply functions; it is not used anywhere
else. My point was that we do not need to store this in global
variables, as it is not needed elsewhere. We could change the return
type, or return it through a function argument in this case.
Thoughts?

Regards,
Vignesh

#261Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: vignesh C (#260)

On Wed, Mar 17, 2021 at 8:07 AM vignesh C <vignesh21@gmail.com> wrote:

On Tue, Mar 16, 2021 at 7:22 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Mar 16, 2021 at 6:22 PM vignesh C <vignesh21@gmail.com> wrote:

On Mon, Mar 15, 2021 at 6:14 PM Ajin Cherian <itsajin@gmail.com> wrote:

On Mon, Mar 15, 2021 at 2:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

2) table_states_not_ready global variable is used immediately after
call to FetchTableStates, we can make FetchTableStates return the
value or get it as an argument to the function and the global
variables can be removed.
+static List *table_states_not_ready = NIL;

But we do update the states in the list table_states_not_ready in
function process_syncing_tables_for_apply. So, the current arrangement
looks good to me.

But I felt we can do this without using global variables.
table_states_not_ready is used immediately after calling
FetchTableStates in AnyTablesyncsNotREADY and
process_syncing_tables_for_apply functions. It is not used anywhere
else. My point was we do not need to store this in global variables as
it is not needed elsewhere.

It might be possible, but I am not sure if that is better than what we
are currently doing; moreover, that is existing code and this patch has
just encapsulated it in a function. Even if you think there is a better
way, which I doubt, we can probably look at it as a separate patch.

--
With Regards,
Amit Kapila.

#262Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#257)
1 attachment(s)

On Tue, Mar 16, 2021 at 5:03 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Mon, Mar 15, 2021 at 6:14 PM Ajin Cherian <itsajin@gmail.com> wrote:

Here's a new patch-set that implements this new solution proposed by Amit.
Patchset-v60 implements:

I have reviewed the latest patch and below are my comments, some of
these might overlap with Vignesh's as I haven't looked at his comments
in detail.
Review comments
================

Few more comments:
=================
1.
+       <structfield>subtwophase</structfield> <type>char</type>
+      </para>
+      <para>
+       The <varname>two_phase commit current state:</varname>
+       <itemizedlist>
+        <listitem><para><literal>'n'</literal> = two_phase mode was
not requested, so is disabled.</para></listitem>
+        <listitem><para><literal>'p'</literal> = two_phase mode was
requested, but is pending enablement.</para></listitem>
+        <listitem><para><literal>'y'</literal> = two_phase mode was
requested, and is enabled.</para></listitem>
+       </itemizedlist>
+      </para></entry>
+     </row>

Can we name the column as subtwophasestate? And then describe as we
are doing for srsubstate in pg_subscription_rel. Also, it might be
better to keep names as: 'd' disabled, 'p' pending twophase enablement
and 'e' twophase enabled.
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>srsubstate</structfield> <type>char</type>
</para>
<para>
State code:
<literal>i</literal> = initialize,
<literal>d</literal> = data is being copied,
<literal>f</literal> = finished table copy,
<literal>s</literal> = synchronized,
<literal>r</literal> = ready (normal replication)
</para></entry>
</row>

2.
@@ -427,6 +428,10 @@ libpqrcv_startstreaming(WalReceiverConn *conn,
PQserverVersion(conn->streamConn) >= 140000)
appendStringInfoString(&cmd, ", streaming 'on'");

+ if (options->proto.logical.twophase &&
+ PQserverVersion(conn->streamConn) >= 140000)
+ appendStringInfoString(&cmd, ", two_phase 'on'");
+
  pubnames = options->proto.logical.publication_names;
  pubnames_str = stringlist_to_identifierstr(conn->streamConn, pubnames);
  if (!pubnames_str)
@@ -453,6 +458,9 @@ libpqrcv_startstreaming(WalReceiverConn *conn,
  appendStringInfo(&cmd, " TIMELINE %u",
  options->proto.physical.startpointTLI);
+ if (options->logical && two_phase)
+ appendStringInfoString(&cmd, " TWO_PHASE");
+

Why are we sending two_phase 'on' and " TWO_PHASE" separately? I think
we don't need to introduce TWO_PHASE token in grammar, let's handle it
via plugin_options similar to what we do for 'streaming'. Also, a
similar change would be required for Create_Replication_Slot.

3.
+ /*
+ * Do not allow toggling of two_phase option, this could
+ * cause missing of transactions and lead to an inconsistent
+ * replica.
+ */
+ if (!twophase)
+ ereport(ERROR,
+ (errcode(ERRCODE_SYNTAX_ERROR),
+ errmsg("cannot alter two_phase option")));
+

I think here you can either give reference of worker.c to explain how
this could lead to an inconsistent replica or expand the comments here
if the information is not present elsewhere.

4.
  * support for streaming large transactions.
+ *
+ * LOGICALREP_PROTO_2PC_VERSION_NUM is the minimum protocol version with
+ * support for two-phase commit PREPARE decoding.
  */
 #define LOGICALREP_PROTO_MIN_VERSION_NUM 1
 #define LOGICALREP_PROTO_VERSION_NUM 1
 #define LOGICALREP_PROTO_STREAM_VERSION_NUM 2
+#define LOGICALREP_PROTO_2PC_VERSION_NUM 2

I think it is better to name the new define as
LOGICALREP_PROTO_TWOPHASE_VERSION_NUM. Also mention in comments in
some way that we are keeping the same version number for stream and
two-phase defines because they got introduced in the same release
(14).

5. I have modified the comments atop worker.c to explain the design
and some of the problems clearly. See attached. If you are fine with
this, please include it in the next version of the patch.

--
With Regards,
Amit Kapila.

Attachments:

change_two_phase_desc_1.patch (application/octet-stream)
#263Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#250)

On Fri, Mar 12, 2021 at 8:38 PM vignesh C <vignesh21@gmail.com> wrote:

...

1) I felt twophase_given can be a local variable, it need not be added
as a function parameter as it is not used outside the function.
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -67,7 +67,8 @@ parse_subscription_options(List *options,
char **synchronous_commit,
bool *refresh,
bool *binary_given,
bool *binary,
-                                                  bool
*streaming_given, bool *streaming)
+                                                  bool
*streaming_given, bool *streaming,
+                                                  bool
*twophase_given, bool *twophase)

The corresponding changes should be done here too:
@@ -358,6 +402,8 @@ CreateSubscription(CreateSubscriptionStmt *stmt,
bool isTopLevel)
bool copy_data;
bool streaming;
bool streaming_given;
+ bool twophase;
+ bool twophase_given;
char *synchronous_commit;
char *conninfo;
char *slotname;
@@ -382,7 +428,8 @@ CreateSubscription(CreateSubscriptionStmt *stmt,
bool isTopLevel)
&synchronous_commit,
NULL,
/* no "refresh" */

&binary_given, &binary,
-
&streaming_given, &streaming);
+
&streaming_given, &streaming,
+
&twophase_given, &twophase);

It was deliberately coded this way for consistency with the other new
PG14 options - e.g. it mimics exactly binary_given, and
streaming_given.

I know the param is not currently used by the caller and so could be a
local (as you say), but I felt the code consistency and future-proof
benefits outweighed the idea of reducing the code to bare minimum
required to work just "because we can".

So I don't plan to change this, but if you still feel strongly that
the parameter must be removed please give a convincing reason.

----
Kind Regards,
Peter Smith.
Fujitsu Australia.

#264Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#262)
1 attachment(s)

On Wed, Mar 17, 2021 at 11:27 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

5. I have modified the comments atop worker.c to explain the design
and some of the problems clearly. See attached. If you are fine with
this, please include it in the next version of the patch.

I have further expanded these comments to explain the handling of
prepared transactions for multiple subscriptions on the same server
especially when the same prepared transaction operates on tables for
those subscriptions. See attached, this applies atop the patch sent by
me in the last email. I am not sure but I think it might be better to
add something on those lines in user-facing docs. What do you think?

Another comment:
+ ereport(LOG,
+ (errmsg("logical replication apply worker for subscription \"%s\" 2PC is %s.",
+ MySubscription->name,
+ MySubscription->twophase == LOGICALREP_TWOPHASE_STATE_DISABLED ? "DISABLED" :
+ MySubscription->twophase == LOGICALREP_TWOPHASE_STATE_PENDING ? "PENDING" :
+ MySubscription->twophase == LOGICALREP_TWOPHASE_STATE_ENABLED ? "ENABLED" :
+ "?")));

I don't think this is required in LOGs, maybe at some DEBUG level,
because users can check this in pg_subscription. If we keep this
message, there will be two consecutive messages like below in logs for
subscriptions that have two_pc option enabled which looks a bit odd.
LOG: logical replication apply worker for subscription "mysub" has started
LOG: logical replication apply worker for subscription "mysub" 2PC is ENABLED.

--
With Regards,
Amit Kapila.

Attachments:

change_two_phase_desc_2.patch (application/octet-stream)
#265Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Ajin Cherian (#255)
3 attachment(s)

Please find attached the latest patch set v61*

Differences from v60* are:

* Rebased to HEAD @ today

* Addresses the following feedback issues:

----

Vignesh 12/Mar -
/messages/by-id/CALDaNm1p=KYcDc1s_Q0Lk2P8UYU-z4acW066gaeLfXvW_O-kBA@mail.gmail.com

(61) Skipped. twophase_given could be a local variable

----

Vignesh 16/Mar -
/messages/by-id/CALDaNm0qTRapggmUY_kgwNd14cec0i8mS5_PnrMcs_Y-_TXrgA@mail.gmail.com

(68) Fixed. Removed obsolete psf typedefs from typedefs.h.

(69) Done. Updated comment wording.

(70) Fixed. Removed references to psf in comments. Restored the Assert
how it was before

(71) Duplicate. See (73)

(72) Duplicate. See (86)

----

Amit 16/Mar - /messages/by-id/CAA4eK1Kwah+MimFMR3jPY5cSqpGFVh5zfV2g4=gTphaPsacoLw@mail.gmail.com

(73) Done. Removed comments referring to obsolete psf.

(76) Done. Removed whitespace changes unrelated to this patch set.

(77) Done. Updated comment of Alter Subscription ... REFRESH.

(84) Done. Removed the extra function AnyTablesyncsNotREADY.

(85) Done. Renamed the function UpdateTwoPhaseTriState.

(86) Fixed. Removed debugging code from the main patch.

(88) Done. Removed the unused table_states_all List.

(90) Fixed. Change the log message to say "two_phase" instead of "2PC".

----

Vignesh 16/Mar -
/messages/by-id/CALDaNm11A5wL0E-GDtqWY00iFzgUPsPLfA+L0zi4SEokEVtoFQ@mail.gmail.com

(92) Fixed. Replace cache failure Assert with ERROR

(93) Skipped. Suggested to remove the global variable for
table_states_not_ready.

----

Amit 17/Mar - /messages/by-id/CAA4eK1LNLA20ci3_qqNQv7BYRTy3HqiAsOfuieqo6tJ2GeYuJw@mail.gmail.com

(95) Done. Renamed the pg_subscription column. New state values d/p/e.
Updated PG docs.

(98) Done. Renamed the constant LOGICALREP_PROTO_2PC_VERSION_NUM.

(99) Fixed. Apply new (supplied) comments atop worker.c

----

Vignesh 17/Mar

(100) Fixed. Applied patch (supplied) to fix a multiple subscriber bug.

-----
Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

v60-0001-Add-support-for-apply-at-prepare-time-to-built-i.patch (application/octet-stream)
v60-0002-Support-2PC-txn-subscriber-tests.patch (application/octet-stream)
v60-0003-Fix-apply-worker-dev-logs.patch (application/octet-stream)
#266Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Peter Smith (#265)
3 attachment(s)

On Thu, Mar 18, 2021 at 5:20 PM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v61*

Oops. Attaching the correct v61* patches this time...

---
Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

v61-0002-Support-2PC-txn-subscriber-tests.patch (application/octet-stream)
v61-0001-Add-support-for-apply-at-prepare-time-to-built-i.patch (application/octet-stream)
v61-0003-Fix-apply-worker-dev-logs.patch (application/octet-stream)
#267Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Peter Smith (#266)
2 attachment(s)

On Thu, Mar 18, 2021 at 5:30 PM Peter Smith <smithpb2250@gmail.com> wrote:

On Thu, Mar 18, 2021 at 5:20 PM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v61*

Please find attached the latest patch set v62

Differences from v61 are:

* Rebased to HEAD

* Addresses the following feedback issues:

Vignesh 12/Mar -
/messages/by-id/CALDaNm1p=KYcDc1s_Q0Lk2P8UYU-z4acW066gaeLfXvW_O-kBA@mail.gmail.com

(62) Fixed. Added assert for twophase alter check in
maybe_reread_subscription(void)

(63) Fixed. Changed parse_output_parameters to disable two-phase and
streaming combo

Amit 16 Mar -
/messages/by-id/CAA4eK1Kwah+MimFMR3jPY5cSqpGFVh5zfV2g4=gTphaPsacoLw@mail.gmail.com

(74) Fixed. Modify comment about why not supporting combination of
two-phase and streaming

(75) Fixed. Added more comments about creating slot with two-phase race
conditions

(78) Skipped. Adding assert for two-phase variables getting reset, the
logic has been changed, so skipping this.

(79) Changed. Reworded the comment about allowing decoding of prepared
transaction (restoring iff)

(80) Fixed. Added & in the assignment for ctx->twophase, logic is also
changed

(81) Fixed. Changed to conditional setting of two_phase_at only if
two_phase is enabled.

(82) Fixed. Better explanation for the two_phase_at variable in
snapbuild.c.

(83) Skipped. The comparison in ReorderBufferFinishPrepared was not changed;
it was tested and it works.
The reason it works is that even if the Prepare is filtered when
two-phase is not enabled, once the tablesync is
over and the tables are in the READY state, the apply worker and the walsender
restart; after the restart, the prepare will
not be filtered out, but will be marked as a skipped prepare and also updated
in ReorderBufferRememberPrepareInfo.

(87) Fixed. Added server version check before two-phase enabled startstream
in ApplyWorkerMain.

(91)Fixed. Removed unused macros in reorderbuffer.h

Amit 17/Mar -
/messages/by-id/CAA4eK1LNLA20ci3_qqNQv7BYRTy3HqiAsOfuieqo6tJ2GeYuJw@mail.gmail.com

(96) Fixed. Removed the token for twophase in Start Replication; instead
used the twophase option. But kept the token
in Create_Replication slot, as we gave plugins the option to enable
two-phase while creating a slot. This allows plugins without a
table-synchronization phase to handle two-phase from the start.

regards,
Ajin Cherian
Fujitsu Australia

Attachments:

v62-0002-Support-2PC-txn-subscriber-tests.patch (application/octet-stream)
v62-0003-Fix-apply-worker-dev-logs.patch (application/octet-stream)
#268Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Ajin Cherian (#267)
3 attachment(s)

Missed the patch - 0001, resending.


Attachments:

v62-0001-Add-support-for-apply-at-prepare-time-to-built-i.patch (application/octet-stream)
v62-0002-Support-2PC-txn-subscriber-tests.patch (application/octet-stream)
v62-0003-Fix-apply-worker-dev-logs.patch (application/octet-stream)
#269osumi.takamichi@fujitsu.com
osumi.takamichi@fujitsu.com
osumi.takamichi@fujitsu.com
In reply to: osumi.takamichi@fujitsu.com (#251)
1 attachment(s)
RE: [HACKERS] logical decoding of two-phase transactions

Hi

On Saturday, March 13, 2021 5:01 PM osumi.takamichi@fujitsu.com <osumi.takamichi@fujitsu.com> wrote:

On Friday, March 12, 2021 5:40 PM Peter Smith <smithpb2250@gmail.com>

Please find attached the latest patch set v58*

Thank you for updating those. I'm testing the patchset and I think it
would be preferable to add two more simple types of tests in 020_twophase.pl,
because those aren't covered by v58.

(1) Execute a single PREPARE TRANSACTION which affects several tables
(connected to corresponding publications) at the same time,
and confirm they are synced correctly.

(2) Execute a single PREPARE TRANSACTION which affects multiple subscribers,
and confirm they are synced correctly.
This doesn't mean cascading standbys like 022_twophase_cascade.pl;
imagine that there is one publisher and two subscribers to it.
Attached is a patch for those two tests. The patch works with v62.
I tested this in a loop more than 100 times and it showed no failure.

Best Regards,
Takamichi Osumi

Attachments:

0001-add-2-types-of-new-tests-for-2PC.patch (application/octet-stream)
#270Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Ajin Cherian (#268)
3 attachment(s)

On Fri, Mar 19, 2021 at 5:03 AM Ajin Cherian <itsajin@gmail.com> wrote:

Missed the patch - 0001, resending.

I have made miscellaneous changes in the patch which includes
improving comments, error messages, and miscellaneous coding
improvements. The most notable one is that we don't need an additional
parameter in walrcv_startstreaming, if the two_phase option is set
properly. My changes are in v63-0002-Misc-changes-by-Amit, if you are
fine with those, then please merge them in the next version. I have
omitted the dev-logs patch but feel free to submit it. I have one
question:
@@ -538,10 +550,21 @@ CreateDecodingContext(XLogRecPtr start_lsn,
..
+ /* Set two_phase_at LSN only if it hasn't already been set. */
+ if (ctx->twophase && !MyReplicationSlot->data.two_phase_at)
+ {
+ MyReplicationSlot->data.two_phase_at = start_lsn;
+ slot->data.two_phase = true;
+ ReplicationSlotMarkDirty();
+ ReplicationSlotSave();
+ SnapBuildSetTwoPhaseAt(ctx->snapshot_builder, start_lsn);
+ }

What if the walsender or apply worker restarts after setting
two_phase_at/two_phase here and updating the two_phase state in
pg_subscription? Won't we need to set SnapBuildSetTwoPhaseAt after
restart as well? If so, we probably need a else if (ctx->twophase)
{Assert (slot->data.two_phase_at);
SnapBuildSetTwoPhaseAt(ctx->snapshot_builder,
slot->data.two_phase_at);}. Am, I missing something?

--
With Regards,
Amit Kapila.

Attachments:

v63-0001-Add-support-for-apply-at-prepare-time-to-built-i.patch (application/octet-stream)
v63-0002-Misc-changes-by-Amit.patch (application/octet-stream)
v63-0003-Support-2PC-txn-subscriber-tests.patch (application/octet-stream)
#271Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Amit Kapila (#270)

On Sat, Mar 20, 2021 at 1:35 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Fri, Mar 19, 2021 at 5:03 AM Ajin Cherian <itsajin@gmail.com> wrote:

Missed the patch - 0001, resending.

@@ -538,10 +550,21 @@ CreateDecodingContext(XLogRecPtr start_lsn,
..
+ /* Set two_phase_at LSN only if it hasn't already been set. */
+ if (ctx->twophase && !MyReplicationSlot->data.two_phase_at)
+ {
+ MyReplicationSlot->data.two_phase_at = start_lsn;
+ slot->data.two_phase = true;
+ ReplicationSlotMarkDirty();
+ ReplicationSlotSave();
+ SnapBuildSetTwoPhaseAt(ctx->snapshot_builder, start_lsn);
+ }

What if the walsender or apply worker restarts after setting
two_phase_at/two_phase here and updating the two_phase state in
pg_subscription? Won't we need to set SnapBuildSetTwoPhaseAt after
restart as well?

After a restart, two_phase_at will be set by calling
AllocateSnapshotBuilder with two_phase_at

@@ -207,7 +207,7 @@ StartupDecodingContext(List *output_plugin_options,
  ctx->reorder = ReorderBufferAllocate();
  ctx->snapshot_builder =
  AllocateSnapshotBuilder(ctx->reorder, xmin_horizon, start_lsn,
- need_full_snapshot, slot->data.initial_consistent_point);
+ need_full_snapshot, slot->data.two_phase_at);

and then in AllocateSnapshotBuilder:

@@ -309,7 +306,7 @@ AllocateSnapshotBuilder(ReorderBuffer *reorder,
  builder->initial_xmin_horizon = xmin_horizon;
  builder->start_decoding_at = start_lsn;
  builder->building_full_snapshot = need_full_snapshot;
- builder->initial_consistent_point = initial_consistent_point;
+ builder->two_phase_at = two_phase_at;

regards,
Ajin Cherian
Fujitsu Australia

#272Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Ajin Cherian (#267)
3 attachment(s)

Please find attached the latest patch set v64*

Differences from v62* are:

* Rebased to HEAD @ yesterday 19/Mar.

* Addresses the following feedback issues:

----

From Osumi-san 19/Mar -
/messages/by-id/OSBPR01MB4888930C23E17AF29EDB9D82ED689@OSBPR01MB4888.jpnprd01.prod.outlook.com

(64) Done. New tests added. Supplied patch by Osumi-san.

(65) Done. New tests added. Supplied patch by Osumi-san.

----

From Amit 16/Mar -
/messages/by-id/CAA4eK1Kwah+MimFMR3jPY5cSqpGFVh5zfV2g4=gTphaPsacoLw@mail.gmail.com

(89) Done. Added more comments explaining the AllTablesReady() implementation.

----

From Peter 17/Mar (internal)

(94) Done. Improved comment to two_phase option parsing code

----

From Amit 17/Mar -
/messages/by-id/CAA4eK1LNLA20ci3_qqNQv7BYRTy3HqiAsOfuieqo6tJ2GeYuJw@mail.gmail.com

(97) Done. Improved comment to two_phase option parsing code

----

From Amit 18/Mar -
/messages/by-id/CAA4eK1J9A_9hsxE6m_1c6CsrMsBeeaRbaLX2P16ucJrpN25-EQ@mail.gmail.com

(101) Done. Improved comment for worker.c. Apply supplied patch from
Amit. No equivalent text was put in the PG docs at this time because we
are still awaiting responses on the other thread [1] that might impact
what we may want to write. Please raise a new feedback comment
if/when you decide the PG docs should be updated.

(102) Fixed. Use different log level for subscription starting message

----

From Amit 19/Mar (internal)

(104) Done. Rename function AllTablesyncsREADY to AllTablesyncsReady

----

From Amit 19/Mar -
/messages/by-id/CAA4eK1JLz7ypPdbkPjHQW5c9vOZO5onOwb+fSLsArHQjg6dNhQ@mail.gmail.com

(105) Done. Miscellaneous fixes. Apply supplied patch from Amit.

-----
[1]: /messages/by-id/CALDaNm06R_ppr5ibwS1-FLDKGqUjHr-1VPdk-yJWU1TP_zLLig@mail.gmail.com

Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

v64-0001-Add-support-for-apply-at-prepare-time-to-built-i.patch (application/octet-stream)
v64-0003-Fix-apply-worker-dev-logs.patch (application/octet-stream)
v64-0002-Support-2PC-txn-subscriber-tests.patch (application/octet-stream)
#273Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Ajin Cherian (#271)

On Sat, Mar 20, 2021 at 7:07 AM Ajin Cherian <itsajin@gmail.com> wrote:

On Sat, Mar 20, 2021 at 1:35 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Fri, Mar 19, 2021 at 5:03 AM Ajin Cherian <itsajin@gmail.com> wrote:

Missed the patch - 0001, resending.

@@ -538,10 +550,21 @@ CreateDecodingContext(XLogRecPtr start_lsn,
..
+ /* Set two_phase_at LSN only if it hasn't already been set. */
+ if (ctx->twophase && !MyReplicationSlot->data.two_phase_at)
+ {
+ MyReplicationSlot->data.two_phase_at = start_lsn;
+ slot->data.two_phase = true;
+ ReplicationSlotMarkDirty();
+ ReplicationSlotSave();
+ SnapBuildSetTwoPhaseAt(ctx->snapshot_builder, start_lsn);
+ }

What if the walsender or apply worker restarts after setting
two_phase_at/two_phase here and updating the two_phase state in
pg_subscription? Won't we need to set SnapBuildSetTwoPhaseAt after
restart as well?

After a restart, two_phase_at will be set by calling AllocateSnapshotBuilder with two_phase_at

Okay, that makes sense.

--
With Regards,
Amit Kapila.

#274Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Amit Kapila (#270)

On Sat, Mar 20, 2021 at 1:35 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Fri, Mar 19, 2021 at 5:03 AM Ajin Cherian <itsajin@gmail.com> wrote:

Missed the patch - 0001, resending.

I have made miscellaneous changes in the patch which includes
improving comments, error messages, and miscellaneous coding
improvements. The most notable one is that we don't need an additional
parameter in walrcv_startstreaming, if the two_phase option is set
properly. My changes are in v63-0002-Misc-changes-by-Amit, if you are
fine with those, then please merge them in the next version. I have
omitted the dev-logs patch but feel free to submit it. I have one
question:

I am fine with these changes. I see that Peter has already merged in these
changes.

thanks,
Ajin Cherian
Fujitsu Australia

#275Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Ajin Cherian (#274)
3 attachment(s)

On Sat, Mar 20, 2021 at 10:09 AM Ajin Cherian <itsajin@gmail.com> wrote:

On Sat, Mar 20, 2021 at 1:35 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Fri, Mar 19, 2021 at 5:03 AM Ajin Cherian <itsajin@gmail.com> wrote:

Missed the patch - 0001, resending.

I have made miscellaneous changes in the patch which includes
improving comments, error messages, and miscellaneous coding
improvements. The most notable one is that we don't need an additional
parameter in walrcv_startstreaming, if the two_phase option is set
properly. My changes are in v63-0002-Misc-changes-by-Amit, if you are
fine with those, then please merge them in the next version. I have
omitted the dev-logs patch but feel free to submit it. I have one
question:

I am fine with these changes. I see that Peter has already merged in these changes.

I have further updated the patch to implement unique GID on the
subscriber-side as discussed in the nearby thread [1]. That requires
some changes in the test. Additionally, I have updated some comments
and docs. Let me know what do you think about the changes?

[1]: /messages/by-id/CAA4eK1+opiV4aFTmWWUF9h_32=HfPOW9vZASHarT0UA5oBrtGw@mail.gmail.com

--
With Regards,
Amit Kapila.

Attachments:

v65-0001-Add-support-for-prepared-transactions-to-built-i.patch (application/octet-stream)
v65-0002-Support-2PC-txn-subscriber-tests.patch (application/octet-stream)
v65-0003-Fix-apply-worker-dev-logs.patch (application/octet-stream)
#276osumi.takamichi@fujitsu.com
osumi.takamichi@fujitsu.com
osumi.takamichi@fujitsu.com
In reply to: Amit Kapila (#275)
1 attachment(s)
RE: [HACKERS] logical decoding of two-phase transactions

Hello

On Sunday, March 21, 2021 4:37 PM Amit Kapila <amit.kapila16@gmail.com>

On Sat, Mar 20, 2021 at 10:09 AM Ajin Cherian <itsajin@gmail.com> wrote:

On Sat, Mar 20, 2021 at 1:35 AM Amit Kapila <amit.kapila16@gmail.com>

wrote:

On Fri, Mar 19, 2021 at 5:03 AM Ajin Cherian <itsajin@gmail.com> wrote:

Missed the patch - 0001, resending.

I have made miscellaneous changes in the patch which includes
improving comments, error messages, and miscellaneous coding
improvements. The most notable one is that we don't need an
additional parameter in walrcv_startstreaming, if the two_phase
option is set properly. My changes are in
v63-0002-Misc-changes-by-Amit, if you are fine with those, then
please merge them in the next version. I have omitted the dev-logs
patch but feel free to submit it. I have one
question:

I am fine with these changes. I see that Peter has already merged in these

changes.

I have further updated the patch to implement unique GID on the
subscriber-side as discussed in the nearby thread [1]. That requires some
changes in the test.

Thank you for your update. v65 didn't produce any failures during make check-world.

I've written additional tests for ALTER SUBSCRIPTION using refresh
for an enabled subscription with two_phase = on.
I wrote those as TAP tests because refresh requires an enabled subscription,
and to get a subscription enabled, we need to set connect = true as well.

TAP tests are for having connection between sub and pub,
and tests in subscription.sql are aligned with connect=false.

Just in case, I ran 020_twophase.pl with this patch 100 times based on v65 as well,
and it didn't cause any failure. Please have a look at the attached patch.

Best Regards,
Takamichi Osumi

Attachments:

0001-additional-tests-for-ALTER-SUBSCRIPTION.patch (application/octet-stream)
#277Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Amit Kapila (#275)
4 attachment(s)

On Sun, Mar 21, 2021 at 6:37 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

I have further updated the patch to implement unique GID on the
subscriber-side as discussed in the nearby thread [1]. That requires
some changes in the test. Additionally, I have updated some comments
and docs. Let me know what do you think about the changes?

Hi Amit.

PSA a small collection of feedback patches you can apply on top of the
patch v65-0001 if you decide they are OK.

(These are all I have found after a first pass over all the recent changes.)

------
Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

v65-0001-Feedback-parse_subscription_options-parens-not-n.patchapplication/octet-stream; name=v65-0001-Feedback-parse_subscription_options-parens-not-n.patch
v65-0002-Feedback-apply_handle_prepare-comment-typo.patchapplication/octet-stream; name=v65-0002-Feedback-apply_handle_prepare-comment-typo.patch
v65-0003-Feedback-apply_handle_begin_prepare-ineffective-.patchapplication/octet-stream; name=v65-0003-Feedback-apply_handle_begin_prepare-ineffective-.patch
v65-0004-Feedback-AllTablesyncsReady-function-simplified.patchapplication/octet-stream; name=v65-0004-Feedback-AllTablesyncsReady-function-simplified.patch
#278Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Peter Smith (#277)
2 attachment(s)

On Mon, Mar 22, 2021 at 6:27 PM Peter Smith <smithpb2250@gmail.com> wrote:

On Sun, Mar 21, 2021 at 6:37 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

I have further updated the patch to implement unique GID on the
subscriber-side as discussed in the nearby thread [1]. That requires
some changes in the test. Additionally, I have updated some comments
and docs. Let me know what you think about the changes.

Hi Amit.

PSA a small collection of feedback patches you can apply on top of the
patch v65-0001 if you decide they are OK.

(These are all I have found after a first pass over all the recent changes.)

I have spell-checked the content of v65-0001.

PSA a couple more feedback patches to apply on top of v65-0001 if you
decide they are ok.

----
Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

v65-0006-Feedback-worker.c-comment-wording.patchapplication/octet-stream; name=v65-0006-Feedback-worker.c-comment-wording.patch
v65-0005-Feedback-create_subscription-docs-typo.patchapplication/octet-stream; name=v65-0005-Feedback-create_subscription-docs-typo.patch
#279tanghy.fnst@fujitsu.com
tanghy.fnst@fujitsu.com
tanghy.fnst@fujitsu.com
In reply to: Amit Kapila (#275)
RE: [HACKERS] logical decoding of two-phase transactions

On Sunday, March 21, 2021 4:37 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

I have further updated the patch to implement unique GID on the
subscriber-side as discussed in the nearby thread [1].

I did some tests (cross-version & synchronous) on the latest patch set v65*; all tests passed. Here are the details, please take them as a reference.

Case | version of publisher | version of subscriber | two_phase option | synchronous | expect result | result
-------+------------------------+-------------------------+----------------------+---------------+-----------------+---------
1 | 13 | 14(patched) | on | no | same as case3 | ok
2 | 13 | 14(patched) | off | no | same as case3 | ok
3 | 13 | 14(unpatched) | not support | no | - | -
4 | 14(patched) | 13 | not support | no | same as case5 | ok
5 | 14(unpatched) | 13 | not support | no | - | -
6 | 13 | 14(patched) | on | yes | same as case8 | ok
7 | 13 | 14(patched) | off | yes | same as case8 | ok
8 | 13 | 14(unpatched) | not support | yes | - | -
9 | 14(patched) | 13 | not support | yes | same as case10 | ok
10 | 14(unpatched) | 13 | not support | yes | - | -

remark:
(1) cases 3, 5, 8, 10 are tested just for reference
(2) SQL executed in each case:
scenario 1: begin…commit
scenario 2: begin…prepare…commit

Regards,
Tang

#280Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Peter Smith (#278)
2 attachment(s)

On Mon, Mar 22, 2021 at 2:41 PM Peter Smith <smithpb2250@gmail.com> wrote:

On Mon, Mar 22, 2021 at 6:27 PM Peter Smith <smithpb2250@gmail.com> wrote:

Hi Amit.

PSA a small collection of feedback patches you can apply on top of the
patch v65-0001 if you decide they are OK.

(These are all I have found after a first pass over all the recent changes.)

I have spell-checked the content of v65-0001.

PSA a couple more feedback patches to apply on top of v65-0001 if you
decide they are ok.

I have incorporated all your changes and additionally made a few more
changes: (a) got rid of LogicalRepBeginPrepareData and instead used
LogicalRepPreparedTxnData, (b) made a number of changes in comments
and docs, (c) ran pgindent, (d) modified tests to use the standard
wait_for_catchup function and removed a few tests to reduce the time and
to keep the regression tests reliable.

--
With Regards,
Amit Kapila.

Attachments:

v66-0001-Add-support-for-prepared-transactions-to-built-i.patchapplication/octet-stream; name=v66-0001-Add-support-for-prepared-transactions-to-built-i.patch
v66-0002-Support-2PC-txn-subscriber-tests.patchapplication/octet-stream; name=v66-0002-Support-2PC-txn-subscriber-tests.patch
#281Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Amit Kapila (#280)
2 attachment(s)

On Mon, Mar 22, 2021 at 11:51 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

I have incorporated all your changes and additionally made few more
changes (a) got rid of LogicalRepBeginPrepareData and instead used
LogicalRepPreparedTxnData, (b) made a number of changes in comments
and docs, (c) ran pgindent, (d) modified tests to use standard
wait_for_catch function and removed few tests to reduce the time and
to keep regression tests reliable.

I checked all v65* / v66* differences and found only two trivial comment typos.

PSA patches to fix those.

----
Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

v66-0001-Feedback-apply_handle_rollback_prepared-typo-in-.patchapplication/octet-stream; name=v66-0001-Feedback-apply_handle_rollback_prepared-typo-in-.patch
v66-0002-Feedback-AlterSubscription-typo-in-comment.patchapplication/octet-stream; name=v66-0002-Feedback-AlterSubscription-typo-in-comment.patch
#282Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Peter Smith (#281)
1 attachment(s)

On Tue, Mar 23, 2021 at 10:44 AM Peter Smith <smithpb2250@gmail.com> wrote:

On Mon, Mar 22, 2021 at 11:51 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

I have incorporated all your changes and additionally made few more
changes (a) got rid of LogicalRepBeginPrepareData and instead used
LogicalRepPreparedTxnData, (b) made a number of changes in comments
and docs, (c) ran pgindent, (d) modified tests to use standard
wait_for_catch function and removed few tests to reduce the time and
to keep regression tests reliable.

I checked all v65* / v66* differences and found only two trivial comment typos.

PSA patches to fix those.

Hi Amit.

PSA a patch to allow the ALTER SUBSCRIPTION ... REFRESH PUBLICATION to
work when two-phase tristate is PENDING.

This is necessary for the pg_dump/pg_restore scenario, or for any
other use-case where the subscription might
start off having no tables.

Please apply this on top of your v66-0001 (after applying the other
Feedback patches I posted earlier today).

------
Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

v66-0003-Fix-to-allow-REFRESH-PUBLICATION-for-two_phase-P.patchapplication/octet-stream; name=v66-0003-Fix-to-allow-REFRESH-PUBLICATION-for-two_phase-P.patch
#283Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Peter Smith (#282)
1 attachment(s)

On Tue, Mar 23, 2021 at 9:01 PM Peter Smith <smithpb2250@gmail.com> wrote:

Please apply this on top of your v66-0001 (after applying the other
Feedback patches I posted earlier today).

Applied all the above patches and did a 5-cascade test setup, and all the
instances synced correctly. Test log attached.

regards,
Ajin Cherian
Fujitsu Australia

Attachments:

5 cascade setupapplication/octet-stream; name="5 cascade setup"
#284Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Ajin Cherian (#283)
1 attachment(s)

On Tue, Mar 23, 2021 at 9:49 PM Ajin Cherian <itsajin@gmail.com> wrote:

On Tue, Mar 23, 2021 at 9:01 PM Peter Smith <smithpb2250@gmail.com> wrote:

Please apply this on top of your v66-0001 (after applying the other
Feedback patches I posted earlier today).

Applied all the above patches and did a 5 cascade test set up and all the instances synced correctly. Test log attached.

FYI - Using the same v66* patch set (including yesterday's additional
patches) I have run the subscription TAP tests 020 and 021 in a loop x
150.

All passed ok. PSA the results file as evidence.

------
Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

results_020_021_x150.txttext/plain; charset=US-ASCII; name=results_020_021_x150.txt
#285Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Peter Smith (#282)
1 attachment(s)

On Tue, Mar 23, 2021 at 9:01 PM Peter Smith <smithpb2250@gmail.com> wrote:

On Tue, Mar 23, 2021 at 10:44 AM Peter Smith <smithpb2250@gmail.com> wrote:

On Mon, Mar 22, 2021 at 11:51 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

I have incorporated all your changes and additionally made few more
changes (a) got rid of LogicalRepBeginPrepareData and instead used
LogicalRepPreparedTxnData, (b) made a number of changes in comments
and docs, (c) ran pgindent, (d) modified tests to use standard
wait_for_catch function and removed few tests to reduce the time and
to keep regression tests reliable.

I checked all v65* / v66* differences and found only two trivial comment typos.

PSA patches to fix those.

Hi Amit.

PSA a patch to allow the ALTER SUBSCRIPTION ... REFRESH PUBLICATION to
work when two-phase tristate is PENDING.

This is necessary for the pg_dump/pg_restore scenario, or for any
other use-case where the subscription might
start off having no tables.

Please apply this on top of your v66-0001 (after applying the other
Feedback patches I posted earlier today).

PSA a small addition to the 66-0003 "Fix to allow REFRESH PUBLICATION"
patch posted yesterday.

This just updates the worker.c comment.

------
Kind Regards,
Peter Smith.
Fujitsu Australia.

Attachments:

v66-0004-Updated-worker.c-comment.patchapplication/octet-stream; name=v66-0004-Updated-worker.c-comment.patch
#286houzj.fnst@fujitsu.com
houzj.fnst@fujitsu.com
houzj.fnst@fujitsu.com
In reply to: Amit Kapila (#280)
RE: [HACKERS] logical decoding of two-phase transactions

I have incorporated all your changes and additionally made few more changes
(a) got rid of LogicalRepBeginPrepareData and instead used
LogicalRepPreparedTxnData, (b) made a number of changes in comments and
docs, (c) ran pgindent, (d) modified tests to use standard wait_for_catch
function and removed few tests to reduce the time and to keep regression
tests reliable.

Hi,

When reading the code, I found some comments related to the patch here.

* XXX Now, this can even lead to a deadlock if the prepare
* transaction is waiting to get it logically replicated for
* distributed 2PC. Currently, we don't have an in-core
* implementation of prepares for distributed 2PC but some
* out-of-core logical replication solution can have such an
* implementation. They need to inform users to not have locks
* on catalog tables in such transactions.
*/

Since we will have an in-core implementation of prepares, should we update the comments here?

Best regards,
houzj

#287Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Peter Smith (#282)

On Tue, Mar 23, 2021 at 3:31 PM Peter Smith <smithpb2250@gmail.com> wrote:

On Tue, Mar 23, 2021 at 10:44 AM Peter Smith <smithpb2250@gmail.com> wrote:

PSA patches to fix those.

Hi Amit.

PSA a patch to allow the ALTER SUBSCRIPTION ... REFRESH PUBLICATION to
work when two-phase tristate is PENDING.

This is necessary for the pg_dump/pg_restore scenario, or for any
other use-case where the subscription might
start off having no tables.

+ subrels = GetSubscriptionRelations(MySubscription->oid);
+
+ /*
+ * If there are no tables then leave the state as PENDING, which
+ * allows ALTER SUBSCRIPTION ... REFRESH PUBLICATION to work.
+ */
+ become_two_phase_enabled = list_length(subrels) > 0;

This code is similar in both the places it is used. Isn't it better to
move this inside AllTablesyncsReady? If required, we can then change
the name of the function.

--
With Regards,
Amit Kapila.

#288Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Amit Kapila (#287)
1 attachment(s)

On Wed, Mar 24, 2021 at 11:31 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Mar 23, 2021 at 3:31 PM Peter Smith <smithpb2250@gmail.com> wrote:

On Tue, Mar 23, 2021 at 10:44 AM Peter Smith <smithpb2250@gmail.com> wrote:

PSA patches to fix those.

Hi Amit.

PSA a patch to allow the ALTER SUBSCRIPTION ... REFRESH PUBLICATION to
work when two-phase tristate is PENDING.

This is necessary for the pg_dump/pg_restore scenario, or for any
other use-case where the subscription might
start off having no tables.

+ subrels = GetSubscriptionRelations(MySubscription->oid);
+
+ /*
+ * If there are no tables then leave the state as PENDING, which
+ * allows ALTER SUBSCRIPTION ... REFRESH PUBLICATION to work.
+ */
+ become_two_phase_enabled = list_length(subrels) > 0;

This code is similar at both the places it is used. Isn't it better to
move this inside AllTablesyncsReady and if required then we can change
the name of the function.

I agree. That way is better.

PSA a patch which changes the AllTableSyncsReady function to now
include the zero tables check.

(This patch is to be applied on top of all previous patches)

------
Kind Regards,
Peter Smith.
Fujitsu Australia.

Attachments:

v66-0005-Change-AllTablesyncsReady-to-return-false-when-0.patchapplication/octet-stream; name=v66-0005-Change-AllTablesyncsReady-to-return-false-when-0.patch
#289Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Peter Smith (#288)
1 attachment(s)

On Thu, Mar 25, 2021 at 1:40 PM Peter Smith <smithpb2250@gmail.com> wrote:

On Wed, Mar 24, 2021 at 11:31 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Mar 23, 2021 at 3:31 PM Peter Smith <smithpb2250@gmail.com> wrote:

On Tue, Mar 23, 2021 at 10:44 AM Peter Smith <smithpb2250@gmail.com> wrote:

PSA patches to fix those.

Hi Amit.

PSA a patch to allow the ALTER SUBSCRIPTION ... REFRESH PUBLICATION to
work when two-phase tristate is PENDING.

This is necessary for the pg_dump/pg_restore scenario, or for any
other use-case where the subscription might
start off having no tables.

+ subrels = GetSubscriptionRelations(MySubscription->oid);
+
+ /*
+ * If there are no tables then leave the state as PENDING, which
+ * allows ALTER SUBSCRIPTION ... REFRESH PUBLICATION to work.
+ */
+ become_two_phase_enabled = list_length(subrels) > 0;

This code is similar at both the places it is used. Isn't it better to
move this inside AllTablesyncsReady and if required then we can change
the name of the function.

I agree. That way is better.

PSA a patch which changes the AllTableSyncsReady function to now
include the zero tables check.

(This patch is to be applied on top of all previous patches)

------

PSA a patch which modifies the FetchTableStates function to use a more
efficient way of testing if the subscription has any tables or not.

(This patch is to be applied on top of all previous v66* patches posted)

------
Kind Regards,
Peter Smith.
Fujitsu Australia.

Attachments:

v66-0006-FetchTableStates-performance-improvements.patchapplication/octet-stream; name=v66-0006-FetchTableStates-performance-improvements.patch
#290Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Peter Smith (#289)
2 attachment(s)

On Thu, Mar 25, 2021 at 12:39 PM Peter Smith <smithpb2250@gmail.com> wrote:

PSA a patch which modifies the FetchTableStates function to use a more
efficient way of testing if the subscription has any tables or not.

(This patch is to be applied on top of all previous v66* patches posted)

I have incorporated all your incremental patches and addressed the comments
raised by Hou-San in the attached patch.

--
With Regards,
Amit Kapila.

Attachments:

v67-0001-Add-support-for-prepared-transactions-to-built-i.patchapplication/octet-stream; name=v67-0001-Add-support-for-prepared-transactions-to-built-i.patch
v67-0002-Support-2PC-txn-subscriber-tests.patchapplication/octet-stream; name=v67-0002-Support-2PC-txn-subscriber-tests.patch
#291Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: houzj.fnst@fujitsu.com (#286)

On Wed, Mar 24, 2021 at 3:59 PM houzj.fnst@fujitsu.com
<houzj.fnst@fujitsu.com> wrote:

I have incorporated all your changes and additionally made few more changes
(a) got rid of LogicalRepBeginPrepareData and instead used
LogicalRepPreparedTxnData, (b) made a number of changes in comments and
docs, (c) ran pgindent, (d) modified tests to use standard wait_for_catch
function and removed few tests to reduce the time and to keep regression
tests reliable.

Hi,

When reading the code, I found some comments related to the patch here.

* XXX Now, this can even lead to a deadlock if the prepare
* transaction is waiting to get it logically replicated for
* distributed 2PC. Currently, we don't have an in-core
* implementation of prepares for distributed 2PC but some
* out-of-core logical replication solution can have such an
* implementation. They need to inform users to not have locks
* on catalog tables in such transactions.
*/

Since we will have in-core implementation of prepares, should we update the comments here ?

Fixed this in the latest patch posted by me. I have additionally
updated the docs to reflect the same.

--
With Regards,
Amit Kapila.

#292vignesh C
vignesh C
vignesh21@gmail.com
In reply to: Amit Kapila (#275)
1 attachment(s)

On Sun, Mar 21, 2021 at 1:07 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Sat, Mar 20, 2021 at 10:09 AM Ajin Cherian <itsajin@gmail.com> wrote:

On Sat, Mar 20, 2021 at 1:35 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Fri, Mar 19, 2021 at 5:03 AM Ajin Cherian <itsajin@gmail.com> wrote:

Missed the patch - 0001, resending.

I have made miscellaneous changes in the patch which includes
improving comments, error messages, and miscellaneous coding
improvements. The most notable one is that we don't need an additional
parameter in walrcv_startstreaming, if the two_phase option is set
properly. My changes are in v63-0002-Misc-changes-by-Amit, if you are
fine with those, then please merge them in the next version. I have
omitted the dev-logs patch but feel free to submit it. I have one
question:

I am fine with these changes. I see that Peter has already merged in these changes.

I have further updated the patch to implement unique GID on the
subscriber-side as discussed in the nearby thread [1]. That requires
some changes in the test. Additionally, I have updated some comments
and docs. Let me know what you think about the changes.

+static void
+TwoPhaseTransactionGid(RepOriginId originid, TransactionId xid,
+                                          char *gid, int szgid)
+{
+       /* Origin and Transaction ids must be valid */
+       Assert(originid != InvalidRepOriginId);
+       Assert(TransactionIdIsValid(xid));
+
+       snprintf(gid, szgid, "pg_%u_%u", originid, xid);
+}

I found one issue in the current mechanism that we use to generate the
GIDs. In one of the scenarios it will generate the same GID twice; the
steps for it are given below:
---- setup 2 publishers and one subscriber with synchronous_standby_names
prepare txn 't1' on publisher1 (This prepared txn is prepared as
pg_1_542 on subscriber)
drop subscription of publisher1
create subscription subscriber for publisher2 (We have changed the
subscription to subscribe to publisher2 which was earlier subscribing
to publisher1)
prepare txn 't2' on publisher2 (This prepared txn also uses pg_1_542
on subscriber even though user has given a different gid)

This prepared txn keeps waiting to complete on the subscriber,
but it never completes. Here the user uses a different GID for the prepared
transaction, but it ends up using the same GID at the subscriber. The
subscriber keeps failing with:
2021-03-22 10:14:57.859 IST [73959] ERROR: transaction identifier
"pg_1_542" is already in use
2021-03-22 10:14:57.860 IST [73868] LOG: background worker "logical
replication worker" (PID 73959) exited with exit code 1

Attached file has the steps for it.
This might be a rare scenario, and may or may not be a real user scenario.
Should we handle it?

Regards,
Vignesh

Attachments:

possible_bug.shapplication/x-shellscript; name=possible_bug.sh
#293Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#292)
2 attachment(s)

Please find attached the latest patch set v68*

Differences from v67* are:

* Rebased to HEAD @ today.

* v68 fixes an issue reported by Vignesh [1] where a scenario was
found which was still able to cause a generated GID clash. Using
Vignesh's test script I could reproduce the problem exactly as
described. The fix makes the GID unique by including the subid. Now
the same script runs to normal completion and produces good/expected
output:

 transaction |       gid        |           prepared            |  owner   | database
-------------+------------------+-------------------------------+----------+----------
         547 | pg_gid_16389_543 | 2021-03-30 10:32:36.87207+11  | postgres | postgres
         555 | pg_gid_16390_543 | 2021-03-30 10:32:48.087771+11 | postgres | postgres
(2 rows)

----
[1]: /messages/by-id/CALDaNm2ZnJeG23bE+gEOQEmXo8N+fs2g4=xuH2u6nNcX0s9Jjg@mail.gmail.com

Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

v68-0001-Add-support-for-prepared-transactions-to-built-i.patchapplication/octet-stream; name=v68-0001-Add-support-for-prepared-transactions-to-built-i.patch
v68-0002-Support-2PC-txn-subscriber-tests.patchapplication/octet-stream; name=v68-0002-Support-2PC-txn-subscriber-tests.patch
#294vignesh C
vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#293)

On Tue, Mar 30, 2021 at 5:34 AM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v68*

Differences from v67* are:

* Rebased to HEAD @ today.

* v68 fixes an issue reported by Vignesh [1] where a scenario was
found which still was able to cause a generated GID clash. Using
Vignesh's test script I could reproduce the problem exactly as
described. The fix makes the GID unique by including the subid. Now
the same script runs to normal completion and produces good/expected
output:

 transaction |       gid        |           prepared            |  owner   | database
-------------+------------------+-------------------------------+----------+----------
         547 | pg_gid_16389_543 | 2021-03-30 10:32:36.87207+11  | postgres | postgres
         555 | pg_gid_16390_543 | 2021-03-30 10:32:48.087771+11 | postgres | postgres
(2 rows)

Thanks for the patch with the fix, the fix solves the issue reported.

Regards,
Vignesh

#295Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Peter Smith (#293)

On Tue, Mar 30, 2021 at 5:34 AM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v68*

I think this patch is in much better shape than it was a few versions
earlier, but I feel some more work and testing is still required. We
can try to make it work with the streaming option and do something
about empty prepare transactions to reduce the need for users to set a
much higher value for max_prepared_xacts on subscribers. So, I propose
to move it to the next CF. What do you think?

--
With Regards,
Amit Kapila.

#296Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Amit Kapila (#295)

On Thu, Apr 1, 2021 at 2:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Mar 30, 2021 at 5:34 AM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v68*

I think this patch is in much better shape than it was few versions
earlier but I feel still some more work and testing is required. We
can try to make it work with the streaming option and do something
about empty prepare transactions to reduce the need for users to set a
much higher value for max_prepared_xacts on subscribers. So, I propose
to move it to the next CF, what do you think?

I agree.

regards,
Ajin Cherian
Fujitsu Australia

#297vignesh C
vignesh C
vignesh21@gmail.com
In reply to: Amit Kapila (#295)

On Thu, Apr 1, 2021 at 8:59 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Mar 30, 2021 at 5:34 AM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v68*

I think this patch is in much better shape than it was few versions
earlier but I feel still some more work and testing is required. We
can try to make it work with the streaming option and do something
about empty prepare transactions to reduce the need for users to set a
much higher value for max_prepared_xacts on subscribers. So, I propose
to move it to the next CF, what do you think?

+1 for moving it to the next PG version.

Regards,
Vignesh

#298Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Ajin Cherian (#296)

On Thu, Apr 1, 2021 at 4:58 PM Ajin Cherian <itsajin@gmail.com> wrote:

On Thu, Apr 1, 2021 at 2:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Mar 30, 2021 at 5:34 AM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v68*

I think this patch is in much better shape than it was few versions
earlier but I feel still some more work and testing is required. We
can try to make it work with the streaming option and do something
about empty prepare transactions to reduce the need for users to set a
much higher value for max_prepared_xacts on subscribers. So, I propose
to move it to the next CF, what do you think?

I agree.

OK, done. Moved to next CF here: https://commitfest.postgresql.org/33/2914/

------
Kind Regards,
Peter Smith.
Fujitsu Australia.

#299Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Peter Smith (#293)
1 attachment(s)

Please find attached the latest patch set v69*

Differences from v68* are:

* Rebased to HEAD @ yesterday.
There were some impacts caused by recently pushed patches [1] [2]

* The stream/prepare functionality and tests have been restored to be
the same as they were in v48 [3].
Previously, this code had been removed back in v49 [4] due to
incompatibilities with the (now obsolete) psf design.

* TAP tests are now co-located in the same patch as the code they are testing.

----
[1]: https://github.com/postgres/postgres/commit/531737ddad214cb8a675953208e2f3a6b1be122b
[2]: https://github.com/postgres/postgres/commit/ac4645c0157fc5fcef0af8ff571512aa284a2cec
[3]: /messages/by-id/CAHut+Psr8f1tUttndgnkK_=a7w=hsomw16SEOn6U68jSBKL9SQ@mail.gmail.com
[4]: /messages/by-id/CAFPTHDZduc2fDzqd_L4vPmA2R+-e8nEbau9HseHHi82w=p-uvQ@mail.gmail.com

Kind Regards,
Peter Smith.
Fujitsu Australia


On Tue, Mar 30, 2021 at 11:03 AM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v68*

Differences from v67* are:

* Rebased to HEAD @ today.

* v68 fixes an issue reported by Vignesh [1] where a scenario was
found which still was able to cause a generated GID clash. Using
Vignesh's test script I could reproduce the problem exactly as
described. The fix makes the GID unique by including the subid. Now
the same script runs to normal completion and produces good/expected
output:

 transaction |       gid        |           prepared            |  owner   | database
-------------+------------------+-------------------------------+----------+----------
         547 | pg_gid_16389_543 | 2021-03-30 10:32:36.87207+11  | postgres | postgres
         555 | pg_gid_16390_543 | 2021-03-30 10:32:48.087771+11 | postgres | postgres
(2 rows)

----
[1] /messages/by-id/CALDaNm2ZnJeG23bE+gEOQEmXo8N+fs2g4=xuH2u6nNcX0s9Jjg@mail.gmail.com

Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

v69-0001-Add-support-for-prepared-transactions-to-built-i.patchapplication/octet-stream; name=v69-0001-Add-support-for-prepared-transactions-to-built-i.patch
#300Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Peter Smith (#299)
1 attachment(s)

Please find attached the latest patch set v70*

Differences from v69* are:

* Rebased to HEAD @ today
Unfortunately, the v69 patch was broken due to a recent push [1]

----
[1]: https://github.com/postgres/postgres/commit/82ed7748b710e3ddce3f7ebc74af80fe4869492f

Kind Regards,
Peter Smith.
Fujitsu Australia


On Wed, Apr 7, 2021 at 10:25 AM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v69*

Differences from v68* are:

* Rebased to HEAD @ yesterday.
There was some impacts caused by recently pushed patches [1] [2]

* The stream/prepare functionality and tests have been restored to be
the same as they were in v48 [3].
Previously, this code had been removed back in v49 [4] due to
incompatibilities with the (now obsolete) psf design.

* TAP tests are now co-located in the same patch as the code they are testing.

----
[1] https://github.com/postgres/postgres/commit/531737ddad214cb8a675953208e2f3a6b1be122b
[2] https://github.com/postgres/postgres/commit/ac4645c0157fc5fcef0af8ff571512aa284a2cec
[3] /messages/by-id/CAHut+Psr8f1tUttndgnkK_=a7w=hsomw16SEOn6U68jSBKL9SQ@mail.gmail.com
[4] /messages/by-id/CAFPTHDZduc2fDzqd_L4vPmA2R+-e8nEbau9HseHHi82w=p-uvQ@mail.gmail.com

Kind Regards,
Peter Smith.
Fujitsu Australia

On Tue, Mar 30, 2021 at 11:03 AM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v68*

Differences from v67* are:

* Rebased to HEAD @ today.

* v68 fixes an issue reported by Vignesh [1] where a scenario was
found which still was able to cause a generated GID clash. Using
Vignesh's test script I could reproduce the problem exactly as
described. The fix makes the GID unique by including the subid. Now
the same script runs to normal completion and produces good/expected
output:

 transaction |       gid        |           prepared            |  owner   | database
-------------+------------------+-------------------------------+----------+----------
         547 | pg_gid_16389_543 | 2021-03-30 10:32:36.87207+11  | postgres | postgres
         555 | pg_gid_16390_543 | 2021-03-30 10:32:48.087771+11 | postgres | postgres
(2 rows)

----
[1] /messages/by-id/CALDaNm2ZnJeG23bE+gEOQEmXo8N+fs2g4=xuH2u6nNcX0s9Jjg@mail.gmail.com

Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

v70-0001-Add-support-for-prepared-transactions-to-built-i.patchapplication/octet-stream; name=v70-0001-Add-support-for-prepared-transactions-to-built-i.patch
#301Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Peter Smith (#300)
2 attachment(s)

Please find attached the latest patch set v71*

Differences from v70* are:

* Rebased to HEAD @ yesterday.

* Functionality of v71 is identical to v70, but the patch has been
split into two parts
0001 - 2PC core patch
0002 - adds 2PC support for "streaming" transactions

----
Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

v71-0001-Add-support-for-prepared-transactions-to-built-i.patchapplication/octet-stream; name=v71-0001-Add-support-for-prepared-transactions-to-built-i.patch
v71-0002-Add-prepare-API-support-for-streaming-transactio.patchapplication/octet-stream; name=v71-0002-Add-prepare-API-support-for-streaming-transactio.patch
#302Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Amit Kapila (#150)

On Mon, Dec 14, 2020 at 8:27 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

2.
+ /*
+ * Flags are determined from the state of the transaction. We know we
+ * always get PREPARE first and then [COMMIT|ROLLBACK] PREPARED, so if
+ * it's already marked as committed then it has to be COMMIT PREPARED (and
+ * likewise for abort / ROLLBACK PREPARED).
+ */
+ if (rbtxn_commit_prepared(txn))
+ flags = LOGICALREP_IS_COMMIT_PREPARED;
+ else if (rbtxn_rollback_prepared(txn))
+ flags = LOGICALREP_IS_ROLLBACK_PREPARED;
+ else
+ flags = LOGICALREP_IS_PREPARE;

I don't like clubbing three different operations under one message
LOGICAL_REP_MSG_PREPARE. It looks awkward to use new flags
RBTXN_COMMIT_PREPARED and RBTXN_ROLLBACK_PREPARED in ReorderBuffer so
that we can recognize these operations in corresponding callbacks. I
think setting any flag in ReorderBuffer should not dictate the
behavior in callbacks. Then also there are few things that are not
common to those APIs like the patch has an Assert to say that the txn
is marked with prepare flag for all three operations which I think is
not true for Rollback Prepared after the restart. We don't ensure to
set the Prepare flag if the Rollback Prepare happens after the
restart. Then, we have to introduce separate flags to distinguish
prepare/commit prepared/rollback prepared to distinguish multiple
operations sent as protocol messages. Also, all these operations are
mutually exclusive so it will be better to send separate messages for
each of these and I have changed it accordingly in the attached patch.

While looking at the two-phase protocol messages (with a view to
documenting them) I noticed that the messages for
LOGICAL_REP_MSG_PREPARE, LOGICAL_REP_MSG_COMMIT_PREPARED,
LOGICAL_REP_MSG_ROLLBACK_PREPARED are all sending and receiving flag
bytes which *always* have the value 0.

----------
e.g.
uint8 flags = 0;
pq_sendbyte(out, flags);

and
/* read flags */
uint8 flags = pq_getmsgbyte(in);
if (flags != 0)
elog(ERROR, "unrecognized flags %u in commit prepare message", flags);
----------

I think this patch version v31 is where the flags became redundant.

Is there some reason why these unused flags still remain in the protocol code?

Do you have any objection to me removing them?
Otherwise, it might seem strange to document a flag that has no function.

------
Kind Regards,
Peter Smith.
Fujitsu Australia

#303Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Peter Smith (#302)

On Fri, Apr 9, 2021 at 12:33 PM Peter Smith <smithpb2250@gmail.com> wrote:

On Mon, Dec 14, 2020 at 8:27 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

2.
+ /*
+ * Flags are determined from the state of the transaction. We know we
+ * always get PREPARE first and then [COMMIT|ROLLBACK] PREPARED, so if
+ * it's already marked as committed then it has to be COMMIT PREPARED (and
+ * likewise for abort / ROLLBACK PREPARED).
+ */
+ if (rbtxn_commit_prepared(txn))
+ flags = LOGICALREP_IS_COMMIT_PREPARED;
+ else if (rbtxn_rollback_prepared(txn))
+ flags = LOGICALREP_IS_ROLLBACK_PREPARED;
+ else
+ flags = LOGICALREP_IS_PREPARE;

I don't like clubbing three different operations under one message
LOGICAL_REP_MSG_PREPARE. It looks awkward to use new flags
RBTXN_COMMIT_PREPARED and RBTXN_ROLLBACK_PREPARED in ReorderBuffer so
that we can recognize these operations in corresponding callbacks. I
think setting any flag in ReorderBuffer should not dictate the
behavior in callbacks. Then also there are few things that are not
common to those APIs like the patch has an Assert to say that the txn
is marked with prepare flag for all three operations which I think is
not true for Rollback Prepared after the restart. We don't ensure to
set the Prepare flag if the Rollback Prepare happens after the
restart. Then, we have to introduce separate flags to distinguish
prepare/commit prepared/rollback prepared to distinguish multiple
operations sent as protocol messages. Also, all these operations are
mutually exclusive so it will be better to send separate messages for
each of these and I have changed it accordingly in the attached patch.

While looking at the two-phase protocol messages (with a view to
documenting them) I noticed that the messages for
LOGICAL_REP_MSG_PREPARE, LOGICAL_REP_MSG_COMMIT_PREPARED,
LOGICAL_REP_MSG_ROLLBACK_PREPARED are all sending and receiving flag
bytes which *always* have the value 0.

----------
e.g.
uint8 flags = 0;
pq_sendbyte(out, flags);

and
/* read flags */
uint8 flags = pq_getmsgbyte(in);
if (flags != 0)
elog(ERROR, "unrecognized flags %u in commit prepare message", flags);
----------

I think this patch version v31 is where the flags became redundant.

I think this has been kept for future use similar to how we have in
logicalrep_write_commit. So, I think we can keep them unused for now.
We can document it similar to the commit message ('C') [1].

[1]: https://www.postgresql.org/docs/devel/protocol-logicalrep-message-formats.html

--
With Regards,
Amit Kapila.

#304Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Amit Kapila (#303)

On Fri, Apr 9, 2021 at 6:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Fri, Apr 9, 2021 at 12:33 PM Peter Smith <smithpb2250@gmail.com> wrote:

On Mon, Dec 14, 2020 at 8:27 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

2.
+ /*
+ * Flags are determined from the state of the transaction. We know we
+ * always get PREPARE first and then [COMMIT|ROLLBACK] PREPARED, so if
+ * it's already marked as committed then it has to be COMMIT PREPARED (and
+ * likewise for abort / ROLLBACK PREPARED).
+ */
+ if (rbtxn_commit_prepared(txn))
+ flags = LOGICALREP_IS_COMMIT_PREPARED;
+ else if (rbtxn_rollback_prepared(txn))
+ flags = LOGICALREP_IS_ROLLBACK_PREPARED;
+ else
+ flags = LOGICALREP_IS_PREPARE;

I don't like clubbing three different operations under one message
LOGICAL_REP_MSG_PREPARE. It looks awkward to use new flags
RBTXN_COMMIT_PREPARED and RBTXN_ROLLBACK_PREPARED in ReorderBuffer so
that we can recognize these operations in corresponding callbacks. I
think setting any flag in ReorderBuffer should not dictate the
behavior in callbacks. Then also there are few things that are not
common to those APIs like the patch has an Assert to say that the txn
is marked with prepare flag for all three operations which I think is
not true for Rollback Prepared after the restart. We don't ensure to
set the Prepare flag if the Rollback Prepare happens after the
restart. Then, we have to introduce separate flags to distinguish
prepare/commit prepared/rollback prepared to distinguish multiple
operations sent as protocol messages. Also, all these operations are
mutually exclusive so it will be better to send separate messages for
each of these and I have changed it accordingly in the attached patch.

While looking at the two-phase protocol messages (with a view to
documenting them) I noticed that the messages for
LOGICAL_REP_MSG_PREPARE, LOGICAL_REP_MSG_COMMIT_PREPARED,
LOGICAL_REP_MSG_ROLLBACK_PREPARED are all sending and receiving flag
bytes which *always* have the value 0.

----------
e.g.
uint8 flags = 0;
pq_sendbyte(out, flags);

and
/* read flags */
uint8 flags = pq_getmsgbyte(in);
if (flags != 0)
elog(ERROR, "unrecognized flags %u in commit prepare message", flags);
----------

I think this patch version v31 is where the flags became redundant.

I think this has been kept for future use similar to how we have in
logicalrep_write_commit. So, I think we can keep them unused for now.
We can document it similar to the commit message ('C') [1].

[1] - https://www.postgresql.org/docs/devel/protocol-logicalrep-message-formats.html

Yeah, we can do that. And if nobody else gives feedback about this
then I will do exactly as you suggested.

But I don't understand why we are even trying to "future proof" the
protocol by keeping redundant flags lying around on the off-chance
that maybe one day they could be useful.

Isn't that what the protocol version number is for? e.g. If there did
become some future need for some flags then just add them at that time
and bump the protocol version.

And, even if we wanted to, I think we cannot use these existing flags
in future without bumping the protocol version, because the current
protocol docs say that flag value must be zero!
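A toy sketch of this compatibility argument (invented names, not PostgreSQL code): because the documented contract for protocol version 3 is "the flag byte must be zero", a receiver that enforces the contract cannot accept new flag bits without a version bump anyway, which is exactly why the reserved bits buy nothing.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Toy model only: at protocol version 3 the prepare-related flag byte
 * must be zero, so any future use of those bits necessarily comes with
 * a protocol version bump.
 */
#define TOY_PROTO_V3 3

static bool
toy_flags_valid(int proto_version, uint8_t flags)
{
	if (proto_version <= TOY_PROTO_V3)
		return flags == 0;		/* docs say: must be zero */
	return true;				/* a later version may define bits */
}
```

Under this model, a v3 receiver rejects any nonzero flags, and only a hypothetical v4 could start interpreting them.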

------
Kind Regards,
Peter Smith.
Fujitsu Australia

#305Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Peter Smith (#301)
2 attachment(s)

Please find attached the latest patch set v72*

Differences from v71* are:

* Rebased to HEAD @ yesterday.

* The Replication protocol version requirement for two-phase message
support is bumped to version 3

* Documentation of protocol messages has been updated for two-phase
messages, similar to [1]

----
[1]: https://github.com/postgres/postgres/commit/15c1a9d9cb7604472d4823f48b64cdc02c441194

Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

v72-0001-Add-support-for-prepared-transactions-to-built-i.patch
v72-0002-Add-prepare-API-support-for-streaming-transactio.patch
#306Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Peter Smith (#305)
2 attachment(s)

Please find attached the latest patch set v73*

Differences from v72* are:

* Rebased to HEAD @ today (required because v72-0001 no longer applied cleanly)

* Minor documentation correction for protocol messages for Commit Prepared ('K')

* Non-functional code tidy (mostly proto.c) to reduce overloading
different meanings to same member names for prepare/commit times.

----
Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

v73-0002-Add-prepare-API-support-for-streaming-transactio.patch
v73-0001-Add-support-for-prepared-transactions-to-built-i.patch
#307Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Peter Smith (#306)
2 attachment(s)

On Tue, Apr 20, 2021 at 3:45 PM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v73*

Differences from v72* are:

* Rebased to HEAD @ today (required because v72-0001 no longer applied cleanly)

* Minor documentation correction for protocol messages for Commit Prepared ('K')

* Non-functional code tidy (mostly proto.c) to reduce overloading
different meanings to same member names for prepare/commit times.

Please find attached a re-posting of patch set v73*

This is the same as yesterday's v73 but with a contrib module compile
error fixed.

(I have confirmed make check-world is OK for this patch set)

------
Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

v73-0001-Add-support-for-prepared-transactions-to-built-i.patch
v73-0002-Add-prepare-API-support-for-streaming-transactio.patch
#308vignesh C
vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#307)

On Wed, Apr 21, 2021 at 12:13 PM Peter Smith <smithpb2250@gmail.com> wrote:

On Tue, Apr 20, 2021 at 3:45 PM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v73*

Differences from v72* are:

* Rebased to HEAD @ today (required because v72-0001 no longer applied cleanly)

* Minor documentation correction for protocol messages for Commit Prepared ('K')

* Non-functional code tidy (mostly proto.c) to reduce overloading
different meanings to same member names for prepare/commit times.

Please find attached a re-posting of patch set v73*

This is the same as yesterday's v73 but with a contrib module compile
error fixed.

Thanks for the updated patch; a few comments:
1) Should "final_lsn not set in begin message" be "prepare_lsn not set
in begin message"
+logicalrep_read_begin_prepare(StringInfo in, LogicalRepPreparedTxnData *begin_data)
+{
+       /* read fields */
+       begin_data->prepare_lsn = pq_getmsgint64(in);
+       if (begin_data->prepare_lsn == InvalidXLogRecPtr)
+               elog(ERROR, "final_lsn not set in begin message");
2) Should "These commands" be "ALTER SUBSCRIPTION ... REFRESH
PUBLICATION and ALTER SUBSCRIPTION ... SET/ADD PUBLICATION ..." as
copy_data cannot be specified with alter subscription .. drop
publication.
+   These commands also cannot be executed with <literal>copy_data =
true</literal>
+   when the subscription has <literal>two_phase</literal> commit enabled. See
+   column <literal>subtwophasestate</literal> of
+   <xref linkend="catalog-pg-subscription"/> to know the actual
two-phase state.
3) <term>Byte1('A')</term> should be <term>Byte1('r')</term> as we
have defined LOGICAL_REP_MSG_ROLLBACK_PREPARED as r.
+<term>Rollback Prepared</term>
+<listitem>
+<para>
+
+<variablelist>
+
+<varlistentry>
+<term>Byte1('A')</term>
+<listitem><para>
+                Identifies this message as the rollback of a
two-phase transaction message.
+</para></listitem>
+</varlistentry>
4) Should "Check if the prepared transaction with the given GID and
lsn is around." be
"Check if the prepared transaction with the given GID, lsn & timestamp
is around."
+/*
+ * LookupGXact
+ *             Check if the prepared transaction with the given GID and lsn is around.
+ *
+ * Note that we always compare with the LSN where prepare ends because that is
+ * what is stored as origin_lsn in the 2PC file.
+ *
+ * This function is primarily used to check if the prepared transaction
+ * received from the upstream (remote node) already exists. Checking only GID
+ * is not sufficient because a different prepared xact with the same GID can
+ * exist on the same node. So, we are ensuring to match origin_lsn and
+ * origin_timestamp of prepared xact to avoid the possibility of a match of
+ * prepared xact from two different nodes.
+ */
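The rule that comment describes can be modeled with a toy lookup (a simplified sketch, not the actual TwoPhaseState search): matching on GID alone would conflate two prepared transactions from different origin nodes that happen to share a GID, whereas the (gid, origin_lsn, origin_timestamp) triple distinguishes them.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Toy model of a prepared transaction entry; field names are invented. */
typedef struct ToyGXact
{
	const char *gid;
	uint64_t	origin_lsn;			/* LSN where the prepare record ends */
	int64_t		origin_timestamp;
} ToyGXact;

/* Match only when GID, origin LSN, and origin timestamp all agree. */
static bool
toy_lookup_gxact(const ToyGXact *xacts, int n,
				 const char *gid, uint64_t lsn, int64_t ts)
{
	for (int i = 0; i < n; i++)
	{
		if (strcmp(xacts[i].gid, gid) == 0 &&
			xacts[i].origin_lsn == lsn &&
			xacts[i].origin_timestamp == ts)
			return true;
	}
	return false;
}
```

With two entries sharing gid "gid1" but different LSNs, only the exact triple matches, which is the collision the real LookupGXact guards against.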
5) Should we change "The LSN of the prepare." to "The LSN of the begin prepare."
+<term>Begin Prepare</term>
+<listitem>
+<para>
+
+<variablelist>
+
+<varlistentry>
+<term>Byte1('b')</term>
+<listitem><para>
+                Identifies this message as the beginning of a
two-phase transaction message.
+</para></listitem>
+</varlistentry>
+
+<varlistentry>
+<term>Int64</term>
+<listitem><para>
+                The LSN of the prepare.
+</para></listitem>
+</varlistentry>

6) Similarly in cases of "Commit Prepared" and "Rollback Prepared"

7) No need to initialize has_subrels as we will always assign the
value returned by HeapTupleIsValid
+HasSubscriptionRelations(Oid subid)
+{
+       Relation        rel;
+       int                     nkeys = 0;
+       ScanKeyData skey[2];
+       SysScanDesc scan;
+       bool            has_subrels = false;
+
+       rel = table_open(SubscriptionRelRelationId, AccessShareLock);
8) We could include errhint, like errhint("Option \"two_phase\"
specified more than once") to specify a more informative error
message.
+               else if (strcmp(defel->defname, "two_phase") == 0)
+               {
+                       if (two_phase_option_given)
+                               ereport(ERROR,
+                                               (errcode(ERRCODE_SYNTAX_ERROR),
+                                                errmsg("conflicting or redundant options")));
+                       two_phase_option_given = true;
+
+                       data->two_phase = defGetBoolean(defel);
+               }
9) We have a lot of function parameters for
parse_subscription_options, should we change it to struct?
@@ -69,7 +69,8 @@ parse_subscription_options(List *options,
                                                   char **synchronous_commit,
                                                   bool *refresh,
                                                   bool *binary_given,
bool *binary,
-                                                  bool *streaming_given, bool *streaming)
+                                                  bool *streaming_given, bool *streaming,
+                                                  bool *twophase_given, bool *twophase)
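A struct-based interface along the lines suggested might look like the sketch below (all names are illustrative, not taken from the patch): each option carries its value plus a *_given flag, so adding an option like two_phase extends the struct instead of the parameter list.

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/*
 * Hypothetical option container replacing the long parameter list of
 * parse_subscription_options.  Field names here are illustrative only.
 */
typedef struct SubOpts
{
	bool		binary_given;
	bool		binary;
	bool		streaming_given;
	bool		streaming;
	bool		twophase_given;
	bool		twophase;
} SubOpts;

/* Callers start from a zeroed struct; nothing is "given" yet. */
static void
sub_opts_init(SubOpts *opts)
{
	memset(opts, 0, sizeof(SubOpts));
}

/* Parsing one option sets both the value and its *_given flag. */
static void
sub_opts_set_twophase(SubOpts *opts, bool value)
{
	opts->twophase_given = true;
	opts->twophase = value;
}
```

With this shape, parse_subscription_options would take a single SubOpts out-parameter, and new options only touch the struct definition and the parsing code, not every caller's argument list.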
10) Should we change " errhint("Use ALTER SUBSCRIPTION ...SET
PUBLICATION with refresh = false, or with copy_data = false, or use
DROP/CREATE SUBSCRIPTION.")" to  "errhint("Use ALTER SUBSCRIPTION
...SET/ADD PUBLICATION with refresh = false, or with copy_data =
false.")" as we don't support copy_data in ALTER subscription ... DROP
publication.
+                                       /*
+                                        * See ALTER_SUBSCRIPTION_REFRESH for details why this is
+                                        * not allowed.
+                                        */
+                                       if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && copy_data)
+                                               ereport(ERROR,
+                                                               (errcode(ERRCODE_SYNTAX_ERROR),
+                                                                errmsg("ALTER SUBSCRIPTION with refresh and copy_data is not allowed when two_phase is enabled"),
+                                                                errhint("Use ALTER SUBSCRIPTION ...SET PUBLICATION with refresh = false, or with copy_data = false"
+                                                                                ", or use DROP/CREATE SUBSCRIPTION.")));
11) Should 140000 be 150000 as this feature will be committed in PG15?
+               if (options->proto.logical.twophase &&
+                       PQserverVersion(conn->streamConn) >= 140000)
+                       appendStringInfoString(&cmd, ", two_phase 'on'");
12) should we change "begin message" to "begin prepare message"
+       if (begin_data->prepare_lsn == InvalidXLogRecPtr)
+               elog(ERROR, "final_lsn not set in begin message");
+       begin_data->end_lsn = pq_getmsgint64(in);
+       if (begin_data->end_lsn == InvalidXLogRecPtr)
+               elog(ERROR, "end_lsn not set in begin message");
13) should we change "commit prepare message" to "commit prepared message"
+       if (flags != 0)
+               elog(ERROR, "unrecognized flags %u in commit prepare message", flags);
+
+       /* read fields */
+       prepare_data->commit_lsn = pq_getmsgint64(in);
+       if (prepare_data->commit_lsn == InvalidXLogRecPtr)
+               elog(ERROR, "commit_lsn is not set in commit prepared message");
+       prepare_data->end_lsn = pq_getmsgint64(in);
+       if (prepare_data->end_lsn == InvalidXLogRecPtr)
+               elog(ERROR, "end_lsn is not set in commit prepared message");
+       prepare_data->commit_time = pq_getmsgint64(in);
14) should we change "commit prepared message" to "rollback prepared message"
+void
+logicalrep_read_rollback_prepared(StringInfo in,
+                                  LogicalRepRollbackPreparedTxnData *rollback_data)
+{
+       /* read flags */
+       uint8           flags = pq_getmsgbyte(in);
+
+       if (flags != 0)
+               elog(ERROR, "unrecognized flags %u in rollback prepare message", flags);
+
+       /* read fields */
+       rollback_data->prepare_end_lsn = pq_getmsgint64(in);
+       if (rollback_data->prepare_end_lsn == InvalidXLogRecPtr)
+               elog(ERROR, "prepare_end_lsn is not set in commit prepared message");
+       rollback_data->rollback_end_lsn = pq_getmsgint64(in);
+       if (rollback_data->rollback_end_lsn == InvalidXLogRecPtr)
+               elog(ERROR, "rollback_end_lsn is not set in commit prepared message");
+       rollback_data->prepare_time = pq_getmsgint64(in);
+       rollback_data->rollback_time = pq_getmsgint64(in);
+       rollback_data->xid = pq_getmsgint(in, 4);
+
+       /* read gid (copy it into a pre-allocated buffer) */
+       strcpy(rollback_data->gid, pq_getmsgstring(in));
+}
15) We can include check  pg_stat_replication_slots to verify if
statistics is getting updated.
+$node_publisher->safe_psql('postgres', "
+       BEGIN;
+       INSERT INTO tab_full VALUES (11);
+       PREPARE TRANSACTION 'test_prepared_tab_full';");
+
+$node_publisher->wait_for_catchup($appname);
+
+# check that transaction is in prepared state on subscriber
+my $result = $node_subscriber->safe_psql('postgres', "SELECT count(*)
FROM pg_prepared_xacts;");
+is($result, qq(1), 'transaction is prepared on subscriber');
+
+# check that 2PC gets committed on subscriber
+$node_publisher->safe_psql('postgres', "COMMIT PREPARED
'test_prepared_tab_full';");
+
+$node_publisher->wait_for_catchup($appname);

Regards,
Vignesh

#309vignesh C
vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#307)

On Wed, Apr 21, 2021 at 12:13 PM Peter Smith <smithpb2250@gmail.com> wrote:

On Tue, Apr 20, 2021 at 3:45 PM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v73`*

Differences from v72* are:

* Rebased to HEAD @ today (required because v72-0001 no longer applied cleanly)

* Minor documentation correction for protocol messages for Commit Prepared ('K')

* Non-functional code tidy (mostly proto.c) to reduce overloading
different meanings to same member names for prepare/commit times.

Please find attached a re-posting of patch set v73*

A few comments from having a look at the tests added:
1) Can the below:
+# check inserts are visible. 22 should be rolled back. 21 should be committed.
+$result = $node_subscriber->safe_psql('postgres', "SELECT count(*)
FROM tab_full where a IN (21);");
+is($result, qq(1), 'Rows committed are on the subscriber');
+$result = $node_subscriber->safe_psql('postgres', "SELECT count(*)
FROM tab_full where a IN (22);");
+is($result, qq(0), 'Rows rolled back are not on the subscriber');

be changed to:
$result = $node_subscriber->safe_psql('postgres', "SELECT a FROM
tab_full where a IN (21,22);");
is($result, qq(21), 'Rows committed are on the subscriber');

And Test count need to be reduced to "use Test::More tests => 19"

2) we can change tx to transaction:
+# check the tx state is prepared on subscriber(s)
+$result = $node_B->safe_psql('postgres', "SELECT count(*) FROM
pg_prepared_xacts;");
+is($result, qq(1), 'transaction is prepared on subscriber B');
+$result = $node_C->safe_psql('postgres', "SELECT count(*) FROM
pg_prepared_xacts;");
+is($result, qq(1), 'transaction is prepared on subscriber C');

3) There are few more instances present in the same file, those also
can be changed.

4) Can the below:
check inserts are visible at subscriber(s).
# 22 should be rolled back.
# 21 should be committed.
$result = $node_B->safe_psql('postgres', "SELECT count(*) FROM
tab_full where a IN (21);");
is($result, qq(1), 'Rows committed are present on subscriber B');
$result = $node_B->safe_psql('postgres', "SELECT count(*) FROM
tab_full where a IN (22);");
is($result, qq(0), 'Rows rolled back are not present on subscriber B');
$result = $node_C->safe_psql('postgres', "SELECT count(*) FROM
tab_full where a IN (21);");
is($result, qq(1), 'Rows committed are present on subscriber C');
$result = $node_C->safe_psql('postgres', "SELECT count(*) FROM
tab_full where a IN (22);");
is($result, qq(0), 'Rows rolled back are not present on subscriber C');

be changed to:
$result = $node_B->safe_psql('postgres', "SELECT a FROM tab_full where
a IN (21,22);");
is($result, qq(21), 'Rows committed are on the subscriber');
$result = $node_C->safe_psql('postgres', "SELECT a FROM tab_full where
a IN (21,22);");
is($result, qq(21), 'Rows committed are on the subscriber');

And Test count need to be reduced to "use Test::More tests => 27"

5) should we change "Two phase commit" to "Two phase commit state" :
+               /*
+                * Binary, streaming, and two_phase are only supported
in v14 and
+                * higher
+                */
                if (pset.sversion >= 140000)
                        appendPQExpBuffer(&buf,
                                          ", subbinary AS \"%s\"\n"
-                                         ", substream AS \"%s\"\n",
+                                         ", substream AS \"%s\"\n"
+                                         ", subtwophasestate AS \"%s\"\n",
                                          gettext_noop("Binary"),
-                                         gettext_noop("Streaming"));
+                                         gettext_noop("Streaming"),
+                                         gettext_noop("Two phase commit"));

Regards,
Vignesh

#310vignesh C
vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#307)

On Wed, Apr 21, 2021 at 12:13 PM Peter Smith <smithpb2250@gmail.com> wrote:

On Tue, Apr 20, 2021 at 3:45 PM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v73`*

Differences from v72* are:

* Rebased to HEAD @ today (required because v72-0001 no longer applied cleanly)

* Minor documentation correction for protocol messages for Commit Prepared ('K')

* Non-functional code tidy (mostly proto.c) to reduce overloading
different meanings to same member names for prepare/commit times.

Please find attached a re-posting of patch set v73*

This is the same as yesterday's v73 but with a contrib module compile
error fixed.

A few comments on the
v73-0002-Add-prepare-API-support-for-streaming-transactio.patch patch:
1) There are slight differences in error message in case of Alter
subscription ... drop publication, we can keep the error message
similar:
postgres=# ALTER SUBSCRIPTION mysub drop PUBLICATION mypub WITH
(refresh = false, copy_data=true, two_phase=true);
ERROR: unrecognized subscription parameter: "copy_data"
postgres=# ALTER SUBSCRIPTION mysub drop PUBLICATION mypub WITH
(refresh = false, two_phase=true, streaming=true);
ERROR: cannot alter two_phase option

2) We are sending txn->xid twice, I felt we should send only once in
logicalrep_write_stream_prepare:
+       /* transaction ID */
+       Assert(TransactionIdIsValid(txn->xid));
+       pq_sendint32(out, txn->xid);
+
+       /* send the flags field */
+       pq_sendbyte(out, flags);
+
+       /* send fields */
+       pq_sendint64(out, prepare_lsn);
+       pq_sendint64(out, txn->end_lsn);
+       pq_sendint64(out, txn->u_op_time.prepare_time);
+       pq_sendint32(out, txn->xid);
+
3) We could remove xid and return prepare_data->xid
+TransactionId
+logicalrep_read_stream_prepare(StringInfo in,
LogicalRepPreparedTxnData *prepare_data)
+{
+       TransactionId xid;
+       uint8           flags;
+
+       xid = pq_getmsgint(in, 4);
4) Here comments can be above apply_spooled_messages for better readability
+       /*
+        * 1. Replay all the spooled operations - Similar code as for
+        * apply_handle_stream_commit (i.e. non two-phase stream commit)
+        */
+
+       ensure_transaction();
+
+       nchanges = apply_spooled_messages(xid, prepare_data.prepare_lsn);
+
5) Similarly this below comment can be above PrepareTransactionBlock
+       /*
+        * 2. Mark the transaction as prepared. - Similar code as for
+        * apply_handle_prepare (i.e. two-phase non-streamed prepare)
+        */
+
+       /*
+        * BeginTransactionBlock is necessary to balance the EndTransactionBlock
+        * called within the PrepareTransactionBlock below.
+        */
+       BeginTransactionBlock();
+       CommitTransactionCommand();
+
+       /*
+        * Update origin state so we can restart streaming from correct position
+        * in case of crash.
+        */
+       replorigin_session_origin_lsn = prepare_data.end_lsn;
+       replorigin_session_origin_timestamp = prepare_data.prepare_time;
+
+       PrepareTransactionBlock(gid);
+       CommitTransactionCommand();
+
+       pgstat_report_stat(false);
6) There is a lot of common code between apply_handle_stream_prepare
and apply_handle_prepare, if possible try to have a common function to
avoid fixing at both places.
+       /*
+        * 2. Mark the transaction as prepared. - Similar code as for
+        * apply_handle_prepare (i.e. two-phase non-streamed prepare)
+        */
+
+       /*
+        * BeginTransactionBlock is necessary to balance the EndTransactionBlock
+        * called within the PrepareTransactionBlock below.
+        */
+       BeginTransactionBlock();
+       CommitTransactionCommand();
+
+       /*
+        * Update origin state so we can restart streaming from correct position
+        * in case of crash.
+        */
+       replorigin_session_origin_lsn = prepare_data.end_lsn;
+       replorigin_session_origin_timestamp = prepare_data.prepare_time;
+
+       PrepareTransactionBlock(gid);
+       CommitTransactionCommand();
+
+       pgstat_report_stat(false);
+
+       store_flush_position(prepare_data.end_lsn);
7) two-phase commit is slightly misleading, we can just mention
streaming prepare.
+ * PREPARE callback (for streaming two-phase commit).
+ *
+ * Notify the downstream to prepare the transaction.
+ */
+static void
+pgoutput_stream_prepare_txn(LogicalDecodingContext *ctx,
+                                                       ReorderBufferTXN *txn,
+                                                       XLogRecPtr prepare_lsn)
8) should we include Assert of in_streaming similar to other
pgoutput_stream*** functions.
+static void
+pgoutput_stream_prepare_txn(LogicalDecodingContext *ctx,
+                                                       ReorderBufferTXN *txn,
+                                                       XLogRecPtr prepare_lsn)
+{
+       Assert(rbtxn_is_streamed(txn));
+
+       OutputPluginUpdateProgress(ctx);
+       OutputPluginPrepareWrite(ctx, true);
+       logicalrep_write_stream_prepare(ctx->out, txn, prepare_lsn);
+       OutputPluginWrite(ctx, true);
+}
9) Here also, we can verify that the transaction is streamed by
checking the pg_stat_replication_slots.
+# check that transaction is committed on subscriber
+$result = $node_subscriber->safe_psql('postgres', "SELECT count(*),
count(c), count(d = 999) FROM test_tab");
+is($result, qq(3334|3334|3334), 'Rows inserted by 2PC have committed
on subscriber, and extra columns contain local defaults');
+$result = $node_subscriber->safe_psql('postgres', "SELECT count(*)
FROM pg_prepared_xacts;");
+is($result, qq(0), 'transaction is committed on subscriber');

Regards,
Vignesh

#311Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: vignesh C (#310)

I modified pgbench's "tpcb-like" builtin script as below to do two-phase
commits, and then ran a 4-node cascaded replication setup.

"BEGIN;\n"
"UPDATE pgbench_accounts SET abalance = abalance + :delta
WHERE aid = :aid;\n"
"SELECT abalance FROM pgbench_accounts WHERE aid = :aid;\n"
"UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE
tid = :tid;\n"
"UPDATE pgbench_branches SET bbalance = bbalance + :delta
WHERE bid = :bid;\n"
"INSERT INTO pgbench_history (tid, bid, aid, delta, mtime)
VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);\n"
"PREPARE TRANSACTION ':aid:';\n"
"COMMIT PREPARED ':aid:';\n"

The tests ran fine and all 4 cascaded servers replicated the changes
correctly. All the subscriptions were configured with two_phase
enabled.
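For anyone wanting to reproduce this, the modified statements can go into a custom script file run with pgbench -f; the \set preamble below mirrors the builtin tpcb-like script, the server needs max_prepared_transactions > 0, and the ':aid:' GID format is only illustrative (concurrent clients hitting the same aid would collide on the GID).

```shell
# Write the custom two-phase tpcb-like script (sketch only).
cat > twophase.sql <<'EOF'
\set aid random(1, 100000 * :scale)
\set bid random(1, 1 * :scale)
\set tid random(1, 10 * :scale)
\set delta random(-5000, 5000)
BEGIN;
UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;
INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
PREPARE TRANSACTION ':aid:';
COMMIT PREPARED ':aid:';
EOF
# Then, against a publisher database initialized with "pgbench -i":
#   pgbench -n -f twophase.sql -c 4 -T 60 postgres
```

The pg_prepared_xacts view on the subscribers should stay near-empty during the run, since each prepare is committed immediately.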

regards,
Ajin Cherian
Fujitsu Australia

#312Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Peter Smith (#307)
2 attachment(s)

Attachments:

v74-0002-Add-prepare-API-support-for-streaming-transactio.patch
v74-0001-Add-support-for-prepared-transactions-to-built-i.patch
#313Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#308)

On Mon, Apr 26, 2021 at 9:22 PM vignesh C <vignesh21@gmail.com> wrote:

On Wed, Apr 21, 2021 at 12:13 PM Peter Smith <smithpb2250@gmail.com> wrote:

On Tue, Apr 20, 2021 at 3:45 PM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v73`*

Differences from v72* are:

* Rebased to HEAD @ today (required because v72-0001 no longer applied cleanly)

* Minor documentation correction for protocol messages for Commit Prepared ('K')

* Non-functional code tidy (mostly proto.c) to reduce overloading
different meanings to same member names for prepare/commit times.

Please find attached a re-posting of patch set v73*

This is the same as yesterday's v73 but with a contrib module compile
error fixed.

Thanks for the updated patch; a few comments:

Thanks for your feedback comments. My replies are inline below.

1) Should "final_lsn not set in begin message" be "prepare_lsn not set
in begin message"
+logicalrep_read_begin_prepare(StringInfo in,
LogicalRepPreparedTxnData *begin_data)
+{
+       /* read fields */
+       begin_data->prepare_lsn = pq_getmsgint64(in);
+       if (begin_data->prepare_lsn == InvalidXLogRecPtr)
+               elog(ERROR, "final_lsn not set in begin message");

OK. Updated in v74.

2) Should "These commands" be "ALTER SUBSCRIPTION ... REFRESH
PUBLICATION and ALTER SUBSCRIPTION ... SET/ADD PUBLICATION ..." as
copy_data cannot be specified with alter subscription .. drop
publication.
+   These commands also cannot be executed with <literal>copy_data =
true</literal>
+   when the subscription has <literal>two_phase</literal> commit enabled. See
+   column <literal>subtwophasestate</literal> of
+   <xref linkend="catalog-pg-subscription"/> to know the actual
two-phase state.

OK. Updated in v74. While technically more correct, I think rewording
it as suggested makes the doc harder to understand. But I have
reworded it slightly to account for the fact that the copy_data
setting is not possible with the DROP.

3) <term>Byte1('A')</term> should be <term>Byte1('r')</term> as we
have defined LOGICAL_REP_MSG_ROLLBACK_PREPARED as r.
+<term>Rollback Prepared</term>
+<listitem>
+<para>
+
+<variablelist>
+
+<varlistentry>
+<term>Byte1('A')</term>
+<listitem><para>
+                Identifies this message as the rollback of a
two-phase transaction message.
+</para></listitem>
+</varlistentry>

OK. Updated in v74.

4) Should "Check if the prepared transaction with the given GID and
lsn is around." be
"Check if the prepared transaction with the given GID, lsn & timestamp
is around."
+/*
+ * LookupGXact
+ *             Check if the prepared transaction with the given GID and lsn is around.
+ *
+ * Note that we always compare with the LSN where prepare ends because that is
+ * what is stored as origin_lsn in the 2PC file.
+ *
+ * This function is primarily used to check if the prepared transaction
+ * received from the upstream (remote node) already exists. Checking only GID
+ * is not sufficient because a different prepared xact with the same GID can
+ * exist on the same node. So, we are ensuring to match origin_lsn and
+ * origin_timestamp of prepared xact to avoid the possibility of a match of
+ * prepared xact from two different nodes.
+ */

OK. Updated in v74.
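The matching rule described in that comment (GID alone is ambiguous because
two different nodes can each have a prepared xact with the same GID, so
origin_lsn and origin_timestamp must also agree) can be sketched as a tiny
standalone model. This is not the server code; PreparedXact and
gxact_matches are illustrative names only:

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Toy stand-in for a prepared transaction's identifying fields. */
typedef struct PreparedXact
{
	char		gid[200];		/* global identifier from PREPARE TRANSACTION */
	uint64_t	origin_lsn;		/* LSN where the prepare ended on the origin */
	int64_t		origin_timestamp;	/* prepare time on the origin */
} PreparedXact;

/*
 * A prepared xact received from upstream is treated as "already present"
 * only when GID, origin LSN, and origin timestamp all agree; matching on
 * GID alone could wrongly match a prepared xact from a different node.
 */
static bool
gxact_matches(const PreparedXact *gx, const char *gid,
			  uint64_t origin_lsn, int64_t origin_timestamp)
{
	return strcmp(gx->gid, gid) == 0 &&
		gx->origin_lsn == origin_lsn &&
		gx->origin_timestamp == origin_timestamp;
}
```

So a lookup that agrees on GID but differs in either origin field reports no
match, which is the behaviour the comment is documenting for LookupGXact.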

5) Should we change "The LSN of the prepare." to "The LSN of the begin prepare."
+<term>Begin Prepare</term>
+<listitem>
+<para>
+
+<variablelist>
+
+<varlistentry>
+<term>Byte1('b')</term>
+<listitem><para>
+                Identifies this message as the beginning of a
two-phase transaction message.
+</para></listitem>
+</varlistentry>
+
+<varlistentry>
+<term>Int64</term>
+<listitem><para>
+                The LSN of the prepare.
+</para></listitem>
+</varlistentry>

Not updated. The PG docs are correct as-is, I think.

6) Similarly in cases of "Commit Prepared" and "Rollback Prepared"

Not updated. AFAIK these are correct – it really is LSN of the PREPARE
just like it says.

7) No need to initialize has_subrels as we will always assign the
value returned by HeapTupleIsValid
+HasSubscriptionRelations(Oid subid)
+{
+       Relation        rel;
+       int                     nkeys = 0;
+       ScanKeyData skey[2];
+       SysScanDesc scan;
+       bool            has_subrels = false;
+
+       rel = table_open(SubscriptionRelRelationId, AccessShareLock);

OK. Updated in v74.

8) We could include errhint, like errhint("Option \"two_phase\"
specified more than once") to specify a more informative error
message.
+               else if (strcmp(defel->defname, "two_phase") == 0)
+               {
+                       if (two_phase_option_given)
+                               ereport(ERROR,
+                                               (errcode(ERRCODE_SYNTAX_ERROR),
+                                                errmsg("conflicting or redundant options")));
+                       two_phase_option_given = true;
+
+                       data->two_phase = defGetBoolean(defel);
+               }

Not updated. Yes, maybe it would be better like you say, but the code
would then be inconsistent with every other option in this function.
Perhaps your idea can be raised as a separate patch to fix all of
them.

9) We have a lot of function parameters for
parse_subscription_options, should we change it to struct?
@@ -69,7 +69,8 @@ parse_subscription_options(List *options,
char **synchronous_commit,
bool *refresh,
bool *binary_given,
bool *binary,
-                                                  bool *streaming_given, bool *streaming)
+                                                  bool *streaming_given, bool *streaming,
+                                                  bool *twophase_given, bool *twophase)

Not updated. This is not really related to the 2PC functionality so I
think your idea might be good, but it should be done as a later
refactoring patch after the 2PC patch is pushed.

10) Should we change " errhint("Use ALTER SUBSCRIPTION ...SET
PUBLICATION with refresh = false, or with copy_data = false, or use
DROP/CREATE SUBSCRIPTION.")" to  "errhint("Use ALTER SUBSCRIPTION
...SET/ADD PUBLICATION with refresh = false, or with copy_data =
false.")" as we don't support copy_data in ALTER subscription ... DROP
publication.
+                                       /*
+                                        * See
ALTER_SUBSCRIPTION_REFRESH for details why this is
+                                        * not allowed.
+                                        */
+                                       if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && copy_data)
+                                               ereport(ERROR,
+                                                               (errcode(ERRCODE_SYNTAX_ERROR),
+                                                                errmsg("ALTER SUBSCRIPTION with refresh and copy_data is not allowed when two_phase is enabled"),
+                                                                errhint("Use ALTER SUBSCRIPTION ...SET PUBLICATION with refresh = false, or with copy_data = false, or use DROP/CREATE SUBSCRIPTION.")));

Not updated. The hint is saying that one workaround is to DROP and
re-CREATE the SUBSCRIPTION. It doesn’t say anything about “support of
copy_data in ALTER SUBSCRIPTION ... DROP publication,” so I did not
understand the point of your comment.

11) Should 140000 be 150000 as this feature will be committed in PG15
+               if (options->proto.logical.twophase &&
+                       PQserverVersion(conn->streamConn) >= 140000)
+                       appendStringInfoString(&cmd, ", two_phase 'on'");

Not updated. This is already a known TODO task; I will do this as soon
as PG15 development starts.

12) should we change "begin message" to "begin prepare message"
+       if (begin_data->prepare_lsn == InvalidXLogRecPtr)
+               elog(ERROR, "final_lsn not set in begin message");
+       begin_data->end_lsn = pq_getmsgint64(in);
+       if (begin_data->end_lsn == InvalidXLogRecPtr)
+               elog(ERROR, "end_lsn not set in begin message");

OK. Updated in v74.

13) should we change "commit prepare message" to "commit prepared message"
+       if (flags != 0)
+               elog(ERROR, "unrecognized flags %u in commit prepare message", flags);
+
+       /* read fields */
+       prepare_data->commit_lsn = pq_getmsgint64(in);
+       if (prepare_data->commit_lsn == InvalidXLogRecPtr)
+               elog(ERROR, "commit_lsn is not set in commit prepared message");
+       prepare_data->end_lsn = pq_getmsgint64(in);
+       if (prepare_data->end_lsn == InvalidXLogRecPtr)
+               elog(ERROR, "end_lsn is not set in commit prepared message");
+       prepare_data->commit_time = pq_getmsgint64(in);

OK, updated in v74

14) should we change "commit prepared message" to "rollback prepared message"
+void
+logicalrep_read_rollback_prepared(StringInfo in,
+                                                                 LogicalRepRollbackPreparedTxnData *rollback_data)
+{
+       /* read flags */
+       uint8           flags = pq_getmsgbyte(in);
+
+       if (flags != 0)
+               elog(ERROR, "unrecognized flags %u in rollback prepare message", flags);
+
+       /* read fields */
+       rollback_data->prepare_end_lsn = pq_getmsgint64(in);
+       if (rollback_data->prepare_end_lsn == InvalidXLogRecPtr)
+               elog(ERROR, "prepare_end_lsn is not set in commit prepared message");
+       rollback_data->rollback_end_lsn = pq_getmsgint64(in);
+       if (rollback_data->rollback_end_lsn == InvalidXLogRecPtr)
+               elog(ERROR, "rollback_end_lsn is not set in commit prepared message");
+       rollback_data->prepare_time = pq_getmsgint64(in);
+       rollback_data->rollback_time = pq_getmsgint64(in);
+       rollback_data->xid = pq_getmsgint(in, 4);
+
+       /* read gid (copy it into a pre-allocated buffer) */
+       strcpy(rollback_data->gid, pq_getmsgstring(in));
+}

OK. Updated in v74.

15) We can include check  pg_stat_replication_slots to verify if
statistics is getting updated.
+$node_publisher->safe_psql('postgres', "
+       BEGIN;
+       INSERT INTO tab_full VALUES (11);
+       PREPARE TRANSACTION 'test_prepared_tab_full';");
+
+$node_publisher->wait_for_catchup($appname);
+
+# check that transaction is in prepared state on subscriber
+my $result = $node_subscriber->safe_psql('postgres', "SELECT count(*)
FROM pg_prepared_xacts;");
+is($result, qq(1), 'transaction is prepared on subscriber');
+
+# check that 2PC gets committed on subscriber
+$node_publisher->safe_psql('postgres', "COMMIT PREPARED
'test_prepared_tab_full';");
+
+$node_publisher->wait_for_catchup($appname);

Not updated. But I recorded this as a TODO task - I agree we need to
introduce some stats tests later.

------
Kind Regards,
Peter Smith.
Fujitsu Australia

#314Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#309)

On Tue, Apr 27, 2021 at 1:41 PM vignesh C <vignesh21@gmail.com> wrote:

On Wed, Apr 21, 2021 at 12:13 PM Peter Smith <smithpb2250@gmail.com> wrote:

On Tue, Apr 20, 2021 at 3:45 PM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v73*

Differences from v72* are:

* Rebased to HEAD @ today (required because v72-0001 no longer applied cleanly)

* Minor documentation correction for protocol messages for Commit Prepared ('K')

* Non-functional code tidy (mostly proto.c) to reduce overloading
different meanings to same member names for prepare/commit times.

Please find attached a re-posting of patch set v73*

Few comments when I was having a look at the tests added:

Thanks for your feedback comments. My replies are inline below.

1) Can the below:
+# check inserts are visible. 22 should be rolled back. 21 should be committed.
+$result = $node_subscriber->safe_psql('postgres', "SELECT count(*)
FROM tab_full where a IN (21);");
+is($result, qq(1), 'Rows committed are on the subscriber');
+$result = $node_subscriber->safe_psql('postgres', "SELECT count(*)
FROM tab_full where a IN (22);");
+is($result, qq(0), 'Rows rolled back are not on the subscriber');

be changed to:
$result = $node_subscriber->safe_psql('postgres', "SELECT a FROM
tab_full where a IN (21,22);");
is($result, qq(21), 'Rows committed are on the subscriber');

And the test count needs to be reduced to "use Test::More tests => 19"

OK. Updated in v74.

2) we can change tx to transaction:
+# check the tx state is prepared on subscriber(s)
+$result = $node_B->safe_psql('postgres', "SELECT count(*) FROM
pg_prepared_xacts;");
+is($result, qq(1), 'transaction is prepared on subscriber B');
+$result = $node_C->safe_psql('postgres', "SELECT count(*) FROM
pg_prepared_xacts;");
+is($result, qq(1), 'transaction is prepared on subscriber C');

OK. Updated in v74

3) There are few more instances present in the same file, those also
can be changed.

OK. I found no others in the same file, but there were similar cases
in the 021 TAP test. Those were also updated in v74.

4) Can the below:
check inserts are visible at subscriber(s).
# 22 should be rolled back.
# 21 should be committed.
$result = $node_B->safe_psql('postgres', "SELECT count(*) FROM
tab_full where a IN (21);");
is($result, qq(1), 'Rows committed are present on subscriber B');
$result = $node_B->safe_psql('postgres', "SELECT count(*) FROM
tab_full where a IN (22);");
is($result, qq(0), 'Rows rolled back are not present on subscriber B');
$result = $node_C->safe_psql('postgres', "SELECT count(*) FROM
tab_full where a IN (21);");
is($result, qq(1), 'Rows committed are present on subscriber C');
$result = $node_C->safe_psql('postgres', "SELECT count(*) FROM
tab_full where a IN (22);");
is($result, qq(0), 'Rows rolled back are not present on subscriber C');

be changed to:
$result = $node_B->safe_psql('postgres', "SELECT a FROM tab_full where
a IN (21,22);");
is($result, qq(21), 'Rows committed are on the subscriber');
$result = $node_C->safe_psql('postgres', "SELECT a FROM tab_full where
a IN (21,22);");
is($result, qq(21), 'Rows committed are on the subscriber');

And the test count needs to be reduced to "use Test::More tests => 27"

OK. Updated in v74.

5) should we change "Two phase commit" to "Two phase commit state" :
+               /*
+                * Binary, streaming, and two_phase are only supported
in v14 and
+                * higher
+                */
                if (pset.sversion >= 140000)
                        appendPQExpBuffer(&buf,
                                                          ", subbinary AS \"%s\"\n"
-                                                         ", substream AS \"%s\"\n",
+                                                         ", substream AS \"%s\"\n"
+                                                         ", subtwophasestate AS \"%s\"\n",
                                                          gettext_noop("Binary"),
-                                                         gettext_noop("Streaming"));
+                                                         gettext_noop("Streaming"),
+                                                         gettext_noop("Two phase commit"));

Not updated. I think the column name is already the longest one and
this just makes it even longer, far too long IMO. I am not sure the
“state” suffix makes it any better; after all, booleans are also
states. Anyway, I did not make this change now, but if people feel
strongly about it then I can revisit it.

------
Kind Regards,
Peter Smith.
Fujitsu Australia

#315Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#310)

On Tue, Apr 27, 2021 at 6:17 PM vignesh C <vignesh21@gmail.com> wrote:

On Wed, Apr 21, 2021 at 12:13 PM Peter Smith <smithpb2250@gmail.com> wrote:

On Tue, Apr 20, 2021 at 3:45 PM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v73*

Differences from v72* are:

* Rebased to HEAD @ today (required because v72-0001 no longer applied cleanly)

* Minor documentation correction for protocol messages for Commit Prepared ('K')

* Non-functional code tidy (mostly proto.c) to reduce overloading
different meanings to same member names for prepare/commit times.

Please find attached a re-posting of patch set v73*

This is the same as yesterday's v73 but with a contrib module compile
error fixed.

Few comments on
v73-0002-Add-prepare-API-support-for-streaming-transactio.patch patch:

Thanks for your feedback comments. My replies are inline below.

1) There are slight differences in error message in case of Alter
subscription ... drop publication, we can keep the error message
similar:
postgres=# ALTER SUBSCRIPTION mysub drop PUBLICATION mypub WITH
(refresh = false, copy_data=true, two_phase=true);
ERROR: unrecognized subscription parameter: "copy_data"
postgres=# ALTER SUBSCRIPTION mysub drop PUBLICATION mypub WITH
(refresh = false, two_phase=true, streaming=true);
ERROR: cannot alter two_phase option

OK. Updated in v74.

2) We are sending txn->xid twice, I felt we should send only once in
logicalrep_write_stream_prepare:
+       /* transaction ID */
+       Assert(TransactionIdIsValid(txn->xid));
+       pq_sendint32(out, txn->xid);
+
+       /* send the flags field */
+       pq_sendbyte(out, flags);
+
+       /* send fields */
+       pq_sendint64(out, prepare_lsn);
+       pq_sendint64(out, txn->end_lsn);
+       pq_sendint64(out, txn->u_op_time.prepare_time);
+       pq_sendint32(out, txn->xid);
+

OK. Updated in v74.

3) We could remove xid and return prepare_data->xid
+TransactionId
+logicalrep_read_stream_prepare(StringInfo in,
LogicalRepPreparedTxnData *prepare_data)
+{
+       TransactionId xid;
+       uint8           flags;
+
+       xid = pq_getmsgint(in, 4);

OK. Updated in v74.

4) Here comments can be above apply_spooled_messages for better readability
+       /*
+        * 1. Replay all the spooled operations - Similar code as for
+        * apply_handle_stream_commit (i.e. non two-phase stream commit)
+        */
+
+       ensure_transaction();
+
+       nchanges = apply_spooled_messages(xid, prepare_data.prepare_lsn);
+

Not done. It was deliberately commented this way because the part
below the comment is what is in apply_handle_stream_commit.

5) Similarly this below comment can be above PrepareTransactionBlock
+       /*
+        * 2. Mark the transaction as prepared. - Similar code as for
+        * apply_handle_prepare (i.e. two-phase non-streamed prepare)
+        */
+
+       /*
+        * BeginTransactionBlock is necessary to balance the EndTransactionBlock
+        * called within the PrepareTransactionBlock below.
+        */
+       BeginTransactionBlock();
+       CommitTransactionCommand();
+
+       /*
+        * Update origin state so we can restart streaming from correct position
+        * in case of crash.
+        */
+       replorigin_session_origin_lsn = prepare_data.end_lsn;
+       replorigin_session_origin_timestamp = prepare_data.prepare_time;
+
+       PrepareTransactionBlock(gid);
+       CommitTransactionCommand();
+
+       pgstat_report_stat(false);

Not done. It is deliberately commented this way because the part below
the comment is what is in apply_handle_prepare.

6) There is a lot of common code between apply_handle_stream_prepare
and apply_handle_prepare, if possible try to have a common function to
avoid fixing at both places.
+       /*
+        * 2. Mark the transaction as prepared. - Similar code as for
+        * apply_handle_prepare (i.e. two-phase non-streamed prepare)
+        */
+
+       /*
+        * BeginTransactionBlock is necessary to balance the EndTransactionBlock
+        * called within the PrepareTransactionBlock below.
+        */
+       BeginTransactionBlock();
+       CommitTransactionCommand();
+
+       /*
+        * Update origin state so we can restart streaming from correct position
+        * in case of crash.
+        */
+       replorigin_session_origin_lsn = prepare_data.end_lsn;
+       replorigin_session_origin_timestamp = prepare_data.prepare_time;
+
+       PrepareTransactionBlock(gid);
+       CommitTransactionCommand();
+
+       pgstat_report_stat(false);
+
+       store_flush_position(prepare_data.end_lsn);

Not done. If you diff those functions there are really only ~ 10
statements in common so I felt it is more readable to keep it this way
than to try to make a “common” function out of an arbitrary code
fragment.

7) two-phase commit is slightly misleading, we can just mention
streaming prepare.
+ * PREPARE callback (for streaming two-phase commit).
+ *
+ * Notify the downstream to prepare the transaction.
+ */
+static void
+pgoutput_stream_prepare_txn(LogicalDecodingContext *ctx,
+                                                       ReorderBufferTXN *txn,
+                                                       XLogRecPtr prepare_lsn)

OK. Updated in v74.

8) should we include Assert of in_streaming similar to other
pgoutput_stream*** functions.
+static void
+pgoutput_stream_prepare_txn(LogicalDecodingContext *ctx,
+                                                       ReorderBufferTXN *txn,
+                                                       XLogRecPtr prepare_lsn)
+{
+       Assert(rbtxn_is_streamed(txn));
+
+       OutputPluginUpdateProgress(ctx);
+       OutputPluginPrepareWrite(ctx, true);
+       logicalrep_write_stream_prepare(ctx->out, txn, prepare_lsn);
+       OutputPluginWrite(ctx, true);
+}

Not done. AFAIK it is correct as-is.

9) Here also, we can verify that the transaction is streamed by
checking the pg_stat_replication_slots.
+# check that transaction is committed on subscriber
+$result = $node_subscriber->safe_psql('postgres', "SELECT count(*),
count(c), count(d = 999) FROM test_tab");
+is($result, qq(3334|3334|3334), 'Rows inserted by 2PC have committed
on subscriber, and extra columns contain local defaults');
+$result = $node_subscriber->safe_psql('postgres', "SELECT count(*)
FROM pg_prepared_xacts;");
+is($result, qq(0), 'transaction is committed on subscriber');

Not done. If the purpose of this comment is just to confirm that the
SQL INSERT of 5000 rows of md5 data exceeds 64K then I think we can
simply take that as self-evident. We don’t need some SQL to confirm
it.

If the purpose of this is just to ensure that stats work properly with
2PC then I agree that there should be some test cases added for stats,
but this has already been recorded elsewhere as a future TODO task.

------
Kind Regards,
Peter Smith.
Fujitsu Australia

#316vignesh C
vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#312)

On Thu, Apr 29, 2021 at 2:23 PM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v74*

Differences from v73* are:

* Rebased to HEAD @ 2 days ago.

* v74 addresses most of the feedback comments from Vignesh posts [1][2][3].

Thanks for the updated patch.
Few comments:
1) I felt skey[2] should be skey as we are just using one key here.

+       ScanKeyData skey[2];
+       SysScanDesc scan;
+       bool            has_subrels;
+
+       rel = table_open(SubscriptionRelRelationId, AccessShareLock);
+
+       ScanKeyInit(&skey[nkeys++],
+                               Anum_pg_subscription_rel_srsubid,
+                               BTEqualStrategyNumber, F_OIDEQ,
+                               ObjectIdGetDatum(subid));
+
+       scan = systable_beginscan(rel, InvalidOid, false,
+                                                         NULL, nkeys, skey);
+
2) I felt we can change lsn data type from Int64 to XLogRecPtr
+<varlistentry>
+<term>Int64</term>
+<listitem><para>
+                The LSN of the prepare.
+</para></listitem>
+</varlistentry>
+
+<varlistentry>
+<term>Int64</term>
+<listitem><para>
+                The end LSN of the transaction.
+</para></listitem>
+</varlistentry>
3) I felt we can change lsn data type from Int32 to TransactionId
+<varlistentry>
+<term>Int32</term>
+<listitem><para>
+                Xid of the subtransaction (will be same as xid of the
transaction for top-level
+                transactions).
+</para></listitem>
+</varlistentry>
4) Should we change this to "The end LSN of the prepared transaction"
just to avoid any confusion of it meaning commit/rollback.
+<varlistentry>
+<term>Int64</term>
+<listitem><para>
+                The end LSN of the transaction.
+</para></listitem>
+</varlistentry>

Similar problems related to comments 2 and 3 are being discussed in the
[1] thread.
[1] - /messages/by-id/CAHut+Ps2JsSd_OpBR9kXt1Rt4bwyXAjh875gUpFw6T210ttO7Q@mail.gmail.com

Regards,
Vignesh

#317Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#316)

On Mon, May 10, 2021 at 1:31 PM vignesh C <vignesh21@gmail.com> wrote:

4) Should we change this to "The end LSN of the prepared transaction"
just to avoid any confusion of it meaning commit/rollback.
+<varlistentry>
+<term>Int64</term>
+<listitem><para>
+                The end LSN of the transaction.
+</para></listitem>
+</varlistentry>

Can you please provide more details so I can be sure of the context of
this feedback, e.g. there are multiple places that match that patch
fragment provided. So was this suggestion to change all of them ( 'b',
'P', 'K' , 'r' of patch 0001; and also 'p' of patch 0002) ?

------
Kind Regards,
Peter Smith.
Fujitsu Australia.

#318vignesh C
vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#317)

On Mon, May 10, 2021 at 10:51 AM Peter Smith <smithpb2250@gmail.com> wrote:

On Mon, May 10, 2021 at 1:31 PM vignesh C <vignesh21@gmail.com> wrote:

4) Should we change this to "The end LSN of the prepared transaction"
just to avoid any confusion of it meaning commit/rollback.
+<varlistentry>
+<term>Int64</term>
+<listitem><para>
+                The end LSN of the transaction.
+</para></listitem>
+</varlistentry>

Can you please provide more details so I can be sure of the context of
this feedback, e.g. there are multiple places that match that patch
fragment provided. So was this suggestion to change all of them ( 'b',
'P', 'K' , 'r' of patch 0001; and also 'p' of patch 0002) ?

My suggestion was for all of them.

Regards,
Vignesh

#319Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#318)
2 attachment(s)

Please find attached the latest patch set v75*

Differences from v74* are:

* Rebased to HEAD @ today.

* v75 also addresses some of the feedback comments from Vignesh [1].

----
[1]: /messages/by-id/CALDaNm3U4fGxTnQfaT1TqUkgX5c0CSDvmW12Bfksis8zB_XinA@mail.gmail.com

Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

v75-0001-Add-support-for-prepared-transactions-to-built-i.patchapplication/octet-stream; name=v75-0001-Add-support-for-prepared-transactions-to-built-i.patch
v75-0002-Add-prepare-API-support-for-streaming-transactio.patchapplication/octet-stream; name=v75-0002-Add-prepare-API-support-for-streaming-transactio.patch
#320Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#316)

On Mon, May 10, 2021 at 1:31 PM vignesh C <vignesh21@gmail.com> wrote:

On Thu, Apr 29, 2021 at 2:23 PM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v74*

Differences from v73* are:

* Rebased to HEAD @ 2 days ago.

* v74 addresses most of the feedback comments from Vignesh posts [1][2][3].

Thanks for the updated patch.
Few comments:
1) I felt skey[2] should be skey as we are just using one key here.

+       ScanKeyData skey[2];
+       SysScanDesc scan;
+       bool            has_subrels;
+
+       rel = table_open(SubscriptionRelRelationId, AccessShareLock);
+
+       ScanKeyInit(&skey[nkeys++],
+                               Anum_pg_subscription_rel_srsubid,
+                               BTEqualStrategyNumber, F_OIDEQ,
+                               ObjectIdGetDatum(subid));
+
+       scan = systable_beginscan(rel, InvalidOid, false,
+                                                         NULL, nkeys, skey);
+

Fixed in v75.

2) I felt we can change lsn data type from Int64 to XLogRecPtr
+<varlistentry>
+<term>Int64</term>
+<listitem><para>
+                The LSN of the prepare.
+</para></listitem>
+</varlistentry>
+
+<varlistentry>
+<term>Int64</term>
+<listitem><para>
+                The end LSN of the transaction.
+</para></listitem>
+</varlistentry>

Deferred.

3) I felt we can change lsn data type from Int32 to TransactionId
+<varlistentry>
+<term>Int32</term>
+<listitem><para>
+                Xid of the subtransaction (will be same as xid of the
transaction for top-level
+                transactions).
+</para></listitem>
+</varlistentry>

Deferred.

4) Should we change this to "The end LSN of the prepared transaction"
just to avoid any confusion of it meaning commit/rollback.
+<varlistentry>
+<term>Int64</term>
+<listitem><para>
+                The end LSN of the transaction.
+</para></listitem>
+</varlistentry>

Modified in v75 for message types 'b', 'P', 'K', 'r', 'p'.

Similar problems related to comments 2 and 3 are being discussed at
[1], we can change it accordingly based on the conclusion in the other
thread.
[1] - /messages/by-id/CAHut+Ps2JsSd_OpBR9kXt1Rt4bwyXAjh875gUpFw6T210ttO7Q@mail.gmail.com

Yes, I will defer addressing those feedback comments 2 and 3 pending
the outcome of your other patch of the above thread.

----------
Kind Regards,
Peter Smith.
Fujitsu Australia

#321Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Peter Smith (#319)
3 attachment(s)

On Thu, May 13, 2021 at 7:50 PM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v75*

Differences from v74* are:

* Rebased to HEAD @ today.

* v75 also addresses some of the feedback comments from Vignesh [1].

Adding a patch to this patch-set that avoids empty transactions from
being sent to the subscriber/replica. This patch is based on the
logic that was proposed for empty transactions in the thread [1]. This
patch uses that patch and handles empty prepared transactions
as well. So, this will avoid empty prepared transactions from being
sent to the subscriber/replica. This patch also avoids sending
COMMIT PREPARED /ROLLBACK PREPARED if the prepared transaction was
skipped provided the COMMIT /ROLLBACK happens
prior to a restart of the walsender. If the COMMIT/ROLLBACK PREPARED
happens after a restart, it will not be able to know that the
prepared transaction prior to the restart was not sent; in this case
the apply worker of the subscription will check if a matching prepare
exists and, if it does not, it will silently ignore the COMMIT PREPARED
(the ROLLBACK PREPARED logic was already doing this).
Do have a look and let me know if you have any comments.

[1]: /messages/by-id/CAFPTHDYegcoS3xjGBj0XHfcdZr6Y35+YG1jq79TBD1VCkK7v3A@mail.gmail.com
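The skip logic described above can be modeled with two small decision
helpers, one per side of the replication link. This is only an illustrative
sketch of the described behaviour; the function and parameter names are
invented here and do not appear in the patch:

```c
#include <stdbool.h>

/*
 * Walsender side: if the prepare was skipped (empty transaction) and no
 * restart intervened, the COMMIT PREPARED / ROLLBACK PREPARED can be
 * skipped too.  After a restart the walsender cannot know whether the
 * prepare was sent, so it must send the message and let the subscriber
 * sort it out.
 */
static bool
should_send_commit_prepared(bool prepare_was_skipped, bool walsender_restarted)
{
	return !prepare_was_skipped || walsender_restarted;
}

/*
 * Apply-worker side: a COMMIT PREPARED for a prepared transaction that
 * does not exist locally is silently ignored, mirroring what the
 * ROLLBACK PREPARED path already did.
 */
static bool
should_apply_commit_prepared(bool matching_prepared_xact_exists)
{
	return matching_prepared_xact_exists;
}
```

With these two rules together, an empty prepared transaction produces no
traffic in the common case, and the post-restart resend is harmless because
the subscriber discards the unmatched COMMIT PREPARED.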

regards,
Ajin Cherian
Fujitsu Australia.

Attachments:

v76-0001-Add-support-for-prepared-transactions-to-built-i.patchapplication/octet-stream; name=v76-0001-Add-support-for-prepared-transactions-to-built-i.patch
v76-0002-Add-prepare-API-support-for-streaming-transactio.patchapplication/octet-stream; name=v76-0002-Add-prepare-API-support-for-streaming-transactio.patch
v76-0003-Skip-empty-transactions-for-logical-replication.patchapplication/octet-stream; name=v76-0003-Skip-empty-transactions-for-logical-replication.patch
#322Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Ajin Cherian (#321)
3 attachment(s)

The above patch had some changes missing, which resulted in some TAP
tests failing. Sending an updated patchset, keeping the patchset
version the same.

regards,
Ajin Cherian
Fujitsu Australia

Attachments:

v76-0003-Skip-empty-transactions-for-logical-replication.patchapplication/octet-stream; name=v76-0003-Skip-empty-transactions-for-logical-replication.patch
v76-0001-Add-support-for-prepared-transactions-to-built-i.patchapplication/octet-stream; name=v76-0001-Add-support-for-prepared-transactions-to-built-i.patch
v76-0002-Add-prepare-API-support-for-streaming-transactio.patchapplication/octet-stream; name=v76-0002-Add-prepare-API-support-for-streaming-transactio.patch
#323vignesh C
vignesh C
vignesh21@gmail.com
In reply to: Ajin Cherian (#322)

On Mon, May 17, 2021 at 6:10 PM Ajin Cherian <itsajin@gmail.com> wrote:

The above patch had some changes missing which resulted in some tap
tests failing. Sending an updated patchset. Keeping the patchset
version the same.

Thanks for the updated patch; it fixes the TAP test failures.

Regards,
Vignesh

#324Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Ajin Cherian (#321)

On Sun, May 16, 2021 at 12:07 AM Ajin Cherian <itsajin@gmail.com> wrote:

On Thu, May 13, 2021 at 7:50 PM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v75*

Differences from v74* are:

* Rebased to HEAD @ today.

* v75 also addresses some of the feedback comments from Vignesh [1].

Adding a patch to this patch-set that avoids empty transactions from
being sent to the subscriber/replica. This patch is based on the
logic that was proposed for empty transactions in the thread [1]. This
patch uses that patch and handles empty prepared transactions
as well. So, this will avoid empty prepared transactions from being
sent to the subscriber/replica. This patch also avoids sending
COMMIT PREPARED / ROLLBACK PREPARED if the prepared transaction was
skipped, provided the COMMIT / ROLLBACK happens prior to a restart of
the walsender. If the COMMIT/ROLLBACK PREPARED happens after a
restart, the walsender will not be able to know that the prepared
transaction prior to the restart was not sent; in this case the apply
worker of the subscription will check whether a matching prepared
transaction exists, and if it does not, it will silently ignore the
COMMIT PREPARED (the ROLLBACK PREPARED logic was already doing this).
Do have a look and let me know if you have any comments.

[1] - /messages/by-id/CAFPTHDYegcoS3xjGBj0XHfcdZr6Y35+YG1jq79TBD1VCkK7v3A@mail.gmail.com

Hi Ajin.

I have applied the latest patch set v76*.

The patches applied cleanly.

All of the make, make check, and TAP subscriptions tests worked OK.

Below are my REVIEW COMMENTS for the v76-0003 part.

==========

1. File: doc/src/sgml/logicaldecoding.sgml

1.1

@@ -862,11 +862,19 @@ typedef void (*LogicalDecodePrepareCB) (struct
LogicalDecodingContext *ctx,
       The required <function>commit_prepared_cb</function> callback is called
       whenever a transaction <command>COMMIT PREPARED</command> has
been decoded.
       The <parameter>gid</parameter> field, which is part of the
-      <parameter>txn</parameter> parameter, can be used in this callback.
+      <parameter>txn</parameter> parameter, can be used in this callback. The
+      parameters <parameter>prepare_end_lsn</parameter> and
+      <parameter>prepare_time</parameter> can be used to check if the plugin
+      has received this <command>PREPARE TRANSACTION</command> in which case
+      it can apply the rollback, otherwise, it can skip the rollback
operation. The
+      <parameter>gid</parameter> alone is not sufficient because the downstream
+      node can have a prepared transaction with same identifier.

This is in the commit prepared section, but the new text refers to
"it can apply the rollback" etc.
Is this deliberate, or maybe a cut/paste error?

==========

2. File: src/backend/replication/pgoutput/pgoutput.c

2.1

@@ -76,6 +78,7 @@ static void
pgoutput_stream_prepare_txn(LogicalDecodingContext *ctx,

static bool publications_valid;
static bool in_streaming;
+static bool in_prepared_txn;

Wondering why this is a module static flag. That makes it look like
it somehow applies globally to all the functions in this scope, but
really I think this is just a txn property, right?
- e.g. why not use another member of the private TXN data instead? or
- e.g. why not use rbtxn_prepared(txn) macro?

----------

2.2

@@ -404,10 +410,32 @@ pgoutput_startup(LogicalDecodingContext *ctx,
OutputPluginOptions *opt,
 static void
 pgoutput_begin_txn(LogicalDecodingContext *ctx, ReorderBufferTXN *txn)
 {
+ PGOutputTxnData    *data = MemoryContextAllocZero(ctx->context,
+ sizeof(PGOutputTxnData));
+
+ (void)txn; /* keep compiler quiet */

I guess since the arg "txn" is now being used, the added statement to
"keep compiler quiet" is redundant, so it should be removed.

----------

2.3

+static void
+pgoutput_begin(LogicalDecodingContext *ctx, ReorderBufferTXN *txn)
+{
  bool send_replication_origin = txn->origin_id != InvalidRepOriginId;
+ PGOutputTxnData *data = (PGOutputTxnData *) txn->output_plugin_private;

OutputPluginPrepareWrite(ctx, !send_replication_origin);
logicalrep_write_begin(ctx->out, txn);
+ data->sent_begin_txn = true;

I wondered is it worth adding Assert(data); here?

----------

2.4

@@ -422,8 +450,14 @@ static void
 pgoutput_commit_txn(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
  XLogRecPtr commit_lsn)
 {
+ PGOutputTxnData *data = (PGOutputTxnData *) txn->output_plugin_private;
+
  OutputPluginUpdateProgress(ctx);

I wondered is it worthwhile to add Assert(data); here also?

----------

2.5
@@ -422,8 +450,14 @@ static void
 pgoutput_commit_txn(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
  XLogRecPtr commit_lsn)
 {
+ PGOutputTxnData *data = (PGOutputTxnData *) txn->output_plugin_private;
+
  OutputPluginUpdateProgress(ctx);
+ /* skip COMMIT message if nothing was sent */
+ if (!data->sent_begin_txn)
+ return;

Shouldn't this code also be freeing that allocated data? I think you
do free it in similar functions later in this patch.

----------

2.6

@@ -435,10 +469,31 @@ pgoutput_commit_txn(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn,
 static void
 pgoutput_begin_prepare_txn(LogicalDecodingContext *ctx, ReorderBufferTXN *txn)
 {
+ PGOutputTxnData    *data = MemoryContextAllocZero(ctx->context,
+ sizeof(PGOutputTxnData));
+
+ /*
+ * Don't send BEGIN message here. Instead, postpone it until the first
+ * change. In logical replication, a common scenario is to replicate a set
+ * of tables (instead of all tables) and transactions whose changes were on
+ * table(s) that are not published will produce empty transactions. These
+ * empty transactions will send BEGIN and COMMIT messages to subscribers,
+ * using bandwidth on something with little/no use for logical replication.
+ */
+ data->sent_begin_txn = false;
+ txn->output_plugin_private = data;
+ in_prepared_txn = true;
+}

Apart from setting in_prepared_txn = true, this is all identical code
to pgoutput_begin_txn, so you could consider simply calling that other
function to save the duplicated data allocation and big comment. Or
maybe this way is better - I am not sure.

----------

2.7

+static void
+pgoutput_begin_prepare(LogicalDecodingContext *ctx, ReorderBufferTXN *txn)
+{
  bool send_replication_origin = txn->origin_id != InvalidRepOriginId;
+ PGOutputTxnData    *data = (PGOutputTxnData *) txn->output_plugin_private;

OutputPluginPrepareWrite(ctx, !send_replication_origin);
logicalrep_write_begin_prepare(ctx->out, txn);
+ data->sent_begin_txn = true;

I wondered is it worth adding Assert(data); here also?

----------

2.8

@@ -453,11 +508,18 @@ static void
 pgoutput_prepare_txn(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
  XLogRecPtr prepare_lsn)
 {
+ PGOutputTxnData    *data = (PGOutputTxnData *) txn->output_plugin_private;
+
  OutputPluginUpdateProgress(ctx);

I wondered is it worth adding Assert(data); here also?

----------

2.9

@@ -465,12 +527,28 @@ pgoutput_prepare_txn(LogicalDecodingContext
*ctx, ReorderBufferTXN *txn,
  */
 static void
 pgoutput_commit_prepared_txn(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn,
- XLogRecPtr commit_lsn)
+ XLogRecPtr commit_lsn, XLogRecPtr prepare_end_lsn,
+ TimestampTz prepare_time)
 {
+ PGOutputTxnData    *data = (PGOutputTxnData *) txn->output_plugin_private;
+
  OutputPluginUpdateProgress(ctx);
+ /*
+ * skip sending COMMIT PREPARED message if prepared transaction
+ * has not been sent.
+ */
+ if (data && !data->sent_begin_txn)
+ {
+ pfree(data);
+ return;
+ }
+
+ if (data)
+ pfree(data);
  OutputPluginPrepareWrite(ctx, true);

I think this pfree logic might be refactored more simply to just be
done in one place. e.g. like:

if (data)
{
bool skip = !data->sent_begin_txn;
pfree(data);
if (skip)
return;
}

BTW, is it even possible to get into this function with NULL private
data? Perhaps that should be an Assert(data)?

----------

2.10

@@ -483,8 +561,22 @@ pgoutput_rollback_prepared_txn(LogicalDecodingContext *ctx,
     XLogRecPtr prepare_end_lsn,
     TimestampTz prepare_time)
 {
+ PGOutputTxnData    *data = (PGOutputTxnData *) txn->output_plugin_private;
+
  OutputPluginUpdateProgress(ctx);
+ /*
+ * skip sending COMMIT PREPARED message if prepared transaction
+ * has not been sent.
+ */
+ if (data && !data->sent_begin_txn)
+ {
+ pfree(data);
+ return;
+ }
+
+ if (data)
+ pfree(data);

Same comment as above for refactoring the pfree logic.

----------

2.11

@@ -483,8 +561,22 @@ pgoutput_rollback_prepared_txn(LogicalDecodingContext *ctx,
     XLogRecPtr prepare_end_lsn,
     TimestampTz prepare_time)
 {
+ PGOutputTxnData    *data = (PGOutputTxnData *) txn->output_plugin_private;
+
  OutputPluginUpdateProgress(ctx);
+ /*
+ * skip sending COMMIT PREPARED message if prepared transaction
+ * has not been sent.
+ */
+ if (data && !data->sent_begin_txn)
+ {
+ pfree(data);
+ return;
+ }
+
+ if (data)
+ pfree(data);

Is that comment correct, or a cut/paste error? Why does it say "COMMIT PREPARED"?

----------

2.12

@@ -613,6 +705,7 @@ pgoutput_change(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn,
  Relation relation, ReorderBufferChange *change)
 {
  PGOutputData *data = (PGOutputData *) ctx->output_plugin_private;
+ PGOutputTxnData *txndata = (PGOutputTxnData *) txn->output_plugin_private;
  MemoryContext old;

I wondered is it worth adding Assert(txndata); here also?

----------

2.13

@@ -750,6 +852,7 @@ pgoutput_truncate(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn,
    int nrelations, Relation relations[], ReorderBufferChange *change)
 {
  PGOutputData *data = (PGOutputData *) ctx->output_plugin_private;
+ PGOutputTxnData *txndata = (PGOutputTxnData *) txn->output_plugin_private;
  MemoryContext old;

I wondered is it worth adding Assert(txndata); here also?

----------

2.14

@@ -813,11 +925,15 @@ pgoutput_message(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn,
const char *message)
{
PGOutputData *data = (PGOutputData *) ctx->output_plugin_private;
+ PGOutputTxnData *txndata;
TransactionId xid = InvalidTransactionId;

if (!data->messages)
return;

+ if (txn && txn->output_plugin_private)
+ txndata = (PGOutputTxnData *) txn->output_plugin_private;
+
  /*
  * Remember the xid for the message in streaming mode. See
  * pgoutput_change.
@@ -825,6 +941,19 @@ pgoutput_message(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn,
  if (in_streaming)
  xid = txn->xid;
+ /* output BEGIN if we haven't yet, avoid for streaming and
non-transactional messages */
+ if (!in_streaming && transactional)
+ {
+ txndata = (PGOutputTxnData *) txn->output_plugin_private;
+ if (!txndata->sent_begin_txn)
+ {
+ if (!in_prepared_txn)
+ pgoutput_begin(ctx, txn);
+ else
+ pgoutput_begin_prepare(ctx, txn);
+ }
+ }
That code:
+ if (txn && txn->output_plugin_private)
+ txndata = (PGOutputTxnData *) txn->output_plugin_private;
looked misplaced to me.

Shouldn't all that be relocated inside the if block:
+ if (!in_streaming && transactional)

And when you do that maybe the condition can be simplified because you could
Assert(txn);

==========

3. File src/include/replication/pgoutput.h

3.1

@@ -30,4 +30,9 @@ typedef struct PGOutputData
bool two_phase;
} PGOutputData;

+typedef struct PGOutputTxnData
+{
+ bool sent_begin_txn; /* flag indicating whether begin has been sent */
+} PGOutputTxnData;
+

Why is this typedef here? IIUC it is only used inside pgoutput.c,
so shouldn't it be declared in that file instead?

----------

3.2

@@ -30,4 +30,9 @@ typedef struct PGOutputData
bool two_phase;
} PGOutputData;

+typedef struct PGOutputTxnData
+{
+ bool sent_begin_txn; /* flag indicating whether begin has been sent */
+} PGOutputTxnData;
+

That is a new typedef, so maybe your patch should also update
src/tools/pgindent/typedefs.list to include it.

----------
Kind Regards,
Peter Smith.
Fujitsu Australia

#325tanghy.fnst@fujitsu.com
tanghy.fnst@fujitsu.com
tanghy.fnst@fujitsu.com
In reply to: Peter Smith (#324)
RE: [HACKERS] logical decoding of two-phase transactions

Hi Ajin

The above patch had some changes missing which resulted in some tap
tests failing. Sending an updated patchset. Keeping the patchset
version the same.

Thanks for your patch. I hit a segmentation fault when using it. Please take a look at this.
The steps to reproduce the problem are as follows.

------publisher------
create table test (a int primary key, b varchar);
create publication pub for table test;

------subscriber------
create table test (a int primary key, b varchar);
create subscription sub connection 'dbname=postgres' publication pub with(two_phase=on);

Then, I prepare, commit, and rollback transactions and TRUNCATE the table in a SQL script as follows:
-------------
BEGIN;
INSERT INTO test SELECT i, md5(i::text) FROM generate_series(1, 10000) s(i);
PREPARE TRANSACTION 't1';
COMMIT PREPARED 't1';

BEGIN;
INSERT INTO test SELECT i, md5(i::text) FROM generate_series(10001, 20000) s(i);
PREPARE TRANSACTION 't2';
ROLLBACK PREPARED 't2';

TRUNCATE test;
-------------

To reproduce the problem easily, I looped the above operations in my SQL file about 10 times; then I could reproduce it 100% of the time and got a segmentation fault reported in the publisher log as follows:
-------------
2021-05-18 16:30:56.952 CST [548189] postmaster LOG: server process (PID 548222) was terminated by signal 11: Segmentation fault
2021-05-18 16:30:56.952 CST [548189] postmaster DETAIL: Failed process was running: START_REPLICATION SLOT "sub" LOGICAL 0/0 (proto_version '3', two_phase 'on', publication_names '"pub"')
-------------

Here is the core dump information :
-------------
#0 0x000000000090afe4 in pq_sendstring (buf=buf@entry=0x251ca80, str=0x0) at pqformat.c:199
#1 0x0000000000ab0a2b in logicalrep_write_begin_prepare (out=0x251ca80, txn=txn@entry=0x25346e8) at proto.c:124
#2 0x00007f9528842dd6 in pgoutput_begin_prepare (ctx=ctx@entry=0x2514700, txn=txn@entry=0x25346e8) at pgoutput.c:495
#3 0x00007f9528843f70 in pgoutput_truncate (ctx=0x2514700, txn=0x25346e8, nrelations=1, relations=0x262f678, change=0x25370b8) at pgoutput.c:905
#4 0x0000000000aa57cb in truncate_cb_wrapper (cache=<optimized out>, txn=<optimized out>, nrelations=<optimized out>, relations=<optimized out>, change=<optimized out>)
at logical.c:1103
#5 0x0000000000abf333 in ReorderBufferApplyTruncate (streaming=false, change=0x25370b8, relations=0x262f678, nrelations=1, txn=0x25346e8, rb=0x2516710)
at reorderbuffer.c:1918
#6 ReorderBufferProcessTXN (rb=rb@entry=0x2516710, txn=0x25346e8, commit_lsn=commit_lsn@entry=27517176, snapshot_now=<optimized out>, command_id=command_id@entry=0,
streaming=streaming@entry=false) at reorderbuffer.c:2278
#7 0x0000000000ac0b14 in ReorderBufferReplay (txn=<optimized out>, rb=rb@entry=0x2516710, xid=xid@entry=738, commit_lsn=commit_lsn@entry=27517176,
end_lsn=end_lsn@entry=27517544, commit_time=commit_time@entry=674644388404356, origin_id=0, origin_lsn=0) at reorderbuffer.c:2591
#8 0x0000000000ac1713 in ReorderBufferCommit (rb=0x2516710, xid=xid@entry=738, commit_lsn=27517176, end_lsn=27517544, commit_time=commit_time@entry=674644388404356,
origin_id=origin_id@entry=0, origin_lsn=0) at reorderbuffer.c:2615
#9 0x0000000000a9f702 in DecodeCommit (ctx=ctx@entry=0x2514700, buf=buf@entry=0x7ffdd027c2b0, parsed=parsed@entry=0x7ffdd027c140, xid=xid@entry=738,
two_phase=<optimized out>) at decode.c:742
#10 0x0000000000a9fc6c in DecodeXactOp (ctx=ctx@entry=0x2514700, buf=buf@entry=0x7ffdd027c2b0) at decode.c:278
#11 0x0000000000aa1b75 in LogicalDecodingProcessRecord (ctx=0x2514700, record=0x2514ac0) at decode.c:142
#12 0x0000000000af6db1 in XLogSendLogical () at walsender.c:2876
#13 0x0000000000afb6aa in WalSndLoop (send_data=send_data@entry=0xaf6d49 <XLogSendLogical>) at walsender.c:2306
#14 0x0000000000afbdac in StartLogicalReplication (cmd=cmd@entry=0x24da288) at walsender.c:1206
#15 0x0000000000afd646 in exec_replication_command (
cmd_string=cmd_string@entry=0x2452570 "START_REPLICATION SLOT \"sub\" LOGICAL 0/0 (proto_version '3', two_phase 'on', publication_names '\"pub\"')") at walsender.c:1646
#16 0x0000000000ba3514 in PostgresMain (argc=argc@entry=1, argv=argv@entry=0x7ffdd027c560, dbname=<optimized out>, username=<optimized out>) at postgres.c:4482
#17 0x0000000000a7284a in BackendRun (port=port@entry=0x2477b60) at postmaster.c:4491
#18 0x0000000000a78bba in BackendStartup (port=port@entry=0x2477b60) at postmaster.c:4213
#19 0x0000000000a78ff9 in ServerLoop () at postmaster.c:1745
#20 0x0000000000a7bbdf in PostmasterMain (argc=argc@entry=3, argv=argv@entry=0x244bae0) at postmaster.c:1417
#21 0x000000000090dc80 in main (argc=3, argv=0x244bae0) at main.c:209
-------------

I noticed that it called the pgoutput_truncate and pgoutput_begin_prepare functions. This seems odd because the TRUNCATE is not in a prepared transaction in my case.

I tried to debug this to learn more and found that in the pgoutput_truncate function the value of in_prepared_txn was true. It then got a segmentation fault when it tried to read the gid in the logicalrep_write_begin_prepare function - the transaction has no gid, hence the segmentation fault.

FYI:
I also tested the case in synchronous mode, and it executes successfully. So I think the value of in_prepared_txn is sometimes incorrect in asynchronous mode. Maybe there's a better way to track this.

Regards
Tang

#326Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Peter Smith (#319)

On Thu, May 13, 2021 at 3:20 PM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v75*

Review comments for v75-0001-Add-support-for-prepared-transactions-to-built-i:
===============================================================================
1.
-   <term><literal>CREATE_REPLICATION_SLOT</literal> <replaceable
class="parameter">slot_name</replaceable> [
<literal>TEMPORARY</literal> ] { <literal>PHYSICAL</literal> [
<literal>RESERVE_WAL</literal> ] | <literal>LOGICAL</literal>
<replaceable class="parameter">output_plugin</replaceable> [
<literal>EXPORT_SNAPSHOT</literal> |
<literal>NOEXPORT_SNAPSHOT</literal> | <literal>USE_SNAPSHOT</literal>
] }
+   <term><literal>CREATE_REPLICATION_SLOT</literal> <replaceable
class="parameter">slot_name</replaceable> [
<literal>TEMPORARY</literal> ] [ <literal>TWO_PHASE</literal> ] {
<literal>PHYSICAL</literal> [ <literal>RESERVE_WAL</literal> ] |
<literal>LOGICAL</literal> <replaceable
class="parameter">output_plugin</replaceable> [
<literal>EXPORT_SNAPSHOT</literal> |
<literal>NOEXPORT_SNAPSHOT</literal> | <literal>USE_SNAPSHOT</literal>
] }

Can we do some testing of the code related to this in some way? One
random idea could be to change the current subscriber-side code just
for testing purposes to see if this works. Can we enhance and use
pg_recvlogical to test this? It is possible that if you address
comment number 13 below, this can be tested with Create Subscription
command.

2.
-   belong to the same transaction. It also sends changes of large in-progress
-   transactions between a pair of Stream Start and Stream Stop messages. The
-   last stream of such a transaction contains Stream Commit or Stream Abort
-   message.
+   belong to the same transaction. Similarly, all messages between a pair of
+   Begin Prepare and Commit Prepared messages belong to the same transaction.

I think here we need to write Prepare instead of Commit Prepared
because Commit Prepared for a transaction can come at a later point of
time and all the messages in-between won't belong to the same
transaction.

3.
+<!-- ==================== TWO_PHASE Messages ==================== -->
+
+<para>
+The following messages (Begin Prepare, Prepare, Commit Prepared,
Rollback Prepared)
+are available since protocol version 3.
+</para>

I am not sure here marker like "TWO_PHASE Messages" is required. We
don't have any such marker for streaming messages.

4.
+<varlistentry>
+<term>Int64</term>
+<listitem><para>
+                Timestamp of the prepare transaction.

Isn't it better to write this description as "Prepare timestamp of the
transaction" to match with the similar description of Commit
timestamp. Also, there are similar occurances in the patch at other
places, change those as well.

5.
+<term>Begin Prepare</term>
+<listitem>
+<para>
...
+<varlistentry>
+<term>Int32</term>
+<listitem><para>
+                Xid of the subtransaction (will be same as xid of the
transaction for top-level
+                transactions).

The above description seems wrong to me. It should be Xid of the
transaction as we won't receive Xid of subtransaction in Begin
message. The same applies to the prepare/commit prepared/rollback
prepared transaction messages as well, so change that as well
accordingly.

6.
+<term>Byte1('P')</term>
+<listitem><para>
+                Identifies this message as a two-phase prepare
transaction message.
+</para></listitem>

In all the similar messages, we are using "Identifies the message as
...". I feel it is better to be consistent in this and similar
messages in the patch.

7.
+<varlistentry>
+
+<term>Rollback Prepared</term>
+<listitem>
..
+<varlistentry>
+<term>Int64</term>
+<listitem><para>
+                The LSN of the prepare.
+</para></listitem>

This should be end LSN of the prepared transaction.

8.
+bool
+LookupGXact(const char *gid, XLogRecPtr prepare_end_lsn,
+ TimestampTz origin_prepare_timestamp)
..
..
+ /*
+ * We are neither expecting the collisions of GXACTs (same gid)
+ * between publisher and subscribers nor the apply worker restarts
+ * after prepared xacts,

The second part of the comment ".. nor the apply worker restarts after
prepared xacts .." is no longer true after commit 8bdb1332eb [1]. So,
we can remove it.

9.
+ /*
+ * Does the subscription have tables?
+ *
+ * If there were not-READY relations found then we know it does. But if
+ * table_state_no_ready was empty we still need to check again to see
+ * if there are 0 tables.
+ */
+ has_subrels = (list_length(table_states_not_ready) > 0) ||

Typo in comments. /table_state_no_ready/table_state_not_ready

10.
+ if (!twophase)
+ ereport(ERROR,
+ (errcode(ERRCODE_SYNTAX_ERROR),
+ errmsg("unrecognized subscription parameter: \"%s\"", defel->defname)));

errmsg is not aligned properly. Can we make the error message clear,
something like: "cannot change two_phase option"

11.
@@ -69,7 +69,8 @@ parse_subscription_options(List *options,
     char **synchronous_commit,
     bool *refresh,
     bool *binary_given, bool *binary,
-    bool *streaming_given, bool *streaming)
+    bool *streaming_given, bool *streaming,
+    bool *twophase_given, bool *twophase)

This function already has 14 parameters and this patch adds 2 new
ones. Isn't it better to have a struct (ParseSubOptions) for these
parameters? I think that might lead to some code churn but we can have
that as a separate patch on top of which we can create two_pc patch.

12.
* The subscription two_phase commit implementation requires
+ * that replication has passed the initial table
+ * synchronization phase before the two_phase becomes properly
+ * enabled.

Can we slightly modify the starting of this sentence as: "The
subscription option 'two_phase' requires that ..."

13.
@@ -507,7 +558,16 @@ CreateSubscription(CreateSubscriptionStmt *stmt,
bool isTopLevel)
{
Assert(slotname);

- walrcv_create_slot(wrconn, slotname, false,
+ /*
+ * Even if two_phase is set, don't create the slot with
+ * two-phase enabled. Will enable it once all the tables are
+ * synced and ready. This avoids race-conditions like prepared
+ * transactions being skipped due to changes not being applied
+ * due to checks in should_apply_changes_for_rel() when
+ * tablesync for the corresponding tables are in progress. See
+ * comments atop worker.c.
+ */
+ walrcv_create_slot(wrconn, slotname, false, false,

Can't we enable two_phase if copy_data is false? Because in that case,
all relations will be in a READY state. If we do that then we should
also set two_phase state as 'enabled' during createsubscription. I
think we need to be careful to check that connect option is given and
copy_data is false before setting such a state. Now, I guess we may
not be able to optimize this to not set 'enabled' state when the
subscription has no rels.

14.
+ if (options->proto.logical.twophase &&
+ PQserverVersion(conn->streamConn) >= 140000)
+ appendStringInfoString(&cmd, ", two_phase 'on'");
+

We need to check 150000 here but for now, maybe we can add a comment
similar to what you have added in ApplyWorkerMain to avoid forgetting
this change. Probably a similar comment is required in pg_dump.c.

15.
@@ -49,7 +49,7 @@ logicalrep_write_begin(StringInfo out, ReorderBufferTXN *txn)

  /* fixed fields */
  pq_sendint64(out, txn->final_lsn);
- pq_sendint64(out, txn->commit_time);
+ pq_sendint64(out, txn->u_op_time.prepare_time);
  pq_sendint32(out, txn->xid);

Why here prepare_time? It should be commit_time. We use prepare_time
in begin_prepare not in begin.

16.
+logicalrep_write_commit_prepared(StringInfo out, ReorderBufferTXN *txn,
+ XLogRecPtr commit_lsn)
+{
+ uint8 flags = 0;
+
+ pq_sendbyte(out, LOGICAL_REP_MSG_COMMIT_PREPARED);
+
+ /*
+ * This should only ever happen for two-phase commit transactions. In
+ * which case we expect to have a valid GID. Additionally, the transaction
+ * must be prepared. See ReorderBufferFinishPrepared.
+ */
+ Assert(txn->gid != NULL);
+

The second part of the comment ("Additionally, the transaction must be
prepared) is no longer true. Also, we can combine the first two
sentences here and at other places where a similar comment is used.

17.
+ union
+ {
+ TimestampTz commit_time;
+ TimestampTz prepare_time;
+ } u_op_time;

I think it is better to name this union as xact_time or trans_time.

[1]: https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=8bdb1332eb51837c15a10a972c179b84f654279e

--
With Regards,
Amit Kapila.

#327Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Amit Kapila (#326)
2 attachment(s)

Please find attached the latest patch set v77*

Differences from v76* are:

* Rebased to HEAD @ yesterday

* v77* addresses most of Amit's recent feedback comments [1]; I will
reply to that mail separately with the details.

* The v77-003 is temporarily omitted from this patch set. That will be
re-added in v78* early next week.

----
[1]: /messages/by-id/CAA4eK1Jz64rwLyB6H7Z_SmEDouJ41KN42=VkVFp6JTpafJFG8Q@mail.gmail.com

Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

v77-0001-Add-support-for-prepared-transactions-to-built-i.patchapplication/octet-stream; name=v77-0001-Add-support-for-prepared-transactions-to-built-i.patch
v77-0002-Add-prepare-API-support-for-streaming-transactio.patchapplication/octet-stream; name=v77-0002-Add-prepare-API-support-for-streaming-transactio.patch
#328Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Amit Kapila (#326)

On Tue, May 18, 2021 at 9:32 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Thu, May 13, 2021 at 3:20 PM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v75*

Review comments for v75-0001-Add-support-for-prepared-transactions-to-built-i:
===============================================================================
1.
-   <term><literal>CREATE_REPLICATION_SLOT</literal> <replaceable
class="parameter">slot_name</replaceable> [
<literal>TEMPORARY</literal> ] { <literal>PHYSICAL</literal> [
<literal>RESERVE_WAL</literal> ] | <literal>LOGICAL</literal>
<replaceable class="parameter">output_plugin</replaceable> [
<literal>EXPORT_SNAPSHOT</literal> |
<literal>NOEXPORT_SNAPSHOT</literal> | <literal>USE_SNAPSHOT</literal>
] }
+   <term><literal>CREATE_REPLICATION_SLOT</literal> <replaceable
class="parameter">slot_name</replaceable> [
<literal>TEMPORARY</literal> ] [ <literal>TWO_PHASE</literal> ] {
<literal>PHYSICAL</literal> [ <literal>RESERVE_WAL</literal> ] |
<literal>LOGICAL</literal> <replaceable
class="parameter">output_plugin</replaceable> [
<literal>EXPORT_SNAPSHOT</literal> |
<literal>NOEXPORT_SNAPSHOT</literal> | <literal>USE_SNAPSHOT</literal>
] }

Can we do some testing of the code related to this in some way? One
random idea could be to change the current subscriber-side code just
for testing purposes to see if this works. Can we enhance and use
pg_recvlogical to test this? It is possible that if you address
comment number 13 below, this can be tested with Create Subscription
command.

TODO

2.
-   belong to the same transaction. It also sends changes of large in-progress
-   transactions between a pair of Stream Start and Stream Stop messages. The
-   last stream of such a transaction contains Stream Commit or Stream Abort
-   message.
+   belong to the same transaction. Similarly, all messages between a pair of
+   Begin Prepare and Commit Prepared messages belong to the same transaction.

I think here we need to write Prepare instead of Commit Prepared
because Commit Prepared for a transaction can come at a later point of
time and all the messages in-between won't belong to the same
transaction.

Fixed in v77-0001

3.
+<!-- ==================== TWO_PHASE Messages ==================== -->
+
+<para>
+The following messages (Begin Prepare, Prepare, Commit Prepared,
Rollback Prepared)
+are available since protocol version 3.
+</para>

I am not sure here marker like "TWO_PHASE Messages" is required. We
don't have any such marker for streaming messages.

Fixed in v77-0001

4.
+<varlistentry>
+<term>Int64</term>
+<listitem><para>
+                Timestamp of the prepare transaction.

Isn't it better to write this description as "Prepare timestamp of the
transaction" to match with the similar description of Commit
timestamp. Also, there are similar occurances in the patch at other
places, change those as well.

Fixed in v77-0001, v77-0002

5.
+<term>Begin Prepare</term>
+<listitem>
+<para>
...
+<varlistentry>
+<term>Int32</term>
+<listitem><para>
+                Xid of the subtransaction (will be same as xid of the
transaction for top-level
+                transactions).

The above description seems wrong to me. It should be Xid of the
transaction as we won't receive Xid of subtransaction in Begin
message. The same applies to the prepare/commit prepared/rollback
prepared transaction messages as well, so change that as well
accordingly.

Fixed in v77-0001, v77-0002

6.
+<term>Byte1('P')</term>
+<listitem><para>
+                Identifies this message as a two-phase prepare
transaction message.
+</para></listitem>

In all the similar messages, we are using "Identifies the message as
...". I feel it is better to be consistent in this and similar
messages in the patch.

Fixed in v77-0001, v77-0002

7.
+<varlistentry>
+
+<term>Rollback Prepared</term>
+<listitem>
..
+<varlistentry>
+<term>Int64</term>
+<listitem><para>
+                The LSN of the prepare.
+</para></listitem>

This should be end LSN of the prepared transaction.

Fixed in v77-0001

8.
+bool
+LookupGXact(const char *gid, XLogRecPtr prepare_end_lsn,
+ TimestampTz origin_prepare_timestamp)
..
..
+ /*
+ * We are neither expecting the collisions of GXACTs (same gid)
+ * between publisher and subscribers nor the apply worker restarts
+ * after prepared xacts,

The second part of the comment ".. nor the apply worker restarts after
prepared xacts .." is no longer true after commit 8bdb1332eb[1]. So,
we can remove it.

Fixed in v77-0001

9.
+ /*
+ * Does the subscription have tables?
+ *
+ * If there were not-READY relations found then we know it does. But if
+ * table_state_no_ready was empty we still need to check again to see
+ * if there are 0 tables.
+ */
+ has_subrels = (list_length(table_states_not_ready) > 0) ||

Typo in comments. /table_state_no_ready/table_state_not_ready

Fixed in v77-0001

10.
+ if (!twophase)
+ ereport(ERROR,
+ (errcode(ERRCODE_SYNTAX_ERROR),
+ errmsg("unrecognized subscription parameter: \"%s\"", defel->defname)));

errmsg is not aligned properly. Can we make the error message clear,
something like: "cannot change two_phase option"

Fixed in v77-0001.

I fixed the alignment, but did not modify the message text. This
message was already changed in v74 to make it more consistent with
similar errors. Please see comment #1 of Vignesh's feedback [1]:
/messages/by-id/CALDaNm0u=QGwd7jDAj-4u=7vvPn5rarFjBMCgfiJbDte55CWAA@mail.gmail.com

11.
@@ -69,7 +69,8 @@ parse_subscription_options(List *options,
char **synchronous_commit,
bool *refresh,
bool *binary_given, bool *binary,
-    bool *streaming_given, bool *streaming)
+    bool *streaming_given, bool *streaming,
+    bool *twophase_given, bool *twophase)

This function already has 14 parameters and this patch adds 2 new
ones. Isn't it better to have a struct (ParseSubOptions) for these
parameters? I think that might lead to some code churn but we can have
that as a separate patch on top of which we can create two_pc patch.

This same modification is already being addressed in another thread
[2]. This needs to be re-based later, after the other patch is pushed.

12.
* The subscription two_phase commit implementation requires
+ * that replication has passed the initial table
+ * synchronization phase before the two_phase becomes properly
+ * enabled.

Can we slightly modify the starting of this sentence as: "The
subscription option 'two_phase' requires that ..."

Fixed in v77-0001

13.
@@ -507,7 +558,16 @@ CreateSubscription(CreateSubscriptionStmt *stmt,
bool isTopLevel)
{
Assert(slotname);

- walrcv_create_slot(wrconn, slotname, false,
+ /*
+ * Even if two_phase is set, don't create the slot with
+ * two-phase enabled. Will enable it once all the tables are
+ * synced and ready. This avoids race-conditions like prepared
+ * transactions being skipped due to changes not being applied
+ * due to checks in should_apply_changes_for_rel() when
+ * tablesync for the corresponding tables are in progress. See
+ * comments atop worker.c.
+ */
+ walrcv_create_slot(wrconn, slotname, false, false,

Can't we enable two_phase if copy_data is false? Because in that case,
all relations will be in a READY state. If we do that then we should
also set two_phase state as 'enabled' during createsubscription. I
think we need to be careful to check that connect option is given and
copy_data is false before setting such a state. Now, I guess we may
not be able to optimize this to not set 'enabled' state when the
subscription has no rels.

Fixed in v77-0001

14.
+ if (options->proto.logical.twophase &&
+ PQserverVersion(conn->streamConn) >= 140000)
+ appendStringInfoString(&cmd, ", two_phase 'on'");
+

We need to check 150000 here but for now, maybe we can add a comment
similar to what you have added in ApplyWorkerMain to avoid forgetting
this change. Probably a similar comment is required pg_dump.c.

Fixed in v77-0001

15.
@@ -49,7 +49,7 @@ logicalrep_write_begin(StringInfo out, ReorderBufferTXN *txn)

/* fixed fields */
pq_sendint64(out, txn->final_lsn);
- pq_sendint64(out, txn->commit_time);
+ pq_sendint64(out, txn->u_op_time.prepare_time);
pq_sendint32(out, txn->xid);

Why here prepare_time? It should be commit_time. We use prepare_time
in begin_prepare not in begin.

Fixed in v77-0001

16.
+logicalrep_write_commit_prepared(StringInfo out, ReorderBufferTXN *txn,
+ XLogRecPtr commit_lsn)
+{
+ uint8 flags = 0;
+
+ pq_sendbyte(out, LOGICAL_REP_MSG_COMMIT_PREPARED);
+
+ /*
+ * This should only ever happen for two-phase commit transactions. In
+ * which case we expect to have a valid GID. Additionally, the transaction
+ * must be prepared. See ReorderBufferFinishPrepared.
+ */
+ Assert(txn->gid != NULL);
+

The second part of the comment ("Additionally, the transaction must be
prepared) is no longer true. Also, we can combine the first two
sentences here and at other places where a similar comment is used.

Fixed in v77-0001, v77-0002

17.
+ union
+ {
+ TimestampTz commit_time;
+ TimestampTz prepare_time;
+ } u_op_time;

I think it is better to name this union as xact_time or trans_time.

Fixed in v77-0001, v77-0002

--------
[1]: /messages/by-id/CALDaNm0u=QGwd7jDAj-4u=7vvPn5rarFjBMCgfiJbDte55CWAA@mail.gmail.com
[2]: /messages/by-id/CALj2ACWEjphPsfpyX9M+RdqmoRwRbWVKMoW7Tx1o+h+oNEs4pQ@mail.gmail.com

Kind Regards,
Peter Smith.
Fujitsu Australia

#329Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Peter Smith (#328)
3 attachment(s)

On Fri, May 21, 2021 at 6:43 PM Peter Smith <smithpb2250@gmail.com> wrote:

Fixed in v77-0001, v77-0002

Attaching a new patch-set that rebases the patch, addresses review
comments from Peter as well as a test failure reported by Tang. I've
also added some new test case into patch-2 authored by Tang.

I've addressed the following comments:

On Tue, May 18, 2021 at 6:53 PM Peter Smith <smithpb2250@gmail.com> wrote:

1. File: doc/src/sgml/logicaldecoding.sgml

1.1

@@ -862,11 +862,19 @@ typedef void (*LogicalDecodePrepareCB) (struct
LogicalDecodingContext *ctx,
The required <function>commit_prepared_cb</function> callback is called
whenever a transaction <command>COMMIT PREPARED</command> has
been decoded.
The <parameter>gid</parameter> field, which is part of the
-      <parameter>txn</parameter> parameter, can be used in this callback.
+      <parameter>txn</parameter> parameter, can be used in this callback. The
+      parameters <parameter>prepare_end_lsn</parameter> and
+      <parameter>prepare_time</parameter> can be used to check if the plugin
+      has received this <command>PREPARE TRANSACTION</command> in which case
+      it can apply the rollback, otherwise, it can skip the rollback
operation. The
+      <parameter>gid</parameter> alone is not sufficient because the downstream
+      node can have a prepared transaction with same identifier.

This is in the commit prepared section, but that new text is referring
to "it can apply to the rollback" etc.
Is this deliberate text, or maybe cut/paste error?

==========

Fixed.

2. File: src/backend/replication/pgoutput/pgoutput.c

2.1

@@ -76,6 +78,7 @@ static void
pgoutput_stream_prepare_txn(LogicalDecodingContext *ctx,

static bool publications_valid;
static bool in_streaming;
+static bool in_prepared_txn;

Wondering why this is a module static flag. That makes it looks like
it somehow applies globally to all the functions in this scope, but
really I think this is just a txn property, right?
- e.g. why not use another member of the private TXN data instead? or
- e.g. why not use rbtxn_prepared(txn) macro?

----------

I've removed this flag and used the rbtxn_prepared(txn) macro. This
also seems to fix the crash reported by Tang, where it was trying to
send a "BEGIN PREPARE" as part of a non-prepared txn. I've changed the
logic to rely on the prepared flag in the txn to decide whether BEGIN
or BEGIN PREPARE needs to be sent.

2.2

@@ -404,10 +410,32 @@ pgoutput_startup(LogicalDecodingContext *ctx,
OutputPluginOptions *opt,
static void
pgoutput_begin_txn(LogicalDecodingContext *ctx, ReorderBufferTXN *txn)
{
+ PGOutputTxnData    *data = MemoryContextAllocZero(ctx->context,
+ sizeof(PGOutputTxnData));
+
+ (void)txn; /* keep compiler quiet */

I guess since now the arg "txn" is being used the added statement to
"keep compiler quiet" is now redundant, so should be removed.

Removed this.

----------

2.3

+static void
+pgoutput_begin(LogicalDecodingContext *ctx, ReorderBufferTXN *txn)
+{
bool send_replication_origin = txn->origin_id != InvalidRepOriginId;
+ PGOutputTxnData *data = (PGOutputTxnData *) txn->output_plugin_private;

OutputPluginPrepareWrite(ctx, !send_replication_origin);
logicalrep_write_begin(ctx->out, txn);
+ data->sent_begin_txn = true;

I wondered is it worth adding Assert(data); here?

----------

Added.

2.4

@@ -422,8 +450,14 @@ static void
pgoutput_commit_txn(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
XLogRecPtr commit_lsn)
{
+ PGOutputTxnData *data = (PGOutputTxnData *) txn->output_plugin_private;
+
OutputPluginUpdateProgress(ctx);

I wondered is it worthwhile to add Assert(data); here also?

----------

Added.

2.5
@@ -422,8 +450,14 @@ static void
pgoutput_commit_txn(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
XLogRecPtr commit_lsn)
{
+ PGOutputTxnData *data = (PGOutputTxnData *) txn->output_plugin_private;
+
OutputPluginUpdateProgress(ctx);
+ /* skip COMMIT message if nothing was sent */
+ if (!data->sent_begin_txn)
+ return;

Shouldn't this code also be freeing that allocated data? I think you
do free it in similar functions later in this patch.

----------

Modified this.

2.6

@@ -435,10 +469,31 @@ pgoutput_commit_txn(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn,
static void
pgoutput_begin_prepare_txn(LogicalDecodingContext *ctx, ReorderBufferTXN *txn)
{
+ PGOutputTxnData    *data = MemoryContextAllocZero(ctx->context,
+ sizeof(PGOutputTxnData));
+
+ /*
+ * Don't send BEGIN message here. Instead, postpone it until the first
+ * change. In logical replication, a common scenario is to replicate a set
+ * of tables (instead of all tables) and transactions whose changes were on
+ * table(s) that are not published will produce empty transactions. These
+ * empty transactions will send BEGIN and COMMIT messages to subscribers,
+ * using bandwidth on something with little/no use for logical replication.
+ */
+ data->sent_begin_txn = false;
+ txn->output_plugin_private = data;
+ in_prepared_txn = true;
+}

Apart from setting the in_prepared_txn = true; this is all identical
code to pgoutput_begin_txn so you could consider just delegating to
call that other function to save all the cut/paste data allocation and
big comment. Or maybe this way is better - I am not sure.

----------

Updated this.

2.7

+static void
+pgoutput_begin_prepare(LogicalDecodingContext *ctx, ReorderBufferTXN *txn)
+{
bool send_replication_origin = txn->origin_id != InvalidRepOriginId;
+ PGOutputTxnData    *data = (PGOutputTxnData *) txn->output_plugin_private;

OutputPluginPrepareWrite(ctx, !send_replication_origin);
logicalrep_write_begin_prepare(ctx->out, txn);
+ data->sent_begin_txn = true;

I wondered is it worth adding Assert(data); here also?

----------

Added Assert.

2.8

@@ -453,11 +508,18 @@ static void
pgoutput_prepare_txn(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
XLogRecPtr prepare_lsn)
{
+ PGOutputTxnData    *data = (PGOutputTxnData *) txn->output_plugin_private;
+
OutputPluginUpdateProgress(ctx);

I wondered is it worth adding Assert(data); here also?

----------

Added.

2.9

@@ -465,12 +527,28 @@ pgoutput_prepare_txn(LogicalDecodingContext
*ctx, ReorderBufferTXN *txn,
*/
static void
pgoutput_commit_prepared_txn(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn,
- XLogRecPtr commit_lsn)
+ XLogRecPtr commit_lsn, XLogRecPtr prepare_end_lsn,
+ TimestampTz prepare_time)
{
+ PGOutputTxnData    *data = (PGOutputTxnData *) txn->output_plugin_private;
+
OutputPluginUpdateProgress(ctx);
+ /*
+ * skip sending COMMIT PREPARED message if prepared transaction
+ * has not been sent.
+ */
+ if (data && !data->sent_begin_txn)
+ {
+ pfree(data);
+ return;
+ }
+
+ if (data)
+ pfree(data);
OutputPluginPrepareWrite(ctx, true);

I think this pfree logic might be refactored more simply to just be
done in one place. e.g. like:

if (data)
{
bool skip = !data->sent_begin_txn;
pfree(data);
if (skip)
return;
}

BTW, is it even possible to get in this function with NULL private
data? Perhaps that should be an Assert(data) ?

----------

Changed the logic as per your suggestion, but did not add the Assert,
because this function can be entered with NULL private data. This can
happen because the commit prepared for the transaction can arrive
after a restart of the walsender, and the previously set-up private
data is lost. This is only applicable for commit prepared and rollback
prepared.

2.10

@@ -483,8 +561,22 @@ pgoutput_rollback_prepared_txn(LogicalDecodingContext *ctx,
XLogRecPtr prepare_end_lsn,
TimestampTz prepare_time)
{
+ PGOutputTxnData    *data = (PGOutputTxnData *) txn->output_plugin_private;
+
OutputPluginUpdateProgress(ctx);
+ /*
+ * skip sending COMMIT PREPARED message if prepared transaction
+ * has not been sent.
+ */
+ if (data && !data->sent_begin_txn)
+ {
+ pfree(data);
+ return;
+ }
+
+ if (data)
+ pfree(data);

Same comment as above for refactoring the pfree logic.

----------

Refactored.

2.11

@@ -483,8 +561,22 @@ pgoutput_rollback_prepared_txn(LogicalDecodingContext *ctx,
XLogRecPtr prepare_end_lsn,
TimestampTz prepare_time)
{
+ PGOutputTxnData    *data = (PGOutputTxnData *) txn->output_plugin_private;
+
OutputPluginUpdateProgress(ctx);
+ /*
+ * skip sending COMMIT PREPARED message if prepared transaction
+ * has not been sent.
+ */
+ if (data && !data->sent_begin_txn)
+ {
+ pfree(data);
+ return;
+ }
+
+ if (data)
+ pfree(data);

Is that comment correct or cut/paste error? Why does it say "COMMIT PREPARED" ?

----------

Fixed.

2.12

@@ -613,6 +705,7 @@ pgoutput_change(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn,
Relation relation, ReorderBufferChange *change)
{
PGOutputData *data = (PGOutputData *) ctx->output_plugin_private;
+ PGOutputTxnData *txndata = (PGOutputTxnData *) txn->output_plugin_private;
MemoryContext old;

I wondered is it worth adding Assert(txndata); here also?

----------

Added.

2.13

@@ -750,6 +852,7 @@ pgoutput_truncate(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn,
int nrelations, Relation relations[], ReorderBufferChange *change)
{
PGOutputData *data = (PGOutputData *) ctx->output_plugin_private;
+ PGOutputTxnData *txndata = (PGOutputTxnData *) txn->output_plugin_private;
MemoryContext old;

I wondered is it worth adding Assert(txndata); here also?

----------

Added.

2.14

@@ -813,11 +925,15 @@ pgoutput_message(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn,
const char *message)
{
PGOutputData *data = (PGOutputData *) ctx->output_plugin_private;
+ PGOutputTxnData *txndata;
TransactionId xid = InvalidTransactionId;

if (!data->messages)
return;

+ if (txn && txn->output_plugin_private)
+ txndata = (PGOutputTxnData *) txn->output_plugin_private;
+
/*
* Remember the xid for the message in streaming mode. See
* pgoutput_change.
@@ -825,6 +941,19 @@ pgoutput_message(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn,
if (in_streaming)
xid = txn->xid;
+ /* output BEGIN if we haven't yet, avoid for streaming and
non-transactional messages */
+ if (!in_streaming && transactional)
+ {
+ txndata = (PGOutputTxnData *) txn->output_plugin_private;
+ if (!txndata->sent_begin_txn)
+ {
+ if (!in_prepared_txn)
+ pgoutput_begin(ctx, txn);
+ else
+ pgoutput_begin_prepare(ctx, txn);
+ }
+ }
That code:
+ if (txn && txn->output_plugin_private)
+ txndata = (PGOutputTxnData *) txn->output_plugin_private;
looked misplaced to me.

Shouldn't all that be relocated to be put inside the if block:
+ if (!in_streaming && transactional)

And when you do that maybe the condition can be simplified because you could
Assert(txn);

==========

Removed that redundant code, but cannot add an Assert here, because
for streaming and non-transactional messages there will be no
output_plugin_private.

3. File src/include/replication/pgoutput.h

3.1

@@ -30,4 +30,9 @@ typedef struct PGOutputData
bool two_phase;
} PGOutputData;

+typedef struct PGOutputTxnData
+{
+ bool sent_begin_txn; /* flag indicating whether begin has been sent */
+} PGOutputTxnData;
+

Why is this typedef here? IIUC it is only used inside the pgoutput.c,
so shouldn't it be declared in that file also?

----------

Changed this accordingly.

3.2

@@ -30,4 +30,9 @@ typedef struct PGOutputData
bool two_phase;
} PGOutputData;

+typedef struct PGOutputTxnData
+{
+ bool sent_begin_txn; /* flag indicating whether begin has been sent */
+} PGOutputTxnData;
+

That is a new typedef so maybe your patch also should update the
src/tools/pgindent/typedefs.list to name this new typedef.

----------

Added.

regards,
Ajin Cherian
Fujitsu Australia

Attachments:

v78-0003-Skip-empty-transactions-for-logical-replication.patch (application/octet-stream)
v78-0001-Add-support-for-prepared-transactions-to-built-i.patch (application/octet-stream)
v78-0002-Add-prepare-API-support-for-streaming-transactio.patch (application/octet-stream)
#330tanghy.fnst@fujitsu.com
tanghy.fnst@fujitsu.com
tanghy.fnst@fujitsu.com
In reply to: Peter Smith (#328)
2 attachment(s)
RE: [HACKERS] logical decoding of two-phase transactions

13.
@@ -507,7 +558,16 @@ CreateSubscription(CreateSubscriptionStmt *stmt,
bool isTopLevel)
{
Assert(slotname);

- walrcv_create_slot(wrconn, slotname, false,
+ /*
+ * Even if two_phase is set, don't create the slot with
+ * two-phase enabled. Will enable it once all the tables are
+ * synced and ready. This avoids race-conditions like prepared
+ * transactions being skipped due to changes not being applied
+ * due to checks in should_apply_changes_for_rel() when
+ * tablesync for the corresponding tables are in progress. See
+ * comments atop worker.c.
+ */
+ walrcv_create_slot(wrconn, slotname, false, false,

Can't we enable two_phase if copy_data is false? Because in that case,
all relations will be in a READY state. If we do that then we should
also set two_phase state as 'enabled' during createsubscription. I
think we need to be careful to check that connect option is given and
copy_data is false before setting such a state. Now, I guess we may
not be able to optimize this to not set 'enabled' state when the
subscription has no rels.

Fixed in v77-0001

I noticed this modification in v77-0001 and executed "CREATE SUBSCRIPTION ... WITH (two_phase = on, copy_data = false)", but it crashed.
-------------
postgres=# CREATE SUBSCRIPTION sub CONNECTION 'dbname=postgres' PUBLICATION pub WITH(two_phase = on, copy_data = false);
WARNING: relcache reference leak: relation "pg_subscription" not closed
WARNING: snapshot 0x34278d0 still active
NOTICE: created replication slot "sub" on publisher
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
!?>
-------------

There are two warnings and a segmentation fault in subscriber log:
-------------
2021-05-24 15:08:32.435 CST [2848572] WARNING: relcache reference leak: relation "pg_subscription" not closed
2021-05-24 15:08:32.435 CST [2848572] WARNING: snapshot 0x32ce8b0 still active
2021-05-24 15:08:33.012 CST [2848555] LOG: server process (PID 2848572) was terminated by signal 11: Segmentation fault
2021-05-24 15:08:33.012 CST [2848555] DETAIL: Failed process was running: CREATE SUBSCRIPTION sub CONNECTION 'dbname=postgres' PUBLICATION pub WITH(two_phase = on, copy_data = false);
-------------

The backtrace for the segmentation fault is attached. It happened in the table_close function, because "CurrentResourceOwner" was NULL.

I think it was related to the first warning, which reported a "relcache reference leak". The backtrace information is attached, too. When updating the two-phase state in the CreateSubscription function, CommitTransaction released the resource owner and set "CurrentResourceOwner" to NULL.

The second warning, about "snapshot still active", also happened in the CommitTransaction function. It called the AtEOXact_Snapshot function, which checked for leftover snapshots and reported the warning. I debugged and found that the snapshot was added in the PortalRunUtility function by calling PushActiveSnapshot; the address of "ActiveSnapshot" at that time was the same as the address in the warning.

In summary, when creating a subscription with two_phase = on and copy_data = false, CreateSubscription calls the UpdateTwoPhaseState function to set the two_phase state to 'enabled', which released the relcache reference and the snapshot too early, causing the failure. I think some change should be made to avoid it. Thoughts?

FYI
I also tested the newly released v78* patch-set at [1]; the above problem still exists.
[1]: /messages/by-id/CAFPTHDab56twVmC+0a=RNcRw4KuyFdqzW0JAcvJdS63n_fRnOQ@mail.gmail.com

Regards
Tang

Attachments:

backtrace_segmentation_fault.txt (text/plain)
backtrace_first_warning.txt (text/plain)
#331vignesh C
vignesh C
vignesh21@gmail.com
In reply to: Ajin Cherian (#329)

On Tue, May 25, 2021 at 8:54 AM Ajin Cherian <itsajin@gmail.com> wrote:

On Fri, May 21, 2021 at 6:43 PM Peter Smith <smithpb2250@gmail.com> wrote:

Fixed in v77-0001, v77-0002

Attaching a new patch-set that rebases the patch, addresses review
comments from Peter as well as a test failure reported by Tang. I've
also added some new test case into patch-2 authored by Tang.

Thanks for the updated patch, few comments:
1) Should "The end LSN of the prepare." be changed to "end LSN of the
prepare transaction."?

--- a/doc/src/sgml/protocol.sgml
+++ b/doc/src/sgml/protocol.sgml
@@ -7538,6 +7538,13 @@ are available since protocol version 3.
 <varlistentry>
 <term>Int64</term>
 <listitem><para>
+                The end LSN of the prepare.
+</para></listitem>
+</varlistentry>
+<varlistentry>
+
+<term>Int64</term>
+<listitem><para>
2) Should the ";" be "," here?
+++ b/doc/src/sgml/catalogs.sgml
@@ -7639,6 +7639,18 @@ SCRAM-SHA-256$<replaceable>&lt;iteration
count&gt;</replaceable>:<replaceable>&l
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>subtwophasestate</structfield> <type>char</type>
+      </para>
+      <para>
+       State code:
+       <literal>d</literal> = two_phase mode was not requested, so is disabled;
+       <literal>p</literal> = two_phase mode was requested, but is
pending enablement;
+       <literal>e</literal> = two_phase mode was requested, and is enabled.
+      </para></entry>
3) Should end_lsn be commit_end_lsn?
+       prepare_data->commit_end_lsn = pq_getmsgint64(in);
+       if (prepare_data->commit_end_lsn == InvalidXLogRecPtr)
                elog(ERROR, "end_lsn is not set in commit prepared message");
+       prepare_data->prepare_time = pq_getmsgint64(in);

4) This change is not required

diff --git a/src/include/replication/pgoutput.h
b/src/include/replication/pgoutput.h
index 0dc460f..93c6731 100644
--- a/src/include/replication/pgoutput.h
+++ b/src/include/replication/pgoutput.h
@@ -29,5 +29,4 @@ typedef struct PGOutputData
        bool            messages;
        bool            two_phase;
 } PGOutputData;
-
 #endif                                                 /* PGOUTPUT_H */

5) Will the worker receive commit prepared/rollback prepared as we
have skip logic to skip commit prepared / commit rollback in
pgoutput_rollback_prepared_txn and pgoutput_commit_prepared_txn:

+        * It is possible that we haven't received the prepare because
+        * the transaction did not have any changes relevant to this
+        * subscription and was essentially an empty prepare. In which case,
+        * the walsender is optimized to drop the empty transaction and the
+        * accompanying prepare. Silently ignore if we don't find the prepared
+        * transaction.
         */
-       replorigin_session_origin_lsn = prepare_data.end_lsn;
-       replorigin_session_origin_timestamp = prepare_data.commit_time;
+       if (LookupGXact(gid, prepare_data.prepare_end_lsn,
+                                       prepare_data.prepare_time))
+       {

6) I'm not sure if we could add some tests for skip empty prepare
transactions, if possible add few tests.

7) We could add some debug level log messages for the transaction that
will be skipped.

Regards,
Vignesh

#332Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: tanghy.fnst@fujitsu.com (#330)
3 attachment(s)

On Tue, May 25, 2021 at 4:41 PM tanghy.fnst@fujitsu.com
<tanghy.fnst@fujitsu.com> wrote:

Fixed in v77-0001

I noticed this modification in v77-0001 and executed "CREATE SUBSCRIPTION ... WITH (two_phase = on, copy_data = false)", but it crashed.
-------------
postgres=# CREATE SUBSCRIPTION sub CONNECTION 'dbname=postgres' PUBLICATION pub WITH(two_phase = on, copy_data = false);
WARNING: relcache reference leak: relation "pg_subscription" not closed
WARNING: snapshot 0x34278d0 still active
NOTICE: created replication slot "sub" on publisher
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
!?>
-------------

There are two warnings and a segmentation fault in subscriber log:
-------------
2021-05-24 15:08:32.435 CST [2848572] WARNING: relcache reference leak: relation "pg_subscription" not closed
2021-05-24 15:08:32.435 CST [2848572] WARNING: snapshot 0x32ce8b0 still active
2021-05-24 15:08:33.012 CST [2848555] LOG: server process (PID 2848572) was terminated by signal 11: Segmentation fault
2021-05-24 15:08:33.012 CST [2848555] DETAIL: Failed process was running: CREATE SUBSCRIPTION sub CONNECTION 'dbname=postgres' PUBLICATION pub WITH(two_phase = on, copy_data = false);
-------------

Hi Tang,
I've attached a patch that fixes this issue. Do test and confirm.

regards,
Ajin Cherian
Fujitsu Australia

Attachments:

v79-0002-Add-prepare-API-support-for-streaming-transactio.patch (application/octet-stream)
v79-0003-Skip-empty-transactions-for-logical-replication.patch (application/octet-stream)
v79-0001-Add-support-for-prepared-transactions-to-built-i.patch (application/octet-stream)
#333tanghy.fnst@fujitsu.com
tanghy.fnst@fujitsu.com
tanghy.fnst@fujitsu.com
In reply to: Ajin Cherian (#332)
RE: [HACKERS] logical decoding of two-phase transactions

On Wed, May 26, 2021 10:13 PM Ajin Cherian <itsajin@gmail.com> wrote:

I've attached a patch that fixes this issue. Do test and confirm.

Thanks for your patch.
I have tested and confirmed that the issue I reported has been fixed.

Regards
Tang

#334Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: tanghy.fnst@fujitsu.com (#333)

On Thu, May 27, 2021 at 11:20 AM tanghy.fnst@fujitsu.com
<tanghy.fnst@fujitsu.com> wrote:

On Wed, May 26, 2021 10:13 PM Ajin Cherian <itsajin@gmail.com> wrote:

I've attached a patch that fixes this issue. Do test and confirm.

Thanks for your patch.
I have tested and confirmed that the issue I reported has been fixed.

Thanks for confirmation. The problem was, as you reported, a table not
closed when a transaction was committed. This happened because the
UpdateTwoPhaseState function was committing a transaction internally
while its caller, CreateSubscription, still had a table open. The
function was newly invoked from the CreateSubscription code to handle
the new case of two_phase being enabled at CREATE SUBSCRIPTION time
when "copy_data = false". CreateSubscription does not require this to
run inside its own transaction; the commit was only meant for the
place the function was originally created for, the apply worker code
(ApplyWorkerMain()). So, I removed the transaction commit from inside
UpdateTwoPhaseState() and instead started and committed the
transaction before and after the function is invoked in the apply
worker code.

regards,
Ajin Cherian
Fujitsu Australia

#335Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: vignesh C (#331)
3 attachment(s)

On Wed, May 26, 2021 at 6:53 PM vignesh C <vignesh21@gmail.com> wrote:

On Tue, May 25, 2021 at 8:54 AM Ajin Cherian <itsajin@gmail.com> wrote:

On Fri, May 21, 2021 at 6:43 PM Peter Smith <smithpb2250@gmail.com> wrote:

Fixed in v77-0001, v77-0002

Attaching a new patch-set that rebases the patch, addresses review
comments from Peter as well as a test failure reported by Tang. I've
also added some new test case into patch-2 authored by Tang.

Thanks for the updated patch, few comments:
1) Should "The end LSN of the prepare." be changed to "end LSN of the
prepare transaction."?

No, this is the end LSN of the prepare. The prepare consists of multiple LSNs.

2) Should the ";" be "," here?
+++ b/doc/src/sgml/catalogs.sgml
@@ -7639,6 +7639,18 @@ SCRAM-SHA-256$<replaceable>&lt;iteration
count&gt;</replaceable>:<replaceable>&l
<row>
<entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>subtwophasestate</structfield> <type>char</type>
+      </para>
+      <para>
+       State code:
+       <literal>d</literal> = two_phase mode was not requested, so is disabled;
+       <literal>p</literal> = two_phase mode was requested, but is
pending enablement;
+       <literal>e</literal> = two_phase mode was requested, and is enabled.
+      </para></entry>

No, I think the ";" is correct here; it connects the multiple parts of the sentence.

3) Should end_lsn be commit_end_lsn?
+       prepare_data->commit_end_lsn = pq_getmsgint64(in);
+       if (prepare_data->commit_end_lsn == InvalidXLogRecPtr)
elog(ERROR, "end_lsn is not set in commit prepared message");
+       prepare_data->prepare_time = pq_getmsgint64(in);

Changed this.

4) This change is not required

diff --git a/src/include/replication/pgoutput.h
b/src/include/replication/pgoutput.h
index 0dc460f..93c6731 100644
--- a/src/include/replication/pgoutput.h
+++ b/src/include/replication/pgoutput.h
@@ -29,5 +29,4 @@ typedef struct PGOutputData
bool            messages;
bool            two_phase;
} PGOutputData;
-

removed.

#endif /* PGOUTPUT_H */

5) Will the worker receive commit prepared/rollback prepared as we
have skip logic to skip commit prepared / commit rollback in
pgoutput_rollback_prepared_txn and pgoutput_commit_prepared_txn:

+        * It is possible that we haven't received the prepare because
+        * the transaction did not have any changes relevant to this
+        * subscription and was essentially an empty prepare. In which case,
+        * the walsender is optimized to drop the empty transaction and the
+        * accompanying prepare. Silently ignore if we don't find the prepared
+        * transaction.
*/
-       replorigin_session_origin_lsn = prepare_data.end_lsn;
-       replorigin_session_origin_timestamp = prepare_data.commit_time;
+       if (LookupGXact(gid, prepare_data.prepare_end_lsn,
+                                       prepare_data.prepare_time))
+       {

Commit prepared will be skipped if it happens in the same walsender's
lifetime. But if the walsender restarts, it no longer knows about the
skipped prepare, so it will not skip the commit prepared. Hence the
logic for handling a stray commit prepared in the apply worker.

6) I'm not sure if we could add some tests for skip empty prepare
transactions, if possible add few tests.

I've added a test case using pg_logical_slot_peek_binary_changes() for
empty prepares; have a look.

7) We could add some debug level log messages for the transaction that
will be skipped.

If this is for the test, I was able to add a test without debug messages.

regards,
Ajin Cherian
Fujitsu Australia

Attachments:

v80-0001-Add-support-for-prepared-transactions-to-built-i.patch (application/octet-stream)
v80-0003-Skip-empty-transactions-for-logical-replication.patch (application/octet-stream)
v80-0002-Add-prepare-API-support-for-streaming-transactio.patch (application/octet-stream)
#336Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Ajin Cherian (#335)
3 attachment(s)

On Fri, May 28, 2021 at 1:44 PM Ajin Cherian <itsajin@gmail.com> wrote:

Sorry, please ignore the previous patch-set. I attached the wrong
files. Here's the correct patch-set.

regards,
Ajin Cherian
Fujitsu Australia

Attachments:

v80-0002-Add-prepare-API-support-for-streaming-transactio.patch (application/octet-stream)
v80-0003-Skip-empty-transactions-for-logical-replication.patch (application/octet-stream)
v80-0001-Add-support-for-prepared-transactions-to-built-i.patch (application/octet-stream)
#337vignesh C
vignesh C
vignesh21@gmail.com
In reply to: Ajin Cherian (#335)

On Fri, May 28, 2021 at 9:14 AM Ajin Cherian <itsajin@gmail.com> wrote:

On Wed, May 26, 2021 at 6:53 PM vignesh C <vignesh21@gmail.com> wrote:

On Tue, May 25, 2021 at 8:54 AM Ajin Cherian <itsajin@gmail.com> wrote:

On Fri, May 21, 2021 at 6:43 PM Peter Smith <smithpb2250@gmail.com> wrote:

Fixed in v77-0001, v77-0002

Attaching a new patch-set that rebases the patch, addresses review
comments from Peter as well as a test failure reported by Tang. I've
also added some new test case into patch-2 authored by Tang.

Thanks for the updated patch, few comments:
1) Should "The end LSN of the prepare." be changed to "end LSN of the
prepare transaction."?

No, this is the end LSN of the prepare. The prepare consists of multiple LSNs.

2) Should the ";" be "," here?
+++ b/doc/src/sgml/catalogs.sgml
@@ -7639,6 +7639,18 @@ SCRAM-SHA-256$<replaceable>&lt;iteration
count&gt;</replaceable>:<replaceable>&l
<row>
<entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>subtwophasestate</structfield> <type>char</type>
+      </para>
+      <para>
+       State code:
+       <literal>d</literal> = two_phase mode was not requested, so is disabled;
+       <literal>p</literal> = two_phase mode was requested, but is
pending enablement;
+       <literal>e</literal> = two_phase mode was requested, and is enabled.
+      </para></entry>

no, I think the ";" is correct here, connecting multiple parts of the sentence.
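As an aside, the three state codes documented in the catalog snippet above form a small lifecycle. A hedged sketch of the one transition involved, with an invented function name and shape (in the real code the apply worker promotes the pending state once all tablesyncs are READY):

```c
#include <assert.h>
#include <stdbool.h>

/* subtwophasestate values from the quoted catalog documentation. */
#define TWOPHASE_DISABLED 'd'	/* two_phase mode was not requested */
#define TWOPHASE_PENDING  'p'	/* requested, pending enablement */
#define TWOPHASE_ENABLED  'e'	/* requested and enabled */

/*
 * Illustrative transition function: 'd' and 'e' are terminal; 'p' is
 * promoted to 'e' once every tablesync worker has reached READY state.
 */
static char
next_twophase_state(char state, bool all_tablesyncs_ready)
{
	if (state == TWOPHASE_PENDING && all_tablesyncs_ready)
		return TWOPHASE_ENABLED;
	return state;
}
```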

3) Should end_lsn be commit_end_lsn?
+       prepare_data->commit_end_lsn = pq_getmsgint64(in);
+       if (prepare_data->commit_end_lsn == InvalidXLogRecPtr)
elog(ERROR, "end_lsn is not set in commit prepared message");
+       prepare_data->prepare_time = pq_getmsgint64(in);

Changed this.

4) This change is not required

diff --git a/src/include/replication/pgoutput.h
b/src/include/replication/pgoutput.h
index 0dc460f..93c6731 100644
--- a/src/include/replication/pgoutput.h
+++ b/src/include/replication/pgoutput.h
@@ -29,5 +29,4 @@ typedef struct PGOutputData
bool            messages;
bool            two_phase;
} PGOutputData;
-

removed.

#endif /* PGOUTPUT_H */

5) Will the worker receive commit prepared/rollback prepared as we
have skip logic to skip commit prepared / commit rollback in
pgoutput_rollback_prepared_txn and pgoutput_commit_prepared_txn:

+        * It is possible that we haven't received the prepare because
+        * the transaction did not have any changes relevant to this
+        * subscription and was essentially an empty prepare. In which case,
+        * the walsender is optimized to drop the empty transaction and the
+        * accompanying prepare. Silently ignore if we don't find the prepared
+        * transaction.
*/
-       replorigin_session_origin_lsn = prepare_data.end_lsn;
-       replorigin_session_origin_timestamp = prepare_data.commit_time;
+       if (LookupGXact(gid, prepare_data.prepare_end_lsn,
+                                       prepare_data.prepare_time))
+       {

Commit prepared will be skipped if it happens in the same walsender's
lifetime. But if the walsender restarts, it no longer knows about the
skipped prepare, and so will not skip the commit prepared. Hence the
logic for handling a stray commit prepared in the apply worker.

6) I'm not sure if we could add some tests for skipping empty prepared
transactions; if possible, add a few tests.

I've added a test case using pg_logical_slot_peek_binary_changes() for
empty prepares; have a look.

7) We could add some debug level log messages for the transaction that
will be skipped.

If this is for the test, I was able to add a test without debug messages.

The idea here is to include debug logs that will help in analyzing any
bugs reported from environments where debugger access might not be
available.
Thanks for fixing the comments and posting an updated patch.

Regards,
Vignesh

#338Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Ajin Cherian (#334)

On Thu, May 27, 2021 at 8:05 AM Ajin Cherian <itsajin@gmail.com> wrote:

Thanks for confirmation. The problem was, as you reported, a table left
open when a transaction was committed. This happened because
UpdateTwoPhaseState() committed a transaction internally while its
caller, CreateSubscription, still had a table open. The function was
newly called from the CreateSubscription code to handle the new use
case of two_phase being enabled on create subscription if "copy_data =
false". I don't think CreateSubscription requires this to run inside a
transaction; the internal commit was only meant for the function's
original call site in the apply worker code (ApplyWorkerMain()).
So, I removed the commit from inside UpdateTwoPhaseState() and instead
start and commit the transaction before and after the function is
invoked in the apply worker code.

You have made these changes in 0002 whereas they should be part of 0001.

One minor comment for 0001.
* Special case: if when tables were specified but copy_data is
+ * false then it is safe to enable two_phase up-front because
+ * those tables are already initially READY state. Note, if
+ * the subscription has no tables then enablement cannot be
+ * done here - we must leave the twophase state as PENDING, to
+ * allow ALTER SUBSCRIPTION ... REFRESH PUBLICATION to work.

Can we slightly modify this comment as: "Note that if tables were
specified but copy_data is false then it is safe to enable two_phase
up-front because those tables are already initially READY state. When
the subscription has no tables, we leave the twophase state as
PENDING, to allow ALTER SUBSCRIPTION ... REFRESH PUBLICATION to work."

Also, I don't see any test after you enable this special case. Is it
covered by existing tests, if not then let's try to add a test for
this?

--
With Regards,
Amit Kapila.

#339Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Amit Kapila (#338)
3 attachment(s)

On Fri, May 28, 2021 at 4:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Thu, May 27, 2021 at 8:05 AM Ajin Cherian <itsajin@gmail.com> wrote:

Thanks for confirmation. The problem was, as you reported, a table left
open when a transaction was committed. This happened because
UpdateTwoPhaseState() committed a transaction internally while its
caller, CreateSubscription, still had a table open. The function was
newly called from the CreateSubscription code to handle the new use
case of two_phase being enabled on create subscription if "copy_data =
false". I don't think CreateSubscription requires this to run inside a
transaction; the internal commit was only meant for the function's
original call site in the apply worker code (ApplyWorkerMain()).
So, I removed the commit from inside UpdateTwoPhaseState() and instead
start and commit the transaction before and after the function is
invoked in the apply worker code.

You have made these changes in 0002 whereas they should be part of 0001.

One minor comment for 0001.
* Special case: if when tables were specified but copy_data is
+ * false then it is safe to enable two_phase up-front because
+ * those tables are already initially READY state. Note, if
+ * the subscription has no tables then enablement cannot be
+ * done here - we must leave the twophase state as PENDING, to
+ * allow ALTER SUBSCRIPTION ... REFRESH PUBLICATION to work.

Can we slightly modify this comment as: "Note that if tables were
specified but copy_data is false then it is safe to enable two_phase
up-front because those tables are already initially READY state. When
the subscription has no tables, we leave the twophase state as
PENDING, to allow ALTER SUBSCRIPTION ... REFRESH PUBLICATION to work."

Created v81, rebased to HEAD. I have corrected the patch-set so that
the fix as well as Tang's test cases are now part of patch 1. Also
applied the minor comment update above.

regards,
Ajin Cherian
Fujitsu Australia

Attachments:

v81-0001-Add-support-for-prepared-transactions-to-built-i.patch (application/octet-stream)
v81-0002-Add-prepare-API-support-for-streaming-transactio.patch (application/octet-stream)
v81-0003-Skip-empty-transactions-for-logical-replication.patch (application/octet-stream)
#340Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#338)

On Fri, May 28, 2021 at 11:55 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

One minor comment for 0001.
* Special case: if when tables were specified but copy_data is
+ * false then it is safe to enable two_phase up-front because
+ * those tables are already initially READY state. Note, if
+ * the subscription has no tables then enablement cannot be
+ * done here - we must leave the twophase state as PENDING, to
+ * allow ALTER SUBSCRIPTION ... REFRESH PUBLICATION to work.

Can we slightly modify this comment as: "Note that if tables were
specified but copy_data is false then it is safe to enable two_phase
up-front because those tables are already initially READY state. When
the subscription has no tables, we leave the twophase state as
PENDING, to allow ALTER SUBSCRIPTION ... REFRESH PUBLICATION to work."

Also, I don't see any test after you enable this special case. Is it
covered by existing tests, if not then let's try to add a test for
this?

I see that Ajin's latest patch has addressed the other comments except
for the above test case suggestion. I have again reviewed the first
patch and have some comments.

Comments on v81-0001-Add-support-for-prepared-transactions-to-built-i
============================================================================
1.
<para>
        The logical replication solution that builds distributed two
phase commit
        using this feature can deadlock if the prepared transaction has locked
-       [user] catalog tables exclusively. They need to inform users to not have
-       locks on catalog tables (via explicit <command>LOCK</command>
command) in
-       such transactions.
+       [user] catalog tables exclusively. To avoid this users must refrain from
+       having locks on catalog tables (via explicit
<command>LOCK</command> command)
+       in such transactions.
       </para>

This change doesn't belong to this patch. I see the proposed text
could be considered as an improvement but still we can do this
separately. We are already trying to improve things in this regard in
the thread [1], so you can propose this change there.

2.
+<varlistentry>
+<term>Byte1('K')</term>
+<listitem><para>
+                Identifies the message as the commit of a two-phase
transaction message.
+</para></listitem>
+</varlistentry>
+
+<varlistentry>
+<term>Int8</term>
+<listitem><para>
+                Flags; currently unused (must be 0).
+</para></listitem>
+</varlistentry>
+
+<varlistentry>
+<term>Int64</term>
+<listitem><para>
+                The LSN of the commit.
+</para></listitem>
+</varlistentry>
+
+<varlistentry>
+<term>Int64</term>
+<listitem><para>
+                The end LSN of the commit transaction.
+</para></listitem>
+</varlistentry>

Can we change the description of LSN's as "The LSN of the commit
prepared." and "The end LSN of the commit prepared transaction."
respectively? This will make their description different from regular
commit and I think that defines them better.
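For illustration, the field layout described by these entries can be sketched as follows. This is a hedged sketch: only the quoted fields are packed, the real message carries further fields (such as the timestamp, xid, and gid), and the helper name is invented.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

static uint8_t msgbuf[18];

/*
 * Pack the quoted portion of the 'K' (commit prepared) message:
 * Byte1('K'), Int8 flags, Int64 commit LSN, Int64 end LSN, with the
 * multi-byte integers in network (big-endian) byte order.
 */
static size_t
pack_commit_prepared(uint8_t *buf, uint8_t flags,
					 uint64_t commit_lsn, uint64_t end_lsn)
{
	size_t		off = 0;
	int			i;

	buf[off++] = 'K';
	buf[off++] = flags;
	for (i = 7; i >= 0; i--)
		buf[off++] = (uint8_t) (commit_lsn >> (8 * i));
	for (i = 7; i >= 0; i--)
		buf[off++] = (uint8_t) (end_lsn >> (8 * i));
	return off;					/* 1 + 1 + 8 + 8 = 18 bytes */
}
```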

3.
+<varlistentry>
+<term>Int64</term>
+<listitem><para>
+                The end LSN of the rollback transaction.
+</para></listitem>
+</varlistentry>

Similar to above, can we change the description here as: "The end LSN
of the rollback prepared transaction."?

4.
+ * The exception to this restriction is when copy_data =
+ * false, because when copy_data is false the tablesync will
+ * start already in READY state and will exit directly without
+ * doing anything which could interfere with the apply
+ * worker's message handling.
+ *
+ * For more details see comments atop worker.c.
+ */
+ if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && copy_data)
+ ereport(ERROR,
+ (errcode(ERRCODE_SYNTAX_ERROR),
+ errmsg("ALTER SUBSCRIPTION ... REFRESH with copy_data is not allowed
when two_phase is enabled"),
+ errhint("Use ALTER SUBSCRIPTION ... REFRESH with copy_data = false"
+ ", or use DROP/CREATE SUBSCRIPTION.")));

The above comment is a bit unclear because it seems you are saying
there is some problem even when copy_data is false. Are you missing
'not' after 'could' in the comment?

5.
 XXX Now, this can even lead to a deadlock if the prepare
  * transaction is waiting to get it logically replicated for
- * distributed 2PC. Currently, we don't have an in-core
- * implementation of prepares for distributed 2PC but some
- * out-of-core logical replication solution can have such an
- * implementation. They need to inform users to not have locks
- * on catalog tables in such transactions.
+ * distributed 2PC. This can be avoided by disallowing to
+ * prepare transactions that have locked [user] catalog tables
+ * exclusively.

Can we slightly modify this part of the comment as: "This can be
avoided by disallowing to prepare transactions that have locked [user]
catalog tables exclusively but as of now we ask users not to do such
operation"?

6.
+AllTablesyncsReady(void)
+{
+ bool found_busy = false;
+ bool started_tx = false;
+ bool has_subrels = false;
+
+ /* We need up-to-date sync state info for subscription tables here. */
+ has_subrels = FetchTableStates(&started_tx);
+
+ found_busy = list_length(table_states_not_ready) > 0;
+
+ if (started_tx)
+ {
+ CommitTransactionCommand();
+ pgstat_report_stat(false);
+ }
+
+ /*
+ * When there are no tables, then return false.
+ * When no tablesyncs are busy, then all are READY
+ */
+ return has_subrels && !found_busy;
+}

Do we really need the found_busy variable in the above function? Can't
we change the return to (has_subrels) && (table_states_not_ready == NIL)?
If so, then change the comments above the return.
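Reduced to its inputs, the return condition under discussion is just the following. This is an illustrative sketch, not the backend function: the list is represented by its length only.

```c
#include <assert.h>
#include <stdbool.h>

/*
 * All tablesyncs are READY only when the subscription has tables and
 * the not-ready list is empty; with no tables at all the result is
 * false.  (Sketch of the condition discussed above.)
 */
static bool
all_tablesyncs_ready_cond(bool has_subrels, int n_not_ready)
{
	return has_subrels && n_not_ready == 0;
}
```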

7.
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ */
+static bool
+FetchTableStates(bool *started_tx)

Can we update comments indicating that if this function starts the
transaction then the caller is responsible to commit it?

8.
(errmsg("logical replication apply worker for subscription \"%s\" will
restart so two_phase can be enabled",
+ MySubscription->name)));

Can we slightly change the message as: "logical replication apply
worker for subscription \"%s\" will restart so that two_phase can be
enabled"?

9.
+void
+UpdateTwoPhaseState(Oid suboid, char new_state)
{
..
+ /* And update/set two_phase ENABLED */
+ values[Anum_pg_subscription_subtwophasestate - 1] = CharGetDatum(new_state);
+ replaces[Anum_pg_subscription_subtwophasestate - 1] = true;
..
}

The above comment seems wrong to me as we are updating the state as
passed by the caller.

[1]: /messages/by-id/20210222222847.tpnb6eg3yiykzpky@alap3.anarazel.de

--
With Regards,
Amit Kapila.

#341Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Amit Kapila (#340)
3 attachment(s)

Please find attached the latest patch set v82*

Differences from v81* are:

* Rebased to HEAD @ yesterday

* v82 addresses all of Amit's feedback comments from [1]; I will reply
to that mail separately with any details.

----
[1]: /messages/by-id/CAA4eK1Jd9sqWtt5kEJZL1ehJB2y_DFnvDjY9vJ51k8Wq6XWVyw@mail.gmail.com

Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

v82-0001-Add-support-for-prepared-transactions-to-built-i.patch (application/octet-stream)
v82-0003-Skip-empty-transactions-for-logical-replication.patch (application/octet-stream)
v82-0002-Add-prepare-API-support-for-streaming-transactio.patch (application/octet-stream)
#342Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Amit Kapila (#340)

On Mon, May 31, 2021 at 9:16 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Fri, May 28, 2021 at 11:55 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

One minor comment for 0001.
* Special case: if when tables were specified but copy_data is
+ * false then it is safe to enable two_phase up-front because
+ * those tables are already initially READY state. Note, if
+ * the subscription has no tables then enablement cannot be
+ * done here - we must leave the twophase state as PENDING, to
+ * allow ALTER SUBSCRIPTION ... REFRESH PUBLICATION to work.

Can we slightly modify this comment as: "Note that if tables were
specified but copy_data is false then it is safe to enable two_phase
up-front because those tables are already initially READY state. When
the subscription has no tables, we leave the twophase state as
PENDING, to allow ALTER SUBSCRIPTION ... REFRESH PUBLICATION to work."

Also, I don't see any test after you enable this special case. Is it
covered by existing tests, if not then let's try to add a test for
this?

I see that Ajin's latest patch has addressed the other comments except
for the above test case suggestion.

Yes, this is a known pending task.

I have again reviewed the first
patch and have some comments.

Comments on v81-0001-Add-support-for-prepared-transactions-to-built-i
============================================================================
1.
<para>
The logical replication solution that builds distributed two
phase commit
using this feature can deadlock if the prepared transaction has locked
-       [user] catalog tables exclusively. They need to inform users to not have
-       locks on catalog tables (via explicit <command>LOCK</command>
command) in
-       such transactions.
+       [user] catalog tables exclusively. To avoid this users must refrain from
+       having locks on catalog tables (via explicit
<command>LOCK</command> command)
+       in such transactions.
</para>

This change doesn't belong to this patch. I see the proposed text
could be considered as an improvement but still we can do this
separately. We are already trying to improve things in this regard in
the thread [1], so you can propose this change there.

OK. This change has been removed in v82, and a patch has been posted to
the other thread here [1].

2.
+<varlistentry>
+<term>Byte1('K')</term>
+<listitem><para>
+                Identifies the message as the commit of a two-phase
transaction message.
+</para></listitem>
+</varlistentry>
+
+<varlistentry>
+<term>Int8</term>
+<listitem><para>
+                Flags; currently unused (must be 0).
+</para></listitem>
+</varlistentry>
+
+<varlistentry>
+<term>Int64</term>
+<listitem><para>
+                The LSN of the commit.
+</para></listitem>
+</varlistentry>
+
+<varlistentry>
+<term>Int64</term>
+<listitem><para>
+                The end LSN of the commit transaction.
+</para></listitem>
+</varlistentry>

Can we change the description of LSN's as "The LSN of the commit
prepared." and "The end LSN of the commit prepared transaction."
respectively? This will make their description different from regular
commit and I think that defines them better.

3.
+<varlistentry>
+<term>Int64</term>
+<listitem><para>
+                The end LSN of the rollback transaction.
+</para></listitem>
+</varlistentry>

Similar to above, can we change the description here as: "The end LSN
of the rollback prepared transaction."?

4.
+ * The exception to this restriction is when copy_data =
+ * false, because when copy_data is false the tablesync will
+ * start already in READY state and will exit directly without
+ * doing anything which could interfere with the apply
+ * worker's message handling.
+ *
+ * For more details see comments atop worker.c.
+ */
+ if (sub->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED && copy_data)
+ ereport(ERROR,
+ (errcode(ERRCODE_SYNTAX_ERROR),
+ errmsg("ALTER SUBSCRIPTION ... REFRESH with copy_data is not allowed
when two_phase is enabled"),
+ errhint("Use ALTER SUBSCRIPTION ... REFRESH with copy_data = false"
+ ", or use DROP/CREATE SUBSCRIPTION.")));

The above comment is a bit unclear because it seems you are saying
there is some problem even when copy_data is false. Are you missing
'not' after 'could' in the comment?

5.
XXX Now, this can even lead to a deadlock if the prepare
* transaction is waiting to get it logically replicated for
- * distributed 2PC. Currently, we don't have an in-core
- * implementation of prepares for distributed 2PC but some
- * out-of-core logical replication solution can have such an
- * implementation. They need to inform users to not have locks
- * on catalog tables in such transactions.
+ * distributed 2PC. This can be avoided by disallowing to
+ * prepare transactions that have locked [user] catalog tables
+ * exclusively.

Can we slightly modify this part of the comment as: "This can be
avoided by disallowing to prepare transactions that have locked [user]
catalog tables exclusively but as of now we ask users not to do such
operation"?

6.
+AllTablesyncsReady(void)
+{
+ bool found_busy = false;
+ bool started_tx = false;
+ bool has_subrels = false;
+
+ /* We need up-to-date sync state info for subscription tables here. */
+ has_subrels = FetchTableStates(&started_tx);
+
+ found_busy = list_length(table_states_not_ready) > 0;
+
+ if (started_tx)
+ {
+ CommitTransactionCommand();
+ pgstat_report_stat(false);
+ }
+
+ /*
+ * When there are no tables, then return false.
+ * When no tablesyncs are busy, then all are READY
+ */
+ return has_subrels && !found_busy;
+}

Do we really need the found_busy variable in the above function? Can't
we change the return to (has_subrels) && (table_states_not_ready == NIL)?
If so, then change the comments above the return.

7.
+/*
+ * Common code to fetch the up-to-date sync state info into the static lists.
+ *
+ * Returns true if subscription has 1 or more tables, else false.
+ */
+static bool
+FetchTableStates(bool *started_tx)

Can we update comments indicating that if this function starts the
transaction then the caller is responsible to commit it?

8.
(errmsg("logical replication apply worker for subscription \"%s\" will
restart so two_phase can be enabled",
+ MySubscription->name)));

Can we slightly change the message as: "logical replication apply
worker for subscription \"%s\" will restart so that two_phase can be
enabled"?

9.
+void
+UpdateTwoPhaseState(Oid suboid, char new_state)
{
..
+ /* And update/set two_phase ENABLED */
+ values[Anum_pg_subscription_subtwophasestate - 1] = CharGetDatum(new_state);
+ replaces[Anum_pg_subscription_subtwophasestate - 1] = true;
..
}

The above comment seems wrong to me as we are updating the state as
passed by the caller.

All the above reported issues 2-9 are addressed in the latest 2PC patch set v82

------
[1]: /messages/by-id/CAHut+PuTjTp_WERO=3Ybp8snTgDpiZeNaxzZhN8ky8XMo4KFVQ@mail.gmail.com

Kind Regards,
Peter Smith.
Fujitsu Australia

#343Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Peter Smith (#341)

On Wed, Jun 2, 2021 at 4:34 AM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v82*

Few comments on 0001:
====================
1.
+ /*
+ * BeginTransactionBlock is necessary to balance the EndTransactionBlock
+ * called within the PrepareTransactionBlock below.
+ */
+ BeginTransactionBlock();
+ CommitTransactionCommand();
+
+ /*
+ * Update origin state so we can restart streaming from correct position
+ * in case of crash.
+ */
+ replorigin_session_origin_lsn = prepare_data.end_lsn;
+ replorigin_session_origin_timestamp = prepare_data.prepare_time;
+
+ PrepareTransactionBlock(gid);
+ CommitTransactionCommand();

Here, the call to CommitTransactionCommand() twice looks a bit odd.
Before the first call, can we write a comment like "This is to
complete the Begin command started by the previous call"?

2.
@@ -85,11 +85,16 @@ typedef struct LogicalDecodingContext
bool streaming;

  /*
- * Does the output plugin support two-phase decoding, and is it enabled?
+ * Does the output plugin support two-phase decoding.
  */
  bool twophase;
  /*
+ * Is two-phase option given by output plugin?
+ */
+ bool twophase_opt_given;
+
+ /*
  * State for writing output.

I think we can write a few comments as to why we need a separate
twophase parameter here. The description of twophase_opt_given can be
changed to: "Is two-phase option given by output plugin? This is to
allow output plugins to enable two_phase at the start of streaming. We
can't rely on twophase parameter that tells whether the plugin
provides all the necessary two_phase APIs for this purpose." Feel free
to add more to it.

3.
@@ -432,10 +432,19 @@ CreateInitDecodingContext(const char *plugin,
MemoryContextSwitchTo(old_context);

  /*
- * We allow decoding of prepared transactions iff the two_phase option is
- * enabled at the time of slot creation.
+ * We allow decoding of prepared transactions when the two_phase is
+ * enabled at the time of slot creation, or when the two_phase option is
+ * given at the streaming start.
  */
- ctx->twophase &= MyReplicationSlot->data.two_phase;
+ ctx->twophase &= (ctx->twophase_opt_given || slot->data.two_phase);
+
+ /* Mark slot to allow two_phase decoding if not already marked */
+ if (ctx->twophase && !slot->data.two_phase)
+ {
+ slot->data.two_phase = true;
+ ReplicationSlotMarkDirty();
+ ReplicationSlotSave();
+ }

Why do we need to change this during CreateInitDecodingContext which
is called at create_slot time? At that time, we don't need to consider
any options and there is no need to toggle slot's two_phase value.

4.
- /* Binary mode and streaming are only supported in v14 and higher */
+ /*
+ * Binary, streaming, and two_phase are only supported in v14 and
+ * higher
+ */

We can say v15 for two_phase.

5.
-#define LOGICALREP_PROTO_MAX_VERSION_NUM LOGICALREP_PROTO_STREAM_VERSION_NUM
+#define LOGICALREP_PROTO_TWOPHASE_VERSION_NUM 3
+#define LOGICALREP_PROTO_MAX_VERSION_NUM 3

Isn't it better to define LOGICALREP_PROTO_MAX_VERSION_NUM as
LOGICALREP_PROTO_TWOPHASE_VERSION_NUM instead of specifying directly
the number?
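The suggested form ties the maximum to the newest feature's version number, so only one line changes on the next protocol bump. A minimal sketch of the suggestion:

```c
#include <assert.h>

/* Suggested shape: define the max in terms of the newest version. */
#define LOGICALREP_PROTO_TWOPHASE_VERSION_NUM 3
#define LOGICALREP_PROTO_MAX_VERSION_NUM LOGICALREP_PROTO_TWOPHASE_VERSION_NUM
```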

6.
+/* Commit (and abort) information */
typedef struct LogicalRepCommitData
{
XLogRecPtr commit_lsn;
@@ -122,6 +132,48 @@ typedef struct LogicalRepCommitData
TimestampTz committime;
} LogicalRepCommitData;

Is there a reason for the above comment addition? If so, how is it
related to this patch?

7.
+++ b/src/test/subscription/t/021_twophase.pl
@@ -0,0 +1,299 @@
+# logical replication of 2PC test
+use strict;
+use warnings;
+use PostgresNode;
+use TestLib;

In the nearby test files, we have Copyright notice like "# Copyright
(c) 2021, PostgreSQL Global Development Group". We should add one to
the new test files in this patch as well.

8.
+# Also wait for two-phase to be enabled
+my $twophase_query =
+ "SELECT count(1) = 0 FROM pg_subscription WHERE subtwophasestate NOT
IN ('e');";
+$node_subscriber->poll_query_until('postgres', $twophase_query)
+  or die "Timed out while waiting for subscriber to enable twophase";

Isn't it better to write this query as: "SELECT count(1) = 1 FROM
pg_subscription WHERE subtwophasestate ='e';"? It looks a bit odd to
use the NOT IN operator here. Similarly, change the same query used at
another place in the patch.

9.
+# check that transaction is in prepared state on subscriber
+my $result = $node_subscriber->safe_psql('postgres', "SELECT count(*)
FROM pg_prepared_xacts;");
+is($result, qq(1), 'transaction is prepared on subscriber');
+
+# Wait for the statistics to be updated
+$node_publisher->poll_query_until(
+ 'postgres', qq[
+ SELECT count(slot_name) >= 1 FROM pg_stat_replication_slots
+ WHERE slot_name = 'tap_sub'
+ AND total_txns > 0 AND total_bytes > 0;
+]) or die "Timed out while waiting for statistics to be updated";

I don't see the need to check for stats in this test. If we really
want to test stats then we can add a separate test in
contrib\test_decoding\sql\stats but I suggest leaving it. Please do
the same for other stats tests in the patch.

10. I think you missed updating LogicalRepRollbackPreparedTxnData in
typedefs.list.

--
With Regards,
Amit Kapila.

#344Greg Nancarrow
Greg Nancarrow
gregn4422@gmail.com
In reply to: Peter Smith (#341)

On Wed, Jun 2, 2021 at 9:04 AM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v82*

Some suggested changes to the 0001 patch comments (and note also the
typo "doumentation"):
diff of before and after follows:

8c8
< built-in logical replication, we need to do the below things:
---
> built-in logical replication, we need to do the following things:
16,17c16,17
< * Add a new SUBSCRIPTION option "two_phase" to allow users to enable it.
< We enable the two_phase once the initial data sync is over.
---
> * Add a new SUBSCRIPTION option "two_phase" to allow users to enable two-phase
> transactions. We enable the two_phase once the initial data sync is over.
23c23
< * Adds new subscription TAP tests, and new subscription.sql regression tests.
---
> * Add new subscription TAP tests, and new subscription.sql regression tests.
25c25
< * Updates PG doumentation.
---
> * Update PG documentation.
33c33
< * Prepare API for in-progress transactions is not supported.
---
> * Prepare API for in-progress transactions.

Regards,
Greg Nancarrow
Fujitsu Australia

#345Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Greg Nancarrow (#344)
3 attachment(s)

Please find attached the latest patch set v83*

Differences from v82* are:

* Rebased to HEAD @ yesterday. This was necessary because some recent
HEAD pushes broke v82.

* Adds a 2PC copy_data=false test case for [1];

* Addresses most of Amit's recent feedback comments from [2]; I will
reply to that mail separately with the details.

* Addresses Greg's feedback [3] about the patch 0001 commit comment

----
[1]: /messages/by-id/CAA4eK1K7qhqigORdEgqFTOPfj4r2+jV-uLc4-RCtgyDZwvbF8w@mail.gmail.com
[2]: /messages/by-id/CAA4eK1+8L8h9qUQ6sS48EY0osfN7zs=ZPqR6sE4eQxFhgwBxRw@mail.gmail.com
[3]: /messages/by-id/CAJcOf-cvn4EpSo4cD_9Awop72roKL1vnMtpURn1FnXv+gX5VPA@mail.gmail.com

Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

v83-0001-Add-support-for-prepared-transactions-to-built-i.patch (application/octet-stream)
v83-0002-Add-prepare-API-support-for-streaming-transactio.patch (application/octet-stream)
v83-0003-Skip-empty-transactions-for-logical-replication.patch (application/octet-stream)
#346Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Amit Kapila (#343)

On Thu, Jun 3, 2021 at 7:56 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Wed, Jun 2, 2021 at 4:34 AM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v82*

Few comments on 0001:
====================
1.
+ /*
+ * BeginTransactionBlock is necessary to balance the EndTransactionBlock
+ * called within the PrepareTransactionBlock below.
+ */
+ BeginTransactionBlock();
+ CommitTransactionCommand();
+
+ /*
+ * Update origin state so we can restart streaming from correct position
+ * in case of crash.
+ */
+ replorigin_session_origin_lsn = prepare_data.end_lsn;
+ replorigin_session_origin_timestamp = prepare_data.prepare_time;
+
+ PrepareTransactionBlock(gid);
+ CommitTransactionCommand();

Here, the call to CommitTransactionCommand() twice looks a bit odd.
Before the first call, can we write a comment like "This is to
complete the Begin command started by the previous call"?

Fixed in v83-0001 and v83-0002

2.
@@ -85,11 +85,16 @@ typedef struct LogicalDecodingContext
bool streaming;

/*
- * Does the output plugin support two-phase decoding, and is it enabled?
+ * Does the output plugin support two-phase decoding.
*/
bool twophase;
/*
+ * Is two-phase option given by output plugin?
+ */
+ bool twophase_opt_given;
+
+ /*
* State for writing output.

I think we can write few comments as to why we need a separate
twophase parameter here? The description of twophase_opt_given can be
changed to: "Is two-phase option given by output plugin? This is to
allow output plugins to enable two_phase at the start of streaming. We
can't rely on twophase parameter that tells whether the plugin
provides all the necessary two_phase APIs for this purpose." Feel free
to add more to it.

TODO

3.
@@ -432,10 +432,19 @@ CreateInitDecodingContext(const char *plugin,
MemoryContextSwitchTo(old_context);

/*
- * We allow decoding of prepared transactions iff the two_phase option is
- * enabled at the time of slot creation.
+ * We allow decoding of prepared transactions when the two_phase is
+ * enabled at the time of slot creation, or when the two_phase option is
+ * given at the streaming start.
*/
- ctx->twophase &= MyReplicationSlot->data.two_phase;
+ ctx->twophase &= (ctx->twophase_opt_given || slot->data.two_phase);
+
+ /* Mark slot to allow two_phase decoding if not already marked */
+ if (ctx->twophase && !slot->data.two_phase)
+ {
+ slot->data.two_phase = true;
+ ReplicationSlotMarkDirty();
+ ReplicationSlotSave();
+ }

Why do we need to change this during CreateInitDecodingContext which
is called at create_slot time? At that time, we don't need to consider
any options and there is no need to toggle slot's two_phase value.

TODO

4.
- /* Binary mode and streaming are only supported in v14 and higher */
+ /*
+ * Binary, streaming, and two_phase are only supported in v14 and
+ * higher
+ */

We can say v15 for two_phase.

Fixed in v83-0001

5.
-#define LOGICALREP_PROTO_MAX_VERSION_NUM LOGICALREP_PROTO_STREAM_VERSION_NUM
+#define LOGICALREP_PROTO_TWOPHASE_VERSION_NUM 3
+#define LOGICALREP_PROTO_MAX_VERSION_NUM 3

Isn't it better to define LOGICALREP_PROTO_MAX_VERSION_NUM as
LOGICALREP_PROTO_TWOPHASE_VERSION_NUM instead of specifying directly
the number?

Fixed in v83-0001

6.
+/* Commit (and abort) information */
typedef struct LogicalRepCommitData
{
XLogRecPtr commit_lsn;
@@ -122,6 +132,48 @@ typedef struct LogicalRepCommitData
TimestampTz committime;
} LogicalRepCommitData;

Is there a reason for the above comment addition? If so, how is it
related to this patch?

The LogicalRepCommitData is used by the 0002 patch and during
implementation it was not clear what this struct was, so I added the
missing comment (all other nearby typedefs except this one were
commented). But it is not strictly related to anything in patch 0001
so I have moved this change into the v83-0002 patch.

7.
+++ b/src/test/subscription/t/021_twophase.pl
@@ -0,0 +1,299 @@
+# logical replication of 2PC test
+use strict;
+use warnings;
+use PostgresNode;
+use TestLib;

In the nearby test files, we have Copyright notice like "# Copyright
(c) 2021, PostgreSQL Global Development Group". We should add one to
the new test files in this patch as well.

Fixed in v83-0001 and v83-0002

8.
+# Also wait for two-phase to be enabled
+my $twophase_query =
+ "SELECT count(1) = 0 FROM pg_subscription WHERE subtwophasestate NOT
IN ('e');";
+$node_subscriber->poll_query_until('postgres', $twophase_query)
+  or die "Timed out while waiting for subscriber to enable twophase";

Isn't it better to write this query as: "SELECT count(1) = 1 FROM
pg_subscription WHERE subtwophasestate ='e';"? It looks a bit odd to
use the NOT IN operator here. Similarly, change the same query used at
another place in the patch.

Not changed. This way keeps all the test parts more independent of
each other, doesn't it? E.g. without NOT, if there were other
subscriptions in the same test file then the expected result of ‘e’
may be 1 or 2 or 3 or whatever. Using NOT means you don't have to
worry about any other test part. I think we had been bitten by similar
state checks before which is why it was written like this.

9.
+# check that transaction is in prepared state on subscriber
+my $result = $node_subscriber->safe_psql('postgres', "SELECT count(*)
FROM pg_prepared_xacts;");
+is($result, qq(1), 'transaction is prepared on subscriber');
+
+# Wait for the statistics to be updated
+$node_publisher->poll_query_until(
+ 'postgres', qq[
+ SELECT count(slot_name) >= 1 FROM pg_stat_replication_slots
+ WHERE slot_name = 'tap_sub'
+ AND total_txns > 0 AND total_bytes > 0;
+]) or die "Timed out while waiting for statistics to be updated";

I don't see the need to check for stats in this test. If we really
want to test stats then we can add a separate test in
contrib\test_decoding\sql\stats but I suggest leaving it. Please do
the same for other stats tests in the patch.

Removed statistics tests from v83-0001 and v83-0002

10. I think you missed to update LogicalRepRollbackPreparedTxnData in
typedefs.list.

Fixed in v83-0001.

------
Kind Regards,
Peter Smith.
Fujitsu Australia

#347 Greg Nancarrow
Greg Nancarrow
gregn4422@gmail.com
In reply to: Peter Smith (#345)

On Tue, Jun 8, 2021 at 4:12 PM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v83*

Some feedback for the v83 patch set:

v83-0001:

(1) doc/src/sgml/protocol.sgml

(i) Remove extra space:

BEFORE:
+         The transaction will be  decoded and transmitted at
AFTER:
+         The transaction will be decoded and transmitted at
(ii)
BEFORE:
+   contains Stream Commit or Stream Abort message.
AFTER:
+   contains a Stream Commit or Stream Abort message.
(iii)
BEFORE:
+                The LSN of the commit prepared.
AFTER:
+                The LSN of the commit prepared transaction.

(iv) Should documentation say "prepared transaction" as opposed to
"prepare transaction" ???

BEFORE:
+                The end LSN of the prepare transaction.
AFTER:
+                The end LSN of the prepared transaction.

(2) doc/src/sgml/ref/create_subscription.sgml

(i)
BEFORE:
+          The <literal>streaming</literal> option cannot be used along with
+          <literal>two_phase</literal> option.
AFTER:
+          The <literal>streaming</literal> option cannot be used with the
+          <literal>two_phase</literal> option.

(3) doc/src/sgml/ref/create_subscription.sgml

(i)
BEFORE:
+          prepared on publisher is decoded as normal transaction at commit.
AFTER:
+          prepared on the publisher is decoded as a normal
transaction at commit.
(ii)
BEFORE:
+          The <literal>two_phase</literal> option cannot be used along with
+          <literal>streaming</literal> option.
AFTER:
+          The <literal>two_phase</literal> option cannot be used with the
+          <literal>streaming</literal> option.

(4) src/backend/access/transam/twophase.c

(i)
BEFORE:
+ * Check if the prepared transaction with the given GID, lsn and timestamp
+ * is around.
AFTER:
+ * Check if the prepared transaction with the given GID, lsn and timestamp
+ * exists.

(5) src/backend/access/transam/twophase.c

Question:

Is:

+ * do this optimization if we encounter many collisions in GID

meant to be:

+ * do this optimization if we encounter any collisions in GID

???

(6) src/backend/replication/logical/decode.c

Grammar:

BEFORE:
+ * distributed 2PC. This can be avoided by disallowing to
+ * prepare transactions that have locked [user] catalog tables
+ * exclusively but as of now we ask users not to do such
+ * operation.
AFTER:
+ * distributed 2PC. This can be avoided by disallowing
+ * prepared transactions that have locked [user] catalog tables
+ * exclusively but as of now we ask users not to do such an
+ * operation.

(7) src/backend/replication/logical/logical.c

From the comment above it, it's not clear if the "&=" in the following
line is intentional:

+ ctx->twophase &= (ctx->twophase_opt_given || slot->data.two_phase);

Also, the boolean conditions tested are in the reverse order of what
is mentioned in that comment.
Based on the comment, I would expect the following code:

+ ctx->twophase = (slot->data.two_phase || ctx->twophase_opt_given);

Please check it, and maybe update the comment if "&=" is really intended.

There are TWO places where this same code is used.

(8) src/backend/replication/logical/tablesync.c

In the following code, "has_subrels" should be a bool, not an int.

+static bool
+FetchTableStates(bool *started_tx)
+{
+ static int has_subrels = false;

(9) src/backend/replication/logical/worker.c

Mixed current/past tense:

BEFORE:
+ * was still busy (see the condition of should_apply_changes_for_rel). The
AFTER:
+ * is still busy (see the condition of should_apply_changes_for_rel). The

(10)

2 places:

BEFORE:
+ /* there is no transaction when COMMIT PREPARED is called */
AFTER:
+ /* There is no transaction when COMMIT PREPARED is called */

v83-0002:

1) doc/src/sgml/protocol.sgml

BEFORE:
+   contains Stream Prepare or Stream Commit or Stream Abort message.
AFTER:
+   contains a Stream Prepare or Stream Commit or Stream Abort message.

v83-0003:

1) src/backend/replication/pgoutput/pgoutput.c

i) In pgoutput_commit_txn(), the following code that pfree()s a
pointer in a struct, without then NULLing it out, seems dangerous to
me (because what is to stop other code, either now or in the future,
from subsequently referencing that freed data or perhaps trying to
pfree() again?):

+ PGOutputTxnData *data = (PGOutputTxnData *) txn->output_plugin_private;
+ bool            skip;
+
+ Assert(data);
+ skip = !data->sent_begin_txn;
+ pfree(data);

I suggest adding the following line of code after the pfree():
+ txn->output_plugin_private = NULL;

ii) In pgoutput_commit_prepared_txn(), there's the same type of code:

+ if (data)
+ {
+ bool skip = !data->sent_begin_txn;
+ pfree(data);
+ if (skip)
+ return;
+ }

I suggest adding the following line after the pfree() above:

+ txn->output_plugin_private = NULL;

iii) Again, same thing in pgoutput_rollback_prepared_txn():

I suggest adding the following line after the pfree() above:

+ txn->output_plugin_private = NULL;

Regards,
Greg Nancarrow
Fujitsu Australia

#348 Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Peter Smith (#346)
3 attachment(s)

On Tue, Jun 8, 2021 at 4:19 PM Peter Smith <smithpb2250@gmail.com> wrote:

On Thu, Jun 3, 2021 at 7:56 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Wed, Jun 2, 2021 at 4:34 AM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v82*

Attaching patchset-v84 that addresses some of Amit's and Vignesh's comments:
This patch-set also modifies the test case added for copy_data = false
to check that two-phase
transactions are decoded correctly.

2.
@@ -85,11 +85,16 @@ typedef struct LogicalDecodingContext
bool streaming;

/*
- * Does the output plugin support two-phase decoding, and is it enabled?
+ * Does the output plugin support two-phase decoding.
*/
bool twophase;
/*
+ * Is two-phase option given by output plugin?
+ */
+ bool twophase_opt_given;
+
+ /*
* State for writing output.

I think we can write few comments as to why we need a separate
twophase parameter here? The description of twophase_opt_given can be
changed to: "Is two-phase option given by output plugin? This is to
allow output plugins to enable two_phase at the start of streaming. We
can't rely on twophase parameter that tells whether the plugin
provides all the necessary two_phase APIs for this purpose." Feel free
to add more to it.

TODO

Added comments here.

3.
@@ -432,10 +432,19 @@ CreateInitDecodingContext(const char *plugin,
MemoryContextSwitchTo(old_context);

/*
- * We allow decoding of prepared transactions iff the two_phase option is
- * enabled at the time of slot creation.
+ * We allow decoding of prepared transactions when the two_phase is
+ * enabled at the time of slot creation, or when the two_phase option is
+ * given at the streaming start.
*/
- ctx->twophase &= MyReplicationSlot->data.two_phase;
+ ctx->twophase &= (ctx->twophase_opt_given || slot->data.two_phase);
+
+ /* Mark slot to allow two_phase decoding if not already marked */
+ if (ctx->twophase && !slot->data.two_phase)
+ {
+ slot->data.two_phase = true;
+ ReplicationSlotMarkDirty();
+ ReplicationSlotSave();
+ }

Why do we need to change this during CreateInitDecodingContext which
is called at create_slot time? At that time, we don't need to consider
any options and there is no need to toggle slot's two_phase value.

TODO

As part of the recent changes, we do turn on two_phase at create_slot time when
the subscription is created with (copy_data = false, two_phase = on).
So, this code is required.

Amit:

"1.
-   <term><literal>CREATE_REPLICATION_SLOT</literal> <replaceable
class="parameter">slot_name</replaceable> [
<literal>TEMPORARY</literal> ] { <literal>PHYSICAL</literal> [
<literal>RESERVE_WAL</literal> ] | <literal>LOGICAL</literal>
<replaceable class="parameter">output_plugin</replaceable> [
<literal>EXPORT_SNAPSHOT</literal> |
<literal>NOEXPORT_SNAPSHOT</literal> | <literal>USE_SNAPSHOT</literal>
] }
+   <term><literal>CREATE_REPLICATION_SLOT</literal> <replaceable
class="parameter">slot_name</replaceable> [
<literal>TEMPORARY</literal> ] [ <literal>TWO_PHASE</literal> ] {
<literal>PHYSICAL</literal> [ <literal>RESERVE_WAL</literal> ] |
<literal>LOGICAL</literal> <replaceable
class="parameter">output_plugin</replaceable> [
<literal>EXPORT_SNAPSHOT</literal> |
<literal>NOEXPORT_SNAPSHOT</literal> | <literal>USE_SNAPSHOT</literal>
] }

Can we do some testing of the code related to this in some way? One
random idea could be to change the current subscriber-side code just
for testing purposes to see if this works. Can we enhance and use
pg_recvlogical to test this? It is possible that if you address
comment number 13 below, this can be tested with Create Subscription
command."

Actually this is tested in the test case added for Create
Subscription with (copy_data = false), because in that case
the slot is created with the two-phase option.

Vignesh's comment:

"We could add some debug level log messages for the transaction that
will be skipped."

Updated debug messages.

regards,
Ajin Cherian
Fujitsu Australia

Attachments:

v84-0001-Add-support-for-prepared-transactions-to-built-i.patch (application/octet-stream)
v84-0002-Add-prepare-API-support-for-streaming-transactio.patch (application/octet-stream)
v84-0003-Skip-empty-transactions-for-logical-replication.patch (application/octet-stream)
#349 Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Ajin Cherian (#348)

On Wed, Jun 9, 2021 at 10:34 AM Ajin Cherian <itsajin@gmail.com> wrote:

On Tue, Jun 8, 2021 at 4:19 PM Peter Smith <smithpb2250@gmail.com> wrote:

3.
@@ -432,10 +432,19 @@ CreateInitDecodingContext(const char *plugin,
MemoryContextSwitchTo(old_context);

/*
- * We allow decoding of prepared transactions iff the two_phase option is
- * enabled at the time of slot creation.
+ * We allow decoding of prepared transactions when the two_phase is
+ * enabled at the time of slot creation, or when the two_phase option is
+ * given at the streaming start.
*/
- ctx->twophase &= MyReplicationSlot->data.two_phase;
+ ctx->twophase &= (ctx->twophase_opt_given || slot->data.two_phase);
+
+ /* Mark slot to allow two_phase decoding if not already marked */
+ if (ctx->twophase && !slot->data.two_phase)
+ {
+ slot->data.two_phase = true;
+ ReplicationSlotMarkDirty();
+ ReplicationSlotSave();
+ }

Why do we need to change this during CreateInitDecodingContext which
is called at create_slot time? At that time, we don't need to consider
any options and there is no need to toggle slot's two_phase value.

TODO

As part of the recent changes, we do turn on two_phase at create_slot time when
the subscription is created with (copy_data = false, two_phase = on).
So, this code is required.

But in that case, won't we deal with it via the value passed in
CreateReplicationSlotCmd? It should be enabled after we call
ReplicationSlotCreate.

--
With Regards,
Amit Kapila.

#350 Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Greg Nancarrow (#347)

On Wed, Jun 9, 2021 at 9:58 AM Greg Nancarrow <gregn4422@gmail.com> wrote:

(5) src/backend/access/transam/twophase.c

Question:

Is:

+ * do this optimization if we encounter many collisions in GID

meant to be:

+ * do this optimization if we encounter any collisions in GID

No, it should be fine if there are very few collisions.

--
With Regards,
Amit Kapila.

#351 Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Greg Nancarrow (#347)
3 attachment(s)

Please find attached the latest patch set v85*

Differences from v84* are:

* Rebased to HEAD @ 10/June.

* This addresses all Greg's feedback comments [1] except:
- Skipped (1).iii. I think this line in the documentation is OK as-is
- Skipped (5). Amit wrote [2] that this comment is OK as-is
- Every other feedback has been fixed exactly (or close to) the suggestions.

KNOWN ISSUES: This v85 patch was built and tested using yesterday's
master, but due to lots of recent activity in the replication area I
expect it will be broken for HEAD very soon (if not already). I'll
rebase it again ASAP to try to keep it in working order.

----
[1]: /messages/by-id/CAJcOf-fPcpe21RciPRn_56FwO6K_B+VcTZ2prAv4xvAk4cqYiQ@mail.gmail.com
[2]: /messages/by-id/CAA4eK1J2XBSbWXcf9P0z30op+GL-cUrrqJuy-kFVmbjS1fx-eQ@mail.gmail.com

Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

v85-0001-Add-support-for-prepared-transactions-to-built-i.patch (application/octet-stream)
v85-0002-Add-prepare-API-support-for-streaming-transactio.patch (application/octet-stream)
v85-0003-Skip-empty-transactions-for-logical-replication.patch (application/octet-stream)
#352 Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Peter Smith (#351)
3 attachment(s)

On Fri, Jun 11, 2021 at 6:34 PM Peter Smith <smithpb2250@gmail.com> wrote:

KNOWN ISSUES: This v85 patch was built and tested using yesterday's
master, but due to lots of recent activity in the replication area I
expect it will be broken for HEAD very soon (if not already). I'll
rebase it again ASAP to try to keep it in working order.

Please find attached the latest patch set v86*

Differences from v85* are:

* Rebased to HEAD @ today.

* Some recent pushes (e.g. [1], [2], [3]) in the replication area had
broken the v85* patch. v86 is now working for the current HEAD.

NOTE: I only changed what was necessary to get the 2PC patches working
again. Specifically, one of the pushes [3] changed a number of
protocol Asserts into ereports, but this 2PC patch set also introduces
a number of new Asserts. If you find that any of these new Asserts are
of the same kind which should be changed to ereports (in keeping with
[3]), please let me know.

----
[1]: https://github.com/postgres/postgres/commit/3a09d75b4f6cabc8331e228b6988dbfcd9afdfbe
[2]: https://github.com/postgres/postgres/commit/d08237b5b494f96e72220bcef36a14a642969f16
[3]: https://github.com/postgres/postgres/commit/fe6a20ce54cbbb6fcfe9f6675d563af836ae799a

Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

v86-0001-Add-support-for-prepared-transactions-to-built-i.patch (application/octet-stream)
v86-0002-Add-prepare-API-support-for-streaming-transactio.patch (application/octet-stream)
v86-0003-Skip-empty-transactions-for-logical-replication.patch (application/octet-stream)
#353 Greg Nancarrow
Greg Nancarrow
gregn4422@gmail.com
In reply to: Peter Smith (#352)

On Wed, Jun 16, 2021 at 9:08 AM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v86*

A couple of comments:

(1) I think one of my suggested changes was missed (or was that intentional?):

BEFORE:
+                The LSN of the commit prepared.
AFTER:
+                The LSN of the commit prepared transaction.

(2) In light of Tom Lane's recent changes in:

fe6a20ce54cbbb6fcfe9f6675d563af836ae799a (Don't use Asserts to check
for violations of replication protocol)

there appear to be some instances of such code in these patches.

For example, in the v86-0001 patch:

+/*
+ * Handle PREPARE message.
+ */
+static void
+apply_handle_prepare(StringInfo s)
+{
+ LogicalRepPreparedTxnData prepare_data;
+ char gid[GIDSIZE];
+
+ logicalrep_read_prepare(s, &prepare_data);
+
+ Assert(prepare_data.prepare_lsn == remote_final_lsn);

The above Assert() should be changed to something like:

+    if (prepare_data.prepare_lsn != remote_final_lsn)
+        ereport(ERROR,
+                (errcode(ERRCODE_PROTOCOL_VIOLATION),
+                 errmsg_internal("incorrect prepare LSN %X/%X in
prepare message (expected %X/%X)",
+                                 LSN_FORMAT_ARGS(prepare_data.prepare_lsn),
+                                 LSN_FORMAT_ARGS(remote_final_lsn))));

Without being more familiar with this code, it's difficult for me to
judge exactly how many of such cases are in these patches.

Regards,
Greg Nancarrow
Fujitsu Australia

#354 Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Peter Smith (#352)
5 attachment(s)

On Wed, Jun 16, 2021 at 9:08 AM Peter Smith <smithpb2250@gmail.com> wrote:

On Fri, Jun 11, 2021 at 6:34 PM Peter Smith <smithpb2250@gmail.com> wrote:

KNOWN ISSUES: This v85 patch was built and tested using yesterday's
master, but due to lots of recent activity in the replication area I
expect it will be broken for HEAD very soon (if not already). I'll
rebase it again ASAP to try to keep it in working order.

Please find attached the latest patch set v86*

I've modified the patchset based on comments received on thread [1]
for the CREATE_REPLICATION_SLOT
changes. Based on the request from that thread, I've taken out those
changes as two new patches (patch-1 and patch-2)
and made this into 5 patches. I've also changed the logic to align
with the changes in the command syntax.

I've also addressed one pending comment from Amit about
CreateInitDecodingContext, I've taken out the logic that
sets slot->data.two_phase, and only kept the logic that sets ctx->twophase.

Before:

- ctx->twophase &= MyReplicationSlot->data.two_phase;
+ ctx->twophase &= (ctx->twophase_opt_given || slot->data.two_phase);
+
+ /* Mark slot to allow two_phase decoding if not already marked */
+ if (ctx->twophase && !slot->data.two_phase)
+ {
+ slot->data.two_phase = true;
+ ReplicationSlotMarkDirty();
+ ReplicationSlotSave();
+ }

After:

- ctx->twophase &= MyReplicationSlot->data.two_phase;
+ ctx->twophase &= slot->data.two_phase;

[1]: /messages/by-id/64b9f783c6e125f18f88fbc0c0234e34e71d8639.camel@j-davis.com

regards,
Ajin Cherian
Fujitsu Australia

Attachments:

v87-0001-Add-option-to-set-two-phase-in-CREATE_REPLICATIO.patch (application/octet-stream)
v87-0003-Add-support-for-prepared-transactions-to-built-i.patch (application/octet-stream)
v87-0004-Add-prepare-API-support-for-streaming-transactio.patch (application/octet-stream)
v87-0005-Skip-empty-transactions-for-logical-replication.patch (application/octet-stream)
v87-0002-Add-support-for-two-phase-decoding-in-pg_recvlog.patch (application/octet-stream)
#355 Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Greg Nancarrow (#353)

On Thu, Jun 17, 2021 at 6:22 PM Greg Nancarrow <gregn4422@gmail.com> wrote:

On Wed, Jun 16, 2021 at 9:08 AM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v86*

A couple of comments:

(1) I think one of my suggested changes was missed (or was that intentional?):

BEFORE:
+                The LSN of the commit prepared.
AFTER:
+                The LSN of the commit prepared transaction.

No, not missed. I already dismissed that one and wrote about it when I
posted v85 [1].

(2) In light of Tom Lane's recent changes in:

fe6a20ce54cbbb6fcfe9f6675d563af836ae799a (Don't use Asserts to check
for violations of replication protocol)

there appear to be some instances of such code in these patches.

Yes, I already noted [2] there are likely to be such cases which need
to be fixed.

For example, in the v86-0001 patch:

+/*
+ * Handle PREPARE message.
+ */
+static void
+apply_handle_prepare(StringInfo s)
+{
+ LogicalRepPreparedTxnData prepare_data;
+ char gid[GIDSIZE];
+
+ logicalrep_read_prepare(s, &prepare_data);
+
+ Assert(prepare_data.prepare_lsn == remote_final_lsn);

The above Assert() should be changed to something like:

+    if (prepare_data.prepare_lsn != remote_final_lsn)
+        ereport(ERROR,
+                (errcode(ERRCODE_PROTOCOL_VIOLATION),
+                 errmsg_internal("incorrect prepare LSN %X/%X in
prepare message (expected %X/%X)",
+                                 LSN_FORMAT_ARGS(prepare_data.prepare_lsn),
+                                 LSN_FORMAT_ARGS(remote_final_lsn))));

Without being more familiar with this code, it's difficult for me to
judge exactly how many of such cases are in these patches.

Thanks for the above example. I will fix this one later, after
receiving some more reviews and reports of other Assert cases just
like this one.

------
[1]: /messages/by-id/CAHut+PvOVkiVBf4P5chdVSoVs5=a=F_GtTSHHoXDb4LiOM_8Qw@mail.gmail.com
[2]: /messages/by-id/CAHut+Pvdio4=OE6cz5pr8VcJNcAgt5uGBPdKf-tnGEMa1mANGg@mail.gmail.com

Kind Regards,
Peter Smith.
Fujitsu Australia

#356 vignesh C
vignesh C
vignesh21@gmail.com
In reply to: Ajin Cherian (#354)

On Thu, Jun 17, 2021 at 7:40 PM Ajin Cherian <itsajin@gmail.com> wrote:

On Wed, Jun 16, 2021 at 9:08 AM Peter Smith <smithpb2250@gmail.com> wrote:

On Fri, Jun 11, 2021 at 6:34 PM Peter Smith <smithpb2250@gmail.com> wrote:

KNOWN ISSUES: This v85 patch was built and tested using yesterday's
master, but due to lots of recent activity in the replication area I
expect it will be broken for HEAD very soon (if not already). I'll
rebase it again ASAP to try to keep it in working order.

Please find attached the latest patch set v86*

I've modified the patchset based on comments received on thread [1]
for the CREATE_REPLICATION_SLOT
changes. Based on the request from that thread, I've taken out those
changes as two new patches (patch-1 and patch-2)
and made this into 5 patches. I've also changed the logic to align
with the changes in the command syntax.

Few comments:
1) This content is present in
v87-0001-Add-option-to-set-two-phase-in-CREATE_REPLICATIO.patch and
v87-0003-Add-support-for-prepared-transactions-to-built-i.patch, it
can be removed from one of them
       <varlistentry>
+       <term><literal>TWO_PHASE</literal></term>
+       <listitem>
+        <para>
+         Specify that this logical replication slot supports decoding
of two-phase
+         transactions. With this option, two-phase commands like
+         <literal>PREPARE TRANSACTION</literal>, <literal>COMMIT
PREPARED</literal>
+         and <literal>ROLLBACK PREPARED</literal> are decoded and transmitted.
+         The transaction will be decoded and transmitted at
+         <literal>PREPARE TRANSACTION</literal> time.
+        </para>
+       </listitem>
+      </varlistentry>
+
+      <varlistentry>

2) This change is not required, it can be removed:
<sect1 id="logicaldecoding-example">
<title>Logical Decoding Examples</title>
-
<para>
The following example demonstrates controlling logical decoding using the
SQL interface.

3) We could add a comment mentioning "Example 1" at the beginning of
the first example, and "Example 2" with a description for the newly
added example; that will clearly mark the two examples.
COMMIT 693
 <keycombo action="simul"><keycap>Control</keycap><keycap>C</keycap></keycombo>
 $ pg_recvlogical -d postgres --slot=test --drop-slot
+
+$ pg_recvlogical -d postgres --slot=test --create-slot --two-phase
+$ pg_recvlogical -d postgres --slot=test --start -f -
4) You could mention "Before you use two-phase commit commands, you
must set max_prepared_transactions to at least 1" for example 2.
 $ pg_recvlogical -d postgres --slot=test --drop-slot
+
+$ pg_recvlogical -d postgres --slot=test --create-slot --two-phase
+$ pg_recvlogical -d postgres --slot=test --start -f -
5) This should be before verbose, the options are documented alphabetically
+     <varlistentry>
+       <term><option>-t</option></term>
+       <term><option>--two-phase</option></term>
+       <listitem>
+       <para>
+        Enables two-phase decoding. This option should only be used with
+        <option>--create-slot</option>
+       </para>
+       </listitem>
+     </varlistentry>

6) This should be before verbose, the options are printed alphabetically
printf(_(" -v, --verbose output verbose messages\n"));
+ printf(_(" -t, --two-phase enable two-phase decoding
when creating a slot\n"));
printf(_(" -V, --version output version information,
then exit\n"));

Regards,
Vignesh

#357 Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Peter Smith (#355)

On Fri, Jun 18, 2021 at 7:43 AM Peter Smith <smithpb2250@gmail.com> wrote:

On Thu, Jun 17, 2021 at 6:22 PM Greg Nancarrow <gregn4422@gmail.com> wrote:

For example, in the v86-0001 patch:

+/*
+ * Handle PREPARE message.
+ */
+static void
+apply_handle_prepare(StringInfo s)
+{
+ LogicalRepPreparedTxnData prepare_data;
+ char gid[GIDSIZE];
+
+ logicalrep_read_prepare(s, &prepare_data);
+
+ Assert(prepare_data.prepare_lsn == remote_final_lsn);

The above Assert() should be changed to something like:

+    if (prepare_data.prepare_lsn != remote_final_lsn)
+        ereport(ERROR,
+                (errcode(ERRCODE_PROTOCOL_VIOLATION),
+                 errmsg_internal("incorrect prepare LSN %X/%X in
prepare message (expected %X/%X)",
+                                 LSN_FORMAT_ARGS(prepare_data.prepare_lsn),
+                                 LSN_FORMAT_ARGS(remote_final_lsn))));

Without being more familiar with this code, it's difficult for me to
judge exactly how many of such cases are in these patches.

Thanks for the above example. I will fix this one later, after
receiving some more reviews and reports of other Assert cases just
like this one.

I think on similar lines below asserts also need to be changed.

1.
+static void
+apply_handle_begin_prepare(StringInfo s)
+{
+ LogicalRepPreparedTxnData begin_data;
+ char gid[GIDSIZE];
+
+ /* Tablesync should never receive prepare. */
+ Assert(!am_tablesync_worker());
2.
+static void
+TwoPhaseTransactionGid(Oid subid, TransactionId xid, char *gid, int szgid)
+{
..
+ Assert(TransactionIdIsValid(xid));
3.
+static void
+apply_handle_stream_prepare(StringInfo s)
+{
+ int nchanges = 0;
+ LogicalRepPreparedTxnData prepare_data;
+ TransactionId xid;
+ char gid[GIDSIZE];
+
..
..
+
+ /* Tablesync should never receive prepare. */
+ Assert(!am_tablesync_worker());

--
With Regards,
Amit Kapila.

#358Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Amit Kapila (#357)
5 attachment(s)

On Fri, Jun 18, 2021 at 3:37 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Fri, Jun 18, 2021 at 7:43 AM Peter Smith <smithpb2250@gmail.com> wrote:

On Thu, Jun 17, 2021 at 6:22 PM Greg Nancarrow <gregn4422@gmail.com> wrote:

For example, in the v86-0001 patch:

+/*
+ * Handle PREPARE message.
+ */
+static void
+apply_handle_prepare(StringInfo s)
+{
+ LogicalRepPreparedTxnData prepare_data;
+ char gid[GIDSIZE];
+
+ logicalrep_read_prepare(s, &prepare_data);
+
+ Assert(prepare_data.prepare_lsn == remote_final_lsn);

The above Assert() should be changed to something like:

+    if (prepare_data.prepare_lsn != remote_final_lsn)
+        ereport(ERROR,
+                (errcode(ERRCODE_PROTOCOL_VIOLATION),
+                 errmsg_internal("incorrect prepare LSN %X/%X in prepare message (expected %X/%X)",
+                                 LSN_FORMAT_ARGS(prepare_data.prepare_lsn),
+                                 LSN_FORMAT_ARGS(remote_final_lsn))));

Without being more familiar with this code, it's difficult for me to
judge exactly how many of such cases are in these patches.

Thanks for the above example. I will fix this one later, after
receiving some more reviews and reports of other Assert cases just
like this one.

I think on similar lines below asserts also need to be changed.

1.
+static void
+apply_handle_begin_prepare(StringInfo s)
+{
+ LogicalRepPreparedTxnData begin_data;
+ char gid[GIDSIZE];
+
+ /* Tablesync should never receive prepare. */
+ Assert(!am_tablesync_worker());
2.
+static void
+TwoPhaseTransactionGid(Oid subid, TransactionId xid, char *gid, int szgid)
+{
..
+ Assert(TransactionIdIsValid(xid));
3.
+static void
+apply_handle_stream_prepare(StringInfo s)
+{
+ int nchanges = 0;
+ LogicalRepPreparedTxnData prepare_data;
+ TransactionId xid;
+ char gid[GIDSIZE];
+
..
..
+
+ /* Tablesync should never receive prepare. */
+ Assert(!am_tablesync_worker());

Please find attached the latest patch set v88*

Differences from v87* are:

* Rebased to HEAD @ today.

* Replaces several protocol Asserts with ereports
(ERRCODE_PROTOCOL_VIOLATION) in patch 0003 and 0004, as reported by
Greg [1] and Amit [2]. This is in keeping with the commit [3].

----
[1]: /messages/by-id/CAHut+PuJKTNRjFre0VBufWMz9BEScC__nT+PUhbSaUNW2biPow@mail.gmail.com
[2]: /messages/by-id/CAA4eK1JO3HsOurS988=Jarej=AK6ChE1tLuMNP=AZCt6--hVrw@mail.gmail.com
[3]: https://github.com/postgres/postgres/commit/fe6a20ce54cbbb6fcfe9f6675d563af836ae799a

Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

v88-0001-Add-option-to-set-two-phase-in-CREATE_REPLICATIO.patchapplication/octet-stream; name=v88-0001-Add-option-to-set-two-phase-in-CREATE_REPLICATIO.patch
v88-0002-Add-support-for-two-phase-decoding-in-pg_recvlog.patchapplication/octet-stream; name=v88-0002-Add-support-for-two-phase-decoding-in-pg_recvlog.patch
v88-0005-Skip-empty-transactions-for-logical-replication.patchapplication/octet-stream; name=v88-0005-Skip-empty-transactions-for-logical-replication.patch
v88-0004-Add-prepare-API-support-for-streaming-transactio.patchapplication/octet-stream; name=v88-0004-Add-prepare-API-support-for-streaming-transactio.patch
v88-0003-Add-support-for-prepared-transactions-to-built-i.patchapplication/octet-stream; name=v88-0003-Add-support-for-prepared-transactions-to-built-i.patch
#359Greg Nancarrow
Greg Nancarrow
gregn4422@gmail.com
In reply to: Peter Smith (#358)

On Mon, Jun 21, 2021 at 4:37 PM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v88*

Some minor comments:

(1)
v88-0002

doc/src/sgml/logicaldecoding.sgml

"examples shows" is not correct.
I think there is only ONE example being referred to.

BEFORE:
+    The following examples shows how logical decoding is controlled over the
AFTER:
+    The following example shows how logical decoding is controlled over the

(2)
v88 - 0003

doc/src/sgml/ref/create_subscription.sgml

(i)

BEFORE:
+          to the subscriber on the PREPARE TRANSACTION. By default,
the transaction
+          prepared on publisher is decoded as a normal transaction at commit.
AFTER:
+          to the subscriber on the PREPARE TRANSACTION. By default,
the transaction
+          prepared on the publisher is decoded as a normal
transaction at commit time.

(ii)

src/backend/access/transam/twophase.c

The double-bracketing is unnecessary:

BEFORE:
+ if ((gxact->valid && strcmp(gxact->gid, gid) == 0))
AFTER:
+ if (gxact->valid && strcmp(gxact->gid, gid) == 0)

(iii)

src/backend/replication/logical/snapbuild.c

Need to add some commas to make the following easier to read, and
change "needs" to "need":

BEFORE:
+ * The prepared transactions that were skipped because previously
+ * two-phase was not enabled or are not covered by initial snapshot needs
+ * to be sent later along with commit prepared and they must be before
+ * this point.
AFTER:
+ * The prepared transactions, that were skipped because previously
+ * two-phase was not enabled or are not covered by initial snapshot, need
+ * to be sent later along with commit prepared and they must be before
+ * this point.

(iv)

src/backend/replication/logical/tablesync.c

I think the convention used in Postgres code is to check for empty
Lists using "== NIL" and non-empty Lists using "!= NIL".

BEFORE:
+ if (table_states_not_ready && !last_start_times)
AFTER:
+ if (table_states_not_ready != NIL && !last_start_times)
BEFORE:
+ else if (!table_states_not_ready && last_start_times)
AFTER:
+ else if (table_states_not_ready == NIL && last_start_times)

Regards,
Greg Nancarrow
Fujitsu Australia

#360Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Greg Nancarrow (#359)
5 attachment(s)

On Tue, Jun 22, 2021 at 3:36 PM Greg Nancarrow <gregn4422@gmail.com> wrote:

Some minor comments:

(1)
v88-0002

doc/src/sgml/logicaldecoding.sgml

"examples shows" is not correct.
I think there is only ONE example being referred to.

BEFORE:
+    The following examples shows how logical decoding is controlled over the
AFTER:
+    The following example shows how logical decoding is controlled over the

fixed.

(2)
v88 - 0003

doc/src/sgml/ref/create_subscription.sgml

(i)

BEFORE:
+          to the subscriber on the PREPARE TRANSACTION. By default,
the transaction
+          prepared on publisher is decoded as a normal transaction at commit.
AFTER:
+          to the subscriber on the PREPARE TRANSACTION. By default,
the transaction
+          prepared on the publisher is decoded as a normal
transaction at commit time.

fixed.

(ii)

src/backend/access/transam/twophase.c

The double-bracketing is unnecessary:

BEFORE:
+ if ((gxact->valid && strcmp(gxact->gid, gid) == 0))
AFTER:
+ if (gxact->valid && strcmp(gxact->gid, gid) == 0)

fixed.

(iii)

src/backend/replication/logical/snapbuild.c

Need to add some commas to make the following easier to read, and
change "needs" to "need":

BEFORE:
+ * The prepared transactions that were skipped because previously
+ * two-phase was not enabled or are not covered by initial snapshot needs
+ * to be sent later along with commit prepared and they must be before
+ * this point.
AFTER:
+ * The prepared transactions, that were skipped because previously
+ * two-phase was not enabled or are not covered by initial snapshot, need
+ * to be sent later along with commit prepared and they must be before
+ * this point.

fixed.

(iv)

src/backend/replication/logical/tablesync.c

I think the convention used in Postgres code is to check for empty
Lists using "== NIL" and non-empty Lists using "!= NIL".

BEFORE:
+ if (table_states_not_ready && !last_start_times)
AFTER:
+ if (table_states_not_ready != NIL && !last_start_times)
BEFORE:
+ else if (!table_states_not_ready && last_start_times)
AFTER:
+ else if (table_states_not_ready == NIL && last_start_times)

fixed.

Also fixed comments from Vignesh:

1) This content is present in
v87-0001-Add-option-to-set-two-phase-in-CREATE_REPLICATIO.patch and
v87-0003-Add-support-for-prepared-transactions-to-built-i.patch, it
can be removed from one of them
       <varlistentry>
+       <term><literal>TWO_PHASE</literal></term>
+       <listitem>
+        <para>
+         Specify that this logical replication slot supports decoding
of two-phase
+         transactions. With this option, two-phase commands like
+         <literal>PREPARE TRANSACTION</literal>, <literal>COMMIT
PREPARED</literal>
+         and <literal>ROLLBACK PREPARED</literal> are decoded and transmitted.
+         The transaction will be decoded and transmitted at
+         <literal>PREPARE TRANSACTION</literal> time.
+        </para>
+       </listitem>
+      </varlistentry>
+
+      <varlistentry>

I don't see this duplicate content.

2) This change is not required, it can be removed:
<sect1 id="logicaldecoding-example">
<title>Logical Decoding Examples</title>
-
<para>
The following example demonstrates controlling logical decoding using the
SQL interface.

fixed this.

3) We could add comment mentioning example 1 at the beginning of
example 1 and example 2 for the newly added example with description,
that will clearly mark the examples.

added this.

5) This should be before verbose, the options are documented alphabetically

fixed this.

regards,
Ajin Cherian
Fujitsu Australia

Attachments:

v89-0001-Add-option-to-set-two-phase-in-CREATE_REPLICATIO.patchapplication/octet-stream; name=v89-0001-Add-option-to-set-two-phase-in-CREATE_REPLICATIO.patch
v89-0005-Skip-empty-transactions-for-logical-replication.patchapplication/octet-stream; name=v89-0005-Skip-empty-transactions-for-logical-replication.patch
v89-0004-Add-prepare-API-support-for-streaming-transactio.patchapplication/octet-stream; name=v89-0004-Add-prepare-API-support-for-streaming-transactio.patch
v89-0003-Add-support-for-prepared-transactions-to-built-i.patchapplication/octet-stream; name=v89-0003-Add-support-for-prepared-transactions-to-built-i.patch
v89-0002-Add-support-for-two-phase-decoding-in-pg_recvlog.patchapplication/octet-stream; name=v89-0002-Add-support-for-two-phase-decoding-in-pg_recvlog.patch
#361vignesh C
vignesh C
vignesh21@gmail.com
In reply to: Ajin Cherian (#360)

On Wed, Jun 23, 2021 at 9:10 AM Ajin Cherian <itsajin@gmail.com> wrote:

On Tue, Jun 22, 2021 at 3:36 PM Greg Nancarrow <gregn4422@gmail.com> wrote:

Some minor comments:

(1)
v88-0002

doc/src/sgml/logicaldecoding.sgml

"examples shows" is not correct.
I think there is only ONE example being referred to.

BEFORE:
+    The following examples shows how logical decoding is controlled over the
AFTER:
+    The following example shows how logical decoding is controlled over the

fixed.

(2)
v88 - 0003

doc/src/sgml/ref/create_subscription.sgml

(i)

BEFORE:
+          to the subscriber on the PREPARE TRANSACTION. By default,
the transaction
+          prepared on publisher is decoded as a normal transaction at commit.
AFTER:
+          to the subscriber on the PREPARE TRANSACTION. By default,
the transaction
+          prepared on the publisher is decoded as a normal
transaction at commit time.

fixed.

(ii)

src/backend/access/transam/twophase.c

The double-bracketing is unnecessary:

BEFORE:
+ if ((gxact->valid && strcmp(gxact->gid, gid) == 0))
AFTER:
+ if (gxact->valid && strcmp(gxact->gid, gid) == 0)

fixed.

(iii)

src/backend/replication/logical/snapbuild.c

Need to add some commas to make the following easier to read, and
change "needs" to "need":

BEFORE:
+ * The prepared transactions that were skipped because previously
+ * two-phase was not enabled or are not covered by initial snapshot needs
+ * to be sent later along with commit prepared and they must be before
+ * this point.
AFTER:
+ * The prepared transactions, that were skipped because previously
+ * two-phase was not enabled or are not covered by initial snapshot, need
+ * to be sent later along with commit prepared and they must be before
+ * this point.

fixed.

(iv)

src/backend/replication/logical/tablesync.c

I think the convention used in Postgres code is to check for empty
Lists using "== NIL" and non-empty Lists using "!= NIL".

BEFORE:
+ if (table_states_not_ready && !last_start_times)
AFTER:
+ if (table_states_not_ready != NIL && !last_start_times)
BEFORE:
+ else if (!table_states_not_ready && last_start_times)
AFTER:
+ else if (table_states_not_ready == NIL && last_start_times)

fixed.

Also fixed comments from Vignesh:

1) This content is present in
v87-0001-Add-option-to-set-two-phase-in-CREATE_REPLICATIO.patch and
v87-0003-Add-support-for-prepared-transactions-to-built-i.patch, it
can be removed from one of them
<varlistentry>
+       <term><literal>TWO_PHASE</literal></term>
+       <listitem>
+        <para>
+         Specify that this logical replication slot supports decoding
of two-phase
+         transactions. With this option, two-phase commands like
+         <literal>PREPARE TRANSACTION</literal>, <literal>COMMIT
PREPARED</literal>
+         and <literal>ROLLBACK PREPARED</literal> are decoded and transmitted.
+         The transaction will be decoded and transmitted at
+         <literal>PREPARE TRANSACTION</literal> time.
+        </para>
+       </listitem>
+      </varlistentry>
+
+      <varlistentry>

I don't see this duplicate content.

Thanks for the updated patch.
The patch v89-0001-Add-option-to-set-two-phase-in-CREATE_REPLICATIO.patch
has the following:
+       <term><literal>TWO_PHASE</literal></term>
+       <listitem>
+        <para>
+         Specify that this logical replication slot supports decoding
of two-phase
+         transactions. With this option, two-phase commands like
+         <literal>PREPARE TRANSACTION</literal>, <literal>COMMIT
PREPARED</literal>
+         and <literal>ROLLBACK PREPARED</literal> are decoded and transmitted.
+         The transaction will be decoded and transmitted at
+         <literal>PREPARE TRANSACTION</literal> time.
+        </para>
+       </listitem>
+      </varlistentry>
The patch v89-0003-Add-support-for-prepared-transactions-to-built-i.patch
has the following:
+       <term><literal>TWO_PHASE</literal></term>
+       <listitem>
+        <para>
+         Specify that this replication slot supports decode of two-phase
+         transactions. With this option, two-phase commands like
+         <literal>PREPARE TRANSACTION</literal>, <literal>COMMIT
PREPARED</literal>
+         and <literal>ROLLBACK PREPARED</literal> are decoded and transmitted.
+         The transaction will be decoded and transmitted at
+         <literal>PREPARE TRANSACTION</literal> time.
+        </para>
+       </listitem>
+      </varlistentry>

We can remove one of them.

Regards,
Vignesh

#362Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: vignesh C (#361)
5 attachment(s)

On Wed, Jun 23, 2021 at 3:18 PM vignesh C <vignesh21@gmail.com> wrote:

Thanks for the updated patch.
The patch v89-0001-Add-option-to-set-two-phase-in-CREATE_REPLICATIO.patch
has the following:
+       <term><literal>TWO_PHASE</literal></term>
+       <listitem>
+        <para>
+         Specify that this logical replication slot supports decoding
of two-phase
+         transactions. With this option, two-phase commands like
+         <literal>PREPARE TRANSACTION</literal>, <literal>COMMIT
PREPARED</literal>
+         and <literal>ROLLBACK PREPARED</literal> are decoded and transmitted.
+         The transaction will be decoded and transmitted at
+         <literal>PREPARE TRANSACTION</literal> time.
+        </para>
+       </listitem>
+      </varlistentry>
The patch v89-0003-Add-support-for-prepared-transactions-to-built-i.patch
has the following:
+       <term><literal>TWO_PHASE</literal></term>
+       <listitem>
+        <para>
+         Specify that this replication slot supports decode of two-phase
+         transactions. With this option, two-phase commands like
+         <literal>PREPARE TRANSACTION</literal>, <literal>COMMIT
PREPARED</literal>
+         and <literal>ROLLBACK PREPARED</literal> are decoded and transmitted.
+         The transaction will be decoded and transmitted at
+         <literal>PREPARE TRANSACTION</literal> time.
+        </para>
+       </listitem>
+      </varlistentry>

We can remove one of them.

I missed this. Updated.

Also fixed this comment below which I had missed in my last patch:

4) You could mention "Before you use two-phase commit commands, you
must set max_prepared_transactions to at least 1" for example 2.
$ pg_recvlogical -d postgres --slot=test --drop-slot
+
+$ pg_recvlogical -d postgres --slot=test --create-slot --two-phase
+$ pg_recvlogical -d postgres --slot=test --start -f -

Comment 6:

6) This should be before verbose, the options are printed alphabetically
printf(_("  -v, --verbose          output verbose messages\n"));
+ printf(_("  -t, --two-phase        enable two-phase decoding when creating a slot\n"));
printf(_("  -V, --version          output version information, then exit\n"));

This was also fixed in the last patch.

regards,
Ajin Cherian
Fujitsu Australia

Attachments:

v90-0004-Add-prepare-API-support-for-streaming-transactio.patchapplication/octet-stream; name=v90-0004-Add-prepare-API-support-for-streaming-transactio.patch
v90-0001-Add-option-to-set-two-phase-in-CREATE_REPLICATIO.patchapplication/octet-stream; name=v90-0001-Add-option-to-set-two-phase-in-CREATE_REPLICATIO.patch
v90-0002-Add-support-for-two-phase-decoding-in-pg_recvlog.patchapplication/octet-stream; name=v90-0002-Add-support-for-two-phase-decoding-in-pg_recvlog.patch
v90-0005-Skip-empty-transactions-for-logical-replication.patchapplication/octet-stream; name=v90-0005-Skip-empty-transactions-for-logical-replication.patch
v90-0003-Add-support-for-prepared-transactions-to-built-i.patchapplication/octet-stream; name=v90-0003-Add-support-for-prepared-transactions-to-built-i.patch
#363Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Ajin Cherian (#362)
1 attachment(s)

On Wed, Jun 23, 2021 at 4:10 PM Ajin Cherian <itsajin@gmail.com> wrote:

The first two patches look mostly good to me. I have combined them
into one and made some minor changes. (a) Removed opt_two_phase and
related code from repl_gram.y as that is not required for this version
of the patch. (b) made some changes in docs. Kindly check the attached
and let me know if you have any comments? I am planning to push this
first patch in the series tomorrow unless you or others have any
comments.

--
With Regards,
Amit Kapila.

Attachments:

0001-Allow-enabling-two-phase-option-via-replication-prot.patchapplication/octet-stream; name=0001-Allow-enabling-two-phase-option-via-replication-prot.patch
#364Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Amit Kapila (#363)

On Tue, Jun 29, 2021 at 4:56 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Wed, Jun 23, 2021 at 4:10 PM Ajin Cherian <itsajin@gmail.com> wrote:

The first two patches look mostly good to me. I have combined them
into one and made some minor changes. (a) Removed opt_two_phase and
related code from repl_gram.y as that is not required for this version
of the patch. (b) made some changes in docs. Kindly check the attached
and let me know if you have any comments? I am planning to push this
first patch in the series tomorrow unless you or others have any
comments.

The patch applies cleanly, tests pass. I reviewed the patch and have
no comments, it looks good.

regards,
Ajin Cherian
Fujitsu Australia

#365vignesh C
vignesh C
vignesh21@gmail.com
In reply to: Amit Kapila (#363)

On Tue, Jun 29, 2021 at 12:26 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Wed, Jun 23, 2021 at 4:10 PM Ajin Cherian <itsajin@gmail.com> wrote:

The first two patches look mostly good to me. I have combined them
into one and made some minor changes. (a) Removed opt_two_phase and
related code from repl_gram.y as that is not required for this version
of the patch. (b) made some changes in docs. Kindly check the attached
and let me know if you have any comments? I am planning to push this
first patch in the series tomorrow unless you or others have any
comments.

Thanks for the updated patch, patch applies neatly and tests passed.
If you are OK with it, one of the documentation changes could be slightly
changed while committing:
+       <para>
+        Enables two-phase decoding. This option should only be used with
+        <option>--create-slot</option>
+       </para>
to:
+       <para>
+        Enables two-phase decoding. This option should only be specified with
+        <option>--create-slot</option> option.
+       </para>

Regards,
Vignesh

#366Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: vignesh C (#365)

On Tue, Jun 29, 2021 at 5:31 PM vignesh C <vignesh21@gmail.com> wrote:

On Tue, Jun 29, 2021 at 12:26 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Wed, Jun 23, 2021 at 4:10 PM Ajin Cherian <itsajin@gmail.com> wrote:

The first two patches look mostly good to me. I have combined them
into one and made some minor changes. (a) Removed opt_two_phase and
related code from repl_gram.y as that is not required for this version
of the patch. (b) made some changes in docs. Kindly check the attached
and let me know if you have any comments? I am planning to push this
first patch in the series tomorrow unless you or others have any
comments.

Thanks for the updated patch, patch applies neatly and tests passed.
If you are OK with it, one of the documentation changes could be slightly
changed while committing:

Pushed the patch after taking care of your suggestion. Now, the next
step is to rebase the remaining patches and adapt some of the checks
to PG-15.

--
With Regards,
Amit Kapila.

#367Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Amit Kapila (#366)
3 attachment(s)

On Wed, Jun 30, 2021 at 6:14 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

Pushed the patch after taking care of your suggestion. Now, the next
step is to rebase the remaining patches and adapt some of the checks
to PG-15.

Please find attached the latest patch set v91*

Differences from v90* are:

* This is the first patch set for PG15

* Rebased to HEAD @ today.

* Now the patch set has only 3 patches again because v90-0001,
v90-0002 are already pushed [1]

* Bumped all relevant server version checks to 150000

----
[1]: https://github.com/postgres/postgres/commit/cda03cfed6b8bd5f64567bccbc9578fba035691e

Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

v91-0001-Add-support-for-prepared-transactions-to-built-i.patchapplication/octet-stream; name=v91-0001-Add-support-for-prepared-transactions-to-built-i.patch
v91-0003-Skip-empty-transactions-for-logical-replication.patchapplication/octet-stream; name=v91-0003-Skip-empty-transactions-for-logical-replication.patch
v91-0002-Add-prepare-API-support-for-streaming-transactio.patchapplication/octet-stream; name=v91-0002-Add-prepare-API-support-for-streaming-transactio.patch
#368Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Peter Smith (#367)
4 attachment(s)

On Wed, Jun 30, 2021 at 7:47 PM Peter Smith <smithpb2250@gmail.com> wrote:

On Wed, Jun 30, 2021 at 6:14 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

Pushed the patch after taking care of your suggestion. Now, the next
step is to rebase the remaining patches and adapt some of the checks
to PG-15.

Please find attached the latest patch set v91*

Differences from v90* are:

* This is the first patch set for PG15

* Rebased to HEAD @ today.

* Now the patch set has only 3 patches again because v90-0001,
v90-0002 are already pushed [1]

* Bumped all relevant server version checks to 150000

Adding a new patch (0004) to this patch-set that handles skipping of
empty streamed transactions. patch-0003 did not
handle empty streamed transactions. To support this, added a new flag
"sent_stream_start" to PGOutputTxnData.
Also transactions which do not have any data will not be stream
committed or stream prepared or stream aborted.
Do review and let me know if you have any comments.

regards,
Ajin Cherian
Fujitsu Australia

Attachments:

v92-0001-Add-support-for-prepared-transactions-to-built-i.patchapplication/octet-stream; name=v92-0001-Add-support-for-prepared-transactions-to-built-i.patch
v92-0003-Skip-empty-transactions-for-logical-replication.patchapplication/octet-stream; name=v92-0003-Skip-empty-transactions-for-logical-replication.patch
v92-0004-Skip-empty-streaming-in-progress-transaction-for.patchapplication/octet-stream; name=v92-0004-Skip-empty-streaming-in-progress-transaction-for.patch
v92-0002-Add-prepare-API-support-for-streaming-transactio.patchapplication/octet-stream; name=v92-0002-Add-prepare-API-support-for-streaming-transactio.patch
#369tanghy.fnst@fujitsu.com
tanghy.fnst@fujitsu.com
tanghy.fnst@fujitsu.com
In reply to: Ajin Cherian (#368)
RE: [HACKERS] logical decoding of two-phase transactions

On Thursday, July 1, 2021 11:48 AM Ajin Cherian <itsajin@gmail.com>

Adding a new patch (0004) to this patch-set that handles skipping of
empty streamed transactions. patch-0003 did not
handle empty streamed transactions. To support this, added a new flag
"sent_stream_start" to PGOutputTxnData.
Also transactions which do not have any data will not be stream
committed or stream prepared or stream aborted.
Do review and let me know if you have any comments.

Thanks for your patch. I met an issue while using it. When a transaction contains TRUNCATE, the subscriber reported an error: " ERROR: no data left in message" and the data couldn't be replicated.

Steps to reproduce the issue:

(set logical_decoding_work_mem to 64kB at publisher so that streaming could work. )

------publisher------
create table test (a int primary key, b varchar);
create publication pub for table test;

------subscriber------
create table test (a int primary key, b varchar);
create subscription sub connection 'dbname=postgres' publication pub with(two_phase=on, streaming=on);

------publisher------
BEGIN;
TRUNCATE test;
INSERT INTO test SELECT i, md5(i::text) FROM generate_series(1001, 6000) s(i);
UPDATE test SET b = md5(b) WHERE mod(a,2) = 0;
DELETE FROM test WHERE mod(a,3) = 0;
COMMIT;

The above case worked OK when the 0004 patch was removed, so I think it's a problem with the 0004 patch. Please have a look.

Regards
Tang

#370Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: tanghy.fnst@fujitsu.com (#369)
4 attachment(s)

Please find attached the latest patch set v93*

Differences from v92* are:

* Rebased to HEAD @ today.

This rebase was made necessary by recent changes [1] to the
parse_subscription_options function.

----
[1]: https://github.com/postgres/postgres/commit/8aafb02616753f5c6c90bbc567636b73c0cbb9d4

Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

v93-0002-Add-prepare-API-support-for-streaming-transactio.patchapplication/octet-stream; name=v93-0002-Add-prepare-API-support-for-streaming-transactio.patch
v93-0004-Skip-empty-streaming-in-progress-transaction-for.patchapplication/octet-stream; name=v93-0004-Skip-empty-streaming-in-progress-transaction-for.patch
v93-0001-Add-support-for-prepared-transactions-to-built-i.patchapplication/octet-stream; name=v93-0001-Add-support-for-prepared-transactions-to-built-i.patch
v93-0003-Skip-empty-transactions-for-logical-replication.patchapplication/octet-stream; name=v93-0003-Skip-empty-transactions-for-logical-replication.patch
#371Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: tanghy.fnst@fujitsu.com (#369)
4 attachment(s)

On Fri, Jul 2, 2021 at 8:18 PM tanghy.fnst@fujitsu.com
<tanghy.fnst@fujitsu.com> wrote:

Thanks for your patch. I met an issue while using it. When a transaction contains TRUNCATE, the subscriber reported an error: " ERROR: no data left in message" and the data couldn't be replicated.

Steps to reproduce the issue:

(set logical_decoding_work_mem to 64kB at publisher so that streaming could work. )

------publisher------
create table test (a int primary key, b varchar);
create publication pub for table test;

------subscriber------
create table test (a int primary key, b varchar);
create subscription sub connection 'dbname=postgres' publication pub with(two_phase=on, streaming=on);

------publisher------
BEGIN;
TRUNCATE test;
INSERT INTO test SELECT i, md5(i::text) FROM generate_series(1001, 6000) s(i);
UPDATE test SET b = md5(b) WHERE mod(a,2) = 0;
DELETE FROM test WHERE mod(a,3) = 0;
COMMIT;

The above case worked OK when the 0004 patch was removed, so I think it's a problem with the 0004 patch. Please have a look.

thanks for the test!
I hadn't handled the case where sending the schema across was the
first change of the transaction, as happens when decoding the
truncate command. In this test case, the schema was sent
without a preceding stream start, hence the error on the apply worker.
I have updated with a fix. Please do a test and confirm.

regards,
Ajin Cherian
Fujitsu Australia

Attachments:

v94-0001-Add-support-for-prepared-transactions-to-built-i.patchapplication/octet-stream; name=v94-0001-Add-support-for-prepared-transactions-to-built-i.patch
v94-0002-Add-prepare-API-support-for-streaming-transactio.patchapplication/octet-stream; name=v94-0002-Add-prepare-API-support-for-streaming-transactio.patch
v94-0004-Skip-empty-streaming-in-progress-transaction-for.patchapplication/octet-stream; name=v94-0004-Skip-empty-streaming-in-progress-transaction-for.patch
v94-0003-Skip-empty-transactions-for-logical-replication.patchapplication/octet-stream; name=v94-0003-Skip-empty-transactions-for-logical-replication.patch
#372tanghy.fnst@fujitsu.com
tanghy.fnst@fujitsu.com
tanghy.fnst@fujitsu.com
In reply to: Ajin Cherian (#371)
RE: [HACKERS] logical decoding of two-phase transactions

On Tuesday, July 6, 2021 7:18 PM Ajin Cherian <itsajin@gmail.com>

thanks for the test!
I hadn't updated the case where sending schema across was the first
change of the transaction as part of the decoding of the
truncate command. In this test case, the schema was sent across
without the stream start, hence the error on the apply worker.
I have updated with a fix. Please do a test and confirm.

Thanks for your patch.
I have tested and confirmed that the issue was fixed.

Regards
Tang

#373Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Peter Smith (#370)
1 attachment(s)

On Tue, Jul 6, 2021 at 9:58 AM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v93*

Thanks, I have gone through the 0001 patch and made a number of
changes. (a) Removed some of the code which was leftover from previous
versions, (b) Removed the Assert in apply_handle_begin_prepare() as I
don't think that makes sense, (c) added/changed comments and made a
few other cosmetic changes, (d) ran pgindent.

Let me know what you think of the attached?

--
With Regards,
Amit Kapila.

Attachments:

v95-0001-Add-support-for-prepared-transactions-to-built-i.patch (application/octet-stream)
#374 vignesh C <vignesh21@gmail.com>
In reply to: Amit Kapila (#373)

On Thu, Jul 8, 2021 at 11:37 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Jul 6, 2021 at 9:58 AM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v93*

Thanks, I have gone through the 0001 patch and made a number of
changes. (a) Removed some of the code which was leftover from previous
versions, (b) Removed the Assert in apply_handle_begin_prepare() as I
don't think that makes sense, (c) added/changed comments and made a
few other cosmetic changes, (d) ran pgindent.

Let me know what you think of the attached?

The patch looks good to me, I don't have any comments.

Regards,
Vignesh

#375 Peter Smith <smithpb2250@gmail.com>
In reply to: vignesh C (#374)

On Thu, Jul 8, 2021 at 10:08 PM vignesh C <vignesh21@gmail.com> wrote:

[...]

The patch looks good to me, I don't have any comments.

I tried the v95-0001 patch.

- The patch applied cleanly and all build / testing was OK.
- The documentation also builds OK.
- I checked all v95-0001 / v93-0001 differences and found no problems.
- Furthermore, I noted that v95-0001 patch is passing the cfbot [1].

So this patch LGTM.

------
[1]: http://cfbot.cputube.org/patch_33_2914.log

Kind Regards,
Peter Smith.
Fujitsu Australia

#376 Ajin Cherian <itsajin@gmail.com>
In reply to: Peter Smith (#375)

On Fri, Jul 9, 2021 at 9:13 AM Peter Smith <smithpb2250@gmail.com> wrote:

I tried the v95-0001 patch.

- The patch applied cleanly and all build / testing was OK.
- The documentation also builds OK.
- I checked all v95-0001 / v93-0001 differences and found no problems.
- Furthermore, I noted that v95-0001 patch is passing the cfbot [1].

So this patch LGTM.

Applied, reviewed and tested the patch.
Also ran a 5-level cascaded standby setup with a modified pgbench
that performs two-phase commits, and it ran fine.
Did some testing with empty transactions and found no issues.
The patch looks good to me.

regards,
Ajin Cherian

#377 tanghy.fnst@fujitsu.com
In reply to: Ajin Cherian (#376)
RE: [HACKERS] logical decoding of two-phase transactions

On Friday, July 9, 2021 2:56 PM Ajin Cherian <itsajin@gmail.com> wrote:

[...]

Applied, reviewed and tested the patch.
Also ran a 5-level cascaded standby setup with a modified pgbench
that performs two-phase commits, and it ran fine.
Did some testing with empty transactions and found no issues.
The patch looks good to me.

I did some cross version tests on patch v95 (publisher is PG14 and subscriber is PG15, or publisher is PG15 and subscriber is PG14; set two_phase option to on or off/default). It worked as expected, data could be replicated correctly.

Besides, I tested some scenarios using synchronized replication, it worked fine in my cases.

So this patch LGTM.

Regards
Tang

#378 Amit Kapila <amit.kapila16@gmail.com>
In reply to: Peter Smith (#375)
1 attachment(s)

On Fri, Jul 9, 2021 at 4:43 AM Peter Smith <smithpb2250@gmail.com> wrote:

The patch looks good to me, I don't have any comments.

I tried the v95-0001 patch.

- The patch applied cleanly and all build / testing was OK.
- The documentation also builds OK.
- I checked all v95-0001 / v93-0001 differences and found no problems.
- Furthermore, I noted that v95-0001 patch is passing the cfbot [1].

So this patch LGTM.

Thanks, I took another pass over it and made a few changes in docs and
comments. I am planning to push this next week sometime (by 14th July)
unless there are more comments from you or someone else. Just to
summarize, this patch will add support for prepared transactions to
built-in logical replication. To add support for streaming
transactions at prepare time into the
built-in logical replication, we need to do the following things: (a)
Modify the output plugin (pgoutput) to implement the new two-phase API
callbacks, by leveraging the extended replication protocol. (b) Modify
the replication apply worker, to properly handle two-phase
transactions by replaying them on prepare. (c) Add a new SUBSCRIPTION
option "two_phase" to allow users to enable
two-phase transactions. We enable the two_phase once the initial data
sync is over. Refer to comments atop worker.c in the patch and commit
message to see further details about this patch. After this patch,
there is a follow-up patch to allow streaming and two-phase options
together which I feel needs some more review and can be committed
separately.
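
As a minimal sketch of the user-facing piece described in (c) above
(the subscription, connection string, and publication names here are
illustrative only):

```sql
-- Subscriber side: opt in to replaying transactions at PREPARE time
CREATE SUBSCRIPTION mysub
    CONNECTION 'host=pubhost dbname=postgres'
    PUBLICATION mypub
    WITH (two_phase = on);
```

As noted above, the two_phase mode only actually becomes active once
the initial table synchronization has finished.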

--
With Regards,
Amit Kapila.

Attachments:

v96-0001-Add-support-for-prepared-transactions-to-built-i.patch (application/octet-stream)
#379 Peter Smith <smithpb2250@gmail.com>
In reply to: Amit Kapila (#378)

On Sun, Jul 11, 2021 at 8:20 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

[...]

Thanks, I took another pass over it and made a few changes in docs and
comments. I am planning to push this next week sometime (by 14th July)
unless there are more comments from you or someone else. Just to
summarize, this patch will add support for prepared transactions to
built-in logical replication. To add support for streaming
transactions at prepare time into the
built-in logical replication, we need to do the following things: (a)
Modify the output plugin (pgoutput) to implement the new two-phase API
callbacks, by leveraging the extended replication protocol. (b) Modify
the replication apply worker, to properly handle two-phase
transactions by replaying them on prepare. (c) Add a new SUBSCRIPTION
option "two_phase" to allow users to enable
two-phase transactions. We enable the two_phase once the initial data
sync is over. Refer to comments atop worker.c in the patch and commit
message to see further details about this patch. After this patch,
there is a follow-up patch to allow streaming and two-phase options
together which I feel needs some more review and can be committed
separately.

FYI - I repeated the same verification of the v96-0001 patch as I did
previously for v95-0001

- The v96 patch applied cleanly and all build / testing was OK.
- The documentation also builds OK.
- I checked the v95-0001 / v96-0001 differences and found no problems.
- Furthermore, I noted that v96-0001 patch is passing the cfbot.

LGTM.

------
Kind Regards,
Peter Smith.
Fujitsu Australia

#380 Amit Kapila <amit.kapila16@gmail.com>
In reply to: Peter Smith (#379)

On Mon, Jul 12, 2021 at 9:14 AM Peter Smith <smithpb2250@gmail.com> wrote:

[...]

FYI - I repeated the same verification of the v96-0001 patch as I did
previously for v95-0001

- The v96 patch applied cleanly and all build / testing was OK.
- The documentation also builds OK.
- I checked the v95-0001 / v96-0001 differences and found no problems.
- Furthermore, I noted that v96-0001 patch is passing the cfbot.

LGTM.

Pushed.

Feel free to submit the remaining patches after rebase. Is it possible
to post patches related to skipping empty transactions in the other
thread [1] where that topic is being discussed?

[1]: /messages/by-id/CAMkU=1yohp9-dv48FLoSPrMqYEyyS5ZWkaZGD41RJr10xiNo_Q@mail.gmail.com

--
With Regards,
Amit Kapila.

#381 Peter Smith <smithpb2250@gmail.com>
In reply to: Amit Kapila (#380)
1 attachment(s)

On Wed, Jul 14, 2021 at 4:23 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

[...]

Pushed.

Feel free to submit the remaining patches after rebase. Is it possible
to post patches related to skipping empty transactions in the other
thread [1] where that topic is being discussed?

[1] - /messages/by-id/CAMkU=1yohp9-dv48FLoSPrMqYEyyS5ZWkaZGD41RJr10xiNo_Q@mail.gmail.com

Please find attached the latest patch set v97*

* Rebased v94* to HEAD @ today.

This rebase was made necessary by the recent push of the first patch
from this set.

v94-0001 ==> already pushed [1]
v94-0002 ==> v97-0001
v94-0003 ==> will be relocated to the other thread [2]
v94-0004 ==> this is omitted for now

----
[1]: https://github.com/postgres/postgres/commit/a8fd13cab0ba815e9925dc9676e6309f699b5f72
[2]: /messages/by-id/CAMkU=1yohp9-dv48FLoSPrMqYEyyS5ZWkaZGD41RJr10xiNo_Q@mail.gmail.com

Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

v97-0001-Add-prepare-API-support-for-streaming-transactio.patch (application/octet-stream)
#382 vignesh C <vignesh21@gmail.com>
In reply to: Peter Smith (#381)

On Wed, Jul 14, 2021 at 2:03 PM Peter Smith <smithpb2250@gmail.com> wrote:

[...]

Please find attached the latest patch set v97*

* Rebased v94* to HEAD @ today.

Thanks for the updated patch; it applies cleanly and the tests pass.
I had a couple of comments:
1) Should we include "stream_prepare_cb" here in
logicaldecoding-streaming section of logicaldecoding.sgml
documentation:
To reduce the apply lag caused by large transactions, an output plugin
may provide additional callback to support incremental streaming of
in-progress transactions. There are multiple required streaming
callbacks (stream_start_cb, stream_stop_cb, stream_abort_cb,
stream_commit_cb and stream_change_cb) and two optional callbacks
(stream_message_cb and stream_truncate_cb).

2) Should we add an example for stream_prepare_cb here in
logicaldecoding-streaming section of logicaldecoding.sgml
documentation:
One example sequence of streaming callback calls for one transaction
may look like this:

stream_start_cb(...); <-- start of first block of changes
stream_change_cb(...);
stream_change_cb(...);
stream_message_cb(...);
stream_change_cb(...);
...
stream_change_cb(...);
stream_stop_cb(...); <-- end of first block of changes

stream_start_cb(...); <-- start of second block of changes
stream_change_cb(...);
stream_change_cb(...);
stream_change_cb(...);
...
stream_message_cb(...);
stream_change_cb(...);
stream_stop_cb(...); <-- end of second block of changes

stream_commit_cb(...); <-- commit of the streamed transaction

Regards,
Vignesh

#383 Tom Lane <tgl@sss.pgh.pa.us>
In reply to: Amit Kapila (#380)

Amit Kapila <amit.kapila16@gmail.com> writes:

Pushed.

Coverity thinks this has security issues, and I agree.

/srv/coverity/git/pgsql-git/postgresql/src/backend/replication/logical/proto.c: 144 in logicalrep_read_begin_prepare()
143 /* read gid (copy it into a pre-allocated buffer) */

CID 1487517: Security best practices violations (STRING_OVERFLOW)
You might overrun the 200-character fixed-size string "begin_data->gid" by copying the return value of "pq_getmsgstring" without checking the length.

144 strcpy(begin_data->gid, pq_getmsgstring(in));

200 /* read gid (copy it into a pre-allocated buffer) */

CID 1487515: Security best practices violations (STRING_OVERFLOW)
You might overrun the 200-character fixed-size string "prepare_data->gid" by copying the return value of "pq_getmsgstring" without checking the length.

201 strcpy(prepare_data->gid, pq_getmsgstring(in));

256 /* read gid (copy it into a pre-allocated buffer) */

CID 1487516: Security best practices violations (STRING_OVERFLOW)
You might overrun the 200-character fixed-size string "prepare_data->gid" by copying the return value of "pq_getmsgstring" without checking the length.

257 strcpy(prepare_data->gid, pq_getmsgstring(in));

316 /* read gid (copy it into a pre-allocated buffer) */

CID 1487519: Security best practices violations (STRING_OVERFLOW)
You might overrun the 200-character fixed-size string "rollback_data->gid" by copying the return value of "pq_getmsgstring" without checking the length.

317 strcpy(rollback_data->gid, pq_getmsgstring(in));

I think you'd be way better off making the gid fields be "char *"
and pstrdup'ing the result of pq_getmsgstring. Another possibility
perhaps is to use strlcpy, but I'd only go that way if it's important
to constrain the received strings to 200 bytes.

regards, tom lane

#384 Amit Kapila <amit.kapila16@gmail.com>
In reply to: Tom Lane (#383)

On Mon, Jul 19, 2021 at 1:55 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:

Amit Kapila <amit.kapila16@gmail.com> writes:

Pushed.

I think you'd be way better off making the gid fields be "char *"
and pstrdup'ing the result of pq_getmsgstring. Another possibility
perhaps is to use strlcpy, but I'd only go that way if it's important
to constrain the received strings to 200 bytes.

I think it is important to constrain length to 200 bytes for this case
as here we receive a prepared transaction identifier which according
to docs [1] has a max length of 200 bytes. Also, in
ParseCommitRecord() and ParseAbortRecord(), we are using strlcpy with
200 as max length to copy prepare transaction identifier. So, I think
it is better to use strlcpy here unless you or Peter feels otherwise.

[1]: https://www.postgresql.org/docs/devel/sql-prepare-transaction.html

--
With Regards,
Amit Kapila.

#385 Peter Smith <smithpb2250@gmail.com>
In reply to: Amit Kapila (#384)
1 attachment(s)

On Mon, Jul 19, 2021 at 12:43 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

[...]

I think it is important to constrain length to 200 bytes for this case
as here we receive a prepared transaction identifier which according
to docs [1] has a max length of 200 bytes. Also, in
ParseCommitRecord() and ParseAbortRecord(), we are using strlcpy with
200 as max length to copy prepare transaction identifier. So, I think
it is better to use strlcpy here unless you or Peter feels otherwise.

OK. I have implemented a fix for this reported potential buffer
overrun [1] using the constraining strlcpy, because the GID limitation
of 200 bytes is already mentioned in the documentation [2].

PSA.

------
[1]: /messages/by-id/161029.1626639923@sss.pgh.pa.us
[2]: https://www.postgresql.org/docs/devel/sql-prepare-transaction.html

Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

v1-0001-Fix-potential-buffer-overruns.patch (application/octet-stream)
#386 Greg Nancarrow <gregn4422@gmail.com>
In reply to: Peter Smith (#381)

On Wed, Jul 14, 2021 at 6:33 PM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v97*

I couldn't spot any significant issues in the v97-0001 patch, but
do have the following trivial feedback comments:

(1) doc/src/sgml/protocol.sgml
Suggestion:

BEFORE:
+   contains a Stream Prepare or Stream Commit or Stream Abort message.
AFTER:
+   contains a Stream Prepare, Stream Commit or Stream Abort message.

(2) src/backend/replication/logical/worker.c
It seems a bit weird to add a forward declaration here, without a
comment, like for the one immediately above it

/* Compute GID for two_phase transactions */
static void TwoPhaseTransactionGid(Oid subid, TransactionId xid, char
*gid, int szgid);
-
+static int apply_spooled_messages(TransactionId xid, XLogRecPtr lsn);

(3) src/backend/replication/logical/worker.c
Other DEBUG1 messages don't end with "."

+ elog(DEBUG1, "apply_handle_stream_prepare: replayed %d
(all) changes.", nchanges);

Regards,
Greg Nancarrow
Fujitsu Australia

#387 Amit Kapila <amit.kapila16@gmail.com>
In reply to: Peter Smith (#385)

On Mon, Jul 19, 2021 at 9:19 AM Peter Smith <smithpb2250@gmail.com> wrote:

[...]

OK. I have implemented a fix for this reported potential buffer
overrun [1] using the constraining strlcpy, because the GID limitation
of 200 bytes is already mentioned in the documentation [2].

This will work but I think it is better to use sizeof gid buffer as we
are using in ParseCommitRecord() and ParseAbortRecord(). Tomorrow, if
due to some unforeseen reason if we change the size of gid buffer to
be different than the GIDSIZE then it will work seamlessly.

--
With Regards,
Amit Kapila.

#388 Peter Smith <smithpb2250@gmail.com>
In reply to: Amit Kapila (#387)
1 attachment(s)

On Mon, Jul 19, 2021 at 4:41 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

[...]

This will work but I think it is better to use sizeof gid buffer as we
are using in ParseCommitRecord() and ParseAbortRecord(). Tomorrow, if
due to some unforeseen reason if we change the size of gid buffer to
be different than the GIDSIZE then it will work seamlessly.

Modified as requested. PSA patch v2.

------
Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

v2-0001-Fix-potential-buffer-overruns.patch (application/octet-stream)
#389 Amit Kapila <amit.kapila16@gmail.com>
In reply to: Peter Smith (#388)

On Mon, Jul 19, 2021 at 1:00 PM Peter Smith <smithpb2250@gmail.com> wrote:

[...]

Modified as requested. PSA patch v2.

LGTM. I'll push this tomorrow unless Tom or someone else has any comments.

--
With Regards,
Amit Kapila.

#390 Peter Smith <smithpb2250@gmail.com>
In reply to: Amit Kapila (#389)
1 attachment(s)

Please find attached the latest patch set v98*

Patches:

v97-0001 --> v98-0001

Differences:

* Rebased to HEAD @ yesterday.

* Code/Docs changes:

1. Fixed the same strcpy problem as reported by Tom Lane [1] for the
previous 2PC patch.

2. Addressed all feedback suggestions given by Greg [2].

3. Added some more documentation as suggested by Vignesh [3].

----
[1]: /messages/by-id/161029.1626639923@sss.pgh.pa.us
[2]: /messages/by-id/CAJcOf-ckGONzyAj0Y70ju_tfLWF819JYb=dv9p5AnoZxm50j0g@mail.gmail.com
[3]: /messages/by-id/CALDaNm0LVY5A98xrgaodynnj6c=WQ5=ZMpauC44aRio7-jWBYQ@mail.gmail.com

Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

v98-0001-Add-prepare-API-support-for-streaming-transactio.patch (application/octet-stream)
#391 Peter Smith <smithpb2250@gmail.com>
In reply to: Greg Nancarrow (#386)

On Mon, Jul 19, 2021 at 3:28 PM Greg Nancarrow <gregn4422@gmail.com> wrote:

On Wed, Jul 14, 2021 at 6:33 PM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v97*

I couldn't spot any significant issues in the v97-0001 patch, but
do have the following trivial feedback comments:

(1) doc/src/sgml/protocol.sgml
Suggestion:

BEFORE:
+   contains a Stream Prepare or Stream Commit or Stream Abort message.
AFTER:
+   contains a Stream Prepare, Stream Commit or Stream Abort message.

(2) src/backend/replication/logical/worker.c
It seems a bit weird to add a forward declaration here, without a
comment, like for the one immediately above it

/* Compute GID for two_phase transactions */
static void TwoPhaseTransactionGid(Oid subid, TransactionId xid, char
*gid, int szgid);
-
+static int apply_spooled_messages(TransactionId xid, XLogRecPtr lsn);

(3) src/backend/replication/logical/worker.c
Other DEBUG1 messages don't end with "."

+ elog(DEBUG1, "apply_handle_stream_prepare: replayed %d
(all) changes.", nchanges);

Thanks for the feedback. All these are fixed as suggested in v98.

------
Kind Regards,
Peter Smith.
Fujitsu Australia

#392 Peter Smith <smithpb2250@gmail.com>
In reply to: vignesh C (#382)

On Fri, Jul 16, 2021 at 4:08 PM vignesh C <vignesh21@gmail.com> wrote:

[...]

Thanks for the updated patch; it applies cleanly and the tests pass.
I had a couple of comments:
1) Should we include "stream_prepare_cb" here in
logicaldecoding-streaming section of logicaldecoding.sgml
documentation:
To reduce the apply lag caused by large transactions, an output plugin
may provide additional callback to support incremental streaming of
in-progress transactions. There are multiple required streaming
callbacks (stream_start_cb, stream_stop_cb, stream_abort_cb,
stream_commit_cb and stream_change_cb) and two optional callbacks
(stream_message_cb and stream_truncate_cb).

Modified in v98. The information about 'stream_prepare_cb' and friends
is given in detail in section 49.10 so I added a link to that page.

2) Should we add an example for stream_prepare_cb here in
logicaldecoding-streaming section of logicaldecoding.sgml
documentation:
One example sequence of streaming callback calls for one transaction
may look like this:

stream_start_cb(...); <-- start of first block of changes
stream_change_cb(...);
stream_change_cb(...);
stream_message_cb(...);
stream_change_cb(...);
...
stream_change_cb(...);
stream_stop_cb(...); <-- end of first block of changes

stream_start_cb(...); <-- start of second block of changes
stream_change_cb(...);
stream_change_cb(...);
stream_change_cb(...);
...
stream_message_cb(...);
stream_change_cb(...);
stream_stop_cb(...); <-- end of second block of changes

stream_commit_cb(...); <-- commit of the streamed transaction

Modified in v98. I felt it would be too verbose to add another full
example since it would be 90% the same as the current example. So I
have combined the information.
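
For reference, the tail of that sequence for a streamed transaction
that is prepared rather than committed directly would look roughly
like this (a sketch using the two-phase streaming callback names, not
verbatim from the patch):

stream_start_cb(...); <-- start of a block of changes
stream_change_cb(...);
...
stream_stop_cb(...); <-- end of the block of changes

stream_prepare_cb(...); <-- prepare of the streamed transaction

commit_prepared_cb(...); <-- commit of the prepared transaction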

------
Kind Regards,
Peter Smith.
Fujitsu Australia

#393Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Peter Smith (#390)

On Tue, Jul 20, 2021 at 9:24 AM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v98*

Review comments:
================
1.
/*
- * Handle STREAM COMMIT message.
+ * Common spoolfile processing.
+ * Returns how many changes were applied.
  */
-static void
-apply_handle_stream_commit(StringInfo s)
+static int
+apply_spooled_messages(TransactionId xid, XLogRecPtr lsn)

Let's extract this common functionality (common to current code and
the patch) as a separate patch? I think we can commit this as a
separate patch.

2.
apply_spooled_messages()
{
..
elog(DEBUG1, "replayed %d (all) changes from file \"%s\"",
nchanges, path);
..
}

You have this DEBUG1 message in apply_spooled_messages and its
callers. You can remove it from callers as the patch already has
another debug message to indicate whether it is stream prepare or
stream commit. Also, if this is the only reason to return nchanges
from apply_spooled_messages() then we can get rid of that as well.

3.
+ /*
+ * 2. Mark the transaction as prepared. - Similar code as for
+ * apply_handle_prepare (i.e. two-phase non-streamed prepare)
+ */
+
+ /*
+ * BeginTransactionBlock is necessary to balance the EndTransactionBlock
+ * called within the PrepareTransactionBlock below.
+ */
+ BeginTransactionBlock();
+ CommitTransactionCommand(); /* Completes the preceding Begin command. */
+
+ /*
+ * Update origin state so we can restart streaming from correct position
+ * in case of crash.
+ */
+ replorigin_session_origin_lsn = prepare_data.end_lsn;
+ replorigin_session_origin_timestamp = prepare_data.prepare_time;
+
+ PrepareTransactionBlock(gid);

I think you can move this part into a common function
apply_handle_prepare_internal. If that is possible then you might want
to move this part into a common functionality patch as mentioned in
point-1.

4.
+ xid = logicalrep_read_stream_prepare(s, &prepare_data);
+ elog(DEBUG1, "received prepare for streamed transaction %u", xid);

It is better to have an empty line between the above code lines for
the sake of clarity.

5.
+/* Commit (and abort) information */
typedef struct LogicalRepCommitData

How is this structure related to abort? Even if it is, why does this
comment belong in this patch?

6. Most of the code in logicalrep_write_stream_prepare() and
logicalrep_write_prepare() is same except for message. I think if we
want we can handle both of them with a single message by setting some
flag for stream case but probably there will be some additional
checking required on the worker-side. What do you think? I think if we
want to keep them separate then at least we should keep the common
functionality in logicalrep_write_*/logicalrep_read_* in separate
functions. This way we will avoid minor inconsistencies in-stream and
non-stream functions.
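
[Editor's note: the factoring being suggested here can be sketched as follows. This is an illustrative Python sketch only — the real code is C using PostgreSQL's StringInfo; the function names, message-type bytes, and byte layout below are assumptions invented for the example, not the actual wire protocol.]

```python
import struct

def _write_prepare_common(msgtype: bytes, xid: int, prepare_lsn: int, gid: str) -> bytes:
    # Everything except the message-type byte lives in one place, so the
    # stream and non-stream variants cannot drift apart.
    return msgtype + struct.pack(">IQ", xid, prepare_lsn) + gid.encode("ascii") + b"\x00"

def write_prepare(xid: int, prepare_lsn: int, gid: str) -> bytes:
    return _write_prepare_common(b"P", xid, prepare_lsn, gid)

def write_stream_prepare(xid: int, prepare_lsn: int, gid: str) -> bytes:
    return _write_prepare_common(b"p", xid, prepare_lsn, gid)

a = write_prepare(742, 0x16B3748, "mygid")
b = write_stream_prepare(742, 0x16B3748, "mygid")
assert a[:1] == b"P" and b[:1] == b"p"
assert a[1:] == b[1:]  # payloads are byte-identical by construction
```

The point of the design is that any future change to the payload is made once in the common helper, so the two message kinds can only differ in the type byte.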

7.
+++ b/doc/src/sgml/protocol.sgml
@@ -2881,7 +2881,7 @@ The commands accepted in replication mode are:
    Begin Prepare and Prepare messages belong to the same transaction.
    It also sends changes of large in-progress transactions between a pair of
    Stream Start and Stream Stop messages. The last stream of such a transaction
-   contains a Stream Commit or Stream Abort message.
+   contains a Stream Prepare, Stream Commit or Stream Abort message.

I am not sure if it is correct to mention Stream Prepare here because
after that we will send commit prepared as well for such a
transaction. So, I think we should remove this change.

8.
-ALTER SUBSCRIPTION regress_testsub SET (slot_name = NONE);
-
\dRs+

+ALTER SUBSCRIPTION regress_testsub SET (slot_name = NONE);

Is there a reason for this change in the tests?

9.
I think this patch contains a lot of streaming tests in 023_twophase_stream.
Let's keep just one test for crash-restart scenario (+# Check that 2PC
COMMIT PREPARED is decoded properly on crash restart.) where both
publisher and subscriber get restarted. I think others are covered in
one or another way by other existing tests. Apart from that, I also
don't see the need for the below tests:
# Do DELETE after PREPARE but before COMMIT PREPARED.
This is mostly the same as the previous test where the patch is testing Insert
# Try 2PC transaction works using an empty GID literal
This is covered in 021_twophase.

10.
+++ b/src/test/subscription/t/024_twophase_cascade_stream.pl
@@ -0,0 +1,271 @@
+
+# Copyright (c) 2021, PostgreSQL Global Development Group
+
+# Test cascading logical replication of 2PC.

In the above comment, you might want to say something about streaming.
In general, I am not sure it really adds value to have this many
streaming tests for the cascaded setup, repeating the whole setup we
have already done in 022_twophase_cascade. I think it is sufficient to
do just one or two streaming tests by enhancing 022_twophase_cascade;
you can alter the subscription to enable streaming after doing the
non-streaming tests.

11. Have you verified that all these tests went through the streaming
code path? If not, you can once enable DEBUG message in
apply_handle_stream_prepare() and see if all tests hit that.

--
With Regards,
Amit Kapila.

#394Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Amit Kapila (#393)

On Fri, Jul 23, 2021 at 8:08 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Jul 20, 2021 at 9:24 AM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v98*

Review comments:
================

[...]

With Regards,
Amit Kapila.

Thanks for your review comments.

I have been working through them today and hope to post the v99*
patches tomorrow.

------
Kind Regards,
Peter Smith.
Fujitsu Australia

#395Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Amit Kapila (#393)

On Fri, Jul 23, 2021 at 8:08 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Jul 20, 2021 at 9:24 AM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v98*

Review comments:
================

All the following review comments are addressed in v99* patch set.

1.
/*
- * Handle STREAM COMMIT message.
+ * Common spoolfile processing.
+ * Returns how many changes were applied.
*/
-static void
-apply_handle_stream_commit(StringInfo s)
+static int
+apply_spooled_messages(TransactionId xid, XLogRecPtr lsn)

Let's extract this common functionality (common to current code and
the patch) as a separate patch? I think we can commit this as a
separate patch.

Done. Split patches as requested.

2.
apply_spooled_messages()
{
..
elog(DEBUG1, "replayed %d (all) changes from file \"%s\"",
nchanges, path);
..
}

You have this DEBUG1 message in apply_spooled_messages and its
callers. You can remove it from callers as the patch already has
another debug message to indicate whether it is stream prepare or
stream commit. Also, if this is the only reason to return nchanges
from apply_spooled_messages() then we can get rid of that as well.

Done.

3.
+ /*
+ * 2. Mark the transaction as prepared. - Similar code as for
+ * apply_handle_prepare (i.e. two-phase non-streamed prepare)
+ */
+
+ /*
+ * BeginTransactionBlock is necessary to balance the EndTransactionBlock
+ * called within the PrepareTransactionBlock below.
+ */
+ BeginTransactionBlock();
+ CommitTransactionCommand(); /* Completes the preceding Begin command. */
+
+ /*
+ * Update origin state so we can restart streaming from correct position
+ * in case of crash.
+ */
+ replorigin_session_origin_lsn = prepare_data.end_lsn;
+ replorigin_session_origin_timestamp = prepare_data.prepare_time;
+
+ PrepareTransactionBlock(gid);

I think you can move this part into a common function
apply_handle_prepare_internal. If that is possible then you might want
to move this part into a common functionality patch as mentioned in
point-1.

Done. (The common function is included in patch 0001)

4.
+ xid = logicalrep_read_stream_prepare(s, &prepare_data);
+ elog(DEBUG1, "received prepare for streamed transaction %u", xid);

It is better to have an empty line between the above code lines for
the sake of clarity.

Done.

5.
+/* Commit (and abort) information */
typedef struct LogicalRepCommitData

How is this structure related to abort? Even if it is, why does this
comment belong in this patch?

OK. Removed this from the patch.

6. Most of the code in logicalrep_write_stream_prepare() and
logicalrep_write_prepare() is same except for message. I think if we
want we can handle both of them with a single message by setting some
flag for stream case but probably there will be some additional
checking required on the worker-side. What do you think? I think if we
want to keep them separate then at least we should keep the common
functionality in logicalrep_write_*/logicalrep_read_* in separate
functions. This way we will avoid minor inconsistencies in-stream and
non-stream functions.

Done. (The common functions are included in patch 0001).

7.
+++ b/doc/src/sgml/protocol.sgml
@@ -2881,7 +2881,7 @@ The commands accepted in replication mode are:
Begin Prepare and Prepare messages belong to the same transaction.
It also sends changes of large in-progress transactions between a pair of
Stream Start and Stream Stop messages. The last stream of such a transaction
-   contains a Stream Commit or Stream Abort message.
+   contains a Stream Prepare, Stream Commit or Stream Abort message.

I am not sure if it is correct to mention Stream Prepare here because
after that we will send commit prepared as well for such a
transaction. So, I think we should remove this change.

Done.

8.
-ALTER SUBSCRIPTION regress_testsub SET (slot_name = NONE);
-
\dRs+

+ALTER SUBSCRIPTION regress_testsub SET (slot_name = NONE);

Is there a reason for this change in the tests?

Yes, the setting of slot_name = NONE really belongs with the DROP
SUBSCRIPTION. Similarly, the \dRs+ is done to test the effect of
setting the streaming option (not the slot_name = NONE). Since I
needed to add a new DROP SUBSCRIPTION (because the streaming option
now works), I also refactored this existing test to make all the
test formats consistent.

9.
I think this patch contains a lot of streaming tests in 023_twophase_stream.
Let's keep just one test for crash-restart scenario (+# Check that 2PC
COMMIT PREPARED is decoded properly on crash restart.) where both
publisher and subscriber get restarted. I think others are covered in
one or another way by other existing tests. Apart from that, I also
don't see the need for the below tests:
# Do DELETE after PREPARE but before COMMIT PREPARED.
This is mostly the same as the previous test where the patch is testing Insert
# Try 2PC transaction works using an empty GID literal
This is covered in 021_twophase.

Done. Removed all the excessive tests as you suggested.

10.
+++ b/src/test/subscription/t/024_twophase_cascade_stream.pl
@@ -0,0 +1,271 @@
+
+# Copyright (c) 2021, PostgreSQL Global Development Group
+
+# Test cascading logical replication of 2PC.

In the above comment, you might want to say something about streaming.
In general, I am not sure it really adds value to have this many
streaming tests for the cascaded setup, repeating the whole setup we
have already done in 022_twophase_cascade. I think it is sufficient to
do just one or two streaming tests by enhancing 022_twophase_cascade;
you can alter the subscription to enable streaming after doing the
non-streaming tests.

Done. Removed the 024 TAP tests, and instead merged the streaming
cascade tests into 022_twophase_cascade.pl as you suggested.

11. Have you verified that all these tests went through the streaming
code path? If not, you can once enable DEBUG message in
apply_handle_stream_prepare() and see if all tests hit that.

Yes, it was verified a long time ago when the tests were first
written. Anyway, just to be certain, I temporarily modified the code
as suggested and confirmed from the logfiles that the tests run
through apply_handle_stream_prepare.

------
Kind Regards,
Peter Smith.
Fujitsu Australia.

#396Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Peter Smith (#395)
2 attachment(s)

Please find attached the latest patch set v99*

v98-0001 --> split into v99-0001 + v99-0002

Differences:

* Rebased to HEAD @ yesterday.

* Addresses review comments from Amit [1] and split the v98 patch as requested.

----
[1]: /messages/by-id/CAA4eK1+izpAybqpEFp8+Rx=C1Z1H_XLcRod_WYjBRv2Rn+DO2w@mail.gmail.com

Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

v99-0001-Refactor-to-make-common-functions.patchapplication/octet-stream; name=v99-0001-Refactor-to-make-common-functions.patch
v99-0002-Add-prepare-API-support-for-streaming-transactio.patchapplication/octet-stream; name=v99-0002-Add-prepare-API-support-for-streaming-transactio.patch
#397Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Peter Smith (#396)

On Tue, Jul 27, 2021 at 11:41 AM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v99*

v98-0001 --> split into v99-0001 + v99-0002

Pushed the first refactoring patch after making a few modifications, as below.
1.
- /* open the spool file for the committed transaction */
+ /* Open the spool file for the committed/prepared transaction */
  changes_filename(path, MyLogicalRepWorker->subid, xid);

In the above comment, we don't need to say prepared. It can be done as
part of the second patch.

2.
+apply_handle_prepare_internal(LogicalRepPreparedTxnData
*prepare_data, char *gid)

I don't think there is any need for this function to take gid as
input. It can compute by itself instead of callers doing it.

3.
+static TransactionId
+logicalrep_read_prepare_common(StringInfo in, char *msgtype,
+                               LogicalRepPreparedTxnData *prepare_data)

I don't think the above function needs to return the xid because it is
already present as part of prepare_data. Even if it is required for
some reason by the second patch, let's do it as part of that patch,
but I don't think it is required for the second patch.

4.
 /*
- * Write PREPARE to the output stream.
+ * Common code for logicalrep_write_prepare and
logicalrep_write_stream_prepare.
  */

Here and at a similar another place, we don't need to refer to
logicalrep_write_stream_prepare as that is part of the second patch.

Few comments on 0002 patch:
==========================
1.
+# ---------------------
+# 2PC + STREAMING TESTS
+# ---------------------
+
+# Setup logical replication (streaming = on)
+
+$node_B->safe_psql('postgres', "
+ ALTER SUBSCRIPTION tap_sub_B
+ SET (streaming = on);");
+
+$node_C->safe_psql('postgres', "
+ ALTER SUBSCRIPTION tap_sub_C
+ SET (streaming = on)");
+
+# Wait for subscribers to finish initialization
+$node_A->wait_for_catchup($appname_B);
+$node_B->wait_for_catchup($appname_C);

This is not the right way to determine if the new streaming option is
enabled on the publisher. Even if there is no restart of apply workers
(and walsender) after you have enabled the option, the above wait will
succeed. You need to do something like below as we are doing in
001_rep_changes.pl:

$oldpid = $node_publisher->safe_psql('postgres',
"SELECT pid FROM pg_stat_replication WHERE application_name = 'tap_sub';"
);
$node_subscriber->safe_psql('postgres',
"ALTER SUBSCRIPTION tap_sub SET PUBLICATION tap_pub_ins_only WITH
(copy_data = false)"
);
$node_publisher->poll_query_until('postgres',
"SELECT pid != $oldpid FROM pg_stat_replication WHERE application_name
= 'tap_sub';"
) or die "Timed out while waiting for apply to restart";
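
[Editor's note: the wait described here is the usual poll-until-predicate pattern — rerun a query until its result satisfies a condition, or fail on timeout. A generic sketch follows, in illustrative Python; the actual tests use the TAP framework's poll_query_until, and the pids below are made-up stand-ins for the pg_stat_replication query.]

```python
import time

def poll_until(predicate, timeout=180.0, interval=0.1):
    # Re-evaluate the predicate until it holds or the deadline passes,
    # mirroring what poll_query_until does with a repeated SQL query.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False

# e.g. wait for the walsender to restart by watching its pid change
old_pid = 1234
get_current_pid = lambda: 5678  # stand-in for the pg_stat_replication query
assert poll_until(lambda: get_current_pid() != old_pid, timeout=5.0)
```

Unlike this sketch, the TAP helper dies on timeout rather than returning False; the key point is that simply waiting for catchup does not prove the worker restarted with the new option, whereas polling for a changed pid does.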

2.
+# Create some pre-existing content on publisher (uses same DDL as
015_stream test)

Here, in the comment, I don't see the need to say "uses same DDL ...".

--
With Regards,
Amit Kapila.

#398Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Amit Kapila (#397)
1 attachment(s)

Attachments:

v100-0001-Add-prepare-API-support-for-streaming-transacti.patchapplication/octet-stream; name=v100-0001-Add-prepare-API-support-for-streaming-transacti.patch
#399Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Amit Kapila (#397)

On Thu, Jul 29, 2021 at 9:56 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Jul 27, 2021 at 11:41 AM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v99*

v98-0001 --> split into v99-0001 + v99-0002

Pushed the first refactoring patch after making a few modifications, as below.
1.
- /* open the spool file for the committed transaction */
+ /* Open the spool file for the committed/prepared transaction */
changes_filename(path, MyLogicalRepWorker->subid, xid);

In the above comment, we don't need to say prepared. It can be done as
part of the second patch.

Updated comment in v100.

2.
+apply_handle_prepare_internal(LogicalRepPreparedTxnData
*prepare_data, char *gid)

I don't think there is any need for this function to take gid as
input. It can compute by itself instead of callers doing it.

OK.

3.
+static TransactionId
+logicalrep_read_prepare_common(StringInfo in, char *msgtype,
+                               LogicalRepPreparedTxnData *prepare_data)

I don't think the above function needs to return the xid because it is
already present as part of prepare_data. Even if it is required for
some reason by the second patch, let's do it as part of that patch,
but I don't think it is required for the second patch.

OK.

4.
/*
- * Write PREPARE to the output stream.
+ * Common code for logicalrep_write_prepare and
logicalrep_write_stream_prepare.
*/

Here and at a similar another place, we don't need to refer to
logicalrep_write_stream_prepare as that is part of the second patch.

Updated comment in v100

Few comments on 0002 patch:
==========================
1.
+# ---------------------
+# 2PC + STREAMING TESTS
+# ---------------------
+
+# Setup logical replication (streaming = on)
+
+$node_B->safe_psql('postgres', "
+ ALTER SUBSCRIPTION tap_sub_B
+ SET (streaming = on);");
+
+$node_C->safe_psql('postgres', "
+ ALTER SUBSCRIPTION tap_sub_C
+ SET (streaming = on)");
+
+# Wait for subscribers to finish initialization
+$node_A->wait_for_catchup($appname_B);
+$node_B->wait_for_catchup($appname_C);

This is not the right way to determine if the new streaming option is
enabled on the publisher. Even if there is no restart of apply workers
(and walsender) after you have enabled the option, the above wait will
succeed. You need to do something like below as we are doing in
001_rep_changes.pl:

$oldpid = $node_publisher->safe_psql('postgres',
"SELECT pid FROM pg_stat_replication WHERE application_name = 'tap_sub';"
);
$node_subscriber->safe_psql('postgres',
"ALTER SUBSCRIPTION tap_sub SET PUBLICATION tap_pub_ins_only WITH
(copy_data = false)"
);
$node_publisher->poll_query_until('postgres',
"SELECT pid != $oldpid FROM pg_stat_replication WHERE application_name
= 'tap_sub';"
) or die "Timed out while waiting for apply to restart";

Fixed in v100 as suggested.

2.
+# Create some pre-existing content on publisher (uses same DDL as
015_stream test)

Here, in the comment, I don't see the need to say "uses same DDL ...".

Fixed in v100. Comment removed.

------
Kind Regards,
Peter Smith.
Fujitsu Australia

#400vignesh C
vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#398)

On Fri, Jul 30, 2021 at 9:32 AM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v100*

v99-0002 --> v100-0001

Differences:

* Rebased to HEAD @ today (needed because some recent commits [1][2] broke v99)

The patch applies neatly, the tests pass, and the documentation looks good.
A few minor comments:
1) This blank line is not required:
+-- two_phase and streaming are compatible.
+CREATE SUBSCRIPTION regress_testsub CONNECTION
'dbname=regress_doesnotexist' PUBLICATION testpub WITH (connect =
false, streaming = true, two_phase = true);
+
2) Some points have a punctuation mark and some don't; we can make
them consistent:
+###############################
+# Test 2PC PREPARE / ROLLBACK PREPARED.
+# 1. Table is deleted back to 2 rows which are replicated on subscriber.
+# 2. Data is streamed using 2PC
+# 3. Do rollback prepared.
+#
+# Expect data rolls back leaving only the original 2 rows.
+###############################
3) similarly here too:
+###############################
+# Do INSERT after the PREPARE but before ROLLBACK PREPARED.
+# 1. Table is deleted back to 2 rows which are replicated on subscriber.
+# 2. Data is streamed using 2PC.
+# 3. A single row INSERT is done which is after the PREPARE
+# 4. Then do a ROLLBACK PREPARED.
+#
+# Expect the 2PC data rolls back leaving only 3 rows on the subscriber.
+# (the original 2 + inserted 1)
+###############################

Regards,
Vignesh

#401Greg Nancarrow
Greg Nancarrow
gregn4422@gmail.com
In reply to: Peter Smith (#398)

On Fri, Jul 30, 2021 at 2:02 PM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v100*

v99-0002 --> v100-0001

A few minor comments:

(1) doc/src/sgml/protocol.sgml

In the following description, is the word "large" really needed? Also
"the message ... for a ... message" sounds a bit odd, as does
"two-phase prepare".

What about the following:

BEFORE:
+                Identifies the message as a two-phase prepare for a large in-progress transaction message.
AFTER:
+                Identifies the message as a prepare for an in-progress two-phase transaction.

(2) src/backend/replication/logical/worker.c

Similar format comment, but one uses a full-stop and the other
doesn't, looks a bit odd, since the lines are near each other.

* 1. Replay all the spooled operations - Similar code as for

* 2. Mark the transaction as prepared. - Similar code as for

(3) src/test/subscription/t/023_twophase_stream.pl

Shouldn't the following comment mention, for example, "with streaming"
or something to that effect?

# logical replication of 2PC test

Regards,
Greg Nancarrow
Fujitsu Australia

#402tanghy.fnst@fujitsu.com
tanghy.fnst@fujitsu.com
tanghy.fnst@fujitsu.com
In reply to: Peter Smith (#398)
RE: [HACKERS] logical decoding of two-phase transactions

On Friday, July 30, 2021 12:02 PM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v100*

v99-0002 --> v100-0001

Thanks for your patch. A few comments on the test file:

1. src/test/subscription/t/022_twophase_cascade.pl

1.1
I saw your test cases for "PREPARE / COMMIT PREPARED" and "PREPARE with a nested ROLLBACK TO SAVEPOINT", but didn't see cases for "PREPARE / ROLLBACK PREPARED". Is it needless or just missing?

1.2
+# check inserts are visible at subscriber(s).
+# All the streamed data (prior to the SAVEPOINT) should be rolled back.
+# (3, 'foobar') should be committed.

I think it should be (9999, 'foobar') here.

1.3
+$result = $node_B->safe_psql('postgres', "SELECT count(*) FROM test_tab where b = 'foobar';");
+is($result, qq(1), 'Rows committed are present on subscriber B');
+$result = $node_B->safe_psql('postgres', "SELECT count(*) FROM test_tab;");
+

It seems the test is not finished yet. We didn't check the value of 'result'. Besides, maybe we should also check node_C, right?

1.4
+$node_B->append_conf('postgresql.conf',	qq(max_prepared_transactions = 10));
+$node_B->append_conf('postgresql.conf', qq(logical_decoding_work_mem = 64kB));

You see, the first line uses a TAB but the second line uses a space.
Also, we could use only one statement to append these two settings to run tests a bit faster. Thoughts?
Something like:

$node_B->append_conf(
'postgresql.conf', qq(
max_prepared_transactions = 10
logical_decoding_work_mem = 64kB
));

Regards
Tang

#403Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#380)

On Wed, Jul 14, 2021 at 11:52 AM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Mon, Jul 12, 2021 at 9:14 AM Peter Smith <smithpb2250@gmail.com> wrote:

Pushed.

As reported by Michael [1], there is one test failure related to this
commit. The failure is as below:

# Failed test 'transaction is prepared on subscriber'
# at t/021_twophase.pl line 324.
# got: '1'
# expected: '2'
# Looks like you failed 1 test of 24.
[12:14:02] t/021_twophase.pl ..................
Dubious, test returned 1 (wstat 256, 0x100)
Failed 1/24 subtests
[12:14:12] t/022_twophase_cascade.pl .......... ok 10542 ms ( 0.00
usr 0.00 sys + 2.03 cusr 0.61 csys = 2.64 CPU)
[12:14:31] t/100_bugs.pl ...................... ok 18550 ms ( 0.00
usr 0.00 sys + 3.85 cusr 1.36 csys = 5.21 CPU)
[12:14:31]

I think I know what's going wrong here. The corresponding test is:

# Now do a prepare on publisher and check that it IS replicated
$node_publisher->safe_psql('postgres', "
BEGIN;
INSERT INTO tab_copy VALUES (99);
PREPARE TRANSACTION 'mygid';");

$node_publisher->wait_for_catchup($appname_copy);

# Check that the transaction has been prepared on the subscriber; there will
# be 2 prepared transactions for the 2 subscriptions.
$result = $node_subscriber->safe_psql('postgres',
	"SELECT count(*) FROM pg_prepared_xacts;");
is($result, qq(2), 'transaction is prepared on subscriber');

Here, the test is expecting 2 prepared transactions corresponding to
two subscriptions but it waits for just one subscription via
appname_copy. It should wait for the second subscription using
$appname as well.

What do you think?

[1]: /messages/by-id/YQP02+5yLCIgmdJY@paquier.xyz

--
With Regards,
Amit Kapila.

#404Ajin Cherian
Ajin Cherian
itsajin@gmail.com
In reply to: Amit Kapila (#403)
1 attachment(s)

On Sat, Jul 31, 2021 at 2:39 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

Here, the test is expecting 2 prepared transactions corresponding to
two subscriptions but it waits for just one subscription via
appname_copy. It should wait for the second subscription using
$appname as well.

What do you think?

I agree with this analysis. The test needs to wait for both
subscriptions to catch up.
Attached is a patch that addresses this issue.

regards,
Ajin Cherian
Fujitsu Australia

Attachments:

v1-0001-Fix-possible-failure-in-021_twophase-tap-test.patchapplication/octet-stream; name=v1-0001-Fix-possible-failure-in-021_twophase-tap-test.patch
#405Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Ajin Cherian (#404)

On Sat, Jul 31, 2021 at 11:12 AM Ajin Cherian <itsajin@gmail.com> wrote:

On Sat, Jul 31, 2021 at 2:39 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

Here, the test is expecting 2 prepared transactions corresponding to
two subscriptions but it waits for just one subscription via
appname_copy. It should wait for the second subscription using
$appname as well.

What do you think?

I agree with this analysis. The test needs to wait for both
subscriptions to catch up.
Attached is a patch that addresses this issue.

LGTM, unless Peter Smith has any comments or thinks otherwise, I'll
push this on Monday.

--
With Regards,
Amit Kapila.

#406Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Peter Smith (#398)

On Fri, Jul 30, 2021 at 9:32 AM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v100*

Few minor comments:
1.
CREATE SUBSCRIPTION regress_testsub CONNECTION
'dbname=regress_doesnotexist' PUBLICATION testpub WITH (connect =
false, two_phase = true);

 \dRs+
+
 --fail - alter of two_phase option not supported.
 ALTER SUBSCRIPTION regress_testsub SET (two_phase = false);

Spurious line addition.

2.
+TransactionId
+logicalrep_read_stream_prepare(StringInfo in,
LogicalRepPreparedTxnData *prepare_data)
+{
+ logicalrep_read_prepare_common(in, "stream prepare", prepare_data);
+
+ return prepare_data->xid;
+}

There is no need to return the TransactionId separately. The caller
can get it from prepare_data, if required.

3.
extern void logicalrep_read_stream_abort(StringInfo in, TransactionId *xid,
TransactionId *subxid);

+extern void logicalrep_write_stream_prepare(StringInfo out,
ReorderBufferTXN *txn,
+ XLogRecPtr prepare_lsn);
+extern TransactionId logicalrep_read_stream_prepare(StringInfo in,
+ LogicalRepPreparedTxnData *prepare_data);
+
+

Keep the order of the declarations the same as their definitions in
proto.c, which means moving these after
logicalrep_read_rollback_prepared(), and be careful about the extra
blank lines.

--
With Regards,
Amit Kapila.

#407vignesh C
vignesh C
vignesh21@gmail.com
In reply to: Ajin Cherian (#404)

On Sat, Jul 31, 2021 at 11:12 AM Ajin Cherian <itsajin@gmail.com> wrote:

On Sat, Jul 31, 2021 at 2:39 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

Here, the test is expecting 2 prepared transactions corresponding to
two subscriptions but it waits for just one subscription via
appname_copy. It should wait for the second subscription using
$appname as well.

What do you think?

I agree with this analysis. The test needs to wait for both
subscriptions to catch up.
Attached is a patch that addresses this issue.

The changes look good to me.

Regards,
Vignesh

#408Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#407)

On Sun, Aug 1, 2021 at 3:05 AM vignesh C <vignesh21@gmail.com> wrote:

On Sat, Jul 31, 2021 at 11:12 AM Ajin Cherian <itsajin@gmail.com> wrote:

On Sat, Jul 31, 2021 at 2:39 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

Here, the test is expecting 2 prepared transactions corresponding to
two subscriptions but it waits for just one subscription via
appname_copy. It should wait for the second subscription using
$appname as well.

What do you think?

I agree with this analysis. The test needs to wait for both
subscriptions to catch up.
Attached is a patch that addresses this issue.

The changes look good to me.

The patch to the test code posted by Ajin LGTM also.

I applied the patch and re-ran the TAP subscription tests. All OK.

------
Kind Regards,
Peter Smith.
Fujitsu Australia

#409Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Peter Smith (#399)
1 attachment(s)
#410Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Amit Kapila (#406)

On Sat, Jul 31, 2021 at 9:36 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Fri, Jul 30, 2021 at 9:32 AM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v100*

Few minor comments:
1.
CREATE SUBSCRIPTION regress_testsub CONNECTION
'dbname=regress_doesnotexist' PUBLICATION testpub WITH (connect =
false, two_phase = true);

\dRs+
+
--fail - alter of two_phase option not supported.
ALTER SUBSCRIPTION regress_testsub SET (two_phase = false);

Spurious line addition.

OK. Fixed in v101.

2.
+TransactionId
+logicalrep_read_stream_prepare(StringInfo in,
LogicalRepPreparedTxnData *prepare_data)
+{
+ logicalrep_read_prepare_common(in, "stream prepare", prepare_data);
+
+ return prepare_data->xid;
+}

There is no need to return the TransactionId separately. The caller
can get it from prepare_data, if required.

OK. Modified in v101

3.
extern void logicalrep_read_stream_abort(StringInfo in, TransactionId *xid,
TransactionId *subxid);

+extern void logicalrep_write_stream_prepare(StringInfo out,
ReorderBufferTXN *txn,
+ XLogRecPtr prepare_lsn);
+extern TransactionId logicalrep_read_stream_prepare(StringInfo in,
+ LogicalRepPreparedTxnData *prepare_data);
+
+

Keep the order of the declarations the same as their definitions in
proto.c, which means moving these after
logicalrep_read_rollback_prepared(), and be careful about the extra
blank lines.

OK. Reordered in v101.

------
Kind Regards,
Peter Smith.
Fujitsu Australia

#411Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: tanghy.fnst@fujitsu.com (#402)

On Fri, Jul 30, 2021 at 6:25 PM tanghy.fnst@fujitsu.com
<tanghy.fnst@fujitsu.com> wrote:

On Friday, July 30, 2021 12:02 PM Peter Smith <smithpb2250@gmail.com>wrote:

Please find attached the latest patch set v100*

v99-0002 --> v100-0001

Thanks for your patch. A few comments on the test file:

1. src/test/subscription/t/022_twophase_cascade.pl

1.1
I saw your test cases for "PREPARE / COMMIT PREPARED" and "PREPARE with a nested ROLLBACK TO SAVEPOINT", but didn't see cases for "PREPARE / ROLLBACK PREPARED". Is it needless or just missing?

Yes, that test used to exist but it was removed in response to a
previous review (see [1], comment #10; Amit said there were too many
tests).

1.2
+# check inserts are visible at subscriber(s).
+# All the streamed data (prior to the SAVEPOINT) should be rolled back.
+# (3, 'foobar') should be committed.

I think it should be (9999, 'foobar') here.

Good catch. Fixed in v101.

1.3
+$result = $node_B->safe_psql('postgres', "SELECT count(*) FROM test_tab where b = 'foobar';");
+is($result, qq(1), 'Rows committed are present on subscriber B');
+$result = $node_B->safe_psql('postgres', "SELECT count(*) FROM test_tab;");
+

It seems the test is not finished yet. We didn't check the value of 'result'. Besides, maybe we should also check node_C, right?

Oops. Thanks for finding this! Fixed in v101 by adding the missing tests.

1.4
+$node_B->append_conf('postgresql.conf',        qq(max_prepared_transactions = 10));
+$node_B->append_conf('postgresql.conf', qq(logical_decoding_work_mem = 64kB));

You see, the first line uses a TAB but the second line uses a space.
Also, we could use only one statement to append these two settings to run tests a bit faster. Thoughts?
Something like:

$node_B->append_conf(
'postgresql.conf', qq(
max_prepared_transactions = 10
logical_decoding_work_mem = 64kB
));

OK. In v101 I changed the config as you suggested for both the 022 and
023 TAP tests.

------
[1]: /messages/by-id/CAHut+Pts_bWx_RrXu+YwbiJva33nTROoQQP5H4pVrF+NcCMkRA@mail.gmail.com

Kind Regards,
Peter Smith.
Fujitsu Australia.

#412Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#400)

On Fri, Jul 30, 2021 at 3:18 PM vignesh C <vignesh21@gmail.com> wrote:

On Fri, Jul 30, 2021 at 9:32 AM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v100*

v99-0002 --> v100-0001

Differences:

* Rebased to HEAD @ today (needed because some recent commits [1][2] broke v99)

The patch applies neatly, tests passes and documentation looks good.
A Few minor comments.
1) This blank line is not required:
+-- two_phase and streaming are compatible.
+CREATE SUBSCRIPTION regress_testsub CONNECTION
'dbname=regress_doesnotexist' PUBLICATION testpub WITH (connect =
false, streaming = true, two_phase = true);
+

Fixed in v101.

2) A few points end with a punctuation mark and a few don't; we can
make them consistent:
+###############################
+# Test 2PC PREPARE / ROLLBACK PREPARED.
+# 1. Table is deleted back to 2 rows which are replicated on subscriber.
+# 2. Data is streamed using 2PC
+# 3. Do rollback prepared.
+#
+# Expect data rolls back leaving only the original 2 rows.
+###############################

Fixed in v101.

3) similarly here too:
+###############################
+# Do INSERT after the PREPARE but before ROLLBACK PREPARED.
+# 1. Table is deleted back to 2 rows which are replicated on subscriber.
+# 2. Data is streamed using 2PC.
+# 3. A single row INSERT is done which is after the PREPARE
+# 4. Then do a ROLLBACK PREPARED.
+#
+# Expect the 2PC data rolls back leaving only 3 rows on the subscriber.
+# (the original 2 + inserted 1)
+###############################

Fixed in v101.

------
Kind Regards,
Peter Smith.
Fujitsu Australia

#413Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Greg Nancarrow (#401)

On Fri, Jul 30, 2021 at 4:33 PM Greg Nancarrow <gregn4422@gmail.com> wrote:

On Fri, Jul 30, 2021 at 2:02 PM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v100*

v99-0002 --> v100-0001

A few minor comments:

(1) doc/src/sgml/protocol.sgml

In the following description, is the word "large" really needed? Also
"the message ... for a ... message" sounds a bit odd, as does
"two-phase prepare".

What about the following:

BEFORE:
+                Identifies the message as a two-phase prepare for a
large in-progress transaction message.
AFTER:
+                Identifies the message as a prepare for an
in-progress two-phase transaction.

Updated in v101.

The other nearby messages refer to a “streamed transaction” so I’ve
changed this to say “Identifies the message as a two-phase prepare
for a streamed transaction message.” (e.g. compare this text with the
existing similar text for ‘P’).

BTW, I agree with you that "the message ... for a ... message" seems
odd; it was written in this way only to be consistent with existing
documentation, which all uses the same odd phrasing.

(2) src/backend/replication/logical/worker.c

Similar format comment, but one uses a full-stop and the other
doesn't, looks a bit odd, since the lines are near each other.

* 1. Replay all the spooled operations - Similar code as for

* 2. Mark the transaction as prepared. - Similar code as for

Updated in v101 to make the comments consistent.

(3) src/test/subscription/t/023_twophase_stream.pl

Shouldn't the following comment mention, for example, "with streaming"
or something to that effect?

# logical replication of 2PC test

Fixed as suggested in v101.

------
Kind Regards,
Peter Smith.
Fujitsu Australia.

#414Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Peter Smith (#408)

On Sun, Aug 1, 2021 at 3:51 PM Peter Smith <smithpb2250@gmail.com> wrote:

On Sun, Aug 1, 2021 at 3:05 AM vignesh C <vignesh21@gmail.com> wrote:

On Sat, Jul 31, 2021 at 11:12 AM Ajin Cherian <itsajin@gmail.com> wrote:

On Sat, Jul 31, 2021 at 2:39 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

Here, the test is expecting 2 prepared transactions corresponding to
two subscriptions but it waits for just one subscription via
appname_copy. It should wait for the second subscription using
$appname as well.

What do you think?

I agree with this analysis. The test needs to wait for both
subscriptions to catch up.
Attached is a patch that addresses this issue.

The changes look good to me.

The patch to the test code posted by Ajin LGTM also.

Pushed.

--
With Regards,
Amit Kapila.

#415Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Peter Smith (#409)
1 attachment(s)

Please find attached the latest patch set v102*

Differences:

* Rebased to HEAD @ today.

* This is a documentation change only. A recent commit [1] has changed
the documentation style for the message formats slightly to annotate
the data types. For consistency, the same style change needs to be
adopted for the newly added message of this patch. This same change
also finally addresses some old review comments [2] from Vignesh.

----
[1]: https://github.com/postgres/postgres/commit/a5cb4f9829fbfd68655543d2d371a18a8eb43b84
[2]: /messages/by-id/CALDaNm3U4fGxTnQfaT1TqUkgX5c0CSDvmW12Bfksis8zB_XinA@mail.gmail.com

Kind Regards,
Peter Smith.
Fujitsu Australia

Attachments:

v102-0001-Add-prepare-API-support-for-streaming-transacti.patch (application/octet-stream)
#416Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#316)

On Mon, May 10, 2021 at 1:31 PM vignesh C <vignesh21@gmail.com> wrote:

...

2) I felt we can change lsn data type from Int64 to XLogRecPtr
+<varlistentry>
+<term>Int64</term>
+<listitem><para>
+                The LSN of the prepare.
+</para></listitem>
+</varlistentry>
+
+<varlistentry>
+<term>Int64</term>
+<listitem><para>
+                The end LSN of the transaction.
+</para></listitem>
+</varlistentry>
3) I felt we can change lsn data type from Int32 to TransactionId
+<varlistentry>
+<term>Int32</term>
+<listitem><para>
+                Xid of the subtransaction (will be same as xid of the
transaction for top-level
+                transactions).
+</para></listitem>
+</varlistentry>

...

Similar problems related to comments 2 and 3 are being discussed at
[1], we can change it accordingly based on the conclusion in the other
thread.
[1] - /messages/by-id/CAHut+Ps2JsSd_OpBR9kXt1Rt4bwyXAjh875gUpFw6T210ttO7Q@mail.gmail.com

Earlier today the other documentation patch mentioned above was
committed by Tom Lane.

The 2PC patch v102 now fixes your review comments 2 and 3 by matching
the same datatype annotation style of that commit.

------
Kind Regards,
Peter Smith
Fujitsu Australia

#417Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Peter Smith (#415)
1 attachment(s)

On Tue, Aug 3, 2021 at 6:17 AM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v102*

I have made minor modifications in the comments and docs, please see
attached. Can you please check whether the names of contributors in
the commit message are correct or do we need to change it?

--
With Regards,
Amit Kapila.

Attachments:

v103-0001-Add-prepare-API-support-for-streaming-transacti.patch (application/octet-stream)
#418Peter Smith
Peter Smith
smithpb2250@gmail.com
In reply to: Amit Kapila (#417)

On Tue, Aug 3, 2021 at 5:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Aug 3, 2021 at 6:17 AM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v102*

I have made minor modifications in the comments and docs, please see
attached. Can you please check whether the names of contributors in
the commit message are correct or do we need to change it?

I checked the differences between v102 and v103 and have no review
comments about the latest changes.

The commit message looks ok.

I applied the v103 to the current HEAD; no errors.
The build is ok.
The make check is ok.
The TAP subscription tests are ok.

I also rebuilt the PG docs and verified rendering of the updated pages looks ok.

The patch v103 LGTM.

------
Kind Regards,
Peter Smith.
Fujitsu Australia

#419vignesh C
vignesh C
vignesh21@gmail.com
In reply to: Amit Kapila (#417)

On Tue, Aug 3, 2021 at 12:32 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Aug 3, 2021 at 6:17 AM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v102*

I have made minor modifications in the comments and docs, please see
attached. Can you please check whether the names of contributors in
the commit message are correct or do we need to change it?

The patch applies neatly, the tests pass, and the documentation built
with the updates provided. I could not find any issues. The patch
looks good to me.

Regards,
Vignesh

#420tanghy.fnst@fujitsu.com
tanghy.fnst@fujitsu.com
tanghy.fnst@fujitsu.com
In reply to: vignesh C (#419)
RE: [HACKERS] logical decoding of two-phase transactions

On Tuesday, August 3, 2021 6:03 PM vignesh C <vignesh21@gmail.com> wrote:

On Tue, Aug 3, 2021 at 12:32 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Aug 3, 2021 at 6:17 AM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v102*

I have made minor modifications in the comments and docs, please see
attached. Can you please check whether the names of contributors in
the commit message are correct or do we need to change it?

The patch applies neatly, the tests pass, and the documentation built
with the updates provided. I could not find any issues. The patch
looks good to me.

I did some stress tests on the patch and found no issues.
It also works well when using synchronous replication.
So the patch LGTM.

Regards
Tang

#421Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: tanghy.fnst@fujitsu.com (#420)

On Wed, Aug 4, 2021 at 6:51 AM tanghy.fnst@fujitsu.com
<tanghy.fnst@fujitsu.com> wrote:

On Tuesday, August 3, 2021 6:03 PM vignesh C <vignesh21@gmail.com> wrote:

On Tue, Aug 3, 2021 at 12:32 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Aug 3, 2021 at 6:17 AM Peter Smith <smithpb2250@gmail.com> wrote:

Please find attached the latest patch set v102*

I have made minor modifications in the comments and docs, please see
attached. Can you please check whether the names of contributors in
the commit message are correct or do we need to change it?

The patch applies neatly, the tests pass, and the documentation built
with the updates provided. I could not find any issues. The patch
looks good to me.

I did some stress tests on the patch and found no issues.
It also works well when using synchronous replication.
So the patch LGTM.

I have pushed this last patch in the series.

--
With Regards,
Amit Kapila.

#422Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#421)

On Wed, Aug 4, 2021 at 4:14 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

I have pushed this last patch in the series.

I have closed this CF entry. Thanks to everyone involved in this work!

--
With Regards,
Amit Kapila.

#423Masahiko Sawada
Masahiko Sawada
sawada.mshk@gmail.com
In reply to: Amit Kapila (#422)

Hi,

On Mon, Aug 9, 2021 at 12:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

On Wed, Aug 4, 2021 at 4:14 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

I have pushed this last patch in the series.

I have closed this CF entry. Thanks to everyone involved in this work!

I have a question about the two_phase column of the pg_replication_slots
view: with this feature, pg_replication_slots has a new column, two_phase:

        View "pg_catalog.pg_replication_slots"
       Column        |  Type   | Collation | Nullable | Default
---------------------+---------+-----------+----------+---------
 slot_name           | name    |           |          |
 plugin              | name    |           |          |
 slot_type           | text    |           |          |
 datoid              | oid     |           |          |
 database            | name    |           |          |
 temporary           | boolean |           |          |
 active              | boolean |           |          |
 active_pid          | integer |           |          |
 xmin                | xid     |           |          |
 catalog_xmin        | xid     |           |          |
 restart_lsn         | pg_lsn  |           |          |
 confirmed_flush_lsn | pg_lsn  |           |          |
 wal_status          | text    |           |          |
 safe_wal_size       | bigint  |           |          |
 two_phase           | boolean |           |          |
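For reference, the flag can be inspected directly from the view; a
minimal sketch, assuming PostgreSQL 14 or later (the slot and plugin
names are just the ones used elsewhere in this thread):

```sql
-- Create a logical slot with two-phase decoding enabled; the fourth
-- argument of pg_create_logical_replication_slot is the two-phase flag.
SELECT pg_create_logical_replication_slot('regression_slot', 'test_decoding',
                                          false, true);

-- The new column reports whether the slot decodes prepared transactions.
SELECT slot_name, slot_type, two_phase FROM pg_replication_slots;
```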

According to the doc, the two_phase field has:

True if the slot is enabled for decoding prepared transactions. Always
false for physical slots.

It seems a bit unnatural to me that replication slots have such a
property, since replication slots have been used to protect WAL and
tuples that are required for logical decoding, physical replication,
backup, etc. from removal. Also, it seems that even if a replication
slot is created with two_phase = off, it's overwritten to on if the
plugin enables the two-phase option. Is there any reason why we can
turn this value on and off on the replication slot side, and is there
any use case where the replication slot's two_phase is false and the
plugin's two-phase option is on, or vice versa? I think that we could
have replication slots always carry a two_phase_at value and remove
the two_phase field from the view.

Regards,

--
Masahiko Sawada
EDB: https://www.enterprisedb.com/

#424Amit Kapila
Amit Kapila
amit.kapila16@gmail.com
In reply to: Masahiko Sawada (#423)

On Tue, Jan 4, 2022 at 9:00 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

According to the doc, the two_phase field has:

True if the slot is enabled for decoding prepared transactions. Always
false for physical slots.

It seems a bit unnatural to me that replication slots have such a
property, since replication slots have been used to protect WAL and
tuples that are required for logical decoding, physical replication,
backup, etc. from removal. Also, it seems that even if a replication
slot is created with two_phase = off, it's overwritten to on if the
plugin enables the two-phase option. Is there any reason why we can
turn this value on and off on the replication slot side, and is there
any use case where the replication slot's two_phase is false and the
plugin's two-phase option is on, or vice versa?

We enable two_phase only when we start streaming from the
subscriber side. This is required because we can't enable it till the
initial sync is complete; otherwise, it could lead to loss of data.
See comments atop worker.c (description under the title: TWO_PHASE
TRANSACTIONS).

I think that we could have replication slots always carry a
two_phase_at value and remove the two_phase field from the view.

I am not sure how that will work because we can allow streaming of
prepared transactions when the same is enabled at the CREATE
SUBSCRIPTION time, the default for which is false.
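As an illustration (using the same subscription and publication names
that appear earlier in this thread), the option is fixed when the
subscription is created:

```sql
-- two_phase defaults to false and is chosen at CREATE SUBSCRIPTION time;
-- ALTER SUBSCRIPTION ... SET (two_phase = ...) is not supported.
CREATE SUBSCRIPTION regress_testsub
    CONNECTION 'dbname=regress_doesnotexist'
    PUBLICATION testpub
    WITH (connect = false, streaming = true, two_phase = true);
```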

--
With Regards,
Amit Kapila.