logical decoding and replication of sequences
Hi,
One of the existing limitations of logical decoding / replication is
that it does not care about sequences. The annoying consequence is that
after a failover to a logical replica, all the table data may be
replicated but the sequences are still at their initial values, requiring
some custom solution that moves the sequences forward far enough to
prevent duplicate values.
There have been attempts to address this in the past, most recently [1],
but none of them got in due to various issues.
This is an attempt, based on [1] (but with many significant parts added
or reworked), aiming to deal with this. The primary purpose of sharing
it is getting feedback and opinions on the design decisions. It's still
a WIP - it works fine AFAICS, but some of the bits may be a bit hackish.
The overall goal is to have the same sequence data on the primary and
logical replica, or something sufficiently close to that, so that the
replica after a failover does not generate duplicate values.
This patch does a couple basic things:
1) extends the logical decoding to handle sequences. It adds a new
callback, similar to what we have for messages (see the sketch after
this list). There's a bit of complexity with transactional and
non-transactional behavior, more about that later
2) extends test_decoding to support this new callback, printing the
sequence increments (the decoded WAL records)
3) extends built-in replication to support sequences, so publications
may contain both tables and sequences, sequence data gets synced when
creating subscriptions, etc.
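For illustration, here is the callback as implemented for test_decoding
in the attached patch, lightly adapted (INT64_FORMAT is used here
instead of the patch's %zu):

    static void
    pg_decode_sequence(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
                       XLogRecPtr sequence_lsn, Relation rel,
                       bool transactional, bool created,
                       int64 last_value, int64 log_cnt, bool is_called)
    {
        TestDecodingData *data = ctx->output_plugin_private;
        char       *nspname = get_namespace_name(RelationGetNamespace(rel));

        /* the plugin may choose to ignore sequences entirely */
        if (data->skip_sequences)
            return;

        OutputPluginPrepareWrite(ctx, true);
        appendStringInfo(ctx->out,
                         "sequence: %s.%s transactional: %d created: %d "
                         "last_value: " INT64_FORMAT " log_cnt: " INT64_FORMAT " is_called: %d",
                         nspname, RelationGetRelationName(rel),
                         transactional, created, last_value, log_cnt, is_called);
        OutputPluginWrite(ctx, true);
    }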
transactional vs. non-transactional
-----------------------------------
The first part (extending logical decoding) is simple in principle. We
simply decode the sequence updates, but then comes a challenge - should
we treat them transactionally and stash them in the reorder buffer, or
just pass them to the output plugin right away?
For messages, this can be specified as a flag when adding the message,
so the user can decide depending on the message purpose. For sequences,
all we do is nextval(), and it depends on the context in which it's used -
we can't just pick one of those approaches.
Consider this, for example:
CREATE SEQUENCE s;
BEGIN;
SELECT nextval('s') FROM generate_series(1,1000) s(i);
ROLLBACK;
If we handle this "transactionally", we'd stash the "nextval" increment
into the transaction, and then discard it due to the rollback, so the
output plugin (and replica) would never get it. So this is an argument
for non-transactional behavior.
On the other hand, consider this:
CREATE SEQUENCE s;
BEGIN;
ALTER SEQUENCE s RESTART WITH 2000;
SELECT nextval('s') FROM generate_series(1,1000) s(i);
ROLLBACK;
In this case the ALTER creates a new relfilenode, and the ROLLBACK
discards it, including the effects of the nextval calls. So here we should
treat it transactionally, stash the increment(s) in the transaction and
just discard it all on rollback.
A somewhat similar example is this:
BEGIN;
CREATE SEQUENCE s;
SELECT nextval('s') FROM generate_series(1,1000) s(i);
COMMIT;
Again - the decoded nextval needs to be handled transactionally, because
otherwise it's going to be very difficult for custom plugins to combine
this with DDL replication.
So the patch does a fairly simple thing:
1) By default, sequences are treated non-transactionally, i.e. sent to
the output plugin right away.
2) We track sequences created in running (sub)transactions, and those
are handled transactionally. This includes the ALTER SEQUENCE cases,
which create a new relfilenode that is then used as the identifier.
It's a bit more complex, because of cases like this:
BEGIN;
CREATE SEQUENCE s;
SAVEPOINT a;
SELECT nextval('s') FROM generate_series(1,1000) s(i);
ROLLBACK TO a;
COMMIT;
because we must not discard the nextval changes - this is handled by
always stashing the nextval changes in the (sub)xact where the sequence
relfilenode was created.
The tracking is a bit cumbersome - there's a hash table mapping each
relfilenode to the XID of the (sub)transaction that created it. AFAIK
that works, but it might be an issue with many sequences created in
running transactions. Not sure.
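A minimal sketch of what that tracking could look like, with
hypothetical names (the actual patch wires this into the reorder
buffer, so the real code is not this simple):

    /* one entry per relfilenode created by a still-running (sub)transaction */
    typedef struct SequenceCreationEnt
    {
        RelFileNode   rnode;    /* hash key: relfilenode of the sequence */
        TransactionId xid;      /* (sub)transaction that created it */
    } SequenceCreationEnt;

    /*
     * A decoded sequence change is transactional if (and only if) the
     * relfilenode it touches was created by a transaction that is still
     * in progress, i.e. if it is present in the hash table.
     */
    static bool
    sequence_change_is_transactional(HTAB *created_relfilenodes,
                                     RelFileNode rnode)
    {
        bool        found;

        (void) hash_search(created_relfilenodes, &rnode, HASH_FIND, &found);

        return found;
    }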
detecting sequence creation
---------------------------
Detection that a sequence (or rather its relfilenode) was created is
done by adding a "created" flag to xl_seq_rec, and setting it to
"true" in the first WAL record after the creation. There might be some
other way, but this seemed simple enough.
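The WAL record header then ends up looking roughly like this (a sketch:
the existing struct plus the one new flag; the patch sets it only from
the code paths that create a new relfilenode):

    typedef struct xl_seq_rec
    {
        RelFileNode node;
        bool        created;    /* first WAL record after creating
                                 * the relfilenode? */

        /* SEQUENCE TUPLE DATA FOLLOWS AT THE END */
    } xl_seq_rec;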
applying the sequence (ResetSequence2)
--------------------------------------
The decoding pretty much just extracts last_value, log_cnt and is_called
from the sequence, and passes them to the output plugin. On the replica
we extract those from the message and write them to the local sequence
using a new ResetSequence2 function.
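On the subscriber side this boils down to a single call; a minimal
sketch with a made-up wrapper name (only ResetSequence2 comes from the
attached patch):

    /* apply one decoded sequence change on the subscriber (hypothetical wrapper) */
    static void
    apply_sequence_change(Oid seq_relid, int64 last_value, int64 log_cnt,
                          bool is_called)
    {
        /*
         * Overwrite the local sequence state with the decoded values.
         * ResetSequence2 creates a new relfilenode, so the change behaves
         * transactionally, much like ALTER SEQUENCE ... RESTART.
         */
        ResetSequence2(seq_relid, last_value, log_cnt, is_called);
    }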
It's possible we don't really need log_cnt and is_called. After all,
log_cnt is zero most of the time anyway, and the worst thing that could
happen if we ignore it is we skip a couple values (which seems fine).
syncing sequences in a subscription
-----------------------------------
After creating a subscription, the sequences get synchronized just like
tables. This part is a bit hacked together, and there's definitely room
for improvement - e.g. a new bgworker is started for each sequence, as
we simply treat both tables and sequences as "relations". But all we need
to do for a sequence is copy its (last_value, log_cnt, is_called) and
call ResetSequence2, so maybe we could sync all sequences in a single
worker, or something like that.
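A rough sketch of what that per-sequence sync step could be reduced to
(fetch_remote_sequence_state is a made-up helper; ResetSequence2 is
from the patch):

    /* copy the current state of one sequence from the publisher */
    static void
    copy_sequence(WalReceiverConn *conn, Oid local_relid,
                  const char *nspname, const char *relname)
    {
        int64       last_value;
        int64       log_cnt;
        bool        is_called;

        /*
         * Hypothetical helper: runs something like
         *   SELECT last_value, log_cnt, is_called FROM <nspname>.<relname>
         * on the publisher and parses the result.
         */
        fetch_remote_sequence_state(conn, nspname, relname,
                                    &last_value, &log_cnt, &is_called);

        /* write the fetched state into the local sequence */
        ResetSequence2(local_relid, last_value, log_cnt, is_called);
    }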
new "sequence" publication action
---------------------------------
The publications now have a new "sequence" publication action, which is
enabled by default. This determines whether the publication decodes
sequences at all.
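Internally it's just one more flag in PublicationActions; restructured
into a small standalone helper for illustration, the option handling
added by the patch amounts to this (a sketch, not the literal patch
code):

    /* map one value from the "publish" option list to the action flags */
    static void
    parse_publish_option(const char *publish_opt, PublicationActions *pubactions)
    {
        if (strcmp(publish_opt, "insert") == 0)
            pubactions->pubinsert = true;
        else if (strcmp(publish_opt, "update") == 0)
            pubactions->pubupdate = true;
        else if (strcmp(publish_opt, "delete") == 0)
            pubactions->pubdelete = true;
        else if (strcmp(publish_opt, "truncate") == 0)
            pubactions->pubtruncate = true;
        else if (strcmp(publish_opt, "sequence") == 0)
            pubactions->pubsequence = true;     /* the new action */
        else
            elog(ERROR, "unrecognized \"publish\" value: \"%s\"", publish_opt);
    }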
FOR ALL SEQUENCES
-----------------
It should be possible to create FOR ALL SEQUENCES publications, just
like we have FOR ALL TABLES. But this produces shift/reduce conflicts
in the grammar, and I didn't bother dealing with that. So for now it's
required to do ALTER PUBLICATION ... [ADD | DROP] SEQUENCE ...
no streaming support yet
------------------------
There's no support for streaming of in-progress transactions yet, but
it should be trivial to add.
GetCurrentTransactionId() in nextval
------------------------------------
There's a bit of annoying behavior in nextval() - if you do this:
BEGIN;
CREATE SEQUENCE s;
SAVEPOINT a;
SELECT nextval('s') FROM generate_series(1,100) s(i);
COMMIT;
then the WAL record for nextval (right after the savepoint) will have
XID 0 (easy to see in pg_waldump). That's kinda strange, and it causes
problems in DecodeSequence() when calling

    SnapBuildProcessChange(builder, xid, buf->origptr)

for transactional changes, because that expects a valid XID. Fixing
this required adding GetCurrentTransactionId() to nextval() and two
other functions, which so far were only doing

    if (RelationNeedsWAL(seqrel))
        GetTopTransactionId();

I'm not sure if this has some particularly bad consequences.
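For completeness, this is the change in nextval_internal() from the
attached patch (comment lightly trimmed; the other two places are
analogous):

    if (logit && RelationNeedsWAL(seqrel))
    {
        GetTopTransactionId();

        /*
         * Ensure we have a proper XID, which will be included in the XLOG
         * record by XLogRecordAssemble. Otherwise the first nextval() in
         * a subxact (without any preceding changes) would get XID 0, and
         * it'd be impossible to decide which top xact it belongs to. It'd
         * also trigger an assert in DecodeSequence.
         */
        GetCurrentTransactionId();
    }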
regards
[1] /messages/by-id/1710ed7e13b.cd7177461430746.3372264562543607781@highgo.ca
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Attachments:
sequence-decoding-20210608.patch (text/x-patch)
diff --git a/contrib/test_decoding/Makefile b/contrib/test_decoding/Makefile
index 9a31e0b879..56ddc3abae 100644
--- a/contrib/test_decoding/Makefile
+++ b/contrib/test_decoding/Makefile
@@ -5,7 +5,8 @@ PGFILEDESC = "test_decoding - example of a logical decoding output plugin"
REGRESS = ddl xact rewrite toast permissions decoding_in_xact \
decoding_into_rel binary prepared replorigin time messages \
- spill slot truncate stream stats twophase twophase_stream
+ spill slot truncate stream stats twophase twophase_stream \
+ sequence
ISOLATION = mxact delayed_startup ondisk_startup concurrent_ddl_dml \
oldest_xmin snapshot_transfer subxact_without_top concurrent_stream \
twophase_snapshot
diff --git a/contrib/test_decoding/expected/sequence.out b/contrib/test_decoding/expected/sequence.out
new file mode 100644
index 0000000000..07f3e3e92c
--- /dev/null
+++ b/contrib/test_decoding/expected/sequence.out
@@ -0,0 +1,327 @@
+-- predictability
+SET synchronous_commit = on;
+SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'test_decoding');
+ ?column?
+----------
+ init
+(1 row)
+
+CREATE SEQUENCE test_sequence;
+-- test the sequence changes by several nextval() calls
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 1
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 2
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 4
+(1 row)
+
+-- test the sequence changes by several ALTER commands
+ALTER SEQUENCE test_sequence INCREMENT BY 10;
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 14
+(1 row)
+
+ALTER SEQUENCE test_sequence START WITH 3000;
+ALTER SEQUENCE test_sequence MAXVALUE 10000;
+ALTER SEQUENCE test_sequence RESTART WITH 4000;
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 4000
+(1 row)
+
+-- test the sequence changes by several setval() calls
+SELECT setval('test_sequence', 3500);
+ setval
+--------
+ 3500
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3510
+(1 row)
+
+SELECT setval('test_sequence', 3500, true);
+ setval
+--------
+ 3500
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3510
+(1 row)
+
+SELECT setval('test_sequence', 3500, false);
+ setval
+--------
+ 3500
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3500
+(1 row)
+
+-- show results and drop sequence
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+ data
+------------------------------------------------------------------------------------------------------
+ BEGIN
+ sequence: public.test_sequence transactional: 1 created: 1 last_value: 1, log_cnt: 0 is_called: 0
+ COMMIT
+ sequence: public.test_sequence transactional: 0 created: 0 last_value: 33, log_cnt: 0 is_called: 1
+ BEGIN
+ sequence: public.test_sequence transactional: 1 created: 1 last_value: 4, log_cnt: 0 is_called: 1
+ COMMIT
+ sequence: public.test_sequence transactional: 0 created: 0 last_value: 334, log_cnt: 0 is_called: 1
+ BEGIN
+ sequence: public.test_sequence transactional: 1 created: 1 last_value: 14, log_cnt: 32 is_called: 1
+ COMMIT
+ BEGIN
+ sequence: public.test_sequence transactional: 1 created: 1 last_value: 14, log_cnt: 0 is_called: 1
+ COMMIT
+ BEGIN
+ sequence: public.test_sequence transactional: 1 created: 1 last_value: 4000, log_cnt: 0 is_called: 0
+ COMMIT
+ sequence: public.test_sequence transactional: 0 created: 0 last_value: 4320, log_cnt: 0 is_called: 1
+ sequence: public.test_sequence transactional: 0 created: 0 last_value: 3500, log_cnt: 0 is_called: 1
+ sequence: public.test_sequence transactional: 0 created: 0 last_value: 3830, log_cnt: 0 is_called: 1
+ sequence: public.test_sequence transactional: 0 created: 0 last_value: 3500, log_cnt: 0 is_called: 1
+ sequence: public.test_sequence transactional: 0 created: 0 last_value: 3830, log_cnt: 0 is_called: 1
+ sequence: public.test_sequence transactional: 0 created: 0 last_value: 3500, log_cnt: 0 is_called: 0
+ sequence: public.test_sequence transactional: 0 created: 0 last_value: 3820, log_cnt: 0 is_called: 1
+(24 rows)
+
+DROP SEQUENCE test_sequence;
+-- rollback on sequence creation and update
+BEGIN;
+CREATE SEQUENCE test_sequence;
+CREATE TABLE test_table (a INT);
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 1
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 2
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3
+(1 row)
+
+SELECT setval('test_sequence', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3001
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3002
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3003
+(1 row)
+
+ALTER SEQUENCE test_sequence RESTART WITH 6000;
+INSERT INTO test_table VALUES( (SELECT nextval('test_sequence')) );
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 6001
+(1 row)
+
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '0');
+ data
+------
+(0 rows)
+
+-- rollback on table creation with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '0');
+ data
+------
+(0 rows)
+
+-- rollback on table with serial column
+CREATE TABLE test_table (a SERIAL, b INT);
+BEGIN;
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+-- check table and sequence values after rollback
+SELECT * from test_table_a_seq;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+ 3003 | 30 | t
+(1 row)
+
+SELECT nextval('test_table_a_seq');
+ nextval
+---------
+ 3004
+(1 row)
+
+DROP TABLE test_table;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+ data
+---------------------------------------------------------------------------------------------------------
+ BEGIN
+ sequence: public.test_table_a_seq transactional: 1 created: 1 last_value: 1, log_cnt: 0 is_called: 0
+ COMMIT
+ sequence: public.test_table_a_seq transactional: 0 created: 0 last_value: 33, log_cnt: 0 is_called: 1
+ sequence: public.test_table_a_seq transactional: 0 created: 0 last_value: 3000, log_cnt: 0 is_called: 1
+ sequence: public.test_table_a_seq transactional: 0 created: 0 last_value: 3033, log_cnt: 0 is_called: 1
+ BEGIN
+ COMMIT
+(8 rows)
+
+-- savepoint test on table with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+SAVEPOINT a;
+INSERT INTO test_table (b) VALUES (300);
+ROLLBACK TO SAVEPOINT a;
+DROP TABLE test_table;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+ data
+-------------------------------------------------------------------------------------------------------
+ BEGIN
+ sequence: public.test_table_a_seq transactional: 1 created: 1 last_value: 1, log_cnt: 0 is_called: 0
+ sequence: public.test_table_a_seq transactional: 1 created: 0 last_value: 33, log_cnt: 0 is_called: 1
+ table public.test_table: INSERT: a[integer]:1 b[integer]:100
+ table public.test_table: INSERT: a[integer]:2 b[integer]:200
+ COMMIT
+(6 rows)
+
+-- savepoint test on table with serial column
+BEGIN;
+CREATE SEQUENCE test_sequence;
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 1
+(1 row)
+
+SELECT setval('test_sequence', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3001
+(1 row)
+
+SAVEPOINT a;
+ALTER SEQUENCE test_sequence START WITH 7000;
+SELECT setval('test_sequence', 5000);
+ setval
+--------
+ 5000
+(1 row)
+
+ROLLBACK TO SAVEPOINT a;
+SELECT * FROM test_sequence;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+ 3001 | 32 | t
+(1 row)
+
+DROP SEQUENCE test_sequence;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+ data
+------------------------------------------------------------------------------------------------------
+ BEGIN
+ sequence: public.test_sequence transactional: 1 created: 1 last_value: 1, log_cnt: 0 is_called: 0
+ sequence: public.test_sequence transactional: 1 created: 0 last_value: 33, log_cnt: 0 is_called: 1
+ sequence: public.test_sequence transactional: 1 created: 0 last_value: 3000, log_cnt: 0 is_called: 1
+ sequence: public.test_sequence transactional: 1 created: 0 last_value: 3033, log_cnt: 0 is_called: 1
+ COMMIT
+(6 rows)
+
+SELECT pg_drop_replication_slot('regression_slot');
+ pg_drop_replication_slot
+--------------------------
+
+(1 row)
+
diff --git a/contrib/test_decoding/sql/sequence.sql b/contrib/test_decoding/sql/sequence.sql
new file mode 100644
index 0000000000..d8a34738f3
--- /dev/null
+++ b/contrib/test_decoding/sql/sequence.sql
@@ -0,0 +1,119 @@
+-- predictability
+SET synchronous_commit = on;
+SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'test_decoding');
+
+CREATE SEQUENCE test_sequence;
+
+-- test the sequence changes by several nextval() calls
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+
+-- test the sequence changes by several ALTER commands
+ALTER SEQUENCE test_sequence INCREMENT BY 10;
+SELECT nextval('test_sequence');
+
+ALTER SEQUENCE test_sequence START WITH 3000;
+ALTER SEQUENCE test_sequence MAXVALUE 10000;
+ALTER SEQUENCE test_sequence RESTART WITH 4000;
+SELECT nextval('test_sequence');
+
+-- test the sequence changes by several setval() calls
+SELECT setval('test_sequence', 3500);
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3500, true);
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3500, false);
+SELECT nextval('test_sequence');
+
+-- show results and drop sequence
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+DROP SEQUENCE test_sequence;
+
+-- rollback on sequence creation and update
+BEGIN;
+CREATE SEQUENCE test_sequence;
+CREATE TABLE test_table (a INT);
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3000);
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+ALTER SEQUENCE test_sequence RESTART WITH 6000;
+INSERT INTO test_table VALUES( (SELECT nextval('test_sequence')) );
+SELECT nextval('test_sequence');
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '0');
+
+-- rollback on table creation with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '0');
+
+-- rollback on table with serial column
+CREATE TABLE test_table (a SERIAL, b INT);
+
+BEGIN;
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+
+-- check table and sequence values after rollback
+SELECT * from test_table_a_seq;
+SELECT nextval('test_table_a_seq');
+
+DROP TABLE test_table;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+
+-- savepoint test on table with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+SAVEPOINT a;
+INSERT INTO test_table (b) VALUES (300);
+ROLLBACK TO SAVEPOINT a;
+DROP TABLE test_table;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+
+-- savepoint test on table with serial column
+BEGIN;
+CREATE SEQUENCE test_sequence;
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3000);
+SELECT nextval('test_sequence');
+SAVEPOINT a;
+ALTER SEQUENCE test_sequence START WITH 7000;
+SELECT setval('test_sequence', 5000);
+ROLLBACK TO SAVEPOINT a;
+SELECT * FROM test_sequence;
+DROP SEQUENCE test_sequence;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+
+SELECT pg_drop_replication_slot('regression_slot');
diff --git a/contrib/test_decoding/test_decoding.c b/contrib/test_decoding/test_decoding.c
index de1b692658..162e3c18f0 100644
--- a/contrib/test_decoding/test_decoding.c
+++ b/contrib/test_decoding/test_decoding.c
@@ -35,6 +35,7 @@ typedef struct
bool include_timestamp;
bool skip_empty_xacts;
bool only_local;
+ bool skip_sequences;
} TestDecodingData;
/*
@@ -76,6 +77,10 @@ static void pg_decode_message(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn, XLogRecPtr message_lsn,
bool transactional, const char *prefix,
Size sz, const char *message);
+static void pg_decode_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation rel, bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called);
static bool pg_decode_filter_prepare(LogicalDecodingContext *ctx,
TransactionId xid,
const char *gid);
@@ -141,6 +146,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->filter_by_origin_cb = pg_decode_filter;
cb->shutdown_cb = pg_decode_shutdown;
cb->message_cb = pg_decode_message;
+ cb->sequence_cb = pg_decode_sequence;
cb->filter_prepare_cb = pg_decode_filter_prepare;
cb->begin_prepare_cb = pg_decode_begin_prepare_txn;
cb->prepare_cb = pg_decode_prepare_txn;
@@ -175,6 +181,9 @@ pg_decode_startup(LogicalDecodingContext *ctx, OutputPluginOptions *opt,
data->skip_empty_xacts = false;
data->only_local = false;
+ /* skip sequences by default for backwards compatibility */
+ data->skip_sequences = true;
+
ctx->output_plugin_private = data;
opt->output_type = OUTPUT_PLUGIN_TEXTUAL_OUTPUT;
@@ -265,6 +274,17 @@ pg_decode_startup(LogicalDecodingContext *ctx, OutputPluginOptions *opt,
errmsg("could not parse value \"%s\" for parameter \"%s\"",
strVal(elem->arg), elem->defname)));
}
+ else if (strcmp(elem->defname, "skip-sequences") == 0)
+ {
+
+ if (elem->arg == NULL)
+ continue; /* true by default */
+ else if (!parse_bool(strVal(elem->arg), &data->skip_sequences))
+ ereport(ERROR,
+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ errmsg("could not parse value \"%s\" for parameter \"%s\"",
+ strVal(elem->arg), elem->defname)));
+ }
else
{
ereport(ERROR,
@@ -744,6 +764,26 @@ pg_decode_message(LogicalDecodingContext *ctx,
OutputPluginWrite(ctx, true);
}
+static void
+pg_decode_sequence(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ TestDecodingData *data = ctx->output_plugin_private;
+ char *nspname = get_namespace_name(RelationGetNamespace(rel));
+
+ /* return if requested to skip_sequences */
+ if (data->skip_sequences)
+ return;
+
+ OutputPluginPrepareWrite(ctx, true);
+ appendStringInfo(ctx->out, "sequence: %s.%s transactional: %d created: %d last_value: %zu, log_cnt: %zu is_called: %d",
+ nspname, RelationGetRelationName(rel),
+ transactional, created, last_value, log_cnt, is_called);
+ OutputPluginWrite(ctx, true);
+}
+
static void
pg_decode_stream_start(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn)
diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 16493209c6..ff8a6f102c 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -9429,6 +9429,11 @@ SCRAM-SHA-256$<replaceable><iteration count></replaceable>:<replaceable>&l
<entry>prepared transactions</entry>
</row>
+ <row>
+ <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+ <entry>publications and their associated sequences</entry>
+ </row>
+
<row>
<entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
<entry>publications and their associated tables</entry>
@@ -11263,6 +11268,72 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
</sect1>
+ <sect1 id="view-pg-publication-sequences">
+ <title><structname>pg_publication_sequences</structname></title>
+
+ <indexterm zone="view-pg-publication-sequences">
+ <primary>pg_publication_sequences</primary>
+ </indexterm>
+
+ <para>
+ The view <structname>pg_publication_sequences</structname> provides
+ information about the mapping between publications and the sequences they
+ contain. Unlike the underlying catalog
+ <link linkend="catalog-pg-publication-rel"><structname>pg_publication_rel</structname></link>,
+ this view expands
+ publications defined as <literal>FOR ALL SEQUENCES</literal>, so for such
+ publications there will be a row for each eligible sequence.
+ </para>
+
+ <table>
+ <title><structname>pg_publication_sequences</structname> Columns</title>
+ <tgroup cols="1">
+ <thead>
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ Column Type
+ </para>
+ <para>
+ Description
+ </para></entry>
+ </row>
+ </thead>
+
+ <tbody>
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>pubname</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+ </para>
+ <para>
+ Name of publication
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>schemaname</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+ </para>
+ <para>
+ Name of schema containing sequence
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>sequencename</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+ </para>
+ <para>
+ Name of sequence
+ </para></entry>
+ </row>
+ </tbody>
+ </tgroup>
+ </table>
+ </sect1>
+
<sect1 id="view-pg-publication-tables">
<title><structname>pg_publication_tables</structname></title>
diff --git a/doc/src/sgml/ref/alter_publication.sgml b/doc/src/sgml/ref/alter_publication.sgml
index faa114b2c6..c68a1573de 100644
--- a/doc/src/sgml/ref/alter_publication.sgml
+++ b/doc/src/sgml/ref/alter_publication.sgml
@@ -24,6 +24,9 @@ PostgreSQL documentation
ALTER PUBLICATION <replaceable class="parameter">name</replaceable> ADD TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [, ...]
ALTER PUBLICATION <replaceable class="parameter">name</replaceable> SET TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [, ...]
ALTER PUBLICATION <replaceable class="parameter">name</replaceable> DROP TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [, ...]
+ALTER PUBLICATION <replaceable class="parameter">name</replaceable> ADD SEQUENCE <replaceable class="parameter">sequence_name</replaceable> [ * ] [, ...]
+ALTER PUBLICATION <replaceable class="parameter">name</replaceable> SET SEQUENCE <replaceable class="parameter">sequence_name</replaceable> [ * ] [, ...]
+ALTER PUBLICATION <replaceable class="parameter">name</replaceable> DROP SEQUENCE <replaceable class="parameter">sequence_name</replaceable> [ * ] [, ...]
ALTER PUBLICATION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
ALTER PUBLICATION <replaceable class="parameter">name</replaceable> OWNER TO { <replaceable>new_owner</replaceable> | CURRENT_ROLE | CURRENT_USER | SESSION_USER }
ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <replaceable>new_name</replaceable>
@@ -50,7 +53,18 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
</para>
<para>
- The fourth variant of this command listed in the synopsis can change
+ The next three variants change which sequences are part of the publication.
+ The <literal>SET SEQUENCE</literal> clause will replace the list of sequences
+ in the publication with the specified one. The <literal>ADD SEQUENCE</literal>
+ and <literal>DROP SEQUENCE</literal> clauses will add and remove one or more
+ sequences from the publication. Note that adding sequences to a publication
+ that is already subscribed to will require a <literal>ALTER SUBSCRIPTION
+ ... REFRESH PUBLICATION</literal> action on the subscribing side in order
+ to become effective.
+ </para>
+
+ <para>
+ The seventh variant of this command listed in the synopsis can change
all of the publication properties specified in
<xref linkend="sql-createpublication"/>. Properties not mentioned in the
command retain their previous settings.
@@ -62,7 +76,8 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
<para>
You must own the publication to use <command>ALTER PUBLICATION</command>.
- Adding a table to a publication additionally requires owning that table.
+ Adding a table to a publication additionally requires owning that table,
+ and the same requirement applies to sequences.
To alter the owner, you must also be a direct or indirect member of the new
owning role. The new owner must have <literal>CREATE</literal> privilege on
the database. Also, the new owner of a <literal>FOR ALL TABLES</literal>
@@ -97,6 +112,15 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><replaceable class="parameter">sequence_name</replaceable></term>
+ <listitem>
+ <para>
+ Name of an existing sequence.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><literal>SET ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
<listitem>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 367ac814f4..3412ca68e9 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -138,8 +138,8 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
<term><literal>REFRESH PUBLICATION</literal></term>
<listitem>
<para>
- Fetch missing table information from publisher. This will start
- replication of tables that were added to the subscribed-to publications
+ Fetch missing table and sequence information from publisher. This will start
+ replication of tables and sequences that were added to the subscribed-to publications
since the last invocation of <command>REFRESH PUBLICATION</command> or
since <command>CREATE SUBSCRIPTION</command>.
</para>
@@ -156,7 +156,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
Specifies whether the existing data in the publications that are
being subscribed to should be copied once the replication starts.
The default is <literal>true</literal>. (Previously subscribed
- tables are not copied.)
+ tables and sequences are not copied.)
</para>
</listitem>
</varlistentry>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 86e415af89..427582f36a 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -51,26 +51,27 @@ check_publication_add_relation(Relation targetrel)
{
/* Must be a regular or partitioned table */
if (RelationGetForm(targetrel)->relkind != RELKIND_RELATION &&
- RelationGetForm(targetrel)->relkind != RELKIND_PARTITIONED_TABLE)
+ RelationGetForm(targetrel)->relkind != RELKIND_PARTITIONED_TABLE &&
+ RelationGetForm(targetrel)->relkind != RELKIND_SEQUENCE)
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
- errmsg("\"%s\" is not a table",
+ errmsg("\"%s\" is not a table or sequence",
RelationGetRelationName(targetrel)),
- errdetail("Only tables can be added to publications.")));
+ errdetail("Only tables and sequences can be added to publications.")));
- /* Can't be system table */
+ /* Can't be system table/sequence */
if (IsCatalogRelation(targetrel))
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
- errmsg("\"%s\" is a system table",
+ errmsg("\"%s\" is a system table or sequence",
RelationGetRelationName(targetrel)),
- errdetail("System tables cannot be added to publications.")));
+ errdetail("System tables / sequences cannot be added to publications.")));
/* UNLOGGED and TEMP relations cannot be part of publication. */
if (!RelationIsPermanent(targetrel))
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
- errmsg("table \"%s\" cannot be replicated",
+ errmsg("table or sequence \"%s\" cannot be replicated",
RelationGetRelationName(targetrel)),
errdetail("Temporary and unlogged relations cannot be replicated.")));
}
@@ -98,7 +99,8 @@ static bool
is_publishable_class(Oid relid, Form_pg_class reltuple)
{
return (reltuple->relkind == RELKIND_RELATION ||
- reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+ reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+ reltuple->relkind == RELKIND_SEQUENCE) &&
!IsCatalogRelationOid(relid) &&
reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
relid >= FirstNormalObjectId;
@@ -271,6 +273,10 @@ GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
pubrel = (Form_pg_publication_rel) GETSTRUCT(tup);
+ /* skip sequences here */
+ if (get_rel_relkind(pubrel->prrelid) == RELKIND_SEQUENCE)
+ continue;
+
if (get_rel_relkind(pubrel->prrelid) == RELKIND_PARTITIONED_TABLE &&
pub_partopt != PUBLICATION_PART_ROOT)
{
@@ -304,6 +310,49 @@ GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
return result;
}
+/*
+ * Gets list of relation oids for a publication (sequences only).
+ *
+ * This should only be used for normal publications, the FOR ALL TABLES
+ * should use GetAllSequencesPublicationRelations().
+ */
+List *
+GetPublicationSequenceRelations(Oid pubid)
+{
+ List *result;
+ Relation pubrelsrel;
+ ScanKeyData scankey;
+ SysScanDesc scan;
+ HeapTuple tup;
+
+ /* Find all publications associated with the relation. */
+ pubrelsrel = table_open(PublicationRelRelationId, AccessShareLock);
+
+ ScanKeyInit(&scankey,
+ Anum_pg_publication_rel_prpubid,
+ BTEqualStrategyNumber, F_OIDEQ,
+ ObjectIdGetDatum(pubid));
+
+ scan = systable_beginscan(pubrelsrel, PublicationRelPrrelidPrpubidIndexId,
+ true, NULL, 1, &scankey);
+
+ result = NIL;
+ while (HeapTupleIsValid(tup = systable_getnext(scan)))
+ {
+ Form_pg_publication_rel pubrel;
+
+ pubrel = (Form_pg_publication_rel) GETSTRUCT(tup);
+
+ if (get_rel_relkind(pubrel->prrelid) == RELKIND_SEQUENCE)
+ result = lappend_oid(result, pubrel->prrelid);
+ }
+
+ systable_endscan(scan);
+ table_close(pubrelsrel, AccessShareLock);
+
+ return result;
+}
+
/*
* Gets list of publication oids for publications marked as FOR ALL TABLES.
*/
@@ -404,6 +453,46 @@ GetAllTablesPublicationRelations(bool pubviaroot)
return result;
}
+/*
+ * Gets list of all relation published by FOR ALL TABLES publication(s).
+ *
+ * If the publication publishes partition changes via their respective root
+ * partitioned tables, we must exclude partitions in favor of including the
+ * root partitioned tables.
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+ Relation classRel;
+ ScanKeyData key[1];
+ TableScanDesc scan;
+ HeapTuple tuple;
+ List *result = NIL;
+
+ classRel = table_open(RelationRelationId, AccessShareLock);
+
+ ScanKeyInit(&key[0],
+ Anum_pg_class_relkind,
+ BTEqualStrategyNumber, F_CHAREQ,
+ CharGetDatum(RELKIND_SEQUENCE));
+
+ scan = table_beginscan_catalog(classRel, 1, key);
+
+ while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+ {
+ Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+ Oid relid = relForm->oid;
+
+ if (is_publishable_class(relid, relForm))
+ result = lappend_oid(result, relid);
+ }
+
+ table_endscan(scan);
+
+ table_close(classRel, AccessShareLock);
+ return result;
+}
+
/*
* Get publication using oid
*
@@ -426,10 +515,12 @@ GetPublication(Oid pubid)
pub->oid = pubid;
pub->name = pstrdup(NameStr(pubform->pubname));
pub->alltables = pubform->puballtables;
+ pub->allsequences = pubform->puballsequences;
pub->pubactions.pubinsert = pubform->pubinsert;
pub->pubactions.pubupdate = pubform->pubupdate;
pub->pubactions.pubdelete = pubform->pubdelete;
pub->pubactions.pubtruncate = pubform->pubtruncate;
+ pub->pubactions.pubsequence = pubform->pubsequence;
pub->pubviaroot = pubform->pubviaroot;
ReleaseSysCache(tup);
@@ -555,3 +646,56 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
SRF_RETURN_DONE(funcctx);
}
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+ FuncCallContext *funcctx;
+ char *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+ Publication *publication;
+ List *sequences;
+
+ /* stuff done only on the first call of the function */
+ if (SRF_IS_FIRSTCALL())
+ {
+ MemoryContext oldcontext;
+
+ /* create a function context for cross-call persistence */
+ funcctx = SRF_FIRSTCALL_INIT();
+
+ /* switch to memory context appropriate for multiple function calls */
+ oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+ publication = GetPublicationByName(pubname, false);
+
+ /*
+ * Publications support partitioned tables, although all changes are
+ * replicated using leaf partition identity and schema, so we only
+ * need those.
+ */
+ if (publication->allsequences)
+ sequences = GetAllSequencesPublicationRelations();
+ else
+ sequences = GetPublicationSequenceRelations(publication->oid);
+
+ funcctx->user_fctx = (void *) sequences;
+
+ MemoryContextSwitchTo(oldcontext);
+ }
+
+ /* stuff done on every call of the function */
+ funcctx = SRF_PERCALL_SETUP();
+ sequences = (List *) funcctx->user_fctx;
+
+ if (funcctx->call_cntr < list_length(sequences))
+ {
+ Oid relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+ SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+ }
+
+ SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 999d984068..16f050023a 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -372,6 +372,16 @@ CREATE VIEW pg_publication_tables AS
pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
WHERE C.oid = GPT.relid;
+CREATE VIEW pg_publication_sequences AS
+ SELECT
+ P.pubname AS pubname,
+ N.nspname AS schemaname,
+ C.relname AS sequencename
+ FROM pg_publication P,
+ LATERAL pg_get_publication_sequences(P.pubname) GPT,
+ pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+ WHERE C.oid = GPT.relid;
+
CREATE VIEW pg_locks AS
SELECT * FROM pg_lock_status() AS L;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 95c253c8e0..5e5c5feab4 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -16,6 +16,7 @@
#include "access/genam.h"
#include "access/htup_details.h"
+#include "access/relation.h"
#include "access/table.h"
#include "access/xact.h"
#include "catalog/catalog.h"
@@ -54,6 +55,12 @@ static void PublicationAddTables(Oid pubid, List *rels, bool if_not_exists,
AlterPublicationStmt *stmt);
static void PublicationDropTables(Oid pubid, List *rels, bool missing_ok);
+static List *OpenSequenceList(List *sequences);
+static void CloseSequenceList(List *rels);
+static void PublicationAddSequences(Oid pubid, List *rels, bool if_not_exists,
+ AlterPublicationStmt *stmt);
+static void PublicationDropSequences(Oid pubid, List *rels, bool missing_ok);
+
static void
parse_publication_options(List *options,
bool *publish_given,
@@ -71,6 +78,7 @@ parse_publication_options(List *options,
pubactions->pubupdate = true;
pubactions->pubdelete = true;
pubactions->pubtruncate = true;
+ pubactions->pubsequence = true;
*publish_via_partition_root = false;
/* Parse options */
@@ -97,6 +105,7 @@ parse_publication_options(List *options,
pubactions->pubupdate = false;
pubactions->pubdelete = false;
pubactions->pubtruncate = false;
+ pubactions->pubsequence = false;
*publish_given = true;
publish = defGetString(defel);
@@ -119,6 +128,8 @@ parse_publication_options(List *options,
pubactions->pubdelete = true;
else if (strcmp(publish_opt, "truncate") == 0)
pubactions->pubtruncate = true;
+ else if (strcmp(publish_opt, "sequence") == 0)
+ pubactions->pubsequence = true;
else
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
@@ -210,6 +221,8 @@ CreatePublication(CreatePublicationStmt *stmt)
BoolGetDatum(pubactions.pubdelete);
values[Anum_pg_publication_pubtruncate - 1] =
BoolGetDatum(pubactions.pubtruncate);
+ values[Anum_pg_publication_pubsequence - 1] =
+ BoolGetDatum(pubactions.pubsequence);
values[Anum_pg_publication_pubviaroot - 1] =
BoolGetDatum(publish_via_partition_root);
@@ -374,9 +387,9 @@ AlterPublicationTables(AlterPublicationStmt *stmt, Relation rel,
rels = OpenTableList(stmt->tables);
- if (stmt->tableAction == DEFELEM_ADD)
+ if (stmt->action == DEFELEM_ADD)
PublicationAddTables(pubid, rels, false, stmt);
- else if (stmt->tableAction == DEFELEM_DROP)
+ else if (stmt->action == DEFELEM_DROP)
PublicationDropTables(pubid, rels, false);
else /* DEFELEM_SET */
{
@@ -427,6 +440,82 @@ AlterPublicationTables(AlterPublicationStmt *stmt, Relation rel,
CloseTableList(rels);
}
+/*
+ * Add or remove sequence to/from publication.
+ */
+static void
+AlterPublicationSequences(AlterPublicationStmt *stmt, Relation rel,
+ HeapTuple tup)
+{
+ List *rels = NIL;
+ Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
+ Oid pubid = pubform->oid;
+
+ /* Check that user is allowed to manipulate the publication tables. */
+ if (pubform->puballsequences)
+ ereport(ERROR,
+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("publication \"%s\" is defined as FOR ALL SEQUENCES",
+ NameStr(pubform->pubname)),
+ errdetail("Sequences cannot be added to or dropped from FOR ALL TABLES publications.")));
+
+ Assert(list_length(stmt->sequences) > 0);
+
+ rels = OpenSequenceList(stmt->sequences);
+
+ if (stmt->action == DEFELEM_ADD)
+ PublicationAddSequences(pubid, rels, false, stmt);
+ else if (stmt->action == DEFELEM_DROP)
+ PublicationDropSequences(pubid, rels, false);
+ else /* DEFELEM_SET */
+ {
+ List *oldrelids = GetPublicationRelations(pubid,
+ PUBLICATION_PART_ROOT);
+ List *delrels = NIL;
+ ListCell *oldlc;
+
+ /* Calculate which relations to drop. */
+ foreach(oldlc, oldrelids)
+ {
+ Oid oldrelid = lfirst_oid(oldlc);
+ ListCell *newlc;
+ bool found = false;
+
+ foreach(newlc, rels)
+ {
+ Relation newrel = (Relation) lfirst(newlc);
+
+ if (RelationGetRelid(newrel) == oldrelid)
+ {
+ found = true;
+ break;
+ }
+ }
+
+ if (!found)
+ {
+ Relation oldrel = relation_open(oldrelid,
+ ShareUpdateExclusiveLock);
+
+ delrels = lappend(delrels, oldrel);
+ }
+ }
+
+ /* And drop them. */
+ PublicationDropSequences(pubid, delrels, true);
+
+ /*
+ * Don't bother calculating the difference for adding, we'll catch and
+ * skip existing ones when doing catalog update.
+ */
+ PublicationAddSequences(pubid, rels, true, stmt);
+
+ CloseSequenceList(delrels);
+ }
+
+ CloseSequenceList(rels);
+}
+
/*
* Alter the existing publication.
*
@@ -460,8 +549,12 @@ AlterPublication(AlterPublicationStmt *stmt)
if (stmt->options)
AlterPublicationOptions(stmt, rel, tup);
- else
+ else if (stmt->tables)
AlterPublicationTables(stmt, rel, tup);
+ else if (stmt->sequences)
+ AlterPublicationSequences(stmt, rel, tup);
+ else
+ Assert(false);
/* Cleanup. */
heap_freetuple(tup);
@@ -666,6 +759,139 @@ PublicationDropTables(Oid pubid, List *rels, bool missing_ok)
}
}
+/*
+ * Open relations specified by a RangeVar list.
+ * The returned tables are locked in ShareUpdateExclusiveLock mode in order to
+ * add them to a publication.
+ */
+static List *
+OpenSequenceList(List *sequences)
+{
+ List *relids = NIL;
+ List *rels = NIL;
+ ListCell *lc;
+
+ /*
+ * Open, share-lock, and check all the explicitly-specified relations
+ */
+ foreach(lc, sequences)
+ {
+ RangeVar *rv = castNode(RangeVar, lfirst(lc));
+ Relation rel;
+ Oid myrelid;
+
+ /* Allow query cancel in case this takes a long time */
+ CHECK_FOR_INTERRUPTS();
+
+ rel = relation_openrv(rv, ShareUpdateExclusiveLock);
+ myrelid = RelationGetRelid(rel);
+
+ /*
+ * Filter out duplicates if user specifies "foo, foo".
+ *
+ * Note that this algorithm is known to not be very efficient (O(N^2))
+ * but given that it only works on list of sequences given to us by user
+ * it's deemed acceptable.
+ */
+ if (list_member_oid(relids, myrelid))
+ {
+ relation_close(rel, ShareUpdateExclusiveLock);
+ continue;
+ }
+
+ rels = lappend(rels, rel);
+ relids = lappend_oid(relids, myrelid);
+ }
+
+ list_free(relids);
+
+ return rels;
+}
+
+/*
+ * Close all relations in the list.
+ */
+static void
+CloseSequenceList(List *rels)
+{
+ ListCell *lc;
+
+ foreach(lc, rels)
+ {
+ Relation rel = (Relation) lfirst(lc);
+
+ relation_close(rel, NoLock);
+ }
+}
+
+/*
+ * Add listed tables to the publication.
+ */
+static void
+PublicationAddSequences(Oid pubid, List *rels, bool if_not_exists,
+ AlterPublicationStmt *stmt)
+{
+ ListCell *lc;
+
+ Assert(!stmt || !stmt->for_all_sequences);
+
+ foreach(lc, rels)
+ {
+ Relation rel = (Relation) lfirst(lc);
+ ObjectAddress obj;
+
+ /* Must be owner of the table or superuser. */
+ if (!pg_class_ownercheck(RelationGetRelid(rel), GetUserId()))
+ aclcheck_error(ACLCHECK_NOT_OWNER, get_relkind_objtype(rel->rd_rel->relkind),
+ RelationGetRelationName(rel));
+
+ obj = publication_add_relation(pubid, rel, if_not_exists);
+ if (stmt)
+ {
+ EventTriggerCollectSimpleCommand(obj, InvalidObjectAddress,
+ (Node *) stmt);
+
+ InvokeObjectPostCreateHook(PublicationRelRelationId,
+ obj.objectId, 0);
+ }
+ }
+}
+
+/*
+ * Remove listed sequences from the publication.
+ */
+static void
+PublicationDropSequences(Oid pubid, List *rels, bool missing_ok)
+{
+ ObjectAddress obj;
+ ListCell *lc;
+ Oid prid;
+
+ foreach(lc, rels)
+ {
+ Relation rel = (Relation) lfirst(lc);
+ Oid relid = RelationGetRelid(rel);
+
+ prid = GetSysCacheOid2(PUBLICATIONRELMAP, Anum_pg_publication_rel_oid,
+ ObjectIdGetDatum(relid),
+ ObjectIdGetDatum(pubid));
+ if (!OidIsValid(prid))
+ {
+ if (missing_ok)
+ continue;
+
+ ereport(ERROR,
+ (errcode(ERRCODE_UNDEFINED_OBJECT),
+ errmsg("relation \"%s\" is not part of the publication",
+ RelationGetRelationName(rel))));
+ }
+
+ ObjectAddressSet(obj, PublicationRelRelationId, prid);
+ performDeletion(&obj, DROP_CASCADE, 0);
+ }
+}
+
+
/*
* Internal workhorse for changing a publication owner
*/
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 0415df9ccb..6898a45365 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -94,7 +94,7 @@ static HTAB *seqhashtab = NULL; /* hash table for SeqTable items */
*/
static SeqTableData *last_used_seq = NULL;
-static void fill_seq_with_data(Relation rel, HeapTuple tuple);
+static void fill_seq_with_data(Relation rel, HeapTuple tuple, bool create);
static Relation lock_and_open_sequence(SeqTable seq);
static void create_seq_hashtable(void);
static void init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel);
@@ -222,7 +222,7 @@ DefineSequence(ParseState *pstate, CreateSeqStmt *seq)
/* now initialize the sequence's data */
tuple = heap_form_tuple(tupDesc, value, null);
- fill_seq_with_data(rel, tuple);
+ fill_seq_with_data(rel, tuple, true);
/* process OWNED BY if given */
if (owned_by)
@@ -327,7 +327,86 @@ ResetSequence(Oid seq_relid)
/*
* Insert the modified tuple into the new storage file.
*/
- fill_seq_with_data(seq_rel, tuple);
+ fill_seq_with_data(seq_rel, tuple, true);
+
+ /* Clear local cache so that we don't think we have cached numbers */
+ /* Note that we do not change the currval() state */
+ elm->cached = elm->last;
+
+ relation_close(seq_rel, NoLock);
+}
+
+/*
+ * Reset a sequence to its initial value.
+ *
+ * The change is made transactionally, so that on failure of the current
+ * transaction, the sequence will be restored to its previous state.
+ * We do that by creating a whole new relfilenode for the sequence; so this
+ * works much like the rewriting forms of ALTER TABLE.
+ *
+ * Caller is assumed to have acquired AccessExclusiveLock on the sequence,
+ * which must not be released until end of transaction. Caller is also
+ * responsible for permissions checking.
+ */
+void
+ResetSequence2(Oid seq_relid, int64 last_value, int64 log_cnt, bool is_called)
+{
+ Relation seq_rel;
+ SeqTable elm;
+ Form_pg_sequence_data seq;
+ Buffer buf;
+ HeapTupleData seqdatatuple;
+ HeapTuple tuple;
+
+ /*
+ * Read the old sequence. This does a bit more work than really
+ * necessary, but it's simple, and we do want to double-check that it's
+ * indeed a sequence.
+ */
+ init_sequence(seq_relid, &elm, &seq_rel);
+ (void) read_seq_tuple(seq_rel, &buf, &seqdatatuple);
+
+ /*
+ * Copy the existing sequence tuple.
+ */
+ tuple = heap_copytuple(&seqdatatuple);
+
+ /* Now we're done with the old page */
+ UnlockReleaseBuffer(buf);
+
+ /*
+ * Modify the copied tuple to execute the restart (compare the RESTART
+ * action in AlterSequence)
+ */
+ seq = (Form_pg_sequence_data) GETSTRUCT(tuple);
+ seq->last_value = last_value;
+ seq->is_called = is_called;
+ seq->log_cnt = log_cnt;
+
+ /*
+ * Create a new storage file for the sequence.
+ */
+ RelationSetNewRelfilenode(seq_rel, seq_rel->rd_rel->relpersistence);
+
+ /*
+ * Ensure sequence's relfrozenxid is at 0, since it won't contain any
+ * unfrozen XIDs. Same with relminmxid, since a sequence will never
+ * contain multixacts.
+ */
+ Assert(seq_rel->rd_rel->relfrozenxid == InvalidTransactionId);
+ Assert(seq_rel->rd_rel->relminmxid == InvalidMultiXactId);
+
+ /*
+ * Insert the modified tuple into the new storage file.
+ *
+ * XXX Maybe this should also use created=true, just like the other places
+ * calling fill_seq_with_data. That's probably needed for correct cascading
+ * replication.
+ *
+ * XXX That'd mean all fill_seq_with_data callers use created=true, making
+ * the parameter unnecessary.
+ */
+ fill_seq_with_data(seq_rel, tuple, false);
/* Clear local cache so that we don't think we have cached numbers */
/* Note that we do not change the currval() state */
@@ -340,7 +419,7 @@ ResetSequence(Oid seq_relid)
* Initialize a sequence's relation with the specified tuple as content
*/
static void
-fill_seq_with_data(Relation rel, HeapTuple tuple)
+fill_seq_with_data(Relation rel, HeapTuple tuple, bool create)
{
Buffer buf;
Page page;
@@ -378,8 +457,21 @@ fill_seq_with_data(Relation rel, HeapTuple tuple)
/* check the comment above nextval_internal()'s equivalent call. */
if (RelationNeedsWAL(rel))
+ {
GetTopTransactionId();
+ /*
+ * Ensure we have a proper XID, which will be included in the XLOG
+ * record by XLogRecordAssemble. Otherwise the first nextval() in
+ * a subxact (without any preceding changes) would get XID 0,
+ * and it'd be impossible to decide which top xact it belongs to.
+ * It'd also trigger assert in DecodeSequence.
+ *
+ * XXX Not sure if this is the best solution.
+ */
+ GetCurrentTransactionId();
+ }
+
START_CRIT_SECTION();
MarkBufferDirty(buf);
@@ -399,6 +491,7 @@ fill_seq_with_data(Relation rel, HeapTuple tuple)
XLogRegisterBuffer(0, buf, REGBUF_WILL_INIT);
xlrec.node = rel->rd_node;
+ xlrec.created = create;
XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
XLogRegisterData((char *) tuple->t_data, tuple->t_len);
@@ -502,7 +595,7 @@ AlterSequence(ParseState *pstate, AlterSeqStmt *stmt)
/*
* Insert the modified tuple into the new storage file.
*/
- fill_seq_with_data(seqrel, newdatatuple);
+ fill_seq_with_data(seqrel, newdatatuple, true);
}
/* process OWNED BY if given */
@@ -766,8 +859,21 @@ nextval_internal(Oid relid, bool check_permissions)
* (Have to do that here, so we're outside the critical section)
*/
if (logit && RelationNeedsWAL(seqrel))
+ {
GetTopTransactionId();
+ /*
+ * Ensure we have a proper XID, which will be included in the XLOG
+ * record by XLogRecordAssemble. Otherwise the first nextval() in
+ * a subxact (without any preceding changes) would get XID 0,
+ * and it'd be impossible to decide which top xact it belongs to.
+ * It'd also trigger assert in DecodeSequence.
+ *
+ * XXX Not sure if this is the best solution.
+ */
+ GetCurrentTransactionId();
+ }
+
/* ready to change the on-disk (or really, in-buffer) tuple */
START_CRIT_SECTION();
@@ -803,6 +909,7 @@ nextval_internal(Oid relid, bool check_permissions)
seq->log_cnt = 0;
xlrec.node = seqrel->rd_node;
+ xlrec.created = false;
XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
XLogRegisterData((char *) seqdatatuple.t_data, seqdatatuple.t_len);
@@ -977,8 +1084,21 @@ do_setval(Oid relid, int64 next, bool iscalled)
/* check the comment above nextval_internal()'s equivalent call. */
if (RelationNeedsWAL(seqrel))
+ {
GetTopTransactionId();
+ /*
+ * Ensure we have a proper XID, which will be included in the XLOG
+ * record by XLogRecordAssemble. Otherwise the first nextval() in
+ * a subxact (without any preceding changes) would get XID 0,
+ * and it'd be impossible to decide which top xact it belongs to.
+ * It'd also trigger assert in DecodeSequence.
+ *
+ * XXX Not sure if this is the best solution.
+ */
+ GetCurrentTransactionId();
+ }
+
/* ready to change the on-disk (or really, in-buffer) tuple */
START_CRIT_SECTION();
@@ -999,6 +1119,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
XLogRegisterBuffer(0, buf, REGBUF_WILL_INIT);
xlrec.node = seqrel->rd_node;
+ xlrec.created = false;
+
XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
XLogRegisterData((char *) seqdatatuple.t_data, seqdatatuple.t_len);
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 8aa6de1785..2112be663b 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -47,6 +47,7 @@
#include "utils/syscache.h"
static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
static void check_duplicates_in_publist(List *publist, Datum *datums);
static List *merge_publications(List *oldpublist, List *newpublist, bool addpub, const char *subname);
static void ReportSlotConnectionError(List *rstates, Oid subid, char *slotname, char *err);
@@ -461,6 +462,7 @@ CreateSubscription(CreateSubscriptionStmt *stmt, bool isTopLevel)
char *err;
WalReceiverConn *wrconn;
List *tables;
+ List *sequences;
ListCell *lc;
char table_state;
@@ -498,6 +500,26 @@ CreateSubscription(CreateSubscriptionStmt *stmt, bool isTopLevel)
InvalidXLogRecPtr);
}
+ /*
+ * Get the sequence list from publisher and build local sequence
+ * status info.
+ */
+ sequences = fetch_sequence_list(wrconn, publications);
+ foreach(lc, sequences)
+ {
+ RangeVar *rv = (RangeVar *) lfirst(lc);
+ Oid relid;
+
+ relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+ /* Check for supported relkind. */
+ CheckSubscriptionRelkind(get_rel_relkind(relid),
+ rv->schemaname, rv->relname);
+
+ AddSubscriptionRelState(subid, relid, table_state,
+ InvalidXLogRecPtr);
+ }
+
/*
* If requested, create permanent slot for the subscription. We
* won't use the initial snapshot for anything, so no need to
@@ -644,6 +666,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data)
{
Oid relid = subrel_local_oids[off];
+ /* XXX ignore sequences - maybe do this in GetSubscriptionRelations? */
+ if (get_rel_relkind(relid) == RELKIND_SEQUENCE)
+ continue;
+
if (!bsearch(&relid, pubrel_local_oids,
list_length(pubrel_names), sizeof(Oid), oid_cmp))
{
@@ -735,6 +761,185 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data)
ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
}
}
+
+ /*
+ * XXX now do the same thing for sequences, maybe before the preceding
+ * block, or earlier?
+ */
+
+ /* Get the sequence list from the publisher. */
+ pubrel_names = fetch_sequence_list(wrconn, sub->publications);
+
+ /* Get the local list of subscribed relations. */
+ subrel_states = GetSubscriptionRelations(sub->oid);
+
+ /*
+ * Build qsorted array of local relation oids for faster lookup. This can
+ * potentially contain all tables and sequences in the database, so speed
+ * of lookup is important.
+ */
+ subrel_local_oids = palloc(list_length(subrel_states) * sizeof(Oid));
+ off = 0;
+ foreach(lc, subrel_states)
+ {
+ SubscriptionRelState *relstate = (SubscriptionRelState *) lfirst(lc);
+
+ subrel_local_oids[off++] = relstate->relid;
+ }
+ qsort(subrel_local_oids, list_length(subrel_states),
+ sizeof(Oid), oid_cmp);
+
+ /*
+ * Rels that we want to remove from subscription and drop any slots
+ * and origins corresponding to them.
+ */
+ sub_remove_rels = palloc(list_length(subrel_states) * sizeof(SubRemoveRels));
+
+ /*
+ * Walk over the remote sequences and try to match them to locally known
+ * relations. If the sequence is not known locally, create a new state
+ * for it.
+ *
+ * Also builds array of local oids of remote sequences for the next step.
+ */
+ off = 0;
+ pubrel_local_oids = palloc(list_length(pubrel_names) * sizeof(Oid));
+
+ foreach(lc, pubrel_names)
+ {
+ RangeVar *rv = (RangeVar *) lfirst(lc);
+ Oid relid;
+
+ relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+ /* Check for supported relkind. */
+ CheckSubscriptionRelkind(get_rel_relkind(relid),
+ rv->schemaname, rv->relname);
+
+ pubrel_local_oids[off++] = relid;
+
+ if (!bsearch(&relid, subrel_local_oids,
+ list_length(subrel_states), sizeof(Oid), oid_cmp))
+ {
+ AddSubscriptionRelState(sub->oid, relid,
+ copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
+ InvalidXLogRecPtr);
+ ereport(DEBUG1,
+ (errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
+ rv->schemaname, rv->relname, sub->name)));
+ }
+ }
+
+ /*
+ * Next remove state for sequences we should not care about anymore using
+ * the data we collected above
+ */
+ qsort(pubrel_local_oids, list_length(pubrel_names),
+ sizeof(Oid), oid_cmp);
+
+ remove_rel_len = 0;
+ for (off = 0; off < list_length(subrel_states); off++)
+ {
+ Oid relid = subrel_local_oids[off];
+
+ /* XXX ignore non-sequences - maybe do this in GetSubscriptionRelations? */
+ if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+ continue;
+
+ if (!bsearch(&relid, pubrel_local_oids,
+ list_length(pubrel_names), sizeof(Oid), oid_cmp))
+ {
+ char state;
+ XLogRecPtr statelsn;
+
+ /*
+ * Lock pg_subscription_rel with AccessExclusiveLock to
+ * prevent any race conditions with the apply worker
+ * re-launching workers at the same time this code is trying
+ * to remove those tables.
+ *
+ * Even if new worker for this particular rel is restarted it
+ * won't be able to make any progress as we hold exclusive
+ * lock on subscription_rel till the transaction end. It will
+ * simply exit as there is no corresponding rel entry.
+ *
+ * This locking also ensures that the state of rels won't
+ * change till we are done with this refresh operation.
+ */
+ if (!rel)
+ rel = table_open(SubscriptionRelRelationId, AccessExclusiveLock);
+
+ /* Last known rel state. */
+ state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
+
+ sub_remove_rels[remove_rel_len].relid = relid;
+ sub_remove_rels[remove_rel_len++].state = state;
+
+ elog(WARNING, "B: remove rel %d", relid);
+
+ RemoveSubscriptionRel(sub->oid, relid);
+
+ logicalrep_worker_stop(sub->oid, relid);
+
+ /*
+ * For READY state, we would have already dropped the
+ * tablesync origin.
+ */
+ if (state != SUBREL_STATE_READY)
+ {
+ char originname[NAMEDATALEN];
+
+ /*
+ * Drop the tablesync's origin tracking if exists.
+ *
+ * It is possible that the origin is not yet created for
+ * tablesync worker, this can happen for the states before
+ * SUBREL_STATE_FINISHEDCOPY. The apply worker can also
+ * concurrently try to drop the origin and by this time
+ * the origin might be already removed. For these reasons,
+ * passing missing_ok = true.
+ */
+ ReplicationOriginNameForTablesync(sub->oid, relid, originname,
+ sizeof(originname));
+ replorigin_drop_by_name(originname, true, false);
+ }
+
+ ereport(DEBUG1,
+ (errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
+ get_namespace_name(get_rel_namespace(relid)),
+ get_rel_name(relid),
+ sub->name)));
+ }
+ }
+
+ /*
+ * Drop the tablesync slots associated with removed sequences. This has
+ * to be at the end because otherwise if there is an error while doing
+ * the database operations we won't be able to rollback dropped slots.
+ */
+ for (off = 0; off < remove_rel_len; off++)
+ {
+ if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
+ sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+ {
+ char syncslotname[NAMEDATALEN] = {0};
+
+ /*
+ * For READY/SYNCDONE states we know the tablesync slot has
+ * already been dropped by the tablesync worker.
+ *
+ * For other states, there is no certainty, maybe the slot
+ * does not exist yet. Also, if we fail after removing some of
+ * the slots, next time, it will again try to drop already
+ * dropped slots and fail. For these reasons, we allow
+ * missing_ok = true for the drop.
+ */
+ ReplicationSlotNameForTablesync(sub->oid, sub_remove_rels[off].relid,
+ syncslotname, sizeof(syncslotname));
+ ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
+ }
+ }
+
}
PG_FINALLY();
{
@@ -1534,6 +1739,75 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
return tablelist;
}
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+ WalRcvExecResult *res;
+ StringInfoData cmd;
+ TupleTableSlot *slot;
+ Oid tableRow[2] = {TEXTOID, TEXTOID};
+ ListCell *lc;
+ bool first;
+ List *tablelist = NIL;
+
+ Assert(list_length(publications) > 0);
+
+ initStringInfo(&cmd);
+ appendStringInfoString(&cmd, "SELECT DISTINCT s.schemaname, s.tablename\n"
+ " FROM pg_catalog.pg_publication_sequences s\n"
+ " WHERE s.pubname IN (");
+ first = true;
+ foreach(lc, publications)
+ {
+ char *pubname = strVal(lfirst(lc));
+
+ if (first)
+ first = false;
+ else
+ appendStringInfoString(&cmd, ", ");
+
+ appendStringInfoString(&cmd, quote_literal_cstr(pubname));
+ }
+ appendStringInfoChar(&cmd, ')');
+
+ res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+ pfree(cmd.data);
+
+ if (res->status != WALRCV_OK_TUPLES)
+ ereport(ERROR,
+ (errmsg("could not receive list of replicated tables from the publisher: %s",
+ res->err)));
+
+ /* Process the returned sequences. */
+ slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+ while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+ {
+ char *nspname;
+ char *relname;
+ bool isnull;
+ RangeVar *rv;
+
+ nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+ Assert(!isnull);
+ relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+ Assert(!isnull);
+
+ rv = makeRangeVar(nspname, relname, -1);
+ tablelist = lappend(tablelist, rv);
+
+ ExecClearTuple(slot);
+ }
+ ExecDropSingleTupleTableSlot(slot);
+
+ walrcv_clear_result(res);
+
+ return tablelist;
+}
+
/*
* This is to report the connection failure while dropping replication slots.
* Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index 1e285e0349..a1c7880be6 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -619,11 +619,11 @@ CheckSubscriptionRelkind(char relkind, const char *nspname,
errdetail("\"%s.%s\" is a foreign table.",
nspname, relname)));
- if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+ if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE && relkind != RELKIND_SEQUENCE)
ereport(ERROR,
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
errmsg("cannot use relation \"%s.%s\" as logical replication target",
nspname, relname),
- errdetail("\"%s.%s\" is not a table.",
+ errdetail("\"%s.%s\" is not a table or a sequence.",
nspname, relname)));
}
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 621f7ce068..b98c1b2f6e 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -4831,8 +4831,10 @@ _copyAlterPublicationStmt(const AlterPublicationStmt *from)
COPY_STRING_FIELD(pubname);
COPY_NODE_FIELD(options);
COPY_NODE_FIELD(tables);
+ COPY_NODE_FIELD(sequences);
COPY_SCALAR_FIELD(for_all_tables);
- COPY_SCALAR_FIELD(tableAction);
+ COPY_SCALAR_FIELD(for_all_sequences);
+ COPY_SCALAR_FIELD(action);
return newnode;
}
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index 3033c1934c..b72a5831e3 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -2300,8 +2300,10 @@ _equalAlterPublicationStmt(const AlterPublicationStmt *a,
COMPARE_STRING_FIELD(pubname);
COMPARE_NODE_FIELD(options);
COMPARE_NODE_FIELD(tables);
+ COMPARE_NODE_FIELD(sequences);
COMPARE_SCALAR_FIELD(for_all_tables);
- COMPARE_SCALAR_FIELD(tableAction);
+ COMPARE_SCALAR_FIELD(for_all_sequences);
+ COMPARE_SCALAR_FIELD(action);
return true;
}
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 9ee90e3f13..880883e3b8 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -434,6 +434,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
%type <node> group_by_item empty_grouping_set rollup_clause cube_clause
%type <node> grouping_sets_clause
%type <node> opt_publication_for_tables publication_for_tables
+/* %type <node> opt_publication_for_sequences publication_for_sequences */
%type <list> opt_fdw_options fdw_options
%type <defelt> fdw_option
@@ -9612,6 +9613,7 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
*****************************************************************************/
CreatePublicationStmt:
+ /* CREATE PUBLICATION name opt_publication_for_tables opt_publication_for_sequences opt_definition */
CREATE PUBLICATION name opt_publication_for_tables opt_definition
{
CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
@@ -9646,6 +9648,12 @@ publication_for_tables:
}
;
+/*
+ * FIXME
+ *
+ * opt_publication_for_sequences and publication_for_sequences should be
+ * copies for sequences
+ */
/*****************************************************************************
*
@@ -9657,6 +9665,12 @@ publication_for_tables:
*
* ALTER PUBLICATION name SET TABLE table [, table2]
*
+ * ALTER PUBLICATION name ADD SEQUENCE sequence [, sequence2]
+ *
+ * ALTER PUBLICATION name DROP SEQUENCE sequence [, sequence2]
+ *
+ * ALTER PUBLICATION name SET SEQUENCE sequence [, sequence2]
+ *
*****************************************************************************/
AlterPublicationStmt:
@@ -9672,7 +9686,7 @@ AlterPublicationStmt:
AlterPublicationStmt *n = makeNode(AlterPublicationStmt);
n->pubname = $3;
n->tables = $6;
- n->tableAction = DEFELEM_ADD;
+ n->action = DEFELEM_ADD;
$$ = (Node *)n;
}
| ALTER PUBLICATION name SET TABLE relation_expr_list
@@ -9680,7 +9694,7 @@ AlterPublicationStmt:
AlterPublicationStmt *n = makeNode(AlterPublicationStmt);
n->pubname = $3;
n->tables = $6;
- n->tableAction = DEFELEM_SET;
+ n->action = DEFELEM_SET;
$$ = (Node *)n;
}
| ALTER PUBLICATION name DROP TABLE relation_expr_list
@@ -9688,7 +9702,31 @@ AlterPublicationStmt:
AlterPublicationStmt *n = makeNode(AlterPublicationStmt);
n->pubname = $3;
n->tables = $6;
- n->tableAction = DEFELEM_DROP;
+ n->action = DEFELEM_DROP;
+ $$ = (Node *)n;
+ }
+ | ALTER PUBLICATION name ADD_P SEQUENCE relation_expr_list
+ {
+ AlterPublicationStmt *n = makeNode(AlterPublicationStmt);
+ n->pubname = $3;
+ n->sequences = $6;
+ n->action = DEFELEM_ADD;
+ $$ = (Node *)n;
+ }
+ | ALTER PUBLICATION name SET SEQUENCE relation_expr_list
+ {
+ AlterPublicationStmt *n = makeNode(AlterPublicationStmt);
+ n->pubname = $3;
+ n->sequences = $6;
+ n->action = DEFELEM_SET;
+ $$ = (Node *)n;
+ }
+ | ALTER PUBLICATION name DROP SEQUENCE relation_expr_list
+ {
+ AlterPublicationStmt *n = makeNode(AlterPublicationStmt);
+ n->pubname = $3;
+ n->sequences = $6;
+ n->action = DEFELEM_DROP;
$$ = (Node *)n;
}
;
diff --git a/src/backend/replication/logical/decode.c b/src/backend/replication/logical/decode.c
index 70670169ac..f6296941d9 100644
--- a/src/backend/replication/logical/decode.c
+++ b/src/backend/replication/logical/decode.c
@@ -42,6 +42,7 @@
#include "replication/reorderbuffer.h"
#include "replication/snapbuild.h"
#include "storage/standby.h"
+#include "commands/sequence.h"
typedef struct XLogRecordBuffer
{
@@ -74,10 +75,11 @@ static void DecodeAbort(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
bool two_phase);
static void DecodePrepare(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
xl_xact_parsed_prepare *parsed);
-
+static void DecodeSequence(LogicalDecodingContext *ctx, XLogRecordBuffer *buf);
/* common function to decode tuples */
static void DecodeXLogTuple(char *data, Size len, ReorderBufferTupleBuf *tup);
+static void DecodeSeqTuple(char *data, Size len, ReorderBufferTupleBuf *tuple);
/* helper functions for decoding transactions */
static inline bool FilterPrepare(LogicalDecodingContext *ctx,
@@ -158,6 +160,10 @@ LogicalDecodingProcessRecord(LogicalDecodingContext *ctx, XLogReaderState *recor
DecodeLogicalMsgOp(ctx, &buf);
break;
+ case RM_SEQ_ID:
+ DecodeSequence(ctx, &buf);
+ break;
+
/*
* Rmgrs irrelevant for logical decoding; they describe stuff not
* represented in logical decoding. Add new rmgrs in rmgrlist.h's
@@ -173,7 +179,6 @@ LogicalDecodingProcessRecord(LogicalDecodingContext *ctx, XLogReaderState *recor
case RM_HASH_ID:
case RM_GIN_ID:
case RM_GIST_ID:
- case RM_SEQ_ID:
case RM_SPGIST_ID:
case RM_BRIN_ID:
case RM_COMMIT_TS_ID:
@@ -1315,3 +1320,125 @@ DecodeTXNNeedSkip(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
(txn_dbid != InvalidOid && txn_dbid != ctx->slot->data.database) ||
ctx->fast_forward || FilterByOrigin(ctx, origin_id));
}
+
+/*
+ * Decode Sequence Tuple
+ */
+static void
+DecodeSeqTuple(char *data, Size len, ReorderBufferTupleBuf *tuple)
+{
+ int datalen = len - sizeof(xl_seq_rec) - SizeofHeapTupleHeader;
+
+ Assert(datalen >= 0);
+
+ tuple->tuple.t_len = datalen + SizeofHeapTupleHeader;
+
+ ItemPointerSetInvalid(&tuple->tuple.t_self);
+
+ tuple->tuple.t_tableOid = InvalidOid;
+
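+ /*
+ * The sequence tuple (header followed by data) is stored right after
+ * the xl_seq_rec struct in the WAL record - copy it into the buffer.
+ */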
+ memcpy(((char *) tuple->tuple.t_data),
+ data + sizeof(xl_seq_rec),
+ SizeofHeapTupleHeader);
+
+ memcpy(((char *) tuple->tuple.t_data) + SizeofHeapTupleHeader,
+ data + sizeof(xl_seq_rec) + SizeofHeapTupleHeader,
+ datalen);
+}
+
+/*
+ * Handle sequence decode
+ *
+ * Decoding sequences is a bit tricky, because while most sequence actions
+ * are non-transactional (not subject to rollback), some need to be handled
+ * as transactional.
+ *
+ * By default, a sequence increment is non-transactional - we must not queue
+ * it in the transaction like other changes, because the transaction might get
+ * rolled back and we'd discard the increment. The downstream would not be
+ * notified about the increment, which is wrong.
+ *
+ * On the other hand, the sequence may be created in a transaction. In this
+ * case we *should* queue the change as other changes in the transaction,
+ * because we don't want to send increments for an unknown sequence to the
+ * plugin - it might get confused about which sequence it's related to etc.
+ */
+static void
+DecodeSequence(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
+{
+ SnapBuild *builder = ctx->snapshot_builder;
+ ReorderBufferTupleBuf *tuplebuf;
+ RelFileNode target_node;
+ XLogReaderState *r = buf->record;
+ char *tupledata = NULL;
+ Size tuplelen;
+ Size datalen = 0;
+ TransactionId xid = XLogRecGetXid(r);
+ uint8 info = XLogRecGetInfo(buf->record) & ~XLR_INFO_MASK;
+ xl_seq_rec *xlrec;
+ Snapshot snapshot;
+ RepOriginId origin_id = XLogRecGetOrigin(r);
+ bool transactional;
+
+ /* only decode changes flagged with XLOG_SEQ_LOG */
+ if (info != XLOG_SEQ_LOG)
+ elog(ERROR, "unexpected RM_SEQ_ID record type: %u", info);
+
+ /*
+ * If we don't have a snapshot or we are just fast-forwarding, there is no
+ * point in decoding sequence changes.
+ */
+ if (SnapBuildCurrentState(builder) < SNAPBUILD_FULL_SNAPSHOT ||
+ ctx->fast_forward)
+ return;
+
+ /* only interested in our database */
+ XLogRecGetBlockTag(r, 0, &target_node, NULL, NULL);
+ if (target_node.dbNode != ctx->slot->data.database)
+ return;
+
+ /* output plugin doesn't look for this origin, no need to queue */
+ if (FilterByOrigin(ctx, XLogRecGetOrigin(r)))
+ return;
+
+ tupledata = XLogRecGetData(r);
+ datalen = XLogRecGetDataLen(r);
+ tuplelen = datalen - SizeOfHeapHeader - sizeof(xl_seq_rec);
+
+ /* extract the WAL record, with "created" flag */
+ xlrec = (xl_seq_rec *) XLogRecGetData(r);
+
+ /* XXX how could we have a sequence change without data? */
+ if (!datalen || !tupledata)
+ return;
+
+ tuplebuf = ReorderBufferGetTupleBuf(ctx->reorder, tuplelen);
+ DecodeSeqTuple(tupledata, datalen, tuplebuf);
+
+ /*
+ * Should we handle the sequence increment as transactional or not?
+ *
+ * If the sequence was created in a still-running transaction, treat
+ * it as transactional and queue the increments. Otherwise it needs
+ * to be treated as non-transactional, in which case we send it to
+ * the plugin right away.
+ */
+ transactional = ReorderBufferSequenceIsTransactional(ctx->reorder,
+ target_node,
+ xlrec->created);
+
+ /* Skip the change if already processed (per the snapshot). */
+ if (transactional &&
+ !SnapBuildProcessChange(builder, xid, buf->origptr))
+ return;
+ else if (!transactional &&
+ (SnapBuildCurrentState(builder) != SNAPBUILD_CONSISTENT ||
+ SnapBuildXactNeedsSkip(builder, buf->origptr)))
+ return;
+
+ /* Queue the increment (or send immediately if not transactional). */
+ snapshot = SnapBuildGetOrBuildSnapshot(builder, xid);
+ ReorderBufferQueueSequence(ctx->reorder, xid, snapshot, buf->endptr,
+ origin_id, target_node, transactional,
+ xlrec->created, tuplebuf);
+}
diff --git a/src/backend/replication/logical/logical.c b/src/backend/replication/logical/logical.c
index ffc6160e9f..66a4a4603a 100644
--- a/src/backend/replication/logical/logical.c
+++ b/src/backend/replication/logical/logical.c
@@ -73,6 +73,10 @@ static void truncate_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
static void message_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
XLogRecPtr message_lsn, bool transactional,
const char *prefix, Size message_size, const char *message);
+static void sequence_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called);
/* streaming callbacks */
static void stream_start_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
@@ -217,6 +221,7 @@ StartupDecodingContext(List *output_plugin_options,
ctx->reorder->apply_truncate = truncate_cb_wrapper;
ctx->reorder->commit = commit_cb_wrapper;
ctx->reorder->message = message_cb_wrapper;
+ ctx->reorder->sequence = sequence_cb_wrapper;
/*
* To support streaming, we require start/stop/abort/commit/change
@@ -1179,6 +1184,43 @@ message_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
error_context_stack = errcallback.previous;
}
+static void
+sequence_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ LogicalDecodingContext *ctx = cache->private_data;
+ LogicalErrorCallbackState state;
+ ErrorContextCallback errcallback;
+
+ Assert(!ctx->fast_forward);
+
+ if (ctx->callbacks.sequence_cb == NULL)
+ return;
+
+ /* Push callback + info on the error context stack */
+ state.ctx = ctx;
+ state.callback_name = "sequence";
+ state.report_location = sequence_lsn;
+ errcallback.callback = output_plugin_error_callback;
+ errcallback.arg = (void *) &state;
+ errcallback.previous = error_context_stack;
+ error_context_stack = &errcallback;
+
+ /* set output state */
+ ctx->accept_writes = true;
+ ctx->write_xid = txn != NULL ? txn->xid : InvalidTransactionId;
+ ctx->write_location = sequence_lsn;
+
+ /* do the actual work: call callback */
+ ctx->callbacks.sequence_cb(ctx, txn, sequence_lsn, rel, transactional,
+ created, last_value, log_cnt, is_called);
+
+ /* Pop the error context stack */
+ error_context_stack = errcallback.previous;
+}
+
static void
stream_start_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
XLogRecPtr first_lsn)
diff --git a/src/backend/replication/logical/proto.c b/src/backend/replication/logical/proto.c
index 1cf59e0fb0..f9e928f906 100644
--- a/src/backend/replication/logical/proto.c
+++ b/src/backend/replication/logical/proto.c
@@ -389,6 +389,58 @@ logicalrep_write_message(StringInfo out, TransactionId xid, XLogRecPtr lsn,
pq_sendbytes(out, message, sz);
}
+/*
+ * Write SEQUENCE to stream
+ */
+void
+logicalrep_write_sequence(StringInfo out, Relation rel, TransactionId xid,
+ XLogRecPtr lsn, bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ uint8 flags = 0;
+ char *relname;
+
+ pq_sendbyte(out, LOGICAL_REP_MSG_SEQUENCE);
+
+ /* transaction ID (if not valid, we're not streaming) */
+ if (TransactionIdIsValid(xid))
+ pq_sendint32(out, xid);
+
+ pq_sendint8(out, flags);
+ pq_sendint64(out, lsn);
+
+ logicalrep_write_namespace(out, RelationGetNamespace(rel));
+ relname = RelationGetRelationName(rel);
+ pq_sendstring(out, relname);
+
+ pq_sendint8(out, transactional);
+ pq_sendint8(out, created);
+ pq_sendint64(out, last_value);
+ pq_sendint64(out, log_cnt);
+ pq_sendint8(out, is_called);
+}
+
+/*
+ * Read SEQUENCE from the stream.
+ */
+void
+logicalrep_read_sequence(StringInfo in, LogicalRepSequence *seqdata)
+{
+ /* XXX skipping flags and lsn */
+ pq_getmsgint(in, 1);
+ pq_getmsgint64(in);
+
+ /* Read relation name from stream */
+ seqdata->nspname = pstrdup(logicalrep_read_namespace(in));
+ seqdata->seqname = pstrdup(pq_getmsgstring(in));
+
+ seqdata->transactional = pq_getmsgint(in, 1);
+ seqdata->created = pq_getmsgint(in, 1);
+ seqdata->last_value = pq_getmsgint64(in);
+ seqdata->log_cnt = pq_getmsgint64(in);
+ seqdata->is_called = pq_getmsgint(in, 1);
+}
+
/*
* Write relation description to the output stream.
*/
diff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c
index 2d9e1279bb..94b0290d78 100644
--- a/src/backend/replication/logical/reorderbuffer.c
+++ b/src/backend/replication/logical/reorderbuffer.c
@@ -116,6 +116,13 @@ typedef struct ReorderBufferTXNByIdEnt
ReorderBufferTXN *txn;
} ReorderBufferTXNByIdEnt;
+/* entry for hash table we use to track sequences created in running xacts */
+typedef struct ReorderBufferSequenceEnt
+{
+ RelFileNode rnode;
+ TransactionId xid;
+} ReorderBufferSequenceEnt;
+
/* data structures for (relfilenode, ctid) => (cmin, cmax) mapping */
typedef struct ReorderBufferTupleCidKey
{
@@ -337,6 +344,14 @@ ReorderBufferAllocate(void)
buffer->by_txn = hash_create("ReorderBufferByXid", 1000, &hash_ctl,
HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+ /* hash table of sequences, mapping relfilenode to XID of transaction */
+ hash_ctl.keysize = sizeof(RelFileNode);
+ hash_ctl.entrysize = sizeof(ReorderBufferSequenceEnt);
+ hash_ctl.hcxt = buffer->context;
+
+ buffer->sequences = hash_create("ReorderBufferSequenceHash", 1000, &hash_ctl,
+ HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+
buffer->by_txn_last_xid = InvalidTransactionId;
buffer->by_txn_last_txn = NULL;
@@ -523,6 +538,13 @@ ReorderBufferReturnChange(ReorderBuffer *rb, ReorderBufferChange *change,
case REORDER_BUFFER_CHANGE_INTERNAL_COMMAND_ID:
case REORDER_BUFFER_CHANGE_INTERNAL_TUPLECID:
break;
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ if (change->data.sequence.tuple)
+ {
+ ReorderBufferReturnTupleBuf(rb, change->data.sequence.tuple);
+ change->data.sequence.tuple = NULL;
+ }
+ break;
}
pfree(change);
@@ -850,6 +872,212 @@ ReorderBufferQueueMessage(ReorderBuffer *rb, TransactionId xid,
}
}
+/*
+ * Should the sequence increment be treated as transactional?
+ *
+ * Increments of sequences created (or altered) in a still-running
+ * transaction are transactional, all other increments are not.
+ */
+bool
+ReorderBufferSequenceIsTransactional(ReorderBuffer *rb,
+ RelFileNode rnode, bool created)
+{
+ bool found = false;
+
+ if (created)
+ return true;
+
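+ /*
+ * Otherwise the increment is transactional only if the relfilenode
+ * belongs to a sequence created by a still-running transaction, i.e.
+ * if it's present in the lookup hash table.
+ */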
+ hash_search(rb->sequences,
+ (void *) &rnode,
+ HASH_FIND,
+ &found);
+
+ return found;
+}
+
+
+/*
+ * A transactional sequence increment is queued to be processed upon commit
+ * and a non-transactional increment gets processed immediately.
+ *
+ * A given sequence update needs one treatment or the other, depending on
+ * context. When the sequence was created in a still-running transaction,
+ * treat the update as transactional and queue the change in that
+ * transaction. Otherwise treat it as non-transactional, so that the
+ * increment is not lost if some unrelated transaction rolls back.
+ */
+void
+ReorderBufferQueueSequence(ReorderBuffer *rb, TransactionId xid,
+ Snapshot snapshot, XLogRecPtr lsn, RepOriginId origin_id,
+ RelFileNode rnode, bool transactional, bool created,
+ ReorderBufferTupleBuf *tuplebuf)
+{
+ /*
+ * Change needs to be handled as transactional, because the sequence was
+ * created in a transaction that is still running. In that case all the
+ * changes need to be queued in that transaction, we must not send them
+ * to the downstream until the transaction commits.
+ *
+ * There's a bit of trouble with subtransactions - we can't queue it
+ * into the subxact, because it might be rolled back and we'd lose the
+ * increment. We need to queue it into the same (sub)xact that created
+ * the sequence, which is why we track the XID in the hash table.
+ */
+ if (transactional)
+ {
+ MemoryContext oldcontext;
+ ReorderBufferChange *change;
+
+ /* lookup sequence by relfilenode */
+ ReorderBufferSequenceEnt *ent;
+ bool found;
+
+ /* transactional changes require a transaction */
+ Assert(xid != InvalidTransactionId);
+
+ /* search the lookup table (entering a new element for the "create" case) */
+ ent = hash_search(rb->sequences,
+ (void *) &rnode,
+ created ? HASH_ENTER : HASH_FIND,
+ &found);
+
+ /*
+ * If this is the "create" increment, we must not have found any
+ * pre-existing entry in the hash table (i.e. there must not be
+ * any conflicting sequence).
+ */
+ Assert(!(created && found));
+
+ /* But we must have either created or found an existing entry. */
+ Assert(created || found);
+
+ /*
+ * When creating the sequence, remember the XID of the transaction
+ * that created it.
+ */
+ if (created)
+ ent->xid = xid;
+
+ /* XXX Maybe check that we're still in the same top-level xact? */
+
+ /* OK, allocate and queue the change */
+ oldcontext = MemoryContextSwitchTo(rb->context);
+
+ change = ReorderBufferGetChange(rb);
+
+ change->action = REORDER_BUFFER_CHANGE_SEQUENCE;
+ change->origin_id = origin_id;
+
+ memcpy(&change->data.sequence.relnode, &rnode, sizeof(RelFileNode));
+
+ change->data.sequence.created = created;
+ change->data.sequence.tuple = tuplebuf;
+
+ /* add it to the same subxact that created the sequence */
+ ReorderBufferQueueChange(rb, ent->xid, lsn, change, false);
+
+ MemoryContextSwitchTo(oldcontext);
+ }
+ else
+ {
+ /*
+ * This increment is for a sequence that was not created in any
+ * running transaction, so we treat it as non-transactional and
+ * just send it to the output plugin directly.
+ */
+ ReorderBufferTXN *txn = NULL;
+ volatile Snapshot snapshot_now = snapshot;
+ bool using_subtxn;
+
+#ifdef USE_ASSERT_CHECKING
+ /* All "creates" have to be handled as transactional. */
+ Assert(!created);
+
+ /* Make sure the sequence is not in the hash table. */
+ {
+ bool found;
+ hash_search(rb->sequences,
+ (void *) &rnode,
+ HASH_FIND, &found);
+ Assert(!found);
+ }
+#endif
+
+ if (xid != InvalidTransactionId)
+ txn = ReorderBufferTXNByXid(rb, xid, true, NULL, lsn, true);
+
+ /* setup snapshot to allow catalog access */
+ SetupHistoricSnapshot(snapshot_now, NULL);
+
+ /*
+ * Decoding needs access to syscaches et al., which in turn use
+ * heavyweight locks and such. Thus we need to have enough state around to
+ * keep track of those. The easiest way is to simply use a transaction
+ * internally. That also allows us to easily enforce that nothing writes
+ * to the database by checking for xid assignments.
+ *
+ * When we're called via the SQL SRF there's already a transaction
+ * started, so start an explicit subtransaction there.
+ */
+ using_subtxn = IsTransactionOrTransactionBlock();
+
+ PG_TRY();
+ {
+ Relation relation;
+ HeapTuple tuple;
+ bool isnull;
+ int64 last_value, log_cnt;
+ bool is_called;
+ Oid reloid;
+
+ if (using_subtxn)
+ BeginInternalSubTransaction("sequence");
+ else
+ StartTransactionCommand();
+
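+ /* Translate the relfilenode from the WAL record back to the sequence OID. */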
+ reloid = RelidByRelfilenode(rnode.spcNode, rnode.relNode);
+
+ if (reloid == InvalidOid)
+ elog(ERROR, "could not map filenode \"%s\" to relation OID",
+ relpathperm(rnode,
+ MAIN_FORKNUM));
+
+ relation = RelationIdGetRelation(reloid);
+ tuple = &tuplebuf->tuple;
+
+ /*
+ * Extract the internal sequence values, describing the state.
+ *
+ * XXX Seems a bit strange to access it directly. Maybe there's
+ * a better / more correct way?
+ */
+ last_value = heap_getattr(tuple, 1, RelationGetDescr(relation), &isnull);
+ log_cnt = heap_getattr(tuple, 2, RelationGetDescr(relation), &isnull);
+ is_called = heap_getattr(tuple, 3, RelationGetDescr(relation), &isnull);
+
+ rb->sequence(rb, txn, lsn, relation, transactional, created,
+ last_value, log_cnt, is_called);
+
+ RelationClose(relation);
+
+ TeardownHistoricSnapshot(false);
+
+ AbortCurrentTransaction();
+
+ if (using_subtxn)
+ RollbackAndReleaseCurrentSubTransaction();
+ }
+ PG_CATCH();
+ {
+ TeardownHistoricSnapshot(true);
+
+ AbortCurrentTransaction();
+
+ if (using_subtxn)
+ RollbackAndReleaseCurrentSubTransaction();
+
+ PG_RE_THROW();
+ }
+ PG_END_TRY();
+ }
+}
+
/*
* AssertTXNLsnOrder
* Verify LSN ordering of transaction lists in the reorderbuffer
@@ -1526,6 +1754,31 @@ ReorderBufferCleanupTXN(ReorderBuffer *rb, ReorderBufferTXN *txn)
&found);
Assert(found);
+ /*
+ * Remove sequences created in this transaction (if any).
+ *
+ * There's no way to search by XID, so we simply do a seqscan of all
+ * the entries in the hash table. Hopefully there are only a couple
+ * entries in most cases - people generally don't create many new
+ * sequences over and over.
+ */
+ {
+ HASH_SEQ_STATUS scan_status;
+ ReorderBufferSequenceEnt *ent;
+
+ hash_seq_init(&scan_status, rb->sequences);
+ while ((ent = (ReorderBufferSequenceEnt *) hash_seq_search(&scan_status)) != NULL)
+ {
+ /* skip sequences not from this transaction */
+ if (ent->xid != txn->xid)
+ continue;
+
+ (void) hash_search(rb->sequences,
+ (void *) &(ent->rnode),
+ HASH_REMOVE, NULL);
+ }
+ }
+
/* remove entries spilled to disk */
if (rbtxn_is_serialized(txn))
ReorderBufferRestoreCleanup(rb, txn);
@@ -1941,6 +2194,39 @@ ReorderBufferApplyMessage(ReorderBuffer *rb, ReorderBufferTXN *txn,
change->data.msg.message);
}
+/*
+ * Helper function for ReorderBufferProcessTXN for applying sequences.
+ */
+static inline void
+ReorderBufferApplySequence(ReorderBuffer *rb, ReorderBufferTXN *txn,
+ Relation relation, ReorderBufferChange *change,
+ bool streaming)
+{
+ HeapTuple tuple;
+ bool isnull;
+ int64 last_value, log_cnt;
+ bool is_called;
+
+ /* FIXME support streaming */
+ Assert(!streaming);
+
+ tuple = &change->data.sequence.tuple->tuple;
+
+ /*
+ * Extract the internal sequence values, describing the state.
+ *
+ * XXX Seems a bit strange to access it directly. Maybe there's
+ * a better / more correct way?
+ */
+ last_value = heap_getattr(tuple, 1, RelationGetDescr(relation), &isnull);
+ log_cnt = heap_getattr(tuple, 2, RelationGetDescr(relation), &isnull);
+ is_called = heap_getattr(tuple, 3, RelationGetDescr(relation), &isnull);
+
+ rb->sequence(rb, txn, change->lsn, relation, true, /* gotta be transactional */
+ change->data.sequence.created,
+ last_value, log_cnt, is_called);
+}
+
/*
* Function to store the command id and snapshot at the end of the current
* stream so that we can reuse the same while sending the next stream.
@@ -2357,6 +2643,31 @@ ReorderBufferProcessTXN(ReorderBuffer *rb, ReorderBufferTXN *txn,
case REORDER_BUFFER_CHANGE_INTERNAL_TUPLECID:
elog(ERROR, "tuplecid value in changequeue");
break;
+
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ Assert(snapshot_now);
+
+ reloid = RelidByRelfilenode(change->data.sequence.relnode.spcNode,
+ change->data.sequence.relnode.relNode);
+
+ if (reloid == InvalidOid)
+ elog(ERROR, "could not map filenode \"%s\" to relation OID",
+ relpathperm(change->data.sequence.relnode,
+ MAIN_FORKNUM));
+
+ relation = RelationIdGetRelation(reloid);
+
+ if (!RelationIsValid(relation))
+ elog(ERROR, "could not open relation with OID %u (for filenode \"%s\")",
+ reloid,
+ relpathperm(change->data.sequence.relnode,
+ MAIN_FORKNUM));
+
+ if (RelationIsLogicallyLogged(relation))
+ ReorderBufferApplySequence(rb, txn, relation, change, streaming);
+
+ RelationClose(relation);
+ break;
}
}
@@ -3751,6 +4062,39 @@ ReorderBufferSerializeChange(ReorderBuffer *rb, ReorderBufferTXN *txn,
memcpy(data, change->data.truncate.relids, size);
data += size;
+ break;
+ }
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ {
+ char *data;
+ ReorderBufferTupleBuf *tup;
+ Size len = 0;
+
+ tup = change->data.sequence.tuple;
+
+ if (tup)
+ {
+ sz += sizeof(HeapTupleData);
+ len = tup->tuple.t_len;
+ sz += len;
+ }
+
+ /* make sure we have enough space */
+ ReorderBufferSerializeReserve(rb, sz);
+
+ data = ((char *) rb->outbuf) + sizeof(ReorderBufferDiskChange);
+ /* might have been reallocated above */
+ ondisk = (ReorderBufferDiskChange *) rb->outbuf;
+
+ if (len)
+ {
+ memcpy(data, &tup->tuple, sizeof(HeapTupleData));
+ data += sizeof(HeapTupleData);
+
+ memcpy(data, tup->tuple.t_data, len);
+ data += len;
+ }
+
break;
}
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
@@ -4014,6 +4358,22 @@ ReorderBufferChangeSize(ReorderBufferChange *change)
{
sz += sizeof(Oid) * change->data.truncate.nrelids;
+ break;
+ }
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ {
+ ReorderBufferTupleBuf *tup;
+ Size len = 0;
+
+ tup = change->data.sequence.tuple;
+
+ if (tup)
+ {
+ sz += sizeof(HeapTupleData);
+ len = tup->tuple.t_len;
+ sz += len;
+ }
+
break;
}
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
@@ -4314,6 +4674,30 @@ ReorderBufferRestoreChange(ReorderBuffer *rb, ReorderBufferTXN *txn,
break;
}
+
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ if (change->data.sequence.tuple)
+ {
+ uint32 tuplelen = ((HeapTuple) data)->t_len;
+
+ change->data.sequence.tuple =
+ ReorderBufferGetTupleBuf(rb, tuplelen - SizeofHeapTupleHeader);
+
+ /* restore ->tuple */
+ memcpy(&change->data.sequence.tuple->tuple, data,
+ sizeof(HeapTupleData));
+ data += sizeof(HeapTupleData);
+
+ /* reset t_data pointer into the new tuplebuf */
+ change->data.sequence.tuple->tuple.t_data =
+ ReorderBufferTupleBufData(change->data.sequence.tuple);
+
+ /* restore tuple data itself */
+ memcpy(change->data.sequence.tuple->tuple.t_data, data, tuplelen);
+ data += tuplelen;
+ }
+ break;
+
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
case REORDER_BUFFER_CHANGE_INTERNAL_COMMAND_ID:
case REORDER_BUFFER_CHANGE_INTERNAL_TUPLECID:
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 67f907cdd9..d1ba2a9645 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -99,6 +99,7 @@
#include "catalog/pg_subscription_rel.h"
#include "catalog/pg_type.h"
#include "commands/copy.h"
+#include "commands/sequence.h"
#include "miscadmin.h"
#include "parser/parse_relation.h"
#include "pgstat.h"
@@ -353,6 +354,12 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
*
* If the synchronization position is reached (SYNCDONE), then the table can
* be marked as READY and is no longer tracked.
+ *
+ * XXX This needs to handle sequences too - after AlterSubscription_refresh
+ * starts caring about sequences, GetSubscriptionNotReadyRelations won't
+ * return just tables, and we'll have to sync them here. Not sure it's worth
+ * creating a new "sync" worker per sequence, maybe we should just sync them
+ * in the current process (it's pretty light-weight).
*/
static void
process_syncing_tables_for_apply(XLogRecPtr current_lsn)
@@ -870,6 +877,99 @@ copy_table(Relation rel)
logicalrep_rel_close(relmapentry, NoLock);
}
+
+
+/*
+ * Fetch the current state of a sequence (last_value, log_cnt, is_called)
+ * from the publisher, by reading the sequence relation directly.
+ */
+static void
+fetch_sequence_data(char *nspname, char *relname,
+ int64 *last_value, int64 *log_cnt, bool *is_called)
+{
+ WalRcvExecResult *res;
+ StringInfoData cmd;
+ TupleTableSlot *slot;
+ Oid tableRow[3] = {INT8OID, INT8OID, BOOLOID};
+
+ initStringInfo(&cmd);
+ appendStringInfo(&cmd, "SELECT last_value, log_cnt, is_called\n"
+ " FROM %s", quote_qualified_identifier(nspname, relname));
+
+ res = walrcv_exec(LogRepWorkerWalRcvConn, cmd.data, 3, tableRow);
+ pfree(cmd.data);
+
+ if (res->status != WALRCV_OK_TUPLES)
+ ereport(ERROR,
+ (errmsg("could not receive list of replicated tables from the publisher: %s",
+ res->err)));
+
+ /* Process the sequence. */
+ slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+ while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+ {
+ bool isnull;
+
+ *last_value = DatumGetInt64(slot_getattr(slot, 1, &isnull));
+ Assert(!isnull);
+
+ *log_cnt = DatumGetInt64(slot_getattr(slot, 2, &isnull));
+ Assert(!isnull);
+
+ *is_called = DatumGetBool(slot_getattr(slot, 3, &isnull));
+ Assert(!isnull);
+
+ ExecClearTuple(slot);
+ }
+ ExecDropSingleTupleTableSlot(slot);
+
+ walrcv_clear_result(res);
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Caller is responsible for locking the local relation.
+ */
+static void
+copy_sequence(Relation rel)
+{
+ LogicalRepRelMapEntry *relmapentry;
+ LogicalRepRelation lrel;
+ int64 last_value = 0,
+ log_cnt = 0;
+ bool is_called = false;
+
+ /* Get the publisher relation info. */
+ fetch_remote_table_info(get_namespace_name(RelationGetNamespace(rel)),
+ RelationGetRelationName(rel), &lrel);
+
+ /* Put the relation into relmap. */
+ logicalrep_relmap_update(&lrel);
+
+ /* Map the publisher relation to local one. */
+ relmapentry = logicalrep_rel_open(lrel.remoteid, NoLock);
+ Assert(rel == relmapentry->localrel);
+
+ /* Fetch the current sequence state from the publisher. */
+
+ Assert(lrel.relkind == RELKIND_SEQUENCE);
+
+ fetch_sequence_data(lrel.nspname, lrel.relname, &last_value, &log_cnt, &is_called);
+
+ elog(WARNING, "sequence %s info last_value %ld log_cnt %ld is_called %d",
+ quote_qualified_identifier(lrel.nspname, lrel.relname),
+ last_value, log_cnt, is_called);
+
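+ /* Set the local sequence to the state received from the publisher. */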
+ ResetSequence2(RelationGetRelid(rel), last_value, log_cnt, is_called);
+
+ logicalrep_rel_close(relmapentry, NoLock);
+}
+
+
+
+
/*
* Determine the tablesync slot name.
*
@@ -1102,10 +1202,20 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
originname)));
}
- /* Now do the initial data copy */
- PushActiveSnapshot(GetTransactionSnapshot());
- copy_table(rel);
- PopActiveSnapshot();
+ if (get_rel_relkind(RelationGetRelid(rel)) == RELKIND_SEQUENCE)
+ {
+ /* Now do the initial sequence copy */
+ PushActiveSnapshot(GetTransactionSnapshot());
+ copy_sequence(rel);
+ PopActiveSnapshot();
+ }
+ else
+ {
+ /* Now do the initial data copy */
+ PushActiveSnapshot(GetTransactionSnapshot());
+ copy_table(rel);
+ PopActiveSnapshot();
+ }
res = walrcv_exec(LogRepWorkerWalRcvConn, "COMMIT", 0, NULL);
if (res->status != WALRCV_OK_COMMAND)
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 6ba447ea97..1e85a65890 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -70,6 +70,7 @@
#include "catalog/pg_tablespace.h"
#include "commands/tablecmds.h"
#include "commands/tablespace.h"
+#include "commands/sequence.h"
#include "commands/trigger.h"
#include "executor/executor.h"
#include "executor/execPartition.h"
@@ -813,6 +814,36 @@ apply_handle_origin(StringInfo s)
errmsg("ORIGIN message sent out of order")));
}
+/*
+ * Handle SEQUENCE message.
+ */
+static void
+apply_handle_sequence(StringInfo s)
+{
+ LogicalRepSequence seq;
+ Oid relid;
+
+ // FIXME
+ // if (handle_streamed_transaction(LOGICAL_REP_MSG_SEQUENCE, s))
+ // return;
+ ensure_transaction();
+
+ logicalrep_read_sequence(s, &seq);
+
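+ /* Look up the local sequence by the (schema, name) sent by the publisher. */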
+ relid = RangeVarGetRelid(makeRangeVar(seq.nspname,
+ seq.seqname, -1),
+ RowExclusiveLock, false);
+
+ elog(WARNING, "applying sequence transactional %d created %d last_value %ld log_cnt %ld is_called %d",
+ seq.transactional, seq.created, seq.last_value, seq.log_cnt, seq.is_called);
+
+ /* apply the sequence change */
+ ResetSequence2(relid, seq.last_value, seq.log_cnt, seq.is_called);
+
+ /* Commit the transaction used to apply the sequence change */
+ CommitTransactionCommand();
+}
+
/*
* Handle STREAM START message.
*/
@@ -2017,6 +2048,10 @@ apply_dispatch(StringInfo s)
*/
return;
+ case LOGICAL_REP_MSG_SEQUENCE:
+ apply_handle_sequence(s);
+ return;
+
case LOGICAL_REP_MSG_STREAM_START:
apply_handle_stream_start(s);
return;
diff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c
index fe12d08a94..37aeb42d22 100644
--- a/src/backend/replication/pgoutput/pgoutput.c
+++ b/src/backend/replication/pgoutput/pgoutput.c
@@ -49,6 +49,10 @@ static void pgoutput_message(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn, XLogRecPtr message_lsn,
bool transactional, const char *prefix,
Size sz, const char *message);
+static void pgoutput_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation rel, bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called);
static bool pgoutput_origin_filter(LogicalDecodingContext *ctx,
RepOriginId origin_id);
static void pgoutput_stream_start(struct LogicalDecodingContext *ctx,
@@ -144,6 +148,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->change_cb = pgoutput_change;
cb->truncate_cb = pgoutput_truncate;
cb->message_cb = pgoutput_message;
+ cb->sequence_cb = pgoutput_sequence;
cb->commit_cb = pgoutput_commit_txn;
cb->filter_by_origin_cb = pgoutput_origin_filter;
cb->shutdown_cb = pgoutput_shutdown;
@@ -166,11 +171,13 @@ parse_output_parameters(List *options, PGOutputData *data)
bool publication_names_given = false;
bool binary_option_given = false;
bool messages_option_given = false;
+ bool sequences_option_given = false;
bool streaming_given = false;
data->binary = false;
data->streaming = false;
data->messages = false;
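+ /* unlike "messages", the "sequences" option defaults to true */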
+ data->sequences = true;
foreach(lc, options)
{
@@ -236,6 +243,16 @@ parse_output_parameters(List *options, PGOutputData *data)
data->messages = defGetBoolean(defel);
}
+ else if (strcmp(defel->defname, "sequences") == 0)
+ {
+ if (sequences_option_given)
+ ereport(ERROR,
+ (errcode(ERRCODE_SYNTAX_ERROR),
+ errmsg("conflicting or redundant options")));
+ sequences_option_given = true;
+
+ data->sequences = defGetBoolean(defel);
+ }
else if (strcmp(defel->defname, "streaming") == 0)
{
if (streaming_given)
@@ -753,6 +770,47 @@ pgoutput_message(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
OutputPluginWrite(ctx, true);
}
+static void
+pgoutput_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation rel, bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ PGOutputData *data = (PGOutputData *) ctx->output_plugin_private;
+ TransactionId xid = InvalidTransactionId;
+ RelationSyncEntry *relentry;
+
+ if (!data->sequences)
+ return;
+
+ if (!is_publishable_relation(rel))
+ return;
+
+ /* XXX handle (in_streaming) here */
+
+ relentry = get_rel_sync_entry(data, RelationGetRelid(rel));
+
+ /*
+ * First check the sequence filter.
+ *
+ * We handle just REORDER_BUFFER_CHANGE_SEQUENCE here.
+ */
+ if (!relentry->pubactions.pubsequence)
+ return;
+
+ OutputPluginPrepareWrite(ctx, true);
+ logicalrep_write_sequence(ctx->out,
+ rel,
+ xid,
+ sequence_lsn,
+ transactional,
+ created,
+ last_value,
+ log_cnt,
+ is_called);
+ OutputPluginWrite(ctx, true);
+}
+
/*
* Currently we always forward.
*/
@@ -1026,7 +1084,8 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
entry->streamed_txns = NIL;
entry->replicate_valid = false;
entry->pubactions.pubinsert = entry->pubactions.pubupdate =
- entry->pubactions.pubdelete = entry->pubactions.pubtruncate = false;
+ entry->pubactions.pubdelete = entry->pubactions.pubtruncate =
+ entry->pubactions.pubsequence = false;
entry->publish_as_relid = InvalidOid;
entry->map = NULL; /* will be set by maybe_send_schema() if needed */
}
@@ -1037,6 +1096,7 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
List *pubids = GetRelationPublications(relid);
ListCell *lc;
Oid publish_as_relid = relid;
+ bool is_sequence = (get_rel_relkind(relid) == RELKIND_SEQUENCE);
/* Reload publications if needed before use. */
if (!publications_valid)
@@ -1060,12 +1120,23 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
Publication *pub = lfirst(lc);
bool publish = false;
- if (pub->alltables)
+ if (pub->alltables && (!is_sequence))
{
publish = true;
if (pub->pubviaroot && am_partition)
publish_as_relid = llast_oid(get_partition_ancestors(relid));
}
+ else if (pub->allsequences && is_sequence)
+ {
+ publish = true;
+ }
+
+ /* if a sequence, just cross-check the list of publications */
+ if (!publish && is_sequence)
+ {
+ if (list_member_oid(pubids, pub->oid))
+ publish = true;
+ }
if (!publish)
{
@@ -1116,10 +1187,12 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
entry->pubactions.pubupdate |= pub->pubactions.pubupdate;
entry->pubactions.pubdelete |= pub->pubactions.pubdelete;
entry->pubactions.pubtruncate |= pub->pubactions.pubtruncate;
+ entry->pubactions.pubsequence |= pub->pubactions.pubsequence;
}
if (entry->pubactions.pubinsert && entry->pubactions.pubupdate &&
- entry->pubactions.pubdelete && entry->pubactions.pubtruncate)
+ entry->pubactions.pubdelete && entry->pubactions.pubtruncate &&
+ entry->pubactions.pubsequence)
break;
}
@@ -1264,5 +1337,6 @@ rel_sync_cache_publication_cb(Datum arg, int cacheid, uint32 hashvalue)
entry->pubactions.pubupdate = false;
entry->pubactions.pubdelete = false;
entry->pubactions.pubtruncate = false;
+ entry->pubactions.pubsequence = false;
}
}
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index fd05615e76..890829bc0c 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -5487,6 +5487,7 @@ GetRelationPublicationActions(Relation relation)
pubactions->pubupdate |= pubform->pubupdate;
pubactions->pubdelete |= pubform->pubdelete;
pubactions->pubtruncate |= pubform->pubtruncate;
+ pubactions->pubsequence |= pubform->pubsequence;
ReleaseSysCache(tup);
@@ -5495,7 +5496,8 @@ GetRelationPublicationActions(Relation relation)
* other publications.
*/
if (pubactions->pubinsert && pubactions->pubupdate &&
- pubactions->pubdelete && pubactions->pubtruncate)
+ pubactions->pubdelete && pubactions->pubtruncate &&
+ pubactions->pubsequence)
break;
}
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 32c1bdfdca..147745d94c 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1640,7 +1640,7 @@ psql_completion(const char *text, int start, int end)
/* ALTER PUBLICATION <name> */
else if (Matches("ALTER", "PUBLICATION", MatchAny))
- COMPLETE_WITH("ADD TABLE", "DROP TABLE", "OWNER TO", "RENAME TO", "SET");
+ COMPLETE_WITH("ADD TABLE", "DROP TABLE", "ADD SEQUENCE", "DROP SEQUENCE", "OWNER TO", "RENAME TO", "SET");
/* ALTER PUBLICATION <name> SET */
else if (Matches("ALTER", "PUBLICATION", MatchAny, "SET"))
COMPLETE_WITH("(", "TABLE");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index acbcae4607..e3043e0f63 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -11442,6 +11442,11 @@
provolatile => 's', prorettype => 'oid', proargtypes => 'text',
proallargtypes => '{text,oid}', proargmodes => '{i,o}',
proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_tables' },
+{ oid => '8000', descr => 'get OIDs of sequences in a publication',
+ proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+ provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+ proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+ proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
{ oid => '6121',
descr => 'returns whether a relation can be part of a publication',
proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 1b31fee9e3..d1c9a18b05 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
*/
bool puballtables;
+ /*
+ * indicates that this is a special publication which should encompass all
+ * sequences in the database (except for the unlogged and temp ones)
+ */
+ bool puballsequences;
+
/* true if inserts are published */
bool pubinsert;
@@ -52,6 +58,9 @@ CATALOG(pg_publication,6104,PublicationRelationId)
/* true if truncates are published */
bool pubtruncate;
+ /* true if sequences are published */
+ bool pubsequence;
+
/* true if partition changes are published using root schema */
bool pubviaroot;
} FormData_pg_publication;
@@ -74,6 +83,7 @@ typedef struct PublicationActions
bool pubupdate;
bool pubdelete;
bool pubtruncate;
+ bool pubsequence;
} PublicationActions;
typedef struct Publication
@@ -81,6 +91,7 @@ typedef struct Publication
Oid oid;
char *name;
bool alltables;
+ bool allsequences;
bool pubviaroot;
PublicationActions pubactions;
} Publication;
@@ -109,6 +120,9 @@ extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
extern List *GetAllTablesPublications(void);
extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
+extern List *GetPublicationSequenceRelations(Oid pubid);
+
extern bool is_publishable_relation(Relation rel);
extern ObjectAddress publication_add_relation(Oid pubid, Relation targetrel,
bool if_not_exists);
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 40544dd4c7..c28e8695cb 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -48,6 +48,7 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
typedef struct xl_seq_rec
{
RelFileNode node;
+ bool created; /* is this a CREATE SEQUENCE */
/* SEQUENCE TUPLE DATA FOLLOWS AT THE END */
} xl_seq_rec;
@@ -59,6 +60,7 @@ extern ObjectAddress DefineSequence(ParseState *pstate, CreateSeqStmt *stmt);
extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
extern void DeleteSequenceTuple(Oid relid);
extern void ResetSequence(Oid seq_relid);
+extern void ResetSequence2(Oid seq_relid, int64 last_value, int64 log_cnt, bool is_called);
extern void ResetSequenceCaches(void);
extern void seq_redo(XLogReaderState *rptr);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index ef73342019..b45c103b6c 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -3624,8 +3624,10 @@ typedef struct AlterPublicationStmt
/* parameters used for ALTER PUBLICATION ... ADD/DROP TABLE */
List *tables; /* List of tables to add/drop */
+ List *sequences; /* List of sequences to add/drop */
bool for_all_tables; /* Special publication for all tables in db */
- DefElemAction tableAction; /* What action to perform with the tables */
+ bool for_all_sequences; /* Special publication for all sequences in db */
+ DefElemAction action; /* What action to perform with the tables/sequences */
} AlterPublicationStmt;
typedef struct CreateSubscriptionStmt
diff --git a/src/include/replication/logicalproto.h b/src/include/replication/logicalproto.h
index 55b90c03ea..f3a7d18fca 100644
--- a/src/include/replication/logicalproto.h
+++ b/src/include/replication/logicalproto.h
@@ -55,6 +55,7 @@ typedef enum LogicalRepMsgType
LOGICAL_REP_MSG_RELATION = 'R',
LOGICAL_REP_MSG_TYPE = 'Y',
LOGICAL_REP_MSG_MESSAGE = 'M',
+ LOGICAL_REP_MSG_SEQUENCE = 'X', /* FIXME change */
LOGICAL_REP_MSG_STREAM_START = 'S',
LOGICAL_REP_MSG_STREAM_END = 'E',
LOGICAL_REP_MSG_STREAM_COMMIT = 'c',
@@ -107,6 +108,19 @@ typedef struct LogicalRepTyp
char *typname; /* name of the remote type */
} LogicalRepTyp;
+/* Sequence info */
+typedef struct LogicalRepSequence
+{
+ Oid remoteid; /* unique id of the remote sequence */
+ char *nspname; /* schema name of remote sequence */
+ char *seqname; /* name of the remote sequence */
+ bool transactional;
+ bool created;
+ int64 last_value;
+ int64 log_cnt; /* XXX probably not needed? */
+ bool is_called; /* XXX probably not needed? */
+} LogicalRepSequence;
+
/* Transaction info */
typedef struct LogicalRepBeginData
{
@@ -154,6 +168,12 @@ extern List *logicalrep_read_truncate(StringInfo in,
bool *cascade, bool *restart_seqs);
extern void logicalrep_write_message(StringInfo out, TransactionId xid, XLogRecPtr lsn,
bool transactional, const char *prefix, Size sz, const char *message);
+extern void logicalrep_write_sequence(StringInfo out, Relation rel,
+ TransactionId xid, XLogRecPtr lsn,
+ bool transactional, bool created,
+ int64 last_value, int64 log_cnt,
+ bool is_called);
+extern void logicalrep_read_sequence(StringInfo in, LogicalRepSequence *seqdata);
extern void logicalrep_write_rel(StringInfo out, TransactionId xid,
Relation rel);
extern LogicalRepRelation *logicalrep_read_rel(StringInfo in);
diff --git a/src/include/replication/output_plugin.h b/src/include/replication/output_plugin.h
index 810495ed0e..e3e1b03f32 100644
--- a/src/include/replication/output_plugin.h
+++ b/src/include/replication/output_plugin.h
@@ -88,6 +88,19 @@ typedef void (*LogicalDecodeMessageCB) (struct LogicalDecodingContext *ctx,
Size message_size,
const char *message);
+/*
+ * Called for logical decoding of sequence changes.
+ */
+typedef void (*LogicalDecodeSequenceCB) (struct LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional,
+ bool created,
+ int64 last_value,
+ int64 log_cnt,
+ bool is_called);
+
/*
* Filter changes by origin.
*/
@@ -219,6 +232,7 @@ typedef struct OutputPluginCallbacks
LogicalDecodeTruncateCB truncate_cb;
LogicalDecodeCommitCB commit_cb;
LogicalDecodeMessageCB message_cb;
+ LogicalDecodeSequenceCB sequence_cb;
LogicalDecodeFilterByOriginCB filter_by_origin_cb;
LogicalDecodeShutdownCB shutdown_cb;
diff --git a/src/include/replication/pgoutput.h b/src/include/replication/pgoutput.h
index 51e7c0348d..a5aec2928a 100644
--- a/src/include/replication/pgoutput.h
+++ b/src/include/replication/pgoutput.h
@@ -27,6 +27,7 @@ typedef struct PGOutputData
bool binary;
bool streaming;
bool messages;
+ bool sequences;
} PGOutputData;
#endif /* PGOUTPUT_H */
diff --git a/src/include/replication/reorderbuffer.h b/src/include/replication/reorderbuffer.h
index 0c6e9d1cb9..5190cdf196 100644
--- a/src/include/replication/reorderbuffer.h
+++ b/src/include/replication/reorderbuffer.h
@@ -63,7 +63,8 @@ enum ReorderBufferChangeType
REORDER_BUFFER_CHANGE_INTERNAL_TUPLECID,
REORDER_BUFFER_CHANGE_INTERNAL_SPEC_INSERT,
REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM,
- REORDER_BUFFER_CHANGE_TRUNCATE
+ REORDER_BUFFER_CHANGE_TRUNCATE,
+ REORDER_BUFFER_CHANGE_SEQUENCE,
};
/* forward declaration */
@@ -157,6 +158,14 @@ typedef struct ReorderBufferChange
uint32 ninvalidations; /* Number of messages */
SharedInvalidationMessage *invalidations; /* invalidation message */
} inval;
+
+ /* Context data for Sequence changes */
+ struct
+ {
+ RelFileNode relnode;
+ bool created;
+ ReorderBufferTupleBuf *tuple;
+ } sequence;
} data;
/*
@@ -425,6 +434,15 @@ typedef void (*ReorderBufferMessageCB) (ReorderBuffer *rb,
const char *prefix, Size sz,
const char *message);
+/* sequence callback signature */
+typedef void (*ReorderBufferSequenceCB) (ReorderBuffer *rb,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional, bool created,
+ int64 last_value, int64 log_cnt,
+ bool is_called);
+
/* begin prepare callback signature */
typedef void (*ReorderBufferBeginPrepareCB) (ReorderBuffer *rb,
ReorderBufferTXN *txn);
@@ -506,6 +524,12 @@ struct ReorderBuffer
*/
HTAB *by_txn;
+ /*
+ * relfilenode => XID lookup table for sequences created in a transaction
+ * (also includes altered sequences, which get a new relfilenode)
+ */
+ HTAB *sequences;
+
/*
* Transactions that could be a toplevel xact, ordered by LSN of the first
* record bearing that xid.
@@ -536,6 +560,7 @@ struct ReorderBuffer
ReorderBufferApplyTruncateCB apply_truncate;
ReorderBufferCommitCB commit;
ReorderBufferMessageCB message;
+ ReorderBufferSequenceCB sequence;
/*
* Callbacks to be called when streaming a transaction at prepare time.
@@ -630,6 +655,10 @@ void ReorderBufferQueueChange(ReorderBuffer *, TransactionId,
void ReorderBufferQueueMessage(ReorderBuffer *, TransactionId, Snapshot snapshot, XLogRecPtr lsn,
bool transactional, const char *prefix,
Size message_size, const char *message);
+void ReorderBufferQueueSequence(ReorderBuffer *rb, TransactionId xid,
+ Snapshot snapshot, XLogRecPtr lsn, RepOriginId origin_id,
+ RelFileNode rnode, bool transactional, bool created,
+ ReorderBufferTupleBuf *tuplebuf);
void ReorderBufferCommit(ReorderBuffer *, TransactionId,
XLogRecPtr commit_lsn, XLogRecPtr end_lsn,
TimestampTz commit_time, RepOriginId origin_id, XLogRecPtr origin_lsn);
@@ -677,4 +706,7 @@ void ReorderBufferSetRestartPoint(ReorderBuffer *, XLogRecPtr ptr);
void StartupReorderBuffer(void);
+bool ReorderBufferSequenceIsTransactional(ReorderBuffer *rb,
+ RelFileNode rnode, bool created);
+
#endif
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 63d6ab7a4e..2cbad49f2e 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -158,8 +158,8 @@ DROP TABLE testpub_parted1;
DROP PUBLICATION testpub_forparted, testpub_forparted1;
-- fail - view
CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_view;
-ERROR: "testpub_view" is not a table
-DETAIL: Only tables can be added to publications.
+ERROR: "testpub_view" is not a table or sequence
+DETAIL: Only tables and sequences can be added to publications.
SET client_min_messages = 'ERROR';
CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1, pub_test.testpub_nopk;
RESET client_min_messages;
@@ -180,8 +180,8 @@ Tables:
-- fail - view
ALTER PUBLICATION testpub_default ADD TABLE testpub_view;
-ERROR: "testpub_view" is not a table
-DETAIL: Only tables can be added to publications.
+ERROR: "testpub_view" is not a table or sequence
+DETAIL: Only tables and sequences can be added to publications.
ALTER PUBLICATION testpub_default ADD TABLE testpub_tbl1;
ALTER PUBLICATION testpub_default SET TABLE testpub_tbl1;
ALTER PUBLICATION testpub_default ADD TABLE pub_test.testpub_nopk;
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index e5ab11275d..393be871be 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1451,6 +1451,14 @@ pg_prepared_xacts| SELECT p.transaction,
FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+ n.nspname AS schemaname,
+ c.relname AS sequencename
+ FROM pg_publication p,
+ LATERAL pg_get_publication_sequences((p.pubname)::text) gpt(relid),
+ (pg_class c
+ JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+ WHERE (c.oid = gpt.relid);
pg_publication_tables| SELECT p.pubname,
n.nspname AS schemaname,
c.relname AS tablename
Hi,
It seems the cfbot was not entirely happy with the patch on some platforms,
so here's a fixed version. There was a bogus call to the non-existent
ensure_transaction function, and a silly bug in the transaction management
in apply_handle_sequence.
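While at it, here's roughly how the new bits fit together, adapted from the
attached regression tests and docs (the object names are arbitrary and the
syntax simply reflects the current state of the patch, so it may still change):
-- decode sequence increments with test_decoding (skipped unless the new
-- skip-sequences option is disabled)
SELECT 'init' FROM pg_create_logical_replication_slot('seq_slot', 'test_decoding');
CREATE SEQUENCE s;
SELECT nextval('s');
SELECT data FROM pg_logical_slot_get_changes('seq_slot', NULL, NULL, 'skip-sequences', '0');
-- add a sequence to a publication and inspect the new catalog view
CREATE PUBLICATION seq_pub;
ALTER PUBLICATION seq_pub ADD SEQUENCE s;
SELECT * FROM pg_publication_sequences WHERE pubname = 'seq_pub';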
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Attachments:
0001-Logical-decoding-replication-of-sequences-20210613.patch (text/x-patch)
From 8c9a36e8e9bd38ddb7e0c449ccdaf0db46ad28d5 Mon Sep 17 00:00:00 2001
From: Tomas Vondra <tomas.vondra@postgresql.org>
Date: Sun, 13 Jun 2021 23:10:18 +0200
Subject: [PATCH] Logical decoding / replication of sequences
---
contrib/test_decoding/Makefile | 3 +-
contrib/test_decoding/expected/sequence.out | 327 +++++++++++++++
contrib/test_decoding/sql/sequence.sql | 119 ++++++
contrib/test_decoding/test_decoding.c | 40 ++
doc/src/sgml/catalogs.sgml | 71 ++++
doc/src/sgml/ref/alter_publication.sgml | 28 +-
doc/src/sgml/ref/alter_subscription.sgml | 6 +-
src/backend/catalog/pg_publication.c | 160 +++++++-
src/backend/catalog/system_views.sql | 10 +
src/backend/commands/publicationcmds.c | 232 ++++++++++-
src/backend/commands/sequence.c | 132 +++++-
src/backend/commands/subscriptioncmds.c | 274 +++++++++++++
src/backend/executor/execReplication.c | 4 +-
src/backend/nodes/copyfuncs.c | 4 +-
src/backend/nodes/equalfuncs.c | 4 +-
src/backend/parser/gram.y | 44 +-
src/backend/replication/logical/decode.c | 131 +++++-
src/backend/replication/logical/logical.c | 42 ++
src/backend/replication/logical/proto.c | 52 +++
.../replication/logical/reorderbuffer.c | 384 ++++++++++++++++++
src/backend/replication/logical/tablesync.c | 118 +++++-
src/backend/replication/logical/worker.c | 56 +++
src/backend/replication/pgoutput/pgoutput.c | 80 +++-
src/backend/utils/cache/relcache.c | 4 +-
src/bin/psql/tab-complete.c | 2 +-
src/include/catalog/pg_proc.dat | 5 +
src/include/catalog/pg_publication.h | 14 +
src/include/commands/sequence.h | 2 +
src/include/nodes/parsenodes.h | 4 +-
src/include/replication/logicalproto.h | 20 +
src/include/replication/output_plugin.h | 14 +
src/include/replication/pgoutput.h | 1 +
src/include/replication/reorderbuffer.h | 34 +-
src/test/regress/expected/publication.out | 8 +-
src/test/regress/expected/rules.out | 8 +
35 files changed, 2391 insertions(+), 46 deletions(-)
create mode 100644 contrib/test_decoding/expected/sequence.out
create mode 100644 contrib/test_decoding/sql/sequence.sql
diff --git a/contrib/test_decoding/Makefile b/contrib/test_decoding/Makefile
index 9a31e0b879..56ddc3abae 100644
--- a/contrib/test_decoding/Makefile
+++ b/contrib/test_decoding/Makefile
@@ -5,7 +5,8 @@ PGFILEDESC = "test_decoding - example of a logical decoding output plugin"
REGRESS = ddl xact rewrite toast permissions decoding_in_xact \
decoding_into_rel binary prepared replorigin time messages \
- spill slot truncate stream stats twophase twophase_stream
+ spill slot truncate stream stats twophase twophase_stream \
+ sequence
ISOLATION = mxact delayed_startup ondisk_startup concurrent_ddl_dml \
oldest_xmin snapshot_transfer subxact_without_top concurrent_stream \
twophase_snapshot
diff --git a/contrib/test_decoding/expected/sequence.out b/contrib/test_decoding/expected/sequence.out
new file mode 100644
index 0000000000..07f3e3e92c
--- /dev/null
+++ b/contrib/test_decoding/expected/sequence.out
@@ -0,0 +1,327 @@
+-- predictability
+SET synchronous_commit = on;
+SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'test_decoding');
+ ?column?
+----------
+ init
+(1 row)
+
+CREATE SEQUENCE test_sequence;
+-- test the sequence changes by several nextval() calls
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 1
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 2
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 4
+(1 row)
+
+-- test the sequence changes by several ALTER commands
+ALTER SEQUENCE test_sequence INCREMENT BY 10;
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 14
+(1 row)
+
+ALTER SEQUENCE test_sequence START WITH 3000;
+ALTER SEQUENCE test_sequence MAXVALUE 10000;
+ALTER SEQUENCE test_sequence RESTART WITH 4000;
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 4000
+(1 row)
+
+-- test the sequence changes by several setval() calls
+SELECT setval('test_sequence', 3500);
+ setval
+--------
+ 3500
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3510
+(1 row)
+
+SELECT setval('test_sequence', 3500, true);
+ setval
+--------
+ 3500
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3510
+(1 row)
+
+SELECT setval('test_sequence', 3500, false);
+ setval
+--------
+ 3500
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3500
+(1 row)
+
+-- show results and drop sequence
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+ data
+------------------------------------------------------------------------------------------------------
+ BEGIN
+ sequence: public.test_sequence transactional: 1 created: 1 last_value: 1, log_cnt: 0 is_called: 0
+ COMMIT
+ sequence: public.test_sequence transactional: 0 created: 0 last_value: 33, log_cnt: 0 is_called: 1
+ BEGIN
+ sequence: public.test_sequence transactional: 1 created: 1 last_value: 4, log_cnt: 0 is_called: 1
+ COMMIT
+ sequence: public.test_sequence transactional: 0 created: 0 last_value: 334, log_cnt: 0 is_called: 1
+ BEGIN
+ sequence: public.test_sequence transactional: 1 created: 1 last_value: 14, log_cnt: 32 is_called: 1
+ COMMIT
+ BEGIN
+ sequence: public.test_sequence transactional: 1 created: 1 last_value: 14, log_cnt: 0 is_called: 1
+ COMMIT
+ BEGIN
+ sequence: public.test_sequence transactional: 1 created: 1 last_value: 4000, log_cnt: 0 is_called: 0
+ COMMIT
+ sequence: public.test_sequence transactional: 0 created: 0 last_value: 4320, log_cnt: 0 is_called: 1
+ sequence: public.test_sequence transactional: 0 created: 0 last_value: 3500, log_cnt: 0 is_called: 1
+ sequence: public.test_sequence transactional: 0 created: 0 last_value: 3830, log_cnt: 0 is_called: 1
+ sequence: public.test_sequence transactional: 0 created: 0 last_value: 3500, log_cnt: 0 is_called: 1
+ sequence: public.test_sequence transactional: 0 created: 0 last_value: 3830, log_cnt: 0 is_called: 1
+ sequence: public.test_sequence transactional: 0 created: 0 last_value: 3500, log_cnt: 0 is_called: 0
+ sequence: public.test_sequence transactional: 0 created: 0 last_value: 3820, log_cnt: 0 is_called: 1
+(24 rows)
+
+DROP SEQUENCE test_sequence;
+-- rollback on sequence creation and update
+BEGIN;
+CREATE SEQUENCE test_sequence;
+CREATE TABLE test_table (a INT);
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 1
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 2
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3
+(1 row)
+
+SELECT setval('test_sequence', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3001
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3002
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3003
+(1 row)
+
+ALTER SEQUENCE test_sequence RESTART WITH 6000;
+INSERT INTO test_table VALUES( (SELECT nextval('test_sequence')) );
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 6001
+(1 row)
+
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '0');
+ data
+------
+(0 rows)
+
+-- rollback on table creation with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '0');
+ data
+------
+(0 rows)
+
+-- rollback on table with serial column
+CREATE TABLE test_table (a SERIAL, b INT);
+BEGIN;
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+-- check table and sequence values after rollback
+SELECT * from test_table_a_seq;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+ 3003 | 30 | t
+(1 row)
+
+SELECT nextval('test_table_a_seq');
+ nextval
+---------
+ 3004
+(1 row)
+
+DROP TABLE test_table;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+ data
+---------------------------------------------------------------------------------------------------------
+ BEGIN
+ sequence: public.test_table_a_seq transactional: 1 created: 1 last_value: 1, log_cnt: 0 is_called: 0
+ COMMIT
+ sequence: public.test_table_a_seq transactional: 0 created: 0 last_value: 33, log_cnt: 0 is_called: 1
+ sequence: public.test_table_a_seq transactional: 0 created: 0 last_value: 3000, log_cnt: 0 is_called: 1
+ sequence: public.test_table_a_seq transactional: 0 created: 0 last_value: 3033, log_cnt: 0 is_called: 1
+ BEGIN
+ COMMIT
+(8 rows)
+
+-- savepoint test on table with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+SAVEPOINT a;
+INSERT INTO test_table (b) VALUES (300);
+ROLLBACK TO SAVEPOINT a;
+DROP TABLE test_table;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+ data
+-------------------------------------------------------------------------------------------------------
+ BEGIN
+ sequence: public.test_table_a_seq transactional: 1 created: 1 last_value: 1, log_cnt: 0 is_called: 0
+ sequence: public.test_table_a_seq transactional: 1 created: 0 last_value: 33, log_cnt: 0 is_called: 1
+ table public.test_table: INSERT: a[integer]:1 b[integer]:100
+ table public.test_table: INSERT: a[integer]:2 b[integer]:200
+ COMMIT
+(6 rows)
+
+-- savepoint test on sequence
+BEGIN;
+CREATE SEQUENCE test_sequence;
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 1
+(1 row)
+
+SELECT setval('test_sequence', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3001
+(1 row)
+
+SAVEPOINT a;
+ALTER SEQUENCE test_sequence START WITH 7000;
+SELECT setval('test_sequence', 5000);
+ setval
+--------
+ 5000
+(1 row)
+
+ROLLBACK TO SAVEPOINT a;
+SELECT * FROM test_sequence;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+ 3001 | 32 | t
+(1 row)
+
+DROP SEQUENCE test_sequence;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+ data
+------------------------------------------------------------------------------------------------------
+ BEGIN
+ sequence: public.test_sequence transactional: 1 created: 1 last_value: 1, log_cnt: 0 is_called: 0
+ sequence: public.test_sequence transactional: 1 created: 0 last_value: 33, log_cnt: 0 is_called: 1
+ sequence: public.test_sequence transactional: 1 created: 0 last_value: 3000, log_cnt: 0 is_called: 1
+ sequence: public.test_sequence transactional: 1 created: 0 last_value: 3033, log_cnt: 0 is_called: 1
+ COMMIT
+(6 rows)
+
+SELECT pg_drop_replication_slot('regression_slot');
+ pg_drop_replication_slot
+--------------------------
+
+(1 row)
+
diff --git a/contrib/test_decoding/sql/sequence.sql b/contrib/test_decoding/sql/sequence.sql
new file mode 100644
index 0000000000..d8a34738f3
--- /dev/null
+++ b/contrib/test_decoding/sql/sequence.sql
@@ -0,0 +1,119 @@
+-- predictability
+SET synchronous_commit = on;
+SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'test_decoding');
+
+CREATE SEQUENCE test_sequence;
+
+-- test the sequence changes by several nextval() calls
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+
+-- test the sequence changes by several ALTER commands
+ALTER SEQUENCE test_sequence INCREMENT BY 10;
+SELECT nextval('test_sequence');
+
+ALTER SEQUENCE test_sequence START WITH 3000;
+ALTER SEQUENCE test_sequence MAXVALUE 10000;
+ALTER SEQUENCE test_sequence RESTART WITH 4000;
+SELECT nextval('test_sequence');
+
+-- test the sequence changes by several setval() calls
+SELECT setval('test_sequence', 3500);
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3500, true);
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3500, false);
+SELECT nextval('test_sequence');
+
+-- show results and drop sequence
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+DROP SEQUENCE test_sequence;
+
+-- rollback on sequence creation and update
+BEGIN;
+CREATE SEQUENCE test_sequence;
+CREATE TABLE test_table (a INT);
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3000);
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+ALTER SEQUENCE test_sequence RESTART WITH 6000;
+INSERT INTO test_table VALUES( (SELECT nextval('test_sequence')) );
+SELECT nextval('test_sequence');
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '0');
+
+-- rollback on table creation with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '0');
+
+-- rollback on table with serial column
+CREATE TABLE test_table (a SERIAL, b INT);
+
+BEGIN;
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+
+-- check table and sequence values after rollback
+SELECT * from test_table_a_seq;
+SELECT nextval('test_table_a_seq');
+
+DROP TABLE test_table;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+
+-- savepoint test on table with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+SAVEPOINT a;
+INSERT INTO test_table (b) VALUES (300);
+ROLLBACK TO SAVEPOINT a;
+DROP TABLE test_table;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+
+-- savepoint test on sequence
+BEGIN;
+CREATE SEQUENCE test_sequence;
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3000);
+SELECT nextval('test_sequence');
+SAVEPOINT a;
+ALTER SEQUENCE test_sequence START WITH 7000;
+SELECT setval('test_sequence', 5000);
+ROLLBACK TO SAVEPOINT a;
+SELECT * FROM test_sequence;
+DROP SEQUENCE test_sequence;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+
+SELECT pg_drop_replication_slot('regression_slot');
diff --git a/contrib/test_decoding/test_decoding.c b/contrib/test_decoding/test_decoding.c
index de1b692658..162e3c18f0 100644
--- a/contrib/test_decoding/test_decoding.c
+++ b/contrib/test_decoding/test_decoding.c
@@ -35,6 +35,7 @@ typedef struct
bool include_timestamp;
bool skip_empty_xacts;
bool only_local;
+ bool skip_sequences;
} TestDecodingData;
/*
@@ -76,6 +77,10 @@ static void pg_decode_message(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn, XLogRecPtr message_lsn,
bool transactional, const char *prefix,
Size sz, const char *message);
+static void pg_decode_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation rel, bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called);
static bool pg_decode_filter_prepare(LogicalDecodingContext *ctx,
TransactionId xid,
const char *gid);
@@ -141,6 +146,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->filter_by_origin_cb = pg_decode_filter;
cb->shutdown_cb = pg_decode_shutdown;
cb->message_cb = pg_decode_message;
+ cb->sequence_cb = pg_decode_sequence;
cb->filter_prepare_cb = pg_decode_filter_prepare;
cb->begin_prepare_cb = pg_decode_begin_prepare_txn;
cb->prepare_cb = pg_decode_prepare_txn;
@@ -175,6 +181,9 @@ pg_decode_startup(LogicalDecodingContext *ctx, OutputPluginOptions *opt,
data->skip_empty_xacts = false;
data->only_local = false;
+ /* skip sequences by default for backwards compatibility */
+ data->skip_sequences = true;
+
ctx->output_plugin_private = data;
opt->output_type = OUTPUT_PLUGIN_TEXTUAL_OUTPUT;
@@ -265,6 +274,17 @@ pg_decode_startup(LogicalDecodingContext *ctx, OutputPluginOptions *opt,
errmsg("could not parse value \"%s\" for parameter \"%s\"",
strVal(elem->arg), elem->defname)));
}
+ else if (strcmp(elem->defname, "skip-sequences") == 0)
+ {
+
+ if (elem->arg == NULL)
+ continue; /* true by default */
+ else if (!parse_bool(strVal(elem->arg), &data->skip_sequences))
+ ereport(ERROR,
+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ errmsg("could not parse value \"%s\" for parameter \"%s\"",
+ strVal(elem->arg), elem->defname)));
+ }
else
{
ereport(ERROR,
@@ -744,6 +764,26 @@ pg_decode_message(LogicalDecodingContext *ctx,
OutputPluginWrite(ctx, true);
}
+static void
+pg_decode_sequence(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ TestDecodingData *data = ctx->output_plugin_private;
+ char *nspname = get_namespace_name(RelationGetNamespace(rel));
+
+ /* return if requested to skip_sequences */
+ if (data->skip_sequences)
+ return;
+
+ OutputPluginPrepareWrite(ctx, true);
+ appendStringInfo(ctx->out, "sequence: %s.%s transactional: %d created: %d last_value: " INT64_FORMAT ", log_cnt: " INT64_FORMAT " is_called: %d",
+ nspname, RelationGetRelationName(rel),
+ transactional, created, last_value, log_cnt, is_called);
+ OutputPluginWrite(ctx, true);
+}
+
static void
pg_decode_stream_start(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn)
diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index f517a7d4af..faaa95f56c 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -9428,6 +9428,11 @@ SCRAM-SHA-256$<replaceable><iteration count></replaceable>:<replaceable>&l
<entry>prepared transactions</entry>
</row>
+ <row>
+ <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+ <entry>publications and their associated sequences</entry>
+ </row>
+
<row>
<entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
<entry>publications and their associated tables</entry>
@@ -11262,6 +11267,72 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
</sect1>
+ <sect1 id="view-pg-publication-sequences">
+ <title><structname>pg_publication_sequences</structname></title>
+
+ <indexterm zone="view-pg-publication-sequences">
+ <primary>pg_publication_sequences</primary>
+ </indexterm>
+
+ <para>
+ The view <structname>pg_publication_sequences</structname> provides
+ information about the mapping between publications and the sequences they
+ contain. Unlike the underlying catalog
+ <link linkend="catalog-pg-publication-rel"><structname>pg_publication_rel</structname></link>,
+ this view expands
+ publications defined as <literal>FOR ALL SEQUENCES</literal>, so for such
+ publications there will be a row for each eligible sequence.
+ </para>
+
+ <table>
+ <title><structname>pg_publication_sequences</structname> Columns</title>
+ <tgroup cols="1">
+ <thead>
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ Column Type
+ </para>
+ <para>
+ Description
+ </para></entry>
+ </row>
+ </thead>
+
+ <tbody>
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>pubname</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+ </para>
+ <para>
+ Name of publication
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>schemaname</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+ </para>
+ <para>
+ Name of schema containing sequence
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>sequencename</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+ </para>
+ <para>
+ Name of sequence
+ </para></entry>
+ </row>
+ </tbody>
+ </tgroup>
+ </table>
+ </sect1>
+
<sect1 id="view-pg-publication-tables">
<title><structname>pg_publication_tables</structname></title>
diff --git a/doc/src/sgml/ref/alter_publication.sgml b/doc/src/sgml/ref/alter_publication.sgml
index faa114b2c6..c68a1573de 100644
--- a/doc/src/sgml/ref/alter_publication.sgml
+++ b/doc/src/sgml/ref/alter_publication.sgml
@@ -24,6 +24,9 @@ PostgreSQL documentation
ALTER PUBLICATION <replaceable class="parameter">name</replaceable> ADD TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [, ...]
ALTER PUBLICATION <replaceable class="parameter">name</replaceable> SET TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [, ...]
ALTER PUBLICATION <replaceable class="parameter">name</replaceable> DROP TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [, ...]
+ALTER PUBLICATION <replaceable class="parameter">name</replaceable> ADD SEQUENCE <replaceable class="parameter">sequence_name</replaceable> [ * ] [, ...]
+ALTER PUBLICATION <replaceable class="parameter">name</replaceable> SET SEQUENCE <replaceable class="parameter">sequence_name</replaceable> [ * ] [, ...]
+ALTER PUBLICATION <replaceable class="parameter">name</replaceable> DROP SEQUENCE <replaceable class="parameter">sequence_name</replaceable> [ * ] [, ...]
ALTER PUBLICATION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
ALTER PUBLICATION <replaceable class="parameter">name</replaceable> OWNER TO { <replaceable>new_owner</replaceable> | CURRENT_ROLE | CURRENT_USER | SESSION_USER }
ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <replaceable>new_name</replaceable>
@@ -50,7 +53,18 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
</para>
<para>
- The fourth variant of this command listed in the synopsis can change
+ The next three variants change which sequences are part of the publication.
+ The <literal>SET SEQUENCE</literal> clause will replace the list of sequences
+ in the publication with the specified one. The <literal>ADD SEQUENCE</literal>
+ and <literal>DROP SEQUENCE</literal> clauses will add and remove one or more
+ sequences from the publication. Note that adding sequences to a publication
+ that is already subscribed to will require an <literal>ALTER SUBSCRIPTION
+ ... REFRESH PUBLICATION</literal> action on the subscribing side in order
+ to become effective.
+ </para>
+
+ <para>
+ The seventh variant of this command listed in the synopsis can change
all of the publication properties specified in
<xref linkend="sql-createpublication"/>. Properties not mentioned in the
command retain their previous settings.
@@ -62,7 +76,8 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
<para>
You must own the publication to use <command>ALTER PUBLICATION</command>.
- Adding a table to a publication additionally requires owning that table.
+ Adding a table to a publication additionally requires owning that table,
+ and the same requirement applies to sequences.
To alter the owner, you must also be a direct or indirect member of the new
owning role. The new owner must have <literal>CREATE</literal> privilege on
the database. Also, the new owner of a <literal>FOR ALL TABLES</literal>
@@ -97,6 +112,15 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><replaceable class="parameter">sequence_name</replaceable></term>
+ <listitem>
+ <para>
+ Name of an existing sequence.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><literal>SET ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
<listitem>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 367ac814f4..3412ca68e9 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -138,8 +138,8 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
<term><literal>REFRESH PUBLICATION</literal></term>
<listitem>
<para>
- Fetch missing table information from publisher. This will start
- replication of tables that were added to the subscribed-to publications
+ Fetch missing table and sequence information from publisher. This will start
+ replication of tables and sequences that were added to the subscribed-to publications
since the last invocation of <command>REFRESH PUBLICATION</command> or
since <command>CREATE SUBSCRIPTION</command>.
</para>
@@ -156,7 +156,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
Specifies whether the existing data in the publications that are
being subscribed to should be copied once the replication starts.
The default is <literal>true</literal>. (Previously subscribed
- tables are not copied.)
+ tables and sequences are not copied.)
</para>
</listitem>
</varlistentry>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 86e415af89..427582f36a 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -51,26 +51,27 @@ check_publication_add_relation(Relation targetrel)
{
/* Must be a regular or partitioned table */
if (RelationGetForm(targetrel)->relkind != RELKIND_RELATION &&
- RelationGetForm(targetrel)->relkind != RELKIND_PARTITIONED_TABLE)
+ RelationGetForm(targetrel)->relkind != RELKIND_PARTITIONED_TABLE &&
+ RelationGetForm(targetrel)->relkind != RELKIND_SEQUENCE)
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
- errmsg("\"%s\" is not a table",
+ errmsg("\"%s\" is not a table or sequence",
RelationGetRelationName(targetrel)),
- errdetail("Only tables can be added to publications.")));
+ errdetail("Only tables and sequences can be added to publications.")));
- /* Can't be system table */
+ /* Can't be system table/sequence */
if (IsCatalogRelation(targetrel))
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
- errmsg("\"%s\" is a system table",
+ errmsg("\"%s\" is a system table or sequence",
RelationGetRelationName(targetrel)),
- errdetail("System tables cannot be added to publications.")));
+ errdetail("System tables / sequences cannot be added to publications.")));
/* UNLOGGED and TEMP relations cannot be part of publication. */
if (!RelationIsPermanent(targetrel))
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
- errmsg("table \"%s\" cannot be replicated",
+ errmsg("table or sequence \"%s\" cannot be replicated",
RelationGetRelationName(targetrel)),
errdetail("Temporary and unlogged relations cannot be replicated.")));
}
@@ -98,7 +99,8 @@ static bool
is_publishable_class(Oid relid, Form_pg_class reltuple)
{
return (reltuple->relkind == RELKIND_RELATION ||
- reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+ reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+ reltuple->relkind == RELKIND_SEQUENCE) &&
!IsCatalogRelationOid(relid) &&
reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
relid >= FirstNormalObjectId;
@@ -271,6 +273,10 @@ GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
pubrel = (Form_pg_publication_rel) GETSTRUCT(tup);
+ /* skip sequences here */
+ if (get_rel_relkind(pubrel->prrelid) == RELKIND_SEQUENCE)
+ continue;
+
if (get_rel_relkind(pubrel->prrelid) == RELKIND_PARTITIONED_TABLE &&
pub_partopt != PUBLICATION_PART_ROOT)
{
@@ -304,6 +310,49 @@ GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
return result;
}
+/*
+ * Gets list of relation oids for a publication (sequences only).
+ *
+ * This should only be used for normal publications; FOR ALL SEQUENCES
+ * publications should use GetAllSequencesPublicationRelations().
+ */
+List *
+GetPublicationSequenceRelations(Oid pubid)
+{
+ List *result;
+ Relation pubrelsrel;
+ ScanKeyData scankey;
+ SysScanDesc scan;
+ HeapTuple tup;
+
+ /* Find all sequences associated with the publication. */
+ pubrelsrel = table_open(PublicationRelRelationId, AccessShareLock);
+
+ ScanKeyInit(&scankey,
+ Anum_pg_publication_rel_prpubid,
+ BTEqualStrategyNumber, F_OIDEQ,
+ ObjectIdGetDatum(pubid));
+
+ scan = systable_beginscan(pubrelsrel, PublicationRelPrrelidPrpubidIndexId,
+ true, NULL, 1, &scankey);
+
+ result = NIL;
+ while (HeapTupleIsValid(tup = systable_getnext(scan)))
+ {
+ Form_pg_publication_rel pubrel;
+
+ pubrel = (Form_pg_publication_rel) GETSTRUCT(tup);
+
+ if (get_rel_relkind(pubrel->prrelid) == RELKIND_SEQUENCE)
+ result = lappend_oid(result, pubrel->prrelid);
+ }
+
+ systable_endscan(scan);
+ table_close(pubrelsrel, AccessShareLock);
+
+ return result;
+}
+
/*
* Gets list of publication oids for publications marked as FOR ALL TABLES.
*/
@@ -404,6 +453,46 @@ GetAllTablesPublicationRelations(bool pubviaroot)
return result;
}
+/*
+ * Gets list of all sequences published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+ Relation classRel;
+ ScanKeyData key[1];
+ TableScanDesc scan;
+ HeapTuple tuple;
+ List *result = NIL;
+
+ classRel = table_open(RelationRelationId, AccessShareLock);
+
+ ScanKeyInit(&key[0],
+ Anum_pg_class_relkind,
+ BTEqualStrategyNumber, F_CHAREQ,
+ CharGetDatum(RELKIND_SEQUENCE));
+
+ scan = table_beginscan_catalog(classRel, 1, key);
+
+ while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+ {
+ Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+ Oid relid = relForm->oid;
+
+ if (is_publishable_class(relid, relForm))
+ result = lappend_oid(result, relid);
+ }
+
+ table_endscan(scan);
+
+ table_close(classRel, AccessShareLock);
+ return result;
+}
+
/*
* Get publication using oid
*
@@ -426,10 +515,12 @@ GetPublication(Oid pubid)
pub->oid = pubid;
pub->name = pstrdup(NameStr(pubform->pubname));
pub->alltables = pubform->puballtables;
+ pub->allsequences = pubform->puballsequences;
pub->pubactions.pubinsert = pubform->pubinsert;
pub->pubactions.pubupdate = pubform->pubupdate;
pub->pubactions.pubdelete = pubform->pubdelete;
pub->pubactions.pubtruncate = pubform->pubtruncate;
+ pub->pubactions.pubsequence = pubform->pubsequence;
pub->pubviaroot = pubform->pubviaroot;
ReleaseSysCache(tup);
@@ -555,3 +646,56 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
SRF_RETURN_DONE(funcctx);
}
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+ FuncCallContext *funcctx;
+ char *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+ Publication *publication;
+ List *sequences;
+
+ /* stuff done only on the first call of the function */
+ if (SRF_IS_FIRSTCALL())
+ {
+ MemoryContext oldcontext;
+
+ /* create a function context for cross-call persistence */
+ funcctx = SRF_FIRSTCALL_INIT();
+
+ /* switch to memory context appropriate for multiple function calls */
+ oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+ publication = GetPublicationByName(pubname, false);
+
+ /*
+ * For a FOR ALL SEQUENCES publication return all sequences in the
+ * database, otherwise just the sequences explicitly added to the
+ * publication.
+ */
+ if (publication->allsequences)
+ sequences = GetAllSequencesPublicationRelations();
+ else
+ sequences = GetPublicationSequenceRelations(publication->oid);
+
+ funcctx->user_fctx = (void *) sequences;
+
+ MemoryContextSwitchTo(oldcontext);
+ }
+
+ /* stuff done on every call of the function */
+ funcctx = SRF_PERCALL_SETUP();
+ sequences = (List *) funcctx->user_fctx;
+
+ if (funcctx->call_cntr < list_length(sequences))
+ {
+ Oid relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+ SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+ }
+
+ SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 999d984068..16f050023a 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -372,6 +372,16 @@ CREATE VIEW pg_publication_tables AS
pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
WHERE C.oid = GPT.relid;
+CREATE VIEW pg_publication_sequences AS
+ SELECT
+ P.pubname AS pubname,
+ N.nspname AS schemaname,
+ C.relname AS sequencename
+ FROM pg_publication P,
+ LATERAL pg_get_publication_sequences(P.pubname) GPT,
+ pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+ WHERE C.oid = GPT.relid;
+
CREATE VIEW pg_locks AS
SELECT * FROM pg_lock_status() AS L;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 95c253c8e0..5e5c5feab4 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -16,6 +16,7 @@
#include "access/genam.h"
#include "access/htup_details.h"
+#include "access/relation.h"
#include "access/table.h"
#include "access/xact.h"
#include "catalog/catalog.h"
@@ -54,6 +55,12 @@ static void PublicationAddTables(Oid pubid, List *rels, bool if_not_exists,
AlterPublicationStmt *stmt);
static void PublicationDropTables(Oid pubid, List *rels, bool missing_ok);
+static List *OpenSequenceList(List *sequences);
+static void CloseSequenceList(List *rels);
+static void PublicationAddSequences(Oid pubid, List *rels, bool if_not_exists,
+ AlterPublicationStmt *stmt);
+static void PublicationDropSequences(Oid pubid, List *rels, bool missing_ok);
+
static void
parse_publication_options(List *options,
bool *publish_given,
@@ -71,6 +78,7 @@ parse_publication_options(List *options,
pubactions->pubupdate = true;
pubactions->pubdelete = true;
pubactions->pubtruncate = true;
+ pubactions->pubsequence = true;
*publish_via_partition_root = false;
/* Parse options */
@@ -97,6 +105,7 @@ parse_publication_options(List *options,
pubactions->pubupdate = false;
pubactions->pubdelete = false;
pubactions->pubtruncate = false;
+ pubactions->pubsequence = false;
*publish_given = true;
publish = defGetString(defel);
@@ -119,6 +128,8 @@ parse_publication_options(List *options,
pubactions->pubdelete = true;
else if (strcmp(publish_opt, "truncate") == 0)
pubactions->pubtruncate = true;
+ else if (strcmp(publish_opt, "sequence") == 0)
+ pubactions->pubsequence = true;
else
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
@@ -210,6 +221,8 @@ CreatePublication(CreatePublicationStmt *stmt)
BoolGetDatum(pubactions.pubdelete);
values[Anum_pg_publication_pubtruncate - 1] =
BoolGetDatum(pubactions.pubtruncate);
+ values[Anum_pg_publication_pubsequence - 1] =
+ BoolGetDatum(pubactions.pubsequence);
values[Anum_pg_publication_pubviaroot - 1] =
BoolGetDatum(publish_via_partition_root);
@@ -374,9 +387,9 @@ AlterPublicationTables(AlterPublicationStmt *stmt, Relation rel,
rels = OpenTableList(stmt->tables);
- if (stmt->tableAction == DEFELEM_ADD)
+ if (stmt->action == DEFELEM_ADD)
PublicationAddTables(pubid, rels, false, stmt);
- else if (stmt->tableAction == DEFELEM_DROP)
+ else if (stmt->action == DEFELEM_DROP)
PublicationDropTables(pubid, rels, false);
else /* DEFELEM_SET */
{
@@ -427,6 +440,82 @@ AlterPublicationTables(AlterPublicationStmt *stmt, Relation rel,
CloseTableList(rels);
}
+/*
+ * Add or remove sequence to/from publication.
+ */
+static void
+AlterPublicationSequences(AlterPublicationStmt *stmt, Relation rel,
+ HeapTuple tup)
+{
+ List *rels = NIL;
+ Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
+ Oid pubid = pubform->oid;
+
+ /* Check that user is allowed to manipulate the publication sequences. */
+ if (pubform->puballsequences)
+ ereport(ERROR,
+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("publication \"%s\" is defined as FOR ALL SEQUENCES",
+ NameStr(pubform->pubname)),
+ errdetail("Sequences cannot be added to or dropped from FOR ALL TABLES publications.")));
+
+ Assert(list_length(stmt->sequences) > 0);
+
+ rels = OpenSequenceList(stmt->sequences);
+
+ if (stmt->action == DEFELEM_ADD)
+ PublicationAddSequences(pubid, rels, false, stmt);
+ else if (stmt->action == DEFELEM_DROP)
+ PublicationDropSequences(pubid, rels, false);
+ else /* DEFELEM_SET */
+ {
+ List *oldrelids = GetPublicationSequenceRelations(pubid);
+ List *delrels = NIL;
+ ListCell *oldlc;
+
+ /* Calculate which relations to drop. */
+ foreach(oldlc, oldrelids)
+ {
+ Oid oldrelid = lfirst_oid(oldlc);
+ ListCell *newlc;
+ bool found = false;
+
+ foreach(newlc, rels)
+ {
+ Relation newrel = (Relation) lfirst(newlc);
+
+ if (RelationGetRelid(newrel) == oldrelid)
+ {
+ found = true;
+ break;
+ }
+ }
+
+ if (!found)
+ {
+ Relation oldrel = relation_open(oldrelid,
+ ShareUpdateExclusiveLock);
+
+ delrels = lappend(delrels, oldrel);
+ }
+ }
+
+ /* And drop them. */
+ PublicationDropSequences(pubid, delrels, true);
+
+ /*
+ * Don't bother calculating the difference for adding, we'll catch and
+ * skip existing ones when doing catalog update.
+ */
+ PublicationAddSequences(pubid, rels, true, stmt);
+
+ CloseSequenceList(delrels);
+ }
+
+ CloseSequenceList(rels);
+}
+
/*
* Alter the existing publication.
*
@@ -460,8 +549,12 @@ AlterPublication(AlterPublicationStmt *stmt)
if (stmt->options)
AlterPublicationOptions(stmt, rel, tup);
- else
+ else if (stmt->tables)
AlterPublicationTables(stmt, rel, tup);
+ else if (stmt->sequences)
+ AlterPublicationSequences(stmt, rel, tup);
+ else
+ Assert(false);
/* Cleanup. */
heap_freetuple(tup);
@@ -666,6 +759,139 @@ PublicationDropTables(Oid pubid, List *rels, bool missing_ok)
}
}
+/*
+ * Open sequences specified by a RangeVar list.
+ * The returned sequences are locked in ShareUpdateExclusiveLock mode in order
+ * to add them to a publication.
+ */
+static List *
+OpenSequenceList(List *sequences)
+{
+ List *relids = NIL;
+ List *rels = NIL;
+ ListCell *lc;
+
+ /*
+ * Open, share-lock, and check all the explicitly-specified relations
+ */
+ foreach(lc, sequences)
+ {
+ RangeVar *rv = castNode(RangeVar, lfirst(lc));
+ Relation rel;
+ Oid myrelid;
+
+ /* Allow query cancel in case this takes a long time */
+ CHECK_FOR_INTERRUPTS();
+
+ rel = relation_openrv(rv, ShareUpdateExclusiveLock);
+ myrelid = RelationGetRelid(rel);
+
+ /*
+ * Filter out duplicates if user specifies "foo, foo".
+ *
+ * Note that this algorithm is known to not be very efficient (O(N^2))
+ * but given that it only works on list of sequences given to us by user
+ * it's deemed acceptable.
+ */
+ if (list_member_oid(relids, myrelid))
+ {
+ relation_close(rel, ShareUpdateExclusiveLock);
+ continue;
+ }
+
+ rels = lappend(rels, rel);
+ relids = lappend_oid(relids, myrelid);
+ }
+
+ list_free(relids);
+
+ return rels;
+}
+
+/*
+ * Close all relations in the list.
+ */
+static void
+CloseSequenceList(List *rels)
+{
+ ListCell *lc;
+
+ foreach(lc, rels)
+ {
+ Relation rel = (Relation) lfirst(lc);
+
+ relation_close(rel, NoLock);
+ }
+}
+
+/*
+ * Add listed sequences to the publication.
+ */
+static void
+PublicationAddSequences(Oid pubid, List *rels, bool if_not_exists,
+ AlterPublicationStmt *stmt)
+{
+ ListCell *lc;
+
+ Assert(!stmt || !stmt->for_all_sequences);
+
+ foreach(lc, rels)
+ {
+ Relation rel = (Relation) lfirst(lc);
+ ObjectAddress obj;
+
+ /* Must be owner of the sequence or superuser. */
+ if (!pg_class_ownercheck(RelationGetRelid(rel), GetUserId()))
+ aclcheck_error(ACLCHECK_NOT_OWNER, get_relkind_objtype(rel->rd_rel->relkind),
+ RelationGetRelationName(rel));
+
+ obj = publication_add_relation(pubid, rel, if_not_exists);
+ if (stmt)
+ {
+ EventTriggerCollectSimpleCommand(obj, InvalidObjectAddress,
+ (Node *) stmt);
+
+ InvokeObjectPostCreateHook(PublicationRelRelationId,
+ obj.objectId, 0);
+ }
+ }
+}
+
+/*
+ * Remove listed sequences from the publication.
+ */
+static void
+PublicationDropSequences(Oid pubid, List *rels, bool missing_ok)
+{
+ ObjectAddress obj;
+ ListCell *lc;
+ Oid prid;
+
+ foreach(lc, rels)
+ {
+ Relation rel = (Relation) lfirst(lc);
+ Oid relid = RelationGetRelid(rel);
+
+ prid = GetSysCacheOid2(PUBLICATIONRELMAP, Anum_pg_publication_rel_oid,
+ ObjectIdGetDatum(relid),
+ ObjectIdGetDatum(pubid));
+ if (!OidIsValid(prid))
+ {
+ if (missing_ok)
+ continue;
+
+ ereport(ERROR,
+ (errcode(ERRCODE_UNDEFINED_OBJECT),
+ errmsg("relation \"%s\" is not part of the publication",
+ RelationGetRelationName(rel))));
+ }
+
+ ObjectAddressSet(obj, PublicationRelRelationId, prid);
+ performDeletion(&obj, DROP_CASCADE, 0);
+ }
+}
+
+
/*
* Internal workhorse for changing a publication owner
*/
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 0415df9ccb..6898a45365 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -94,7 +94,7 @@ static HTAB *seqhashtab = NULL; /* hash table for SeqTable items */
*/
static SeqTableData *last_used_seq = NULL;
-static void fill_seq_with_data(Relation rel, HeapTuple tuple);
+static void fill_seq_with_data(Relation rel, HeapTuple tuple, bool create);
static Relation lock_and_open_sequence(SeqTable seq);
static void create_seq_hashtable(void);
static void init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel);
@@ -222,7 +222,7 @@ DefineSequence(ParseState *pstate, CreateSeqStmt *seq)
/* now initialize the sequence's data */
tuple = heap_form_tuple(tupDesc, value, null);
- fill_seq_with_data(rel, tuple);
+ fill_seq_with_data(rel, tuple, true);
/* process OWNED BY if given */
if (owned_by)
@@ -327,7 +327,86 @@ ResetSequence(Oid seq_relid)
/*
* Insert the modified tuple into the new storage file.
*/
- fill_seq_with_data(seq_rel, tuple);
+ fill_seq_with_data(seq_rel, tuple, true);
+
+ /* Clear local cache so that we don't think we have cached numbers */
+ /* Note that we do not change the currval() state */
+ elm->cached = elm->last;
+
+ relation_close(seq_rel, NoLock);
+}
+
+/*
+ * Set a sequence to the specified state (last_value, log_cnt, is_called).
+ *
+ * The change is made transactionally, so that on failure of the current
+ * transaction, the sequence will be restored to its previous state.
+ * We do that by creating a whole new relfilenode for the sequence; so this
+ * works much like the rewriting forms of ALTER TABLE.
+ *
+ * Caller is assumed to have acquired AccessExclusiveLock on the sequence,
+ * which must not be released until end of transaction. Caller is also
+ * responsible for permissions checking.
+ */
+void
+ResetSequence2(Oid seq_relid, int64 last_value, int64 log_cnt, bool is_called)
+{
+ Relation seq_rel;
+ SeqTable elm;
+ Form_pg_sequence_data seq;
+ Buffer buf;
+ HeapTupleData seqdatatuple;
+ HeapTuple tuple;
+
+ /*
+ * Read the old sequence. This does a bit more work than really
+ * necessary, but it's simple, and we do want to double-check that it's
+ * indeed a sequence.
+ */
+ init_sequence(seq_relid, &elm, &seq_rel);
+ (void) read_seq_tuple(seq_rel, &buf, &seqdatatuple);
+
+ /*
+ * Copy the existing sequence tuple.
+ */
+ tuple = heap_copytuple(&seqdatatuple);
+
+ /* Now we're done with the old page */
+ UnlockReleaseBuffer(buf);
+
+ /*
+ * Modify the copied tuple to execute the restart (compare the RESTART
+ * action in AlterSequence)
+ */
+ seq = (Form_pg_sequence_data) GETSTRUCT(tuple);
+ seq->last_value = last_value;
+ seq->is_called = is_called;
+ seq->log_cnt = log_cnt;
+
+ /*
+ * Create a new storage file for the sequence.
+ */
+ RelationSetNewRelfilenode(seq_rel, seq_rel->rd_rel->relpersistence);
+
+ /*
+ * Ensure sequence's relfrozenxid is at 0, since it won't contain any
+ * unfrozen XIDs. Same with relminmxid, since a sequence will never
+ * contain multixacts.
+ */
+ Assert(seq_rel->rd_rel->relfrozenxid == InvalidTransactionId);
+ Assert(seq_rel->rd_rel->relminmxid == InvalidMultiXactId);
+
+ /*
+ * Insert the modified tuple into the new storage file.
+ *
+ * XXX Maybe this should also use created=true, just like the other places
+ * calling fill_seq_with_data. That's probably needed for correct cascading
+ * replication.
+ *
+ * XXX That'd mean all fill_seq_with_data callers use created=true, making
+ * the parameter unnecessary.
+ */
+ fill_seq_with_data(seq_rel, tuple, false);
/* Clear local cache so that we don't think we have cached numbers */
/* Note that we do not change the currval() state */
@@ -340,7 +419,7 @@ ResetSequence(Oid seq_relid)
* Initialize a sequence's relation with the specified tuple as content
*/
static void
-fill_seq_with_data(Relation rel, HeapTuple tuple)
+fill_seq_with_data(Relation rel, HeapTuple tuple, bool create)
{
Buffer buf;
Page page;
@@ -378,8 +457,21 @@ fill_seq_with_data(Relation rel, HeapTuple tuple)
/* check the comment above nextval_internal()'s equivalent call. */
if (RelationNeedsWAL(rel))
+ {
GetTopTransactionId();
+ /*
+ * Ensure we have a proper XID, which will be included in the XLOG
+ * record by XLogRecordAssemble. Otherwise the first nextval() in
+ * a subxact (without any preceding changes) would get XID 0,
+ * and it'd be impossible to decide which top xact it belongs to.
+ * It'd also trigger assert in DecodeSequence.
+ *
+ * XXX Not sure if this is the best solution.
+ */
+ GetCurrentTransactionId();
+ }
+
START_CRIT_SECTION();
MarkBufferDirty(buf);
@@ -399,6 +491,7 @@ fill_seq_with_data(Relation rel, HeapTuple tuple)
XLogRegisterBuffer(0, buf, REGBUF_WILL_INIT);
xlrec.node = rel->rd_node;
+ xlrec.created = create;
XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
XLogRegisterData((char *) tuple->t_data, tuple->t_len);
@@ -502,7 +595,7 @@ AlterSequence(ParseState *pstate, AlterSeqStmt *stmt)
/*
* Insert the modified tuple into the new storage file.
*/
- fill_seq_with_data(seqrel, newdatatuple);
+ fill_seq_with_data(seqrel, newdatatuple, true);
}
/* process OWNED BY if given */
@@ -766,8 +859,21 @@ nextval_internal(Oid relid, bool check_permissions)
* (Have to do that here, so we're outside the critical section)
*/
if (logit && RelationNeedsWAL(seqrel))
+ {
GetTopTransactionId();
+ /*
+ * Ensure we have a proper XID, which will be included in the XLOG
+ * record by XLogRecordAssemble. Otherwise the first nextval() in
+ * a subxact (without any preceding changes) would get XID 0,
+ * and it'd be impossible to decide which top xact it belongs to.
+ * It'd also trigger assert in DecodeSequence.
+ *
+ * XXX Not sure if this is the best solution.
+ */
+ GetCurrentTransactionId();
+ }
+
/* ready to change the on-disk (or really, in-buffer) tuple */
START_CRIT_SECTION();
@@ -803,6 +909,7 @@ nextval_internal(Oid relid, bool check_permissions)
seq->log_cnt = 0;
xlrec.node = seqrel->rd_node;
+ xlrec.created = false;
XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
XLogRegisterData((char *) seqdatatuple.t_data, seqdatatuple.t_len);
@@ -977,8 +1084,21 @@ do_setval(Oid relid, int64 next, bool iscalled)
/* check the comment above nextval_internal()'s equivalent call. */
if (RelationNeedsWAL(seqrel))
+ {
GetTopTransactionId();
+ /*
+ * Ensure we have a proper XID, which will be included in the XLOG
+ * record by XLogRecordAssemble. Otherwise the first nextval() in
+ * a subxact (without any preceding changes) would get XID 0,
+ * and it'd be impossible to decide which top xact it belongs to.
+ * It'd also trigger assert in DecodeSequence.
+ *
+ * XXX Not sure if this is the best solution.
+ */
+ GetCurrentTransactionId();
+ }
+
/* ready to change the on-disk (or really, in-buffer) tuple */
START_CRIT_SECTION();
@@ -999,6 +1119,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
XLogRegisterBuffer(0, buf, REGBUF_WILL_INIT);
xlrec.node = seqrel->rd_node;
+ xlrec.created = false;
+
XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
XLogRegisterData((char *) seqdatatuple.t_data, seqdatatuple.t_len);
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 8aa6de1785..cc54be7a99 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -47,6 +47,7 @@
#include "utils/syscache.h"
static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
static void check_duplicates_in_publist(List *publist, Datum *datums);
static List *merge_publications(List *oldpublist, List *newpublist, bool addpub, const char *subname);
static void ReportSlotConnectionError(List *rstates, Oid subid, char *slotname, char *err);
@@ -461,6 +462,7 @@ CreateSubscription(CreateSubscriptionStmt *stmt, bool isTopLevel)
char *err;
WalReceiverConn *wrconn;
List *tables;
+ List *sequences;
ListCell *lc;
char table_state;
@@ -498,6 +500,26 @@ CreateSubscription(CreateSubscriptionStmt *stmt, bool isTopLevel)
InvalidXLogRecPtr);
}
+ /*
+ * Get the sequence list from publisher and build local sequence
+ * status info.
+ */
+ sequences = fetch_sequence_list(wrconn, publications);
+ foreach(lc, sequences)
+ {
+ RangeVar *rv = (RangeVar *) lfirst(lc);
+ Oid relid;
+
+ relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+ /* Check for supported relkind. */
+ CheckSubscriptionRelkind(get_rel_relkind(relid),
+ rv->schemaname, rv->relname);
+
+ AddSubscriptionRelState(subid, relid, table_state,
+ InvalidXLogRecPtr);
+ }
+
/*
* If requested, create permanent slot for the subscription. We
* won't use the initial snapshot for anything, so no need to
@@ -644,6 +666,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data)
{
Oid relid = subrel_local_oids[off];
+ /* XXX ignore sequences - maybe do this in GetSubscriptionRelations? */
+ if (get_rel_relkind(relid) == RELKIND_SEQUENCE)
+ continue;
+
if (!bsearch(&relid, pubrel_local_oids,
list_length(pubrel_names), sizeof(Oid), oid_cmp))
{
@@ -735,6 +761,185 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data)
ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
}
}
+
+ /*
+ * XXX now do the same thing for sequences, maybe before the preceding
+ * block, or earlier?
+ */
+
+ /* Get the sequence list from the publisher. */
+ pubrel_names = fetch_sequence_list(wrconn, sub->publications);
+
+ /* Get local relation list (may include both tables and sequences). */
+ subrel_states = GetSubscriptionRelations(sub->oid);
+
+ /*
+ * Build qsorted array of local table oids for faster lookup. This can
+ * potentially contain all tables in the database so speed of lookup
+ * is important.
+ */
+ subrel_local_oids = palloc(list_length(subrel_states) * sizeof(Oid));
+ off = 0;
+ foreach(lc, subrel_states)
+ {
+ SubscriptionRelState *relstate = (SubscriptionRelState *) lfirst(lc);
+
+ subrel_local_oids[off++] = relstate->relid;
+ }
+ qsort(subrel_local_oids, list_length(subrel_states),
+ sizeof(Oid), oid_cmp);
+
+ /*
+ * Rels that we want to remove from subscription and drop any slots
+ * and origins corresponding to them.
+ */
+ sub_remove_rels = palloc(list_length(subrel_states) * sizeof(SubRemoveRels));
+
+ /*
+ * Walk over the remote sequences and try to match them to locally known
+ * sequences. If a sequence is not known locally, create a new state for
+ * it.
+ *
+ * Also builds an array of local oids of the remote sequences for the next
+ * step.
+ */
+ off = 0;
+ pubrel_local_oids = palloc(list_length(pubrel_names) * sizeof(Oid));
+
+ foreach(lc, pubrel_names)
+ {
+ RangeVar *rv = (RangeVar *) lfirst(lc);
+ Oid relid;
+
+ relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+ /* Check for supported relkind. */
+ CheckSubscriptionRelkind(get_rel_relkind(relid),
+ rv->schemaname, rv->relname);
+
+ pubrel_local_oids[off++] = relid;
+
+ if (!bsearch(&relid, subrel_local_oids,
+ list_length(subrel_states), sizeof(Oid), oid_cmp))
+ {
+ AddSubscriptionRelState(sub->oid, relid,
+ copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
+ InvalidXLogRecPtr);
+ ereport(DEBUG1,
+ (errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
+ rv->schemaname, rv->relname, sub->name)));
+ }
+ }
+
+ /*
+ * Next remove state for sequences we should not care about anymore using
+ * the data we collected above
+ */
+ qsort(pubrel_local_oids, list_length(pubrel_names),
+ sizeof(Oid), oid_cmp);
+
+ remove_rel_len = 0;
+ for (off = 0; off < list_length(subrel_states); off++)
+ {
+ Oid relid = subrel_local_oids[off];
+
+ /* XXX ignore non-sequences - maybe do this in GetSubscriptionRelations? */
+ if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+ continue;
+
+ if (!bsearch(&relid, pubrel_local_oids,
+ list_length(pubrel_names), sizeof(Oid), oid_cmp))
+ {
+ char state;
+ XLogRecPtr statelsn;
+
+ /*
+ * Lock pg_subscription_rel with AccessExclusiveLock to
+ * prevent any race conditions with the apply worker
+ * re-launching workers at the same time this code is trying
+ * to remove those tables.
+ *
+ * Even if new worker for this particular rel is restarted it
+ * won't be able to make any progress as we hold exclusive
+ * lock on subscription_rel till the transaction end. It will
+ * simply exit as there is no corresponding rel entry.
+ *
+ * This locking also ensures that the state of rels won't
+ * change till we are done with this refresh operation.
+ */
+ if (!rel)
+ rel = table_open(SubscriptionRelRelationId, AccessExclusiveLock);
+
+ /* Last known rel state. */
+ state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
+
+ sub_remove_rels[remove_rel_len].relid = relid;
+ sub_remove_rels[remove_rel_len++].state = state;
+
+ elog(WARNING, "B: remove rel %d", relid);
+
+ RemoveSubscriptionRel(sub->oid, relid);
+
+ logicalrep_worker_stop(sub->oid, relid);
+
+ /*
+ * For READY state, we would have already dropped the
+ * tablesync origin.
+ */
+ if (state != SUBREL_STATE_READY)
+ {
+ char originname[NAMEDATALEN];
+
+ /*
+ * Drop the tablesync's origin tracking if exists.
+ *
+ * It is possible that the origin is not yet created for
+ * tablesync worker, this can happen for the states before
+ * SUBREL_STATE_FINISHEDCOPY. The apply worker can also
+ * concurrently try to drop the origin and by this time
+ * the origin might be already removed. For these reasons,
+ * passing missing_ok = true.
+ */
+ ReplicationOriginNameForTablesync(sub->oid, relid, originname,
+ sizeof(originname));
+ replorigin_drop_by_name(originname, true, false);
+ }
+
+ ereport(DEBUG1,
+ (errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
+ get_namespace_name(get_rel_namespace(relid)),
+ get_rel_name(relid),
+ sub->name)));
+ }
+ }
+
+ /*
+ * Drop the tablesync slots associated with removed sequences. This has
+ * to be at the end because otherwise if there is an error while doing
+ * the database operations we won't be able to rollback dropped slots.
+ */
+ for (off = 0; off < remove_rel_len; off++)
+ {
+ if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
+ sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+ {
+ char syncslotname[NAMEDATALEN] = {0};
+
+ /*
+ * For READY/SYNCDONE states we know the tablesync slot has
+ * already been dropped by the tablesync worker.
+ *
+ * For other states, there is no certainty, maybe the slot
+ * does not exist yet. Also, if we fail after removing some of
+ * the slots, next time, it will again try to drop already
+ * dropped slots and fail. For these reasons, we allow
+ * missing_ok = true for the drop.
+ */
+ ReplicationSlotNameForTablesync(sub->oid, sub_remove_rels[off].relid,
+ syncslotname, sizeof(syncslotname));
+ ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
+ }
+ }
+
}
PG_FINALLY();
{
@@ -1534,6 +1739,75 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
return tablelist;
}
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+ WalRcvExecResult *res;
+ StringInfoData cmd;
+ TupleTableSlot *slot;
+ Oid tableRow[2] = {TEXTOID, TEXTOID};
+ ListCell *lc;
+ bool first;
+ List *tablelist = NIL;
+
+ Assert(list_length(publications) > 0);
+
+ initStringInfo(&cmd);
+ appendStringInfoString(&cmd, "SELECT DISTINCT s.schemaname, s.sequencename\n"
+ " FROM pg_catalog.pg_publication_sequences s\n"
+ " WHERE s.pubname IN (");
+ first = true;
+ foreach(lc, publications)
+ {
+ char *pubname = strVal(lfirst(lc));
+
+ if (first)
+ first = false;
+ else
+ appendStringInfoString(&cmd, ", ");
+
+ appendStringInfoString(&cmd, quote_literal_cstr(pubname));
+ }
+ appendStringInfoChar(&cmd, ')');
+
+ res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+ pfree(cmd.data);
+
+ if (res->status != WALRCV_OK_TUPLES)
+ ereport(ERROR,
+ (errmsg("could not receive list of replicated tables from the publisher: %s",
+ res->err)));
+
+ /* Process the sequences. */
+ slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+ while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+ {
+ char *nspname;
+ char *relname;
+ bool isnull;
+ RangeVar *rv;
+
+ nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+ Assert(!isnull);
+ relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+ Assert(!isnull);
+
+ rv = makeRangeVar(nspname, relname, -1);
+ tablelist = lappend(tablelist, rv);
+
+ ExecClearTuple(slot);
+ }
+ ExecDropSingleTupleTableSlot(slot);
+
+ walrcv_clear_result(res);
+
+ return tablelist;
+}
+
/*
* This is to report the connection failure while dropping replication slots.
* Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index 1e285e0349..a1c7880be6 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -619,11 +619,11 @@ CheckSubscriptionRelkind(char relkind, const char *nspname,
errdetail("\"%s.%s\" is a foreign table.",
nspname, relname)));
- if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+ if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE && relkind != RELKIND_SEQUENCE)
ereport(ERROR,
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
errmsg("cannot use relation \"%s.%s\" as logical replication target",
nspname, relname),
- errdetail("\"%s.%s\" is not a table.",
+ errdetail("\"%s.%s\" is not a table or a sequence.",
nspname, relname)));
}
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index bd87f23784..86d92de46e 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -4833,8 +4833,10 @@ _copyAlterPublicationStmt(const AlterPublicationStmt *from)
COPY_STRING_FIELD(pubname);
COPY_NODE_FIELD(options);
COPY_NODE_FIELD(tables);
+ COPY_NODE_FIELD(sequences);
COPY_SCALAR_FIELD(for_all_tables);
- COPY_SCALAR_FIELD(tableAction);
+ COPY_SCALAR_FIELD(for_all_sequences);
+ COPY_SCALAR_FIELD(action);
return newnode;
}
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index dba3e6b31e..384ec10f9f 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -2302,8 +2302,10 @@ _equalAlterPublicationStmt(const AlterPublicationStmt *a,
COMPARE_STRING_FIELD(pubname);
COMPARE_NODE_FIELD(options);
COMPARE_NODE_FIELD(tables);
+ COMPARE_NODE_FIELD(sequences);
COMPARE_SCALAR_FIELD(for_all_tables);
- COMPARE_SCALAR_FIELD(tableAction);
+ COMPARE_SCALAR_FIELD(for_all_sequences);
+ COMPARE_SCALAR_FIELD(action);
return true;
}
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index eb24195438..d91dbd325e 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -434,6 +434,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
%type <node> group_by_item empty_grouping_set rollup_clause cube_clause
%type <node> grouping_sets_clause
%type <node> opt_publication_for_tables publication_for_tables
+/* %type <node> opt_publication_for_sequences publication_for_sequences */
%type <list> opt_fdw_options fdw_options
%type <defelt> fdw_option
@@ -9588,6 +9589,7 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
*****************************************************************************/
CreatePublicationStmt:
+ /* CREATE PUBLICATION name opt_publication_for_tables opt_publication_for_sequences opt_definition */
CREATE PUBLICATION name opt_publication_for_tables opt_definition
{
CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
@@ -9622,6 +9624,12 @@ publication_for_tables:
}
;
+/*
+ * FIXME
+ *
+ * opt_publication_for_sequences and publication_for_sequences should be
+ * copies for sequences
+ */
/*****************************************************************************
*
@@ -9633,6 +9641,12 @@ publication_for_tables:
*
* ALTER PUBLICATION name SET TABLE table [, table2]
*
+ * ALTER PUBLICATION name ADD SEQUENCE sequence [, sequence2]
+ *
+ * ALTER PUBLICATION name DROP SEQUENCE sequence [, sequence2]
+ *
+ * ALTER PUBLICATION name SET SEQUENCE sequence [, sequence2]
+ *
*****************************************************************************/
AlterPublicationStmt:
@@ -9648,7 +9662,7 @@ AlterPublicationStmt:
AlterPublicationStmt *n = makeNode(AlterPublicationStmt);
n->pubname = $3;
n->tables = $6;
- n->tableAction = DEFELEM_ADD;
+ n->action = DEFELEM_ADD;
$$ = (Node *)n;
}
| ALTER PUBLICATION name SET TABLE relation_expr_list
@@ -9656,7 +9670,7 @@ AlterPublicationStmt:
AlterPublicationStmt *n = makeNode(AlterPublicationStmt);
n->pubname = $3;
n->tables = $6;
- n->tableAction = DEFELEM_SET;
+ n->action = DEFELEM_SET;
$$ = (Node *)n;
}
| ALTER PUBLICATION name DROP TABLE relation_expr_list
@@ -9664,7 +9678,31 @@ AlterPublicationStmt:
AlterPublicationStmt *n = makeNode(AlterPublicationStmt);
n->pubname = $3;
n->tables = $6;
- n->tableAction = DEFELEM_DROP;
+ n->action = DEFELEM_DROP;
+ $$ = (Node *)n;
+ }
+ | ALTER PUBLICATION name ADD_P SEQUENCE relation_expr_list
+ {
+ AlterPublicationStmt *n = makeNode(AlterPublicationStmt);
+ n->pubname = $3;
+ n->sequences = $6;
+ n->action = DEFELEM_ADD;
+ $$ = (Node *)n;
+ }
+ | ALTER PUBLICATION name SET SEQUENCE relation_expr_list
+ {
+ AlterPublicationStmt *n = makeNode(AlterPublicationStmt);
+ n->pubname = $3;
+ n->sequences = $6;
+ n->action = DEFELEM_SET;
+ $$ = (Node *)n;
+ }
+ | ALTER PUBLICATION name DROP SEQUENCE relation_expr_list
+ {
+ AlterPublicationStmt *n = makeNode(AlterPublicationStmt);
+ n->pubname = $3;
+ n->sequences = $6;
+ n->action = DEFELEM_DROP;
$$ = (Node *)n;
}
;
diff --git a/src/backend/replication/logical/decode.c b/src/backend/replication/logical/decode.c
index 70670169ac..f6296941d9 100644
--- a/src/backend/replication/logical/decode.c
+++ b/src/backend/replication/logical/decode.c
@@ -42,6 +42,7 @@
#include "replication/reorderbuffer.h"
#include "replication/snapbuild.h"
#include "storage/standby.h"
+#include "commands/sequence.h"
typedef struct XLogRecordBuffer
{
@@ -74,10 +75,11 @@ static void DecodeAbort(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
bool two_phase);
static void DecodePrepare(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
xl_xact_parsed_prepare *parsed);
-
+static void DecodeSequence(LogicalDecodingContext *ctx, XLogRecordBuffer *buf);
/* common function to decode tuples */
static void DecodeXLogTuple(char *data, Size len, ReorderBufferTupleBuf *tup);
+static void DecodeSeqTuple(char *data, Size len, ReorderBufferTupleBuf *tuple);
/* helper functions for decoding transactions */
static inline bool FilterPrepare(LogicalDecodingContext *ctx,
@@ -158,6 +160,10 @@ LogicalDecodingProcessRecord(LogicalDecodingContext *ctx, XLogReaderState *recor
DecodeLogicalMsgOp(ctx, &buf);
break;
+ case RM_SEQ_ID:
+ DecodeSequence(ctx, &buf);
+ break;
+
/*
* Rmgrs irrelevant for logical decoding; they describe stuff not
* represented in logical decoding. Add new rmgrs in rmgrlist.h's
@@ -173,7 +179,6 @@ LogicalDecodingProcessRecord(LogicalDecodingContext *ctx, XLogReaderState *recor
case RM_HASH_ID:
case RM_GIN_ID:
case RM_GIST_ID:
- case RM_SEQ_ID:
case RM_SPGIST_ID:
case RM_BRIN_ID:
case RM_COMMIT_TS_ID:
@@ -1315,3 +1320,125 @@ DecodeTXNNeedSkip(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
(txn_dbid != InvalidOid && txn_dbid != ctx->slot->data.database) ||
ctx->fast_forward || FilterByOrigin(ctx, origin_id));
}
+
+/*
+ * Decode Sequence Tuple
+ */
+static void
+DecodeSeqTuple(char *data, Size len, ReorderBufferTupleBuf *tuple)
+{
+ int datalen = len - sizeof(xl_seq_rec) - SizeofHeapTupleHeader;
+
+ Assert(datalen >= 0);
+
+ tuple->tuple.t_len = datalen + SizeofHeapTupleHeader;
+
+ ItemPointerSetInvalid(&tuple->tuple.t_self);
+
+ tuple->tuple.t_tableOid = InvalidOid;
+
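+ /* copy the tuple header first, then the tuple data, skipping the xl_seq_rec part */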
+ memcpy(((char *) tuple->tuple.t_data),
+ data + sizeof(xl_seq_rec),
+ SizeofHeapTupleHeader);
+
+ memcpy(((char *) tuple->tuple.t_data) + SizeofHeapTupleHeader,
+ data + sizeof(xl_seq_rec) + SizeofHeapTupleHeader,
+ datalen);
+}
+
+/*
+ * Handle sequence decode
+ *
+ * Decoding sequences is a bit tricky, because while most sequence actions
+ * are non-transactional (not subject to rollback), some need to be handled
+ * as transactional.
+ *
+ * By default, a sequence increment is non-transactional - we must not queue
+ * it in a transaction as other changes, because the transaction might get
+ * rolled back and we'd discard the increment. The downstream would not be
+ * notified about the increment, which is wrong.
+ *
+ * On the other hand, the sequence may be created in a transaction. In this
+ * case we *should* queue the change as other changes in the transaction,
+ * because we don't want to send the increments for unknown sequence to the
+ * plugin - it might get confused about which sequence it's related to etc.
+ */
+static void
+DecodeSequence(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
+{
+ SnapBuild *builder = ctx->snapshot_builder;
+ ReorderBufferTupleBuf *tuplebuf;
+ RelFileNode target_node;
+ XLogReaderState *r = buf->record;
+ char *tupledata = NULL;
+ Size tuplelen;
+ Size datalen = 0;
+ TransactionId xid = XLogRecGetXid(r);
+ uint8 info = XLogRecGetInfo(buf->record) & ~XLR_INFO_MASK;
+ xl_seq_rec *xlrec;
+ Snapshot snapshot;
+ RepOriginId origin_id = XLogRecGetOrigin(r);
+ bool transactional;
+
+ /* only decode changes flagged with XLOG_SEQ_LOG */
+ if (info != XLOG_SEQ_LOG)
+ elog(ERROR, "unexpected RM_SEQ_ID record type: %u", info);
+
+ /*
+ * If we don't have a snapshot or we're just fast-forwarding, there is no
+ * point in decoding sequences.
+ */
+ if (SnapBuildCurrentState(builder) < SNAPBUILD_FULL_SNAPSHOT ||
+ ctx->fast_forward)
+ return;
+
+ /* only interested in our database */
+ XLogRecGetBlockTag(r, 0, &target_node, NULL, NULL);
+ if (target_node.dbNode != ctx->slot->data.database)
+ return;
+
+ /* output plugin doesn't look for this origin, no need to queue */
+ if (FilterByOrigin(ctx, XLogRecGetOrigin(r)))
+ return;
+
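+ /*
+ * The record data starts with the xl_seq_rec struct, followed by the
+ * sequence tuple (header and data). Work out how large a tuple buffer
+ * we need before reconstructing the tuple below.
+ */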
+ tupledata = XLogRecGetData(r);
+ datalen = XLogRecGetDataLen(r);
+ tuplelen = datalen - SizeOfHeapHeader - sizeof(xl_seq_rec);
+
+ /* extract the WAL record, with "created" flag */
+ xlrec = (xl_seq_rec *) XLogRecGetData(r);
+
+ /* XXX how could we have sequence change without data? */
+ if (!datalen || !tupledata)
+ return;
+
+ tuplebuf = ReorderBufferGetTupleBuf(ctx->reorder, tuplelen);
+ DecodeSeqTuple(tupledata, datalen, tuplebuf);
+
+ /*
+ * Should we handle the sequence increment as transactional or not?
+ *
+ * If the sequence was created in a still-running transaction, treat
+ * it as transactional and queue the increments. Otherwise it needs
+ * to be treated as non-transactional, in which case we send it to
+ * the plugin right away.
+ */
+ transactional = ReorderBufferSequenceIsTransactional(ctx->reorder,
+ target_node,
+ xlrec->created);
+
+ /* Skip the change if already processed (per the snapshot). */
+ if (transactional &&
+ !SnapBuildProcessChange(builder, xid, buf->origptr))
+ return;
+ else if (!transactional &&
+ (SnapBuildCurrentState(builder) != SNAPBUILD_CONSISTENT ||
+ SnapBuildXactNeedsSkip(builder, buf->origptr)))
+ return;
+
+ /* Queue the increment (or send immediately if not transactional). */
+ snapshot = SnapBuildGetOrBuildSnapshot(builder, xid);
+ ReorderBufferQueueSequence(ctx->reorder, xid, snapshot, buf->endptr,
+ origin_id, target_node, transactional,
+ xlrec->created, tuplebuf);
+}
diff --git a/src/backend/replication/logical/logical.c b/src/backend/replication/logical/logical.c
index ffc6160e9f..66a4a4603a 100644
--- a/src/backend/replication/logical/logical.c
+++ b/src/backend/replication/logical/logical.c
@@ -73,6 +73,10 @@ static void truncate_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
static void message_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
XLogRecPtr message_lsn, bool transactional,
const char *prefix, Size message_size, const char *message);
+static void sequence_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called);
/* streaming callbacks */
static void stream_start_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
@@ -217,6 +221,7 @@ StartupDecodingContext(List *output_plugin_options,
ctx->reorder->apply_truncate = truncate_cb_wrapper;
ctx->reorder->commit = commit_cb_wrapper;
ctx->reorder->message = message_cb_wrapper;
+ ctx->reorder->sequence = sequence_cb_wrapper;
/*
* To support streaming, we require start/stop/abort/commit/change
@@ -1179,6 +1184,43 @@ message_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
error_context_stack = errcallback.previous;
}
+static void
+sequence_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ LogicalDecodingContext *ctx = cache->private_data;
+ LogicalErrorCallbackState state;
+ ErrorContextCallback errcallback;
+
+ Assert(!ctx->fast_forward);
+
+ if (ctx->callbacks.sequence_cb == NULL)
+ return;
+
+ /* Push callback + info on the error context stack */
+ state.ctx = ctx;
+ state.callback_name = "sequence";
+ state.report_location = sequence_lsn;
+ errcallback.callback = output_plugin_error_callback;
+ errcallback.arg = (void *) &state;
+ errcallback.previous = error_context_stack;
+ error_context_stack = &errcallback;
+
+ /* set output state */
+ ctx->accept_writes = true;
+ ctx->write_xid = txn != NULL ? txn->xid : InvalidTransactionId;
+ ctx->write_location = sequence_lsn;
+
+ /* do the actual work: call callback */
+ ctx->callbacks.sequence_cb(ctx, txn, sequence_lsn, rel, transactional,
+ created, last_value, log_cnt, is_called);
+
+ /* Pop the error context stack */
+ error_context_stack = errcallback.previous;
+}
+
static void
stream_start_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
XLogRecPtr first_lsn)
diff --git a/src/backend/replication/logical/proto.c b/src/backend/replication/logical/proto.c
index 1cf59e0fb0..f9e928f906 100644
--- a/src/backend/replication/logical/proto.c
+++ b/src/backend/replication/logical/proto.c
@@ -389,6 +389,58 @@ logicalrep_write_message(StringInfo out, TransactionId xid, XLogRecPtr lsn,
pq_sendbytes(out, message, sz);
}
+/*
+ * Write SEQUENCE to stream
+ */
+void
+logicalrep_write_sequence(StringInfo out, Relation rel, TransactionId xid,
+ XLogRecPtr lsn, bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ uint8 flags = 0;
+ char *relname;
+
+ pq_sendbyte(out, LOGICAL_REP_MSG_SEQUENCE);
+
+ /* transaction ID (if not valid, we're not streaming) */
+ if (TransactionIdIsValid(xid))
+ pq_sendint32(out, xid);
+
+ pq_sendint8(out, flags);
+ pq_sendint64(out, lsn);
+
+ logicalrep_write_namespace(out, RelationGetNamespace(rel));
+ relname = RelationGetRelationName(rel);
+ pq_sendstring(out, relname);
+
+ pq_sendint8(out, transactional);
+ pq_sendint8(out, created);
+ pq_sendint64(out, last_value);
+ pq_sendint64(out, log_cnt);
+ pq_sendint8(out, is_called);
+}
+
+/*
+ * Read SEQUENCE from the stream.
+ */
+void
+logicalrep_read_sequence(StringInfo in, LogicalRepSequence *seqdata)
+{
+ /* XXX skipping flags and lsn */
+ pq_getmsgint(in, 1);
+ pq_getmsgint64(in);
+
+ /* Read relation name from stream */
+ seqdata->nspname = pstrdup(logicalrep_read_namespace(in));
+ seqdata->seqname = pstrdup(pq_getmsgstring(in));
+
+ seqdata->transactional = pq_getmsgint(in, 1);
+ seqdata->created = pq_getmsgint(in, 1);
+ seqdata->last_value = pq_getmsgint64(in);
+ seqdata->log_cnt = pq_getmsgint64(in);
+ seqdata->is_called = pq_getmsgint(in, 1);
+}
+
/*
* Write relation description to the output stream.
*/
diff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c
index f96029f15a..d18fd30ced 100644
--- a/src/backend/replication/logical/reorderbuffer.c
+++ b/src/backend/replication/logical/reorderbuffer.c
@@ -116,6 +116,13 @@ typedef struct ReorderBufferTXNByIdEnt
ReorderBufferTXN *txn;
} ReorderBufferTXNByIdEnt;
+/* entry for hash table we use to track sequences created in running xacts */
+typedef struct ReorderBufferSequenceEnt
+{
+ RelFileNode rnode; /* hash key - relfilenode of the sequence */
+ TransactionId xid; /* XID of the (sub)xact that created it */
+} ReorderBufferSequenceEnt;
+
/* data structures for (relfilenode, ctid) => (cmin, cmax) mapping */
typedef struct ReorderBufferTupleCidKey
{
@@ -337,6 +344,14 @@ ReorderBufferAllocate(void)
buffer->by_txn = hash_create("ReorderBufferByXid", 1000, &hash_ctl,
HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+ /* hash table of sequences, mapping relfilenode to XID of transaction */
+ hash_ctl.keysize = sizeof(RelFileNode);
+ hash_ctl.entrysize = sizeof(ReorderBufferSequenceEnt);
+ hash_ctl.hcxt = buffer->context;
+
+ buffer->sequences = hash_create("ReorderBufferSequenceHash", 1000, &hash_ctl,
+ HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+
buffer->by_txn_last_xid = InvalidTransactionId;
buffer->by_txn_last_txn = NULL;
@@ -523,6 +538,13 @@ ReorderBufferReturnChange(ReorderBuffer *rb, ReorderBufferChange *change,
case REORDER_BUFFER_CHANGE_INTERNAL_COMMAND_ID:
case REORDER_BUFFER_CHANGE_INTERNAL_TUPLECID:
break;
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ if (change->data.sequence.tuple)
+ {
+ ReorderBufferReturnTupleBuf(rb, change->data.sequence.tuple);
+ change->data.sequence.tuple = NULL;
+ }
+ break;
}
pfree(change);
@@ -850,6 +872,212 @@ ReorderBufferQueueMessage(ReorderBuffer *rb, TransactionId xid,
}
}
+/*
+ * Treat the sequence increment as transactional?
+ *
+ * The increment is transactional if the sequence was created (or assigned
+ * a new relfilenode) in a transaction that is still being decoded - either
+ * the WAL record itself says so (the "created" flag), or we find the
+ * relfilenode in the lookup table of such sequences.
+ */
+bool
+ReorderBufferSequenceIsTransactional(ReorderBuffer *rb,
+ RelFileNode rnode, bool created)
+{
+ bool found = false;
+
+ if (created)
+ return true;
+
+ hash_search(rb->sequences,
+ (void *) &rnode,
+ HASH_FIND,
+ &found);
+
+ return found;
+}
+
+
+/*
+ * A transactional sequence increment is queued to be processed upon commit
+ * and a non-transactional increment gets processed immediately.
+ *
+ * A sequence update may be either transactional or non-transactional. When
+ * the sequence was created in a still-running transaction, treat the update
+ * as transactional and queue it in that transaction. Otherwise treat it as
+ * non-transactional, so that the increment is not discarded in case of a
+ * rollback.
+ */
+void
+ReorderBufferQueueSequence(ReorderBuffer *rb, TransactionId xid,
+ Snapshot snapshot, XLogRecPtr lsn, RepOriginId origin_id,
+ RelFileNode rnode, bool transactional, bool created,
+ ReorderBufferTupleBuf *tuplebuf)
+{
+ /*
+ * Change needs to be handled as transactional, because the sequence was
+ * created in a transaction that is still running. In that case all the
+ * changes need to be queued in that transaction, we must not send them
+ * to the downstream until the transaction commits.
+ *
+ * There's a bit of a problem with subtransactions - we can't queue the
+ * change into the current subxact, because it might get rolled back and
+ * we'd lose the increment. We need to queue it into the (sub)xact that
+ * created the sequence, which is why we track that XID in the hash table.
+ */
+ if (transactional)
+ {
+ MemoryContext oldcontext;
+ ReorderBufferChange *change;
+
+ /* lookup sequence by relfilenode */
+ ReorderBufferSequenceEnt *ent;
+ bool found;
+
+ /* transactional changes require a transaction */
+ Assert(xid != InvalidTransactionId);
+
+ /* find the entry for the sequence, creating it when this is the "create" increment */
+ ent = hash_search(rb->sequences,
+ (void *) &rnode,
+ created ? HASH_ENTER : HASH_FIND,
+ &found);
+
+ /*
+ * If this is the "create" increment, we must not have found any
+ * pre-existing entry in the hash table (i.e. there must not be
+ * any conflicting sequence).
+ */
+ Assert(!(created && found));
+
+ /* But we must have either created or found an existing entry. */
+ Assert(created || found);
+
+ /*
+ * When creating the sequence, remember the XID of the transaction
+ * that created it.
+ */
+ if (created)
+ ent->xid = xid;
+
+ /* XXX Maybe check that we're still in the same top-level xact? */
+
+ /* OK, allocate and queue the change */
+ oldcontext = MemoryContextSwitchTo(rb->context);
+
+ change = ReorderBufferGetChange(rb);
+
+ change->action = REORDER_BUFFER_CHANGE_SEQUENCE;
+ change->origin_id = origin_id;
+
+ memcpy(&change->data.sequence.relnode, &rnode, sizeof(RelFileNode));
+
+ change->data.sequence.created = created;
+ change->data.sequence.tuple = tuplebuf;
+
+ /* add it to the same subxact that created the sequence */
+ ReorderBufferQueueChange(rb, ent->xid, lsn, change, false);
+
+ MemoryContextSwitchTo(oldcontext);
+ }
+ else
+ {
+ /*
+ * This increment is for a sequence that was not created in any
+ * running transaction, so we treat it as non-transactional and
+ * just send it to the output plugin directly.
+ */
+ ReorderBufferTXN *txn = NULL;
+ volatile Snapshot snapshot_now = snapshot;
+ bool using_subtxn;
+
+#ifdef USE_ASSERT_CHECKING
+ /* All "creates" have to be handled as transactional. */
+ Assert(!created);
+
+ /* Make sure the sequence is not in the hash table. */
+ {
+ bool found;
+ hash_search(rb->sequences,
+ (void *) &rnode,
+ HASH_FIND, &found);
+ Assert(!found);
+ }
+#endif
+
+ if (xid != InvalidTransactionId)
+ txn = ReorderBufferTXNByXid(rb, xid, true, NULL, lsn, true);
+
+ /* setup snapshot to allow catalog access */
+ SetupHistoricSnapshot(snapshot_now, NULL);
+
+ /*
+ * Decoding needs access to syscaches et al., which in turn use
+ * heavyweight locks and such. Thus we need to have enough state around to
+ * keep track of those. The easiest way is to simply use a transaction
+ * internally. That also allows us to easily enforce that nothing writes
+ * to the database by checking for xid assignments.
+ *
+ * When we're called via the SQL SRF there's already a transaction
+ * started, so start an explicit subtransaction there.
+ */
+ using_subtxn = IsTransactionOrTransactionBlock();
+
+ PG_TRY();
+ {
+ Relation relation;
+ HeapTuple tuple;
+ bool isnull;
+ int64 last_value, log_cnt;
+ bool is_called;
+ Oid reloid;
+
+ if (using_subtxn)
+ BeginInternalSubTransaction("sequence");
+ else
+ StartTransactionCommand();
+
+ reloid = RelidByRelfilenode(rnode.spcNode, rnode.relNode);
+
+ if (reloid == InvalidOid)
+ elog(ERROR, "could not map filenode \"%s\" to relation OID",
+ relpathperm(rnode,
+ MAIN_FORKNUM));
+
+ relation = RelationIdGetRelation(reloid);
+ tuple = &tuplebuf->tuple;
+
+ /*
+ * Extract the internal sequence values, describing the state.
+ *
+ * XXX Seems a bit strange to access it directly. Maybe there's
+ * a better / more correct way?
+ */
+ last_value = DatumGetInt64(heap_getattr(tuple, 1, RelationGetDescr(relation), &isnull));
+ log_cnt = DatumGetInt64(heap_getattr(tuple, 2, RelationGetDescr(relation), &isnull));
+ is_called = DatumGetBool(heap_getattr(tuple, 3, RelationGetDescr(relation), &isnull));
+
+ rb->sequence(rb, txn, lsn, relation, transactional, created,
+ last_value, log_cnt, is_called);
+
+ RelationClose(relation);
+
+ TeardownHistoricSnapshot(false);
+
+ AbortCurrentTransaction();
+
+ if (using_subtxn)
+ RollbackAndReleaseCurrentSubTransaction();
+ }
+ PG_CATCH();
+ {
+ TeardownHistoricSnapshot(true);
+
+ AbortCurrentTransaction();
+
+ if (using_subtxn)
+ RollbackAndReleaseCurrentSubTransaction();
+
+ PG_RE_THROW();
+ }
+ PG_END_TRY();
+ }
+}
+
/*
* AssertTXNLsnOrder
* Verify LSN ordering of transaction lists in the reorderbuffer
@@ -1526,6 +1754,31 @@ ReorderBufferCleanupTXN(ReorderBuffer *rb, ReorderBufferTXN *txn)
&found);
Assert(found);
+ /*
+ * Remove sequences created in this transaction (if any).
+ *
+ * There's no way to search by XID, so we simply do a seqscan of all
+ * the entries in the hash table. Hopefully there are only a couple
+ * entries in most cases - people generally don't create many new
+ * sequences over and over.
+ */
+ {
+ HASH_SEQ_STATUS scan_status;
+ ReorderBufferSequenceEnt *ent;
+
+ hash_seq_init(&scan_status, rb->sequences);
+ while ((ent = (ReorderBufferSequenceEnt *) hash_seq_search(&scan_status)) != NULL)
+ {
+ /* skip sequences not from this transaction */
+ if (ent->xid != txn->xid)
+ continue;
+
+ (void) hash_search(rb->sequences,
+ (void *) &(ent->rnode),
+ HASH_REMOVE, NULL);
+ }
+ }
+
/* remove entries spilled to disk */
if (rbtxn_is_serialized(txn))
ReorderBufferRestoreCleanup(rb, txn);
@@ -1941,6 +2194,39 @@ ReorderBufferApplyMessage(ReorderBuffer *rb, ReorderBufferTXN *txn,
change->data.msg.message);
}
+/*
+ * Helper function for ReorderBufferProcessTXN for applying sequences.
+ */
+static inline void
+ReorderBufferApplySequence(ReorderBuffer *rb, ReorderBufferTXN *txn,
+ Relation relation, ReorderBufferChange *change,
+ bool streaming)
+{
+ HeapTuple tuple;
+ bool isnull;
+ int64 last_value, log_cnt;
+ bool is_called;
+
+ /* FIXME support streaming */
+ Assert(!streaming);
+
+ tuple = &change->data.sequence.tuple->tuple;
+
+ /*
+ * Extract the internal sequence values, describing the state.
+ *
+ * XXX Seems a bit strange to access it directly. Maybe there's
+ * a better / more correct way?
+ */
+ last_value = DatumGetInt64(heap_getattr(tuple, 1, RelationGetDescr(relation), &isnull));
+ log_cnt = DatumGetInt64(heap_getattr(tuple, 2, RelationGetDescr(relation), &isnull));
+ is_called = DatumGetBool(heap_getattr(tuple, 3, RelationGetDescr(relation), &isnull));
+
+ rb->sequence(rb, txn, change->lsn, relation, true, /* gotta be transactional */
+ change->data.sequence.created,
+ last_value, log_cnt, is_called);
+}
+
/*
* Function to store the command id and snapshot at the end of the current
* stream so that we can reuse the same while sending the next stream.
@@ -2357,6 +2643,31 @@ ReorderBufferProcessTXN(ReorderBuffer *rb, ReorderBufferTXN *txn,
case REORDER_BUFFER_CHANGE_INTERNAL_TUPLECID:
elog(ERROR, "tuplecid value in changequeue");
break;
+
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ Assert(snapshot_now);
+
+ reloid = RelidByRelfilenode(change->data.sequence.relnode.spcNode,
+ change->data.sequence.relnode.relNode);
+
+ if (reloid == InvalidOid)
+ elog(ERROR, "could not map filenode \"%s\" to relation OID",
+ relpathperm(change->data.sequence.relnode,
+ MAIN_FORKNUM));
+
+ relation = RelationIdGetRelation(reloid);
+
+ if (!RelationIsValid(relation))
+ elog(ERROR, "could not open relation with OID %u (for filenode \"%s\")",
+ reloid,
+ relpathperm(change->data.sequence.relnode,
+ MAIN_FORKNUM));
+
+ if (RelationIsLogicallyLogged(relation))
+ ReorderBufferApplySequence(rb, txn, relation, change, streaming);
+
+ RelationClose(relation);
+ break;
}
}
@@ -3751,6 +4062,39 @@ ReorderBufferSerializeChange(ReorderBuffer *rb, ReorderBufferTXN *txn,
memcpy(data, change->data.truncate.relids, size);
data += size;
+ break;
+ }
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ {
+ char *data;
+ ReorderBufferTupleBuf *tup;
+ Size len = 0;
+
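+ /*
+ * The sequence tuple is serialized as the HeapTupleData struct
+ * followed by the tuple data, same as for other tuple changes.
+ */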
+ tup = change->data.sequence.tuple;
+
+ if (tup)
+ {
+ sz += sizeof(HeapTupleData);
+ len = tup->tuple.t_len;
+ sz += len;
+ }
+
+ /* make sure we have enough space */
+ ReorderBufferSerializeReserve(rb, sz);
+
+ data = ((char *) rb->outbuf) + sizeof(ReorderBufferDiskChange);
+ /* might have been reallocated above */
+ ondisk = (ReorderBufferDiskChange *) rb->outbuf;
+
+ if (len)
+ {
+ memcpy(data, &tup->tuple, sizeof(HeapTupleData));
+ data += sizeof(HeapTupleData);
+
+ memcpy(data, tup->tuple.t_data, len);
+ data += len;
+ }
+
break;
}
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
@@ -4014,6 +4358,22 @@ ReorderBufferChangeSize(ReorderBufferChange *change)
{
sz += sizeof(Oid) * change->data.truncate.nrelids;
+ break;
+ }
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ {
+ ReorderBufferTupleBuf *tup;
+ Size len = 0;
+
+ tup = change->data.sequence.tuple;
+
+ if (tup)
+ {
+ sz += sizeof(HeapTupleData);
+ len = tup->tuple.t_len;
+ sz += len;
+ }
+
break;
}
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
@@ -4314,6 +4674,30 @@ ReorderBufferRestoreChange(ReorderBuffer *rb, ReorderBufferTXN *txn,
break;
}
+
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ if (change->data.sequence.tuple)
+ {
+ uint32 tuplelen = ((HeapTuple) data)->t_len;
+
+ change->data.sequence.tuple =
+ ReorderBufferGetTupleBuf(rb, tuplelen - SizeofHeapTupleHeader);
+
+ /* restore ->tuple */
+ memcpy(&change->data.sequence.tuple->tuple, data,
+ sizeof(HeapTupleData));
+ data += sizeof(HeapTupleData);
+
+ /* reset t_data pointer into the new tuplebuf */
+ change->data.sequence.tuple->tuple.t_data =
+ ReorderBufferTupleBufData(change->data.sequence.tuple);
+
+ /* restore tuple data itself */
+ memcpy(change->data.sequence.tuple->tuple.t_data, data, tuplelen);
+ data += tuplelen;
+ }
+ break;
+
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
case REORDER_BUFFER_CHANGE_INTERNAL_COMMAND_ID:
case REORDER_BUFFER_CHANGE_INTERNAL_TUPLECID:
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 67f907cdd9..d1ba2a9645 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -99,6 +99,7 @@
#include "catalog/pg_subscription_rel.h"
#include "catalog/pg_type.h"
#include "commands/copy.h"
+#include "commands/sequence.h"
#include "miscadmin.h"
#include "parser/parse_relation.h"
#include "pgstat.h"
@@ -353,6 +354,12 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
*
* If the synchronization position is reached (SYNCDONE), then the table can
* be marked as READY and is no longer tracked.
+ *
+ * XXX This needs to handle sequences too - after AlterSubscription_refresh
+ * starts caring about sequences, GetSubscriptionNotReadyRelations won't
+ * return just tables, and we'll have to sync them here. Not sure it's worth
+ * creating a new "sync" worker per sequence, maybe we should just sync them
+ * in the current process (it's pretty light-weight).
*/
static void
process_syncing_tables_for_apply(XLogRecPtr current_lsn)
@@ -870,6 +877,99 @@ copy_table(Relation rel)
logicalrep_rel_close(relmapentry, NoLock);
}
+
+
+/*
+ * Fetch the current state (last_value, log_cnt, is_called) of a sequence
+ * from the publisher, so that the local sequence can be set to a matching
+ * state.
+ */
+static void
+fetch_sequence_data(char *nspname, char *relname,
+ int64 *last_value, int64 *log_cnt, bool *is_called)
+{
+ WalRcvExecResult *res;
+ StringInfoData cmd;
+ TupleTableSlot *slot;
+ Oid tableRow[3] = {INT8OID, INT8OID, BOOLOID};
+
+ initStringInfo(&cmd);
+ appendStringInfo(&cmd, "SELECT last_value, log_cnt, is_called\n"
+ " FROM %s", quote_qualified_identifier(nspname, relname));
+
+ res = walrcv_exec(LogRepWorkerWalRcvConn, cmd.data, 3, tableRow);
+ pfree(cmd.data);
+
+ if (res->status != WALRCV_OK_TUPLES)
+ ereport(ERROR,
+ (errmsg("could not receive list of replicated tables from the publisher: %s",
+ res->err)));
+
+ /* Process the sequence. */
+ slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+ while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+ {
+ bool isnull;
+
+ *last_value = DatumGetInt64(slot_getattr(slot, 1, &isnull));
+ Assert(!isnull);
+
+ *log_cnt = DatumGetInt64(slot_getattr(slot, 2, &isnull));
+ Assert(!isnull);
+
+ *is_called = DatumGetBool(slot_getattr(slot, 3, &isnull));
+ Assert(!isnull);
+
+ ExecClearTuple(slot);
+ }
+ ExecDropSingleTupleTableSlot(slot);
+
+ walrcv_clear_result(res);
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Caller is responsible for locking the local relation.
+ */
+static void
+copy_sequence(Relation rel)
+{
+ LogicalRepRelMapEntry *relmapentry;
+ LogicalRepRelation lrel;
+ int64 last_value = 0,
+ log_cnt = 0;
+ bool is_called = false;
+
+ /* Get the publisher relation info. */
+ fetch_remote_table_info(get_namespace_name(RelationGetNamespace(rel)),
+ RelationGetRelationName(rel), &lrel);
+
+ /* Put the relation into relmap. */
+ logicalrep_relmap_update(&lrel);
+
+ /* Map the publisher relation to local one. */
+ relmapentry = logicalrep_rel_open(lrel.remoteid, NoLock);
+ Assert(rel == relmapentry->localrel);
+
+ Assert(lrel.relkind == RELKIND_SEQUENCE);
+
+ fetch_sequence_data(lrel.nspname, lrel.relname, &last_value, &log_cnt, &is_called);
+
+ elog(WARNING, "sequence %s info last_value %ld log_cnt %ld is_called %d",
+ quote_qualified_identifier(lrel.nspname, lrel.relname),
+ last_value, log_cnt, is_called);
+
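+ /* Update the local sequence to the state received from the publisher. */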
+ ResetSequence2(RelationGetRelid(rel), last_value, log_cnt, is_called);
+
+ logicalrep_rel_close(relmapentry, NoLock);
+}
+
/*
* Determine the tablesync slot name.
*
@@ -1102,10 +1202,20 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
originname)));
}
- /* Now do the initial data copy */
- PushActiveSnapshot(GetTransactionSnapshot());
- copy_table(rel);
- PopActiveSnapshot();
+ if (get_rel_relkind(RelationGetRelid(rel)) == RELKIND_SEQUENCE)
+ {
+ /* Now do the initial sequence copy */
+ PushActiveSnapshot(GetTransactionSnapshot());
+ copy_sequence(rel);
+ PopActiveSnapshot();
+ }
+ else
+ {
+ /* Now do the initial data copy */
+ PushActiveSnapshot(GetTransactionSnapshot());
+ copy_table(rel);
+ PopActiveSnapshot();
+ }
res = walrcv_exec(LogRepWorkerWalRcvConn, "COMMIT", 0, NULL);
if (res->status != WALRCV_OK_COMMAND)
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 4b112593c6..7a84726b85 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -70,6 +70,7 @@
#include "catalog/pg_tablespace.h"
#include "commands/tablecmds.h"
#include "commands/tablespace.h"
+#include "commands/sequence.h"
#include "commands/trigger.h"
#include "executor/executor.h"
#include "executor/execPartition.h"
@@ -823,6 +824,57 @@ apply_handle_origin(StringInfo s)
errmsg_internal("ORIGIN message sent out of order")));
}
+/*
+ * Handle SEQUENCE message.
+ */
+static void
+apply_handle_sequence(StringInfo s)
+{
+ LogicalRepSequence seq;
+ Oid relid;
+
+ if (handle_streamed_transaction(LOGICAL_REP_MSG_SEQUENCE, s))
+ return;
+
+ logicalrep_read_sequence(s, &seq);
+
+ /*
+ * Transactional sequence updates arrive as part of a remote transaction,
+ * while non-transactional updates must not be part of one (and there must
+ * not be any local transaction in progress either).
+ */
+ Assert((!seq.transactional) || in_remote_transaction);
+ Assert(!(!seq.transactional && in_remote_transaction));
+ Assert(!(!seq.transactional && IsTransactionState()));
+
+ /*
+ * Make sure we're in a transaction (needed by ResetSequence2). For
+ * non-transactional updates we're guaranteed to start a new one,
+ * and we'll commit it at the end.
+ */
+ if (!IsTransactionState())
+ {
+ StartTransactionCommand();
+ maybe_reread_subscription();
+ }
+
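+ /* Find the local sequence matching the remote one, and lock it. */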
+ relid = RangeVarGetRelid(makeRangeVar(seq.nspname,
+ seq.seqname, -1),
+ RowExclusiveLock, false);
+
+ elog(WARNING, "applying sequence transactional %d created %d last_value %ld log_cnt %ld is_called %d",
+ seq.transactional, seq.created, seq.last_value, seq.log_cnt, seq.is_called);
+
+ /* apply the sequence change */
+ ResetSequence2(relid, seq.last_value, seq.log_cnt, seq.is_called);
+
+ /*
+ * Commit the transaction we started above (we only do this when not in
+ * a remote transaction, i.e. for non-transactional sequence updates).
+ */
+ if (!in_remote_transaction)
+ CommitTransactionCommand();
+}
+
/*
* Handle STREAM START message.
*/
@@ -2067,6 +2119,10 @@ apply_dispatch(StringInfo s)
*/
return;
+ case LOGICAL_REP_MSG_SEQUENCE:
+ apply_handle_sequence(s);
+ return;
+
case LOGICAL_REP_MSG_STREAM_START:
apply_handle_stream_start(s);
return;
diff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c
index 63f108f960..c5fc6b3793 100644
--- a/src/backend/replication/pgoutput/pgoutput.c
+++ b/src/backend/replication/pgoutput/pgoutput.c
@@ -49,6 +49,10 @@ static void pgoutput_message(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn, XLogRecPtr message_lsn,
bool transactional, const char *prefix,
Size sz, const char *message);
+static void pgoutput_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation rel, bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called);
static bool pgoutput_origin_filter(LogicalDecodingContext *ctx,
RepOriginId origin_id);
static void pgoutput_stream_start(struct LogicalDecodingContext *ctx,
@@ -144,6 +148,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->change_cb = pgoutput_change;
cb->truncate_cb = pgoutput_truncate;
cb->message_cb = pgoutput_message;
+ cb->sequence_cb = pgoutput_sequence;
cb->commit_cb = pgoutput_commit_txn;
cb->filter_by_origin_cb = pgoutput_origin_filter;
cb->shutdown_cb = pgoutput_shutdown;
@@ -166,11 +171,13 @@ parse_output_parameters(List *options, PGOutputData *data)
bool publication_names_given = false;
bool binary_option_given = false;
bool messages_option_given = false;
+ bool sequences_option_given = false;
bool streaming_given = false;
data->binary = false;
data->streaming = false;
data->messages = false;
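+ /* unlike messages, sequence changes are published by default */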
+ data->sequences = true;
foreach(lc, options)
{
@@ -236,6 +243,16 @@ parse_output_parameters(List *options, PGOutputData *data)
data->messages = defGetBoolean(defel);
}
+ else if (strcmp(defel->defname, "sequences") == 0)
+ {
+ if (sequences_option_given)
+ ereport(ERROR,
+ (errcode(ERRCODE_SYNTAX_ERROR),
+ errmsg("conflicting or redundant options")));
+ sequences_option_given = true;
+
+ data->sequences = defGetBoolean(defel);
+ }
else if (strcmp(defel->defname, "streaming") == 0)
{
if (streaming_given)
@@ -756,6 +773,47 @@ pgoutput_message(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
OutputPluginWrite(ctx, true);
}
+static void
+pgoutput_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation rel, bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ PGOutputData *data = (PGOutputData *) ctx->output_plugin_private;
+ TransactionId xid = InvalidTransactionId;
+ RelationSyncEntry *relentry;
+
+ if (!data->sequences)
+ return;
+
+ if (!is_publishable_relation(rel))
+ return;
+
+ /* XXX handle (in_streaming) here */
+
+ relentry = get_rel_sync_entry(data, RelationGetRelid(rel));
+
+ /*
+ * First check the sequence filter.
+ *
+ * We handle just REORDER_BUFFER_CHANGE_SEQUENCE here.
+ */
+ if (!relentry->pubactions.pubsequence)
+ return;
+
+ OutputPluginPrepareWrite(ctx, true);
+ logicalrep_write_sequence(ctx->out,
+ rel,
+ xid,
+ sequence_lsn,
+ transactional,
+ created,
+ last_value,
+ log_cnt,
+ is_called);
+ OutputPluginWrite(ctx, true);
+}
+
/*
* Currently we always forward.
*/
@@ -1029,7 +1087,8 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
entry->streamed_txns = NIL;
entry->replicate_valid = false;
entry->pubactions.pubinsert = entry->pubactions.pubupdate =
- entry->pubactions.pubdelete = entry->pubactions.pubtruncate = false;
+ entry->pubactions.pubdelete = entry->pubactions.pubtruncate =
+ entry->pubactions.pubsequence = false;
entry->publish_as_relid = InvalidOid;
entry->map = NULL; /* will be set by maybe_send_schema() if needed */
}
@@ -1040,6 +1099,7 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
List *pubids = GetRelationPublications(relid);
ListCell *lc;
Oid publish_as_relid = relid;
+ bool is_sequence = (get_rel_relkind(relid) == RELKIND_SEQUENCE);
/* Reload publications if needed before use. */
if (!publications_valid)
@@ -1063,12 +1123,23 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
Publication *pub = lfirst(lc);
bool publish = false;
- if (pub->alltables)
+ if (pub->alltables && (!is_sequence))
{
publish = true;
if (pub->pubviaroot && am_partition)
publish_as_relid = llast_oid(get_partition_ancestors(relid));
}
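+ /* an "all sequences" publication publishes every sequence */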
+ else if (pub->allsequences && is_sequence)
+ {
+ publish = true;
+ }
+
+ /* if a sequence, just cross-check the list of publications */
+ if (!publish && is_sequence)
+ {
+ if (list_member_oid(pubids, pub->oid))
+ publish = true;
+ }
if (!publish)
{
@@ -1119,10 +1190,12 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
entry->pubactions.pubupdate |= pub->pubactions.pubupdate;
entry->pubactions.pubdelete |= pub->pubactions.pubdelete;
entry->pubactions.pubtruncate |= pub->pubactions.pubtruncate;
+ entry->pubactions.pubsequence |= pub->pubactions.pubsequence;
}
if (entry->pubactions.pubinsert && entry->pubactions.pubupdate &&
- entry->pubactions.pubdelete && entry->pubactions.pubtruncate)
+ entry->pubactions.pubdelete && entry->pubactions.pubtruncate &&
+ entry->pubactions.pubsequence)
break;
}
@@ -1267,5 +1340,6 @@ rel_sync_cache_publication_cb(Datum arg, int cacheid, uint32 hashvalue)
entry->pubactions.pubupdate = false;
entry->pubactions.pubdelete = false;
entry->pubactions.pubtruncate = false;
+ entry->pubactions.pubsequence = false;
}
}
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index fd05615e76..890829bc0c 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -5487,6 +5487,7 @@ GetRelationPublicationActions(Relation relation)
pubactions->pubupdate |= pubform->pubupdate;
pubactions->pubdelete |= pubform->pubdelete;
pubactions->pubtruncate |= pubform->pubtruncate;
+ pubactions->pubsequence |= pubform->pubsequence;
ReleaseSysCache(tup);
@@ -5495,7 +5496,8 @@ GetRelationPublicationActions(Relation relation)
* other publications.
*/
if (pubactions->pubinsert && pubactions->pubupdate &&
- pubactions->pubdelete && pubactions->pubtruncate)
+ pubactions->pubdelete && pubactions->pubtruncate &&
+ pubactions->pubsequence)
break;
}
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index bd8e9ea2f8..b423dd4a42 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1640,7 +1640,7 @@ psql_completion(const char *text, int start, int end)
/* ALTER PUBLICATION <name> */
else if (Matches("ALTER", "PUBLICATION", MatchAny))
- COMPLETE_WITH("ADD TABLE", "DROP TABLE", "OWNER TO", "RENAME TO", "SET");
+ COMPLETE_WITH("ADD TABLE", "DROP TABLE", "ADD SEQUENCE", "DROP SEQUENCE", "OWNER TO", "RENAME TO", "SET");
/* ALTER PUBLICATION <name> SET */
else if (Matches("ALTER", "PUBLICATION", MatchAny, "SET"))
COMPLETE_WITH("(", "TABLE");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index acbcae4607..e3043e0f63 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -11442,6 +11442,11 @@
provolatile => 's', prorettype => 'oid', proargtypes => 'text',
proallargtypes => '{text,oid}', proargmodes => '{i,o}',
proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_tables' },
+{ oid => '8000', descr => 'get OIDs of sequences in a publication',
+ proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+ provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+ proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+ proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
{ oid => '6121',
descr => 'returns whether a relation can be part of a publication',
proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 1b31fee9e3..d1c9a18b05 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
*/
bool puballtables;
+ /*
+ * indicates that this is special publication which should encompass all
+ * sequences in the database (except for the unlogged and temp ones)
+ */
+ bool puballsequences;
+
/* true if inserts are published */
bool pubinsert;
@@ -52,6 +58,9 @@ CATALOG(pg_publication,6104,PublicationRelationId)
/* true if truncates are published */
bool pubtruncate;
+ /* true if sequences are published */
+ bool pubsequence;
+
/* true if partition changes are published using root schema */
bool pubviaroot;
} FormData_pg_publication;
@@ -74,6 +83,7 @@ typedef struct PublicationActions
bool pubupdate;
bool pubdelete;
bool pubtruncate;
+ bool pubsequence;
} PublicationActions;
typedef struct Publication
@@ -81,6 +91,7 @@ typedef struct Publication
Oid oid;
char *name;
bool alltables;
+ bool allsequences;
bool pubviaroot;
PublicationActions pubactions;
} Publication;
@@ -109,6 +120,9 @@ extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
extern List *GetAllTablesPublications(void);
extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
+extern List *GetPublicationSequenceRelations(Oid pubid);
+
extern bool is_publishable_relation(Relation rel);
extern ObjectAddress publication_add_relation(Oid pubid, Relation targetrel,
bool if_not_exists);
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 40544dd4c7..c28e8695cb 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -48,6 +48,7 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
typedef struct xl_seq_rec
{
RelFileNode node;
+ bool created; /* is this a CREATE SEQUENCE */
/* SEQUENCE TUPLE DATA FOLLOWS AT THE END */
} xl_seq_rec;
@@ -59,6 +60,7 @@ extern ObjectAddress DefineSequence(ParseState *pstate, CreateSeqStmt *stmt);
extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
extern void DeleteSequenceTuple(Oid relid);
extern void ResetSequence(Oid seq_relid);
+extern void ResetSequence2(Oid seq_relid, int64 last_value, int64 log_cnt, bool is_called);
extern void ResetSequenceCaches(void);
extern void seq_redo(XLogReaderState *rptr);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index def9651b34..aa602deae5 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -3643,8 +3643,10 @@ typedef struct AlterPublicationStmt
/* parameters used for ALTER PUBLICATION ... ADD/DROP TABLE */
List *tables; /* List of tables to add/drop */
+ List *sequences; /* List of sequences to add/drop */
bool for_all_tables; /* Special publication for all tables in db */
- DefElemAction tableAction; /* What action to perform with the tables */
+ bool for_all_sequences; /* Special publication for all sequences in db */
+ DefElemAction action; /* What action to perform with the tables/sequences */
} AlterPublicationStmt;
typedef struct CreateSubscriptionStmt
diff --git a/src/include/replication/logicalproto.h b/src/include/replication/logicalproto.h
index 55b90c03ea..f3a7d18fca 100644
--- a/src/include/replication/logicalproto.h
+++ b/src/include/replication/logicalproto.h
@@ -55,6 +55,7 @@ typedef enum LogicalRepMsgType
LOGICAL_REP_MSG_RELATION = 'R',
LOGICAL_REP_MSG_TYPE = 'Y',
LOGICAL_REP_MSG_MESSAGE = 'M',
+ LOGICAL_REP_MSG_SEQUENCE = 'X', /* FIXME change */
LOGICAL_REP_MSG_STREAM_START = 'S',
LOGICAL_REP_MSG_STREAM_END = 'E',
LOGICAL_REP_MSG_STREAM_COMMIT = 'c',
@@ -107,6 +108,19 @@ typedef struct LogicalRepTyp
char *typname; /* name of the remote type */
} LogicalRepTyp;
+/* Sequence info */
+typedef struct LogicalRepSequence
+{
+ Oid remoteid; /* unique id of the remote sequence */
+ char *nspname; /* schema name of remote sequence */
+ char *seqname; /* name of the remote sequence */
+ bool transactional;
+ bool created;
+ int64 last_value;
+ int64 log_cnt; /* XXX probably not needed? */
+ bool is_called; /* XXX probably not needed? */
+} LogicalRepSequence;
+
/* Transaction info */
typedef struct LogicalRepBeginData
{
@@ -154,6 +168,12 @@ extern List *logicalrep_read_truncate(StringInfo in,
bool *cascade, bool *restart_seqs);
extern void logicalrep_write_message(StringInfo out, TransactionId xid, XLogRecPtr lsn,
bool transactional, const char *prefix, Size sz, const char *message);
+extern void logicalrep_write_sequence(StringInfo out, Relation rel,
+ TransactionId xid, XLogRecPtr lsn,
+ bool transactional, bool created,
+ int64 last_value, int64 log_cnt,
+ bool is_called);
+extern void logicalrep_read_sequence(StringInfo in, LogicalRepSequence *seqdata);
extern void logicalrep_write_rel(StringInfo out, TransactionId xid,
Relation rel);
extern LogicalRepRelation *logicalrep_read_rel(StringInfo in);
diff --git a/src/include/replication/output_plugin.h b/src/include/replication/output_plugin.h
index 810495ed0e..e3e1b03f32 100644
--- a/src/include/replication/output_plugin.h
+++ b/src/include/replication/output_plugin.h
@@ -88,6 +88,19 @@ typedef void (*LogicalDecodeMessageCB) (struct LogicalDecodingContext *ctx,
Size message_size,
const char *message);
+/*
+ * Called for logical decoding of sequence updates.
+ */
+typedef void (*LogicalDecodeSequenceCB) (struct LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional,
+ bool created,
+ int64 last_value,
+ int64 log_cnt,
+ bool is_called);
+
/*
* Filter changes by origin.
*/
@@ -219,6 +232,7 @@ typedef struct OutputPluginCallbacks
LogicalDecodeTruncateCB truncate_cb;
LogicalDecodeCommitCB commit_cb;
LogicalDecodeMessageCB message_cb;
+ LogicalDecodeSequenceCB sequence_cb;
LogicalDecodeFilterByOriginCB filter_by_origin_cb;
LogicalDecodeShutdownCB shutdown_cb;
diff --git a/src/include/replication/pgoutput.h b/src/include/replication/pgoutput.h
index 51e7c0348d..a5aec2928a 100644
--- a/src/include/replication/pgoutput.h
+++ b/src/include/replication/pgoutput.h
@@ -27,6 +27,7 @@ typedef struct PGOutputData
bool binary;
bool streaming;
bool messages;
+ bool sequences;
} PGOutputData;
#endif /* PGOUTPUT_H */
diff --git a/src/include/replication/reorderbuffer.h b/src/include/replication/reorderbuffer.h
index 0c6e9d1cb9..5190cdf196 100644
--- a/src/include/replication/reorderbuffer.h
+++ b/src/include/replication/reorderbuffer.h
@@ -63,7 +63,8 @@ enum ReorderBufferChangeType
REORDER_BUFFER_CHANGE_INTERNAL_TUPLECID,
REORDER_BUFFER_CHANGE_INTERNAL_SPEC_INSERT,
REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM,
- REORDER_BUFFER_CHANGE_TRUNCATE
+ REORDER_BUFFER_CHANGE_TRUNCATE,
+ REORDER_BUFFER_CHANGE_SEQUENCE
};
/* forward declaration */
@@ -157,6 +158,14 @@ typedef struct ReorderBufferChange
uint32 ninvalidations; /* Number of messages */
SharedInvalidationMessage *invalidations; /* invalidation message */
} inval;
+
+ /* Context data for Sequence changes */
+ struct
+ {
+ RelFileNode relnode;
+ bool created;
+ ReorderBufferTupleBuf *tuple;
+ } sequence;
} data;
/*
@@ -425,6 +434,15 @@ typedef void (*ReorderBufferMessageCB) (ReorderBuffer *rb,
const char *prefix, Size sz,
const char *message);
+/* sequence callback signature */
+typedef void (*ReorderBufferSequenceCB) (ReorderBuffer *rb,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional, bool created,
+ int64 last_value, int64 log_cnt,
+ bool is_called);
+
/* begin prepare callback signature */
typedef void (*ReorderBufferBeginPrepareCB) (ReorderBuffer *rb,
ReorderBufferTXN *txn);
@@ -506,6 +524,12 @@ struct ReorderBuffer
*/
HTAB *by_txn;
+ /*
+ * relfilenode => XID lookup table for sequences created in a transaction
+ * (also includes altered sequences, which get a new relfilenode)
+ */
+ HTAB *sequences;
+
/*
* Transactions that could be a toplevel xact, ordered by LSN of the first
* record bearing that xid.
@@ -536,6 +560,7 @@ struct ReorderBuffer
ReorderBufferApplyTruncateCB apply_truncate;
ReorderBufferCommitCB commit;
ReorderBufferMessageCB message;
+ ReorderBufferSequenceCB sequence;
/*
* Callbacks to be called when streaming a transaction at prepare time.
@@ -630,6 +655,10 @@ void ReorderBufferQueueChange(ReorderBuffer *, TransactionId,
void ReorderBufferQueueMessage(ReorderBuffer *, TransactionId, Snapshot snapshot, XLogRecPtr lsn,
bool transactional, const char *prefix,
Size message_size, const char *message);
+void ReorderBufferQueueSequence(ReorderBuffer *rb, TransactionId xid,
+ Snapshot snapshot, XLogRecPtr lsn, RepOriginId origin_id,
+ RelFileNode rnode, bool transactional, bool created,
+ ReorderBufferTupleBuf *tuplebuf);
void ReorderBufferCommit(ReorderBuffer *, TransactionId,
XLogRecPtr commit_lsn, XLogRecPtr end_lsn,
TimestampTz commit_time, RepOriginId origin_id, XLogRecPtr origin_lsn);
@@ -677,4 +706,7 @@ void ReorderBufferSetRestartPoint(ReorderBuffer *, XLogRecPtr ptr);
void StartupReorderBuffer(void);
+bool ReorderBufferSequenceIsTransactional(ReorderBuffer *rb,
+ RelFileNode rnode, bool created);
+
#endif
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 63d6ab7a4e..2cbad49f2e 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -158,8 +158,8 @@ DROP TABLE testpub_parted1;
DROP PUBLICATION testpub_forparted, testpub_forparted1;
-- fail - view
CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_view;
-ERROR: "testpub_view" is not a table
-DETAIL: Only tables can be added to publications.
+ERROR: "testpub_view" is not a table or sequence
+DETAIL: Only tables and sequences can be added to publications.
SET client_min_messages = 'ERROR';
CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1, pub_test.testpub_nopk;
RESET client_min_messages;
@@ -180,8 +180,8 @@ Tables:
-- fail - view
ALTER PUBLICATION testpub_default ADD TABLE testpub_view;
-ERROR: "testpub_view" is not a table
-DETAIL: Only tables can be added to publications.
+ERROR: "testpub_view" is not a table or sequence
+DETAIL: Only tables and sequences can be added to publications.
ALTER PUBLICATION testpub_default ADD TABLE testpub_tbl1;
ALTER PUBLICATION testpub_default SET TABLE testpub_tbl1;
ALTER PUBLICATION testpub_default ADD TABLE pub_test.testpub_nopk;
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index e5ab11275d..393be871be 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1451,6 +1451,14 @@ pg_prepared_xacts| SELECT p.transaction,
FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+ n.nspname AS schemaname,
+ c.relname AS sequencename
+ FROM pg_publication p,
+ LATERAL pg_get_publication_sequences((p.pubname)::text) gpt(relid),
+ (pg_class c
+ JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+ WHERE (c.oid = gpt.relid);
pg_publication_tables| SELECT p.pubname,
n.nspname AS schemaname,
c.relname AS tablename
--
2.31.1
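To make it easier to see how the pieces above fit together, here is a minimal usage sketch based on the pg_publication_sequences view and the ALTER PUBLICATION ... ADD/DROP SEQUENCE forms added by the patch (all object names are made up for illustration):

CREATE SEQUENCE s;
CREATE PUBLICATION seq_pub;                 -- start with an empty publication
ALTER PUBLICATION seq_pub ADD SEQUENCE s;   -- new syntax added by the patch

-- the new view expands the publication into its member sequences
SELECT * FROM pg_publication_sequences WHERE pubname = 'seq_pub';

ALTER PUBLICATION seq_pub DROP SEQUENCE s;  -- also new in the patch
DROP PUBLICATION seq_pub;
DROP SEQUENCE s;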
A rebased patch, addressing a minor bitrot due to 4daa140a2f5.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 6/23/21 4:14 PM, Tomas Vondra wrote:
A rebased patch, addressing a minor bitrot due to 4daa140a2f5.
Meh, forgot to attach the patch as usual, of course ...
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Attachments:
0001-Logical-decoding-replication-of-sequences-20210623.patch (text/x-patch)
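Before the patch itself, a minimal sketch of how the new test_decoding output can be exercised; the 'skip-sequences' option is added by the patch and defaults to on, so it has to be disabled explicitly to see the sequence messages (the slot name is arbitrary):

SELECT 'init' FROM pg_create_logical_replication_slot('seq_slot', 'test_decoding');
CREATE SEQUENCE s;
SELECT nextval('s');
-- sequence increments show up as "sequence: ..." rows in the decoded stream
SELECT data FROM pg_logical_slot_get_changes('seq_slot', NULL, NULL, 'skip-sequences', '0');
SELECT pg_drop_replication_slot('seq_slot');
DROP SEQUENCE s;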
From ac7ea9f5dbffff288f2731cd318dc9d35b0c4cd2 Mon Sep 17 00:00:00 2001
From: Tomas Vondra <tomas@2ndquadrant.com>
Date: Wed, 23 Jun 2021 16:18:30 +0200
Subject: [PATCH] Logical decoding / replication of sequences
---
contrib/test_decoding/Makefile | 3 +-
contrib/test_decoding/expected/sequence.out | 327 +++++++++++++++
contrib/test_decoding/sql/sequence.sql | 119 ++++++
contrib/test_decoding/test_decoding.c | 40 ++
doc/src/sgml/catalogs.sgml | 71 ++++
doc/src/sgml/ref/alter_publication.sgml | 28 +-
doc/src/sgml/ref/alter_subscription.sgml | 6 +-
src/backend/catalog/pg_publication.c | 160 +++++++-
src/backend/catalog/system_views.sql | 10 +
src/backend/commands/publicationcmds.c | 232 ++++++++++-
src/backend/commands/sequence.c | 132 +++++-
src/backend/commands/subscriptioncmds.c | 272 +++++++++++++
src/backend/executor/execReplication.c | 4 +-
src/backend/nodes/copyfuncs.c | 4 +-
src/backend/nodes/equalfuncs.c | 4 +-
src/backend/parser/gram.y | 44 +-
src/backend/replication/logical/decode.c | 131 +++++-
src/backend/replication/logical/logical.c | 42 ++
src/backend/replication/logical/proto.c | 52 +++
.../replication/logical/reorderbuffer.c | 384 ++++++++++++++++++
src/backend/replication/logical/tablesync.c | 118 +++++-
src/backend/replication/logical/worker.c | 56 +++
src/backend/replication/pgoutput/pgoutput.c | 80 +++-
src/backend/utils/cache/relcache.c | 4 +-
src/bin/psql/tab-complete.c | 2 +-
src/include/catalog/pg_proc.dat | 5 +
src/include/catalog/pg_publication.h | 14 +
src/include/commands/sequence.h | 2 +
src/include/nodes/parsenodes.h | 4 +-
src/include/replication/logicalproto.h | 20 +
src/include/replication/output_plugin.h | 14 +
src/include/replication/pgoutput.h | 1 +
src/include/replication/reorderbuffer.h | 34 +-
src/test/regress/expected/publication.out | 8 +-
src/test/regress/expected/rules.out | 8 +
35 files changed, 2389 insertions(+), 46 deletions(-)
create mode 100644 contrib/test_decoding/expected/sequence.out
create mode 100644 contrib/test_decoding/sql/sequence.sql
diff --git a/contrib/test_decoding/Makefile b/contrib/test_decoding/Makefile
index 9a31e0b879..56ddc3abae 100644
--- a/contrib/test_decoding/Makefile
+++ b/contrib/test_decoding/Makefile
@@ -5,7 +5,8 @@ PGFILEDESC = "test_decoding - example of a logical decoding output plugin"
REGRESS = ddl xact rewrite toast permissions decoding_in_xact \
decoding_into_rel binary prepared replorigin time messages \
- spill slot truncate stream stats twophase twophase_stream
+ spill slot truncate stream stats twophase twophase_stream \
+ sequence
ISOLATION = mxact delayed_startup ondisk_startup concurrent_ddl_dml \
oldest_xmin snapshot_transfer subxact_without_top concurrent_stream \
twophase_snapshot
diff --git a/contrib/test_decoding/expected/sequence.out b/contrib/test_decoding/expected/sequence.out
new file mode 100644
index 0000000000..07f3e3e92c
--- /dev/null
+++ b/contrib/test_decoding/expected/sequence.out
@@ -0,0 +1,327 @@
+-- predictability
+SET synchronous_commit = on;
+SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'test_decoding');
+ ?column?
+----------
+ init
+(1 row)
+
+CREATE SEQUENCE test_sequence;
+-- test the sequence changes by several nextval() calls
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 1
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 2
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 4
+(1 row)
+
+-- test the sequence changes by several ALTER commands
+ALTER SEQUENCE test_sequence INCREMENT BY 10;
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 14
+(1 row)
+
+ALTER SEQUENCE test_sequence START WITH 3000;
+ALTER SEQUENCE test_sequence MAXVALUE 10000;
+ALTER SEQUENCE test_sequence RESTART WITH 4000;
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 4000
+(1 row)
+
+-- test the sequence changes by several setval() calls
+SELECT setval('test_sequence', 3500);
+ setval
+--------
+ 3500
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3510
+(1 row)
+
+SELECT setval('test_sequence', 3500, true);
+ setval
+--------
+ 3500
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3510
+(1 row)
+
+SELECT setval('test_sequence', 3500, false);
+ setval
+--------
+ 3500
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3500
+(1 row)
+
+-- show results and drop sequence
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+ data
+------------------------------------------------------------------------------------------------------
+ BEGIN
+ sequence: public.test_sequence transactional: 1 created: 1 last_value: 1, log_cnt: 0 is_called: 0
+ COMMIT
+ sequence: public.test_sequence transactional: 0 created: 0 last_value: 33, log_cnt: 0 is_called: 1
+ BEGIN
+ sequence: public.test_sequence transactional: 1 created: 1 last_value: 4, log_cnt: 0 is_called: 1
+ COMMIT
+ sequence: public.test_sequence transactional: 0 created: 0 last_value: 334, log_cnt: 0 is_called: 1
+ BEGIN
+ sequence: public.test_sequence transactional: 1 created: 1 last_value: 14, log_cnt: 32 is_called: 1
+ COMMIT
+ BEGIN
+ sequence: public.test_sequence transactional: 1 created: 1 last_value: 14, log_cnt: 0 is_called: 1
+ COMMIT
+ BEGIN
+ sequence: public.test_sequence transactional: 1 created: 1 last_value: 4000, log_cnt: 0 is_called: 0
+ COMMIT
+ sequence: public.test_sequence transactional: 0 created: 0 last_value: 4320, log_cnt: 0 is_called: 1
+ sequence: public.test_sequence transactional: 0 created: 0 last_value: 3500, log_cnt: 0 is_called: 1
+ sequence: public.test_sequence transactional: 0 created: 0 last_value: 3830, log_cnt: 0 is_called: 1
+ sequence: public.test_sequence transactional: 0 created: 0 last_value: 3500, log_cnt: 0 is_called: 1
+ sequence: public.test_sequence transactional: 0 created: 0 last_value: 3830, log_cnt: 0 is_called: 1
+ sequence: public.test_sequence transactional: 0 created: 0 last_value: 3500, log_cnt: 0 is_called: 0
+ sequence: public.test_sequence transactional: 0 created: 0 last_value: 3820, log_cnt: 0 is_called: 1
+(24 rows)
+
+DROP SEQUENCE test_sequence;
+-- rollback on sequence creation and update
+BEGIN;
+CREATE SEQUENCE test_sequence;
+CREATE TABLE test_table (a INT);
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 1
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 2
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3
+(1 row)
+
+SELECT setval('test_sequence', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3001
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3002
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3003
+(1 row)
+
+ALTER SEQUENCE test_sequence RESTART WITH 6000;
+INSERT INTO test_table VALUES( (SELECT nextval('test_sequence')) );
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 6001
+(1 row)
+
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '0');
+ data
+------
+(0 rows)
+
+-- rollback on table creation with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '0');
+ data
+------
+(0 rows)
+
+-- rollback on table with serial column
+CREATE TABLE test_table (a SERIAL, b INT);
+BEGIN;
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+-- check table and sequence values after rollback
+SELECT * from test_table_a_seq;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+ 3003 | 30 | t
+(1 row)
+
+SELECT nextval('test_table_a_seq');
+ nextval
+---------
+ 3004
+(1 row)
+
+DROP TABLE test_table;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+ data
+---------------------------------------------------------------------------------------------------------
+ BEGIN
+ sequence: public.test_table_a_seq transactional: 1 created: 1 last_value: 1, log_cnt: 0 is_called: 0
+ COMMIT
+ sequence: public.test_table_a_seq transactional: 0 created: 0 last_value: 33, log_cnt: 0 is_called: 1
+ sequence: public.test_table_a_seq transactional: 0 created: 0 last_value: 3000, log_cnt: 0 is_called: 1
+ sequence: public.test_table_a_seq transactional: 0 created: 0 last_value: 3033, log_cnt: 0 is_called: 1
+ BEGIN
+ COMMIT
+(8 rows)
+
+-- savepoint test on table with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+SAVEPOINT a;
+INSERT INTO test_table (b) VALUES (300);
+ROLLBACK TO SAVEPOINT a;
+DROP TABLE test_table;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+ data
+-------------------------------------------------------------------------------------------------------
+ BEGIN
+ sequence: public.test_table_a_seq transactional: 1 created: 1 last_value: 1, log_cnt: 0 is_called: 0
+ sequence: public.test_table_a_seq transactional: 1 created: 0 last_value: 33, log_cnt: 0 is_called: 1
+ table public.test_table: INSERT: a[integer]:1 b[integer]:100
+ table public.test_table: INSERT: a[integer]:2 b[integer]:200
+ COMMIT
+(6 rows)
+
+-- savepoint test on sequence
+BEGIN;
+CREATE SEQUENCE test_sequence;
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 1
+(1 row)
+
+SELECT setval('test_sequence', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3001
+(1 row)
+
+SAVEPOINT a;
+ALTER SEQUENCE test_sequence START WITH 7000;
+SELECT setval('test_sequence', 5000);
+ setval
+--------
+ 5000
+(1 row)
+
+ROLLBACK TO SAVEPOINT a;
+SELECT * FROM test_sequence;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+ 3001 | 32 | t
+(1 row)
+
+DROP SEQUENCE test_sequence;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+ data
+------------------------------------------------------------------------------------------------------
+ BEGIN
+ sequence: public.test_sequence transactional: 1 created: 1 last_value: 1, log_cnt: 0 is_called: 0
+ sequence: public.test_sequence transactional: 1 created: 0 last_value: 33, log_cnt: 0 is_called: 1
+ sequence: public.test_sequence transactional: 1 created: 0 last_value: 3000, log_cnt: 0 is_called: 1
+ sequence: public.test_sequence transactional: 1 created: 0 last_value: 3033, log_cnt: 0 is_called: 1
+ COMMIT
+(6 rows)
+
+SELECT pg_drop_replication_slot('regression_slot');
+ pg_drop_replication_slot
+--------------------------
+
+(1 row)
+
diff --git a/contrib/test_decoding/sql/sequence.sql b/contrib/test_decoding/sql/sequence.sql
new file mode 100644
index 0000000000..d8a34738f3
--- /dev/null
+++ b/contrib/test_decoding/sql/sequence.sql
@@ -0,0 +1,119 @@
+-- predictability
+SET synchronous_commit = on;
+SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'test_decoding');
+
+CREATE SEQUENCE test_sequence;
+
+-- test the sequence changes by several nextval() calls
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+
+-- test the sequence changes by several ALTER commands
+ALTER SEQUENCE test_sequence INCREMENT BY 10;
+SELECT nextval('test_sequence');
+
+ALTER SEQUENCE test_sequence START WITH 3000;
+ALTER SEQUENCE test_sequence MAXVALUE 10000;
+ALTER SEQUENCE test_sequence RESTART WITH 4000;
+SELECT nextval('test_sequence');
+
+-- test the sequence changes by several setval() calls
+SELECT setval('test_sequence', 3500);
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3500, true);
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3500, false);
+SELECT nextval('test_sequence');
+
+-- show results and drop sequence
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+DROP SEQUENCE test_sequence;
+
+-- rollback on sequence creation and update
+BEGIN;
+CREATE SEQUENCE test_sequence;
+CREATE TABLE test_table (a INT);
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3000);
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+ALTER SEQUENCE test_sequence RESTART WITH 6000;
+INSERT INTO test_table VALUES( (SELECT nextval('test_sequence')) );
+SELECT nextval('test_sequence');
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '0');
+
+-- rollback on table creation with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '0');
+
+-- rollback on table with serial column
+CREATE TABLE test_table (a SERIAL, b INT);
+
+BEGIN;
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+
+-- check table and sequence values after rollback
+SELECT * from test_table_a_seq;
+SELECT nextval('test_table_a_seq');
+
+DROP TABLE test_table;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+
+-- savepoint test on table with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+SAVEPOINT a;
+INSERT INTO test_table (b) VALUES (300);
+ROLLBACK TO SAVEPOINT a;
+DROP TABLE test_table;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+
+-- savepoint test on sequence
+BEGIN;
+CREATE SEQUENCE test_sequence;
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3000);
+SELECT nextval('test_sequence');
+SAVEPOINT a;
+ALTER SEQUENCE test_sequence START WITH 7000;
+SELECT setval('test_sequence', 5000);
+ROLLBACK TO SAVEPOINT a;
+SELECT * FROM test_sequence;
+DROP SEQUENCE test_sequence;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+
+SELECT pg_drop_replication_slot('regression_slot');
diff --git a/contrib/test_decoding/test_decoding.c b/contrib/test_decoding/test_decoding.c
index de1b692658..162e3c18f0 100644
--- a/contrib/test_decoding/test_decoding.c
+++ b/contrib/test_decoding/test_decoding.c
@@ -35,6 +35,7 @@ typedef struct
bool include_timestamp;
bool skip_empty_xacts;
bool only_local;
+ bool skip_sequences;
} TestDecodingData;
/*
@@ -76,6 +77,10 @@ static void pg_decode_message(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn, XLogRecPtr message_lsn,
bool transactional, const char *prefix,
Size sz, const char *message);
+static void pg_decode_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation rel, bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called);
static bool pg_decode_filter_prepare(LogicalDecodingContext *ctx,
TransactionId xid,
const char *gid);
@@ -141,6 +146,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->filter_by_origin_cb = pg_decode_filter;
cb->shutdown_cb = pg_decode_shutdown;
cb->message_cb = pg_decode_message;
+ cb->sequence_cb = pg_decode_sequence;
cb->filter_prepare_cb = pg_decode_filter_prepare;
cb->begin_prepare_cb = pg_decode_begin_prepare_txn;
cb->prepare_cb = pg_decode_prepare_txn;
@@ -175,6 +181,9 @@ pg_decode_startup(LogicalDecodingContext *ctx, OutputPluginOptions *opt,
data->skip_empty_xacts = false;
data->only_local = false;
+ /* skip sequences by default for backwards compatibility */
+ data->skip_sequences = true;
+
ctx->output_plugin_private = data;
opt->output_type = OUTPUT_PLUGIN_TEXTUAL_OUTPUT;
@@ -265,6 +274,17 @@ pg_decode_startup(LogicalDecodingContext *ctx, OutputPluginOptions *opt,
errmsg("could not parse value \"%s\" for parameter \"%s\"",
strVal(elem->arg), elem->defname)));
}
+ else if (strcmp(elem->defname, "skip-sequences") == 0)
+ {
+
+ if (elem->arg == NULL)
+ continue; /* true by default */
+ else if (!parse_bool(strVal(elem->arg), &data->skip_sequences))
+ ereport(ERROR,
+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ errmsg("could not parse value \"%s\" for parameter \"%s\"",
+ strVal(elem->arg), elem->defname)));
+ }
else
{
ereport(ERROR,
@@ -744,6 +764,26 @@ pg_decode_message(LogicalDecodingContext *ctx,
OutputPluginWrite(ctx, true);
}
+static void
+pg_decode_sequence(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ TestDecodingData *data = ctx->output_plugin_private;
+ char *nspname = get_namespace_name(RelationGetNamespace(rel));
+
+ /* do nothing if asked to skip sequences */
+ if (data->skip_sequences)
+ return;
+
+ OutputPluginPrepareWrite(ctx, true);
+ appendStringInfo(ctx->out, "sequence: %s.%s transactional: %d created: %d last_value: " INT64_FORMAT ", log_cnt: " INT64_FORMAT " is_called: %d",
+ nspname, RelationGetRelationName(rel),
+ transactional, created, last_value, log_cnt, is_called);
+ OutputPluginWrite(ctx, true);
+}
+
static void
pg_decode_stream_start(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn)
diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index f517a7d4af..faaa95f56c 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -9428,6 +9428,11 @@ SCRAM-SHA-256$<replaceable><iteration count></replaceable>:<replaceable>&l
<entry>prepared transactions</entry>
</row>
+ <row>
+ <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+ <entry>publications and their associated sequences</entry>
+ </row>
+
<row>
<entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
<entry>publications and their associated tables</entry>
@@ -11262,6 +11267,72 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
</sect1>
+ <sect1 id="view-pg-publication-sequences">
+ <title><structname>pg_publication_sequences</structname></title>
+
+ <indexterm zone="view-pg-publication-sequences">
+ <primary>pg_publication_sequences</primary>
+ </indexterm>
+
+ <para>
+ The view <structname>pg_publication_sequences</structname> provides
+ information about the mapping between publications and the sequences they
+ contain. Unlike the underlying catalog
+ <link linkend="catalog-pg-publication-rel"><structname>pg_publication_rel</structname></link>,
+ this view expands
+ publications defined as <literal>FOR ALL SEQUENCES</literal>, so for such
+ publications there will be a row for each eligible sequence.
+ </para>
+
+ <table>
+ <title><structname>pg_publication_sequences</structname> Columns</title>
+ <tgroup cols="1">
+ <thead>
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ Column Type
+ </para>
+ <para>
+ Description
+ </para></entry>
+ </row>
+ </thead>
+
+ <tbody>
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>pubname</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+ </para>
+ <para>
+ Name of publication
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>schemaname</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+ </para>
+ <para>
+ Name of schema containing sequence
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>sequencename</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+ </para>
+ <para>
+ Name of sequence
+ </para></entry>
+ </row>
+ </tbody>
+ </tgroup>
+ </table>
+ </sect1>
+
<sect1 id="view-pg-publication-tables">
<title><structname>pg_publication_tables</structname></title>
diff --git a/doc/src/sgml/ref/alter_publication.sgml b/doc/src/sgml/ref/alter_publication.sgml
index faa114b2c6..c68a1573de 100644
--- a/doc/src/sgml/ref/alter_publication.sgml
+++ b/doc/src/sgml/ref/alter_publication.sgml
@@ -24,6 +24,9 @@ PostgreSQL documentation
ALTER PUBLICATION <replaceable class="parameter">name</replaceable> ADD TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [, ...]
ALTER PUBLICATION <replaceable class="parameter">name</replaceable> SET TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [, ...]
ALTER PUBLICATION <replaceable class="parameter">name</replaceable> DROP TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [, ...]
+ALTER PUBLICATION <replaceable class="parameter">name</replaceable> ADD SEQUENCE <replaceable class="parameter">sequence_name</replaceable> [ * ] [, ...]
+ALTER PUBLICATION <replaceable class="parameter">name</replaceable> SET SEQUENCE <replaceable class="parameter">sequence_name</replaceable> [ * ] [, ...]
+ALTER PUBLICATION <replaceable class="parameter">name</replaceable> DROP SEQUENCE <replaceable class="parameter">sequence_name</replaceable> [ * ] [, ...]
ALTER PUBLICATION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
ALTER PUBLICATION <replaceable class="parameter">name</replaceable> OWNER TO { <replaceable>new_owner</replaceable> | CURRENT_ROLE | CURRENT_USER | SESSION_USER }
ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <replaceable>new_name</replaceable>
@@ -50,7 +53,18 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
</para>
<para>
- The fourth variant of this command listed in the synopsis can change
+ The next three variants change which sequences are part of the publication.
+ The <literal>SET SEQUENCE</literal> clause will replace the list of sequences
+ in the publication with the specified one. The <literal>ADD SEQUENCE</literal>
+ and <literal>DROP SEQUENCE</literal> clauses will add and remove one or more
+ sequences from the publication. Note that adding sequences to a publication
+ that is already subscribed to will require an <literal>ALTER SUBSCRIPTION
+ ... REFRESH PUBLICATION</literal> action on the subscribing side in order
+ to become effective.
+ </para>
+
+ <para>
+ The seventh variant of this command listed in the synopsis can change
all of the publication properties specified in
<xref linkend="sql-createpublication"/>. Properties not mentioned in the
command retain their previous settings.
@@ -62,7 +76,8 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
<para>
You must own the publication to use <command>ALTER PUBLICATION</command>.
- Adding a table to a publication additionally requires owning that table.
+ Adding a table to a publication additionally requires owning that table,
+ and the same requirement applies to sequences.
To alter the owner, you must also be a direct or indirect member of the new
owning role. The new owner must have <literal>CREATE</literal> privilege on
the database. Also, the new owner of a <literal>FOR ALL TABLES</literal>
@@ -97,6 +112,15 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><replaceable class="parameter">sequence_name</replaceable></term>
+ <listitem>
+ <para>
+ Name of an existing sequence.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><literal>SET ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
<listitem>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 367ac814f4..3412ca68e9 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -138,8 +138,8 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
<term><literal>REFRESH PUBLICATION</literal></term>
<listitem>
<para>
- Fetch missing table information from publisher. This will start
- replication of tables that were added to the subscribed-to publications
+ Fetch missing table and sequence information from publisher. This will start
+ replication of tables and sequences that were added to the subscribed-to publications
since the last invocation of <command>REFRESH PUBLICATION</command> or
since <command>CREATE SUBSCRIPTION</command>.
</para>
@@ -156,7 +156,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
Specifies whether the existing data in the publications that are
being subscribed to should be copied once the replication starts.
The default is <literal>true</literal>. (Previously subscribed
- tables are not copied.)
+ tables and sequences are not copied.)
</para>
</listitem>
</varlistentry>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 86e415af89..427582f36a 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -51,26 +51,27 @@ check_publication_add_relation(Relation targetrel)
{
/* Must be a regular or partitioned table */
if (RelationGetForm(targetrel)->relkind != RELKIND_RELATION &&
- RelationGetForm(targetrel)->relkind != RELKIND_PARTITIONED_TABLE)
+ RelationGetForm(targetrel)->relkind != RELKIND_PARTITIONED_TABLE &&
+ RelationGetForm(targetrel)->relkind != RELKIND_SEQUENCE)
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
- errmsg("\"%s\" is not a table",
+ errmsg("\"%s\" is not a table or sequence",
RelationGetRelationName(targetrel)),
- errdetail("Only tables can be added to publications.")));
+ errdetail("Only tables and sequences can be added to publications.")));
- /* Can't be system table */
+ /* Can't be system table/sequence */
if (IsCatalogRelation(targetrel))
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
- errmsg("\"%s\" is a system table",
+ errmsg("\"%s\" is a system table or sequence",
RelationGetRelationName(targetrel)),
- errdetail("System tables cannot be added to publications.")));
+ errdetail("System tables / sequences cannot be added to publications.")));
/* UNLOGGED and TEMP relations cannot be part of publication. */
if (!RelationIsPermanent(targetrel))
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
- errmsg("table \"%s\" cannot be replicated",
+ errmsg("table or sequence \"%s\" cannot be replicated",
RelationGetRelationName(targetrel)),
errdetail("Temporary and unlogged relations cannot be replicated.")));
}
@@ -98,7 +99,8 @@ static bool
is_publishable_class(Oid relid, Form_pg_class reltuple)
{
return (reltuple->relkind == RELKIND_RELATION ||
- reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+ reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+ reltuple->relkind == RELKIND_SEQUENCE) &&
!IsCatalogRelationOid(relid) &&
reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
relid >= FirstNormalObjectId;
@@ -271,6 +273,10 @@ GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
pubrel = (Form_pg_publication_rel) GETSTRUCT(tup);
+ /* skip sequences here */
+ if (get_rel_relkind(pubrel->prrelid) == RELKIND_SEQUENCE)
+ continue;
+
if (get_rel_relkind(pubrel->prrelid) == RELKIND_PARTITIONED_TABLE &&
pub_partopt != PUBLICATION_PART_ROOT)
{
@@ -304,6 +310,49 @@ GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
return result;
}
+/*
+ * Gets the list of sequence oids for a publication.
+ *
+ * This should only be used for normal publications; FOR ALL SEQUENCES
+ * publications should use GetAllSequencesPublicationRelations().
+ */
+List *
+GetPublicationSequenceRelations(Oid pubid)
+{
+ List *result;
+ Relation pubrelsrel;
+ ScanKeyData scankey;
+ SysScanDesc scan;
+ HeapTuple tup;
+
+ /* Find all sequences associated with the publication. */
+ pubrelsrel = table_open(PublicationRelRelationId, AccessShareLock);
+
+ ScanKeyInit(&scankey,
+ Anum_pg_publication_rel_prpubid,
+ BTEqualStrategyNumber, F_OIDEQ,
+ ObjectIdGetDatum(pubid));
+
+ scan = systable_beginscan(pubrelsrel, PublicationRelPrrelidPrpubidIndexId,
+ true, NULL, 1, &scankey);
+
+ result = NIL;
+ while (HeapTupleIsValid(tup = systable_getnext(scan)))
+ {
+ Form_pg_publication_rel pubrel;
+
+ pubrel = (Form_pg_publication_rel) GETSTRUCT(tup);
+
+ if (get_rel_relkind(pubrel->prrelid) == RELKIND_SEQUENCE)
+ result = lappend_oid(result, pubrel->prrelid);
+ }
+
+ systable_endscan(scan);
+ table_close(pubrelsrel, AccessShareLock);
+
+ return result;
+}
+
/*
* Gets list of publication oids for publications marked as FOR ALL TABLES.
*/
@@ -404,6 +453,46 @@ GetAllTablesPublicationRelations(bool pubviaroot)
return result;
}
+/*
+ * Gets the list of all sequences published by FOR ALL SEQUENCES
+ * publication(s).
+ *
+ * Unlike tables there is no partitioning to deal with, so this is simply
+ * all publishable sequences in the database.
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+ Relation classRel;
+ ScanKeyData key[1];
+ TableScanDesc scan;
+ HeapTuple tuple;
+ List *result = NIL;
+
+ classRel = table_open(RelationRelationId, AccessShareLock);
+
+ ScanKeyInit(&key[0],
+ Anum_pg_class_relkind,
+ BTEqualStrategyNumber, F_CHAREQ,
+ CharGetDatum(RELKIND_SEQUENCE));
+
+ scan = table_beginscan_catalog(classRel, 1, key);
+
+ while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+ {
+ Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+ Oid relid = relForm->oid;
+
+ if (is_publishable_class(relid, relForm))
+ result = lappend_oid(result, relid);
+ }
+
+ table_endscan(scan);
+
+ table_close(classRel, AccessShareLock);
+ return result;
+}
+
/*
* Get publication using oid
*
@@ -426,10 +515,12 @@ GetPublication(Oid pubid)
pub->oid = pubid;
pub->name = pstrdup(NameStr(pubform->pubname));
pub->alltables = pubform->puballtables;
+ pub->allsequences = pubform->puballsequences;
pub->pubactions.pubinsert = pubform->pubinsert;
pub->pubactions.pubupdate = pubform->pubupdate;
pub->pubactions.pubdelete = pubform->pubdelete;
pub->pubactions.pubtruncate = pubform->pubtruncate;
+ pub->pubactions.pubsequence = pubform->pubsequence;
pub->pubviaroot = pubform->pubviaroot;
ReleaseSysCache(tup);
@@ -555,3 +646,56 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
SRF_RETURN_DONE(funcctx);
}
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+ FuncCallContext *funcctx;
+ char *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+ Publication *publication;
+ List *sequences;
+
+ /* stuff done only on the first call of the function */
+ if (SRF_IS_FIRSTCALL())
+ {
+ MemoryContext oldcontext;
+
+ /* create a function context for cross-call persistence */
+ funcctx = SRF_FIRSTCALL_INIT();
+
+ /* switch to memory context appropriate for multiple function calls */
+ oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+ publication = GetPublicationByName(pubname, false);
+
+ /*
+ * For FOR ALL SEQUENCES publications we return all user sequences,
+ * otherwise just the sequences explicitly added to the publication
+ * (sequences have no partitioning to worry about).
+ */
+ if (publication->allsequences)
+ sequences = GetAllSequencesPublicationRelations();
+ else
+ sequences = GetPublicationSequenceRelations(publication->oid);
+
+ funcctx->user_fctx = (void *) sequences;
+
+ MemoryContextSwitchTo(oldcontext);
+ }
+
+ /* stuff done on every call of the function */
+ funcctx = SRF_PERCALL_SETUP();
+ sequences = (List *) funcctx->user_fctx;
+
+ if (funcctx->call_cntr < list_length(sequences))
+ {
+ Oid relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+ SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+ }
+
+ SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 999d984068..16f050023a 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -372,6 +372,16 @@ CREATE VIEW pg_publication_tables AS
pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
WHERE C.oid = GPT.relid;
+CREATE VIEW pg_publication_sequences AS
+ SELECT
+ P.pubname AS pubname,
+ N.nspname AS schemaname,
+ C.relname AS sequencename
+ FROM pg_publication P,
+ LATERAL pg_get_publication_sequences(P.pubname) GPT,
+ pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+ WHERE C.oid = GPT.relid;
+
CREATE VIEW pg_locks AS
SELECT * FROM pg_lock_status() AS L;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 95c253c8e0..5e5c5feab4 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -16,6 +16,7 @@
#include "access/genam.h"
#include "access/htup_details.h"
+#include "access/relation.h"
#include "access/table.h"
#include "access/xact.h"
#include "catalog/catalog.h"
@@ -54,6 +55,12 @@ static void PublicationAddTables(Oid pubid, List *rels, bool if_not_exists,
AlterPublicationStmt *stmt);
static void PublicationDropTables(Oid pubid, List *rels, bool missing_ok);
+static List *OpenSequenceList(List *sequences);
+static void CloseSequenceList(List *rels);
+static void PublicationAddSequences(Oid pubid, List *rels, bool if_not_exists,
+ AlterPublicationStmt *stmt);
+static void PublicationDropSequences(Oid pubid, List *rels, bool missing_ok);
+
static void
parse_publication_options(List *options,
bool *publish_given,
@@ -71,6 +78,7 @@ parse_publication_options(List *options,
pubactions->pubupdate = true;
pubactions->pubdelete = true;
pubactions->pubtruncate = true;
+ pubactions->pubsequence = true;
*publish_via_partition_root = false;
/* Parse options */
@@ -97,6 +105,7 @@ parse_publication_options(List *options,
pubactions->pubupdate = false;
pubactions->pubdelete = false;
pubactions->pubtruncate = false;
+ pubactions->pubsequence = false;
*publish_given = true;
publish = defGetString(defel);
@@ -119,6 +128,8 @@ parse_publication_options(List *options,
pubactions->pubdelete = true;
else if (strcmp(publish_opt, "truncate") == 0)
pubactions->pubtruncate = true;
+ else if (strcmp(publish_opt, "sequence") == 0)
+ pubactions->pubsequence = true;
else
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
@@ -210,6 +221,8 @@ CreatePublication(CreatePublicationStmt *stmt)
BoolGetDatum(pubactions.pubdelete);
values[Anum_pg_publication_pubtruncate - 1] =
BoolGetDatum(pubactions.pubtruncate);
+ values[Anum_pg_publication_pubsequence - 1] =
+ BoolGetDatum(pubactions.pubsequence);
values[Anum_pg_publication_pubviaroot - 1] =
BoolGetDatum(publish_via_partition_root);
@@ -374,9 +387,9 @@ AlterPublicationTables(AlterPublicationStmt *stmt, Relation rel,
rels = OpenTableList(stmt->tables);
- if (stmt->tableAction == DEFELEM_ADD)
+ if (stmt->action == DEFELEM_ADD)
PublicationAddTables(pubid, rels, false, stmt);
- else if (stmt->tableAction == DEFELEM_DROP)
+ else if (stmt->action == DEFELEM_DROP)
PublicationDropTables(pubid, rels, false);
else /* DEFELEM_SET */
{
@@ -427,6 +440,82 @@ AlterPublicationTables(AlterPublicationStmt *stmt, Relation rel,
CloseTableList(rels);
}
+/*
+ * Add or remove sequence to/from publication.
+ */
+static void
+AlterPublicationSequences(AlterPublicationStmt *stmt, Relation rel,
+ HeapTuple tup)
+{
+ List *rels = NIL;
+ Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
+ Oid pubid = pubform->oid;
+
+ /* Check that user is allowed to manipulate the publication sequences. */
+ if (pubform->puballsequences)
+ ereport(ERROR,
+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("publication \"%s\" is defined as FOR ALL SEQUENCES",
+ NameStr(pubform->pubname)),
+ errdetail("Sequences cannot be added to or dropped from FOR ALL TABLES publications.")));
+
+ Assert(list_length(stmt->sequences) > 0);
+
+ rels = OpenSequenceList(stmt->sequences);
+
+ if (stmt->action == DEFELEM_ADD)
+ PublicationAddSequences(pubid, rels, false, stmt);
+ else if (stmt->action == DEFELEM_DROP)
+ PublicationDropSequences(pubid, rels, false);
+ else /* DEFELEM_SET */
+ {
+ List *oldrelids = GetPublicationSequenceRelations(pubid);
+ List *delrels = NIL;
+ ListCell *oldlc;
+
+ /* Calculate which relations to drop. */
+ foreach(oldlc, oldrelids)
+ {
+ Oid oldrelid = lfirst_oid(oldlc);
+ ListCell *newlc;
+ bool found = false;
+
+ foreach(newlc, rels)
+ {
+ Relation newrel = (Relation) lfirst(newlc);
+
+ if (RelationGetRelid(newrel) == oldrelid)
+ {
+ found = true;
+ break;
+ }
+ }
+
+ if (!found)
+ {
+ Relation oldrel = relation_open(oldrelid,
+ ShareUpdateExclusiveLock);
+
+ delrels = lappend(delrels, oldrel);
+ }
+ }
+
+ /* And drop them. */
+ PublicationDropSequences(pubid, delrels, true);
+
+ /*
+ * Don't bother calculating the difference for adding, we'll catch and
+ * skip existing ones when doing catalog update.
+ */
+ PublicationAddSequences(pubid, rels, true, stmt);
+
+ CloseSequenceList(delrels);
+ }
+
+ CloseSequenceList(rels);
+}
+
/*
* Alter the existing publication.
*
@@ -460,8 +549,12 @@ AlterPublication(AlterPublicationStmt *stmt)
if (stmt->options)
AlterPublicationOptions(stmt, rel, tup);
- else
+ else if (stmt->tables)
AlterPublicationTables(stmt, rel, tup);
+ else if (stmt->sequences)
+ AlterPublicationSequences(stmt, rel, tup);
+ else
+ Assert(false);
/* Cleanup. */
heap_freetuple(tup);
@@ -666,6 +759,139 @@ PublicationDropTables(Oid pubid, List *rels, bool missing_ok)
}
}
+/*
+ * Open relations specified by a RangeVar list.
+ * The returned tables are locked in ShareUpdateExclusiveLock mode in order to
+ * add them to a publication.
+ */
+static List *
+OpenSequenceList(List *sequences)
+{
+ List *relids = NIL;
+ List *rels = NIL;
+ ListCell *lc;
+
+ /*
+ * Open, share-lock, and check all the explicitly-specified relations
+ */
+ foreach(lc, sequences)
+ {
+ RangeVar *rv = castNode(RangeVar, lfirst(lc));
+ Relation rel;
+ Oid myrelid;
+
+ /* Allow query cancel in case this takes a long time */
+ CHECK_FOR_INTERRUPTS();
+
+ rel = relation_openrv(rv, ShareUpdateExclusiveLock);
+ myrelid = RelationGetRelid(rel);
+
+ /*
+ * Filter out duplicates if user specifies "foo, foo".
+ *
+ * Note that this algorithm is not very efficient (O(N^2)), but since
+ * it only processes the list of sequences explicitly given by the user,
+ * that is deemed acceptable.
+ */
+ if (list_member_oid(relids, myrelid))
+ {
+ relation_close(rel, ShareUpdateExclusiveLock);
+ continue;
+ }
+
+ rels = lappend(rels, rel);
+ relids = lappend_oid(relids, myrelid);
+ }
+
+ list_free(relids);
+
+ return rels;
+}
+
+/*
+ * Close all relations in the list.
+ */
+static void
+CloseSequenceList(List *rels)
+{
+ ListCell *lc;
+
+ foreach(lc, rels)
+ {
+ Relation rel = (Relation) lfirst(lc);
+
+ relation_close(rel, NoLock);
+ }
+}
+
+/*
+ * Add listed sequences to the publication.
+ */
+static void
+PublicationAddSequences(Oid pubid, List *rels, bool if_not_exists,
+ AlterPublicationStmt *stmt)
+{
+ ListCell *lc;
+
+ Assert(!stmt || !stmt->for_all_sequences);
+
+ foreach(lc, rels)
+ {
+ Relation rel = (Relation) lfirst(lc);
+ ObjectAddress obj;
+
+ /* Must be owner of the sequence or superuser. */
+ if (!pg_class_ownercheck(RelationGetRelid(rel), GetUserId()))
+ aclcheck_error(ACLCHECK_NOT_OWNER, get_relkind_objtype(rel->rd_rel->relkind),
+ RelationGetRelationName(rel));
+
+ obj = publication_add_relation(pubid, rel, if_not_exists);
+ if (stmt)
+ {
+ EventTriggerCollectSimpleCommand(obj, InvalidObjectAddress,
+ (Node *) stmt);
+
+ InvokeObjectPostCreateHook(PublicationRelRelationId,
+ obj.objectId, 0);
+ }
+ }
+}
+
+/*
+ * Remove listed sequences from the publication.
+ */
+static void
+PublicationDropSequences(Oid pubid, List *rels, bool missing_ok)
+{
+ ObjectAddress obj;
+ ListCell *lc;
+ Oid prid;
+
+ foreach(lc, rels)
+ {
+ Relation rel = (Relation) lfirst(lc);
+ Oid relid = RelationGetRelid(rel);
+
+ prid = GetSysCacheOid2(PUBLICATIONRELMAP, Anum_pg_publication_rel_oid,
+ ObjectIdGetDatum(relid),
+ ObjectIdGetDatum(pubid));
+ if (!OidIsValid(prid))
+ {
+ if (missing_ok)
+ continue;
+
+ ereport(ERROR,
+ (errcode(ERRCODE_UNDEFINED_OBJECT),
+ errmsg("relation \"%s\" is not part of the publication",
+ RelationGetRelationName(rel))));
+ }
+
+ ObjectAddressSet(obj, PublicationRelRelationId, prid);
+ performDeletion(&obj, DROP_CASCADE, 0);
+ }
+}
+
+
/*
* Internal workhorse for changing a publication owner
*/
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 0415df9ccb..6898a45365 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -94,7 +94,7 @@ static HTAB *seqhashtab = NULL; /* hash table for SeqTable items */
*/
static SeqTableData *last_used_seq = NULL;
-static void fill_seq_with_data(Relation rel, HeapTuple tuple);
+static void fill_seq_with_data(Relation rel, HeapTuple tuple, bool create);
static Relation lock_and_open_sequence(SeqTable seq);
static void create_seq_hashtable(void);
static void init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel);
@@ -222,7 +222,7 @@ DefineSequence(ParseState *pstate, CreateSeqStmt *seq)
/* now initialize the sequence's data */
tuple = heap_form_tuple(tupDesc, value, null);
- fill_seq_with_data(rel, tuple);
+ fill_seq_with_data(rel, tuple, true);
/* process OWNED BY if given */
if (owned_by)
@@ -327,7 +327,86 @@ ResetSequence(Oid seq_relid)
/*
* Insert the modified tuple into the new storage file.
*/
- fill_seq_with_data(seq_rel, tuple);
+ fill_seq_with_data(seq_rel, tuple, true);
+
+ /* Clear local cache so that we don't think we have cached numbers */
+ /* Note that we do not change the currval() state */
+ elm->cached = elm->last;
+
+ relation_close(seq_rel, NoLock);
+}
+
+/*
+ * Set a sequence to the given state (last_value / log_cnt / is_called).
+ *
+ * The change is made transactionally, so that on failure of the current
+ * transaction, the sequence will be restored to its previous state.
+ * We do that by creating a whole new relfilenode for the sequence; so this
+ * works much like the rewriting forms of ALTER TABLE.
+ *
+ * Caller is assumed to have acquired AccessExclusiveLock on the sequence,
+ * which must not be released until end of transaction. Caller is also
+ * responsible for permissions checking.
+ */
+void
+ResetSequence2(Oid seq_relid, int64 last_value, int64 log_cnt, bool is_called)
+{
+ Relation seq_rel;
+ SeqTable elm;
+ Form_pg_sequence_data seq;
+ Buffer buf;
+ HeapTupleData seqdatatuple;
+ HeapTuple tuple;
+
+ /*
+ * Read the old sequence. This does a bit more work than really
+ * necessary, but it's simple, and we do want to double-check that it's
+ * indeed a sequence.
+ */
+ init_sequence(seq_relid, &elm, &seq_rel);
+ (void) read_seq_tuple(seq_rel, &buf, &seqdatatuple);
+
+ /*
+ * Copy the existing sequence tuple.
+ */
+ tuple = heap_copytuple(&seqdatatuple);
+
+ /* Now we're done with the old page */
+ UnlockReleaseBuffer(buf);
+
+ /*
+ * Modify the copied tuple to execute the restart (compare the RESTART
+ * action in AlterSequence)
+ */
+ seq = (Form_pg_sequence_data) GETSTRUCT(tuple);
+ seq->last_value = last_value;
+ seq->is_called = is_called;
+ seq->log_cnt = log_cnt;
+
+ /*
+ * Create a new storage file for the sequence.
+ */
+ RelationSetNewRelfilenode(seq_rel, seq_rel->rd_rel->relpersistence);
+
+ /*
+ * Ensure sequence's relfrozenxid is at 0, since it won't contain any
+ * unfrozen XIDs. Same with relminmxid, since a sequence will never
+ * contain multixacts.
+ */
+ Assert(seq_rel->rd_rel->relfrozenxid == InvalidTransactionId);
+ Assert(seq_rel->rd_rel->relminmxid == InvalidMultiXactId);
+
+ /*
+ * Insert the modified tuple into the new storage file.
+ *
+ * XXX Maybe this should also use created=true, just like the other places
+ * calling fill_seq_with_data. That's probably needed for correct cascading
+ * replication.
+ *
+ * XXX That'd mean all fill_seq_with_data callers use created=true, making
+ * the parameter unnecessary.
+ */
+ fill_seq_with_data(seq_rel, tuple, false);
/* Clear local cache so that we don't think we have cached numbers */
/* Note that we do not change the currval() state */
@@ -340,7 +419,7 @@ ResetSequence(Oid seq_relid)
* Initialize a sequence's relation with the specified tuple as content
*/
static void
-fill_seq_with_data(Relation rel, HeapTuple tuple)
+fill_seq_with_data(Relation rel, HeapTuple tuple, bool create)
{
Buffer buf;
Page page;
@@ -378,8 +457,21 @@ fill_seq_with_data(Relation rel, HeapTuple tuple)
/* check the comment above nextval_internal()'s equivalent call. */
if (RelationNeedsWAL(rel))
+ {
GetTopTransactionId();
+ /*
+ * Ensure we have a proper XID, which will be included in the XLOG
+ * record by XLogRecordAssemble. Otherwise the first nextval() in
+ * a subxact (without any preceding changes) would get XID 0,
+ * and it'd be impossible to decide which top xact it belongs to.
+ * It'd also trigger assert in DecodeSequence.
+ *
+ * XXX Not sure if this is the best solution.
+ */
+ GetCurrentTransactionId();
+ }
+
START_CRIT_SECTION();
MarkBufferDirty(buf);
@@ -399,6 +491,7 @@ fill_seq_with_data(Relation rel, HeapTuple tuple)
XLogRegisterBuffer(0, buf, REGBUF_WILL_INIT);
xlrec.node = rel->rd_node;
+ xlrec.created = create;
XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
XLogRegisterData((char *) tuple->t_data, tuple->t_len);
@@ -502,7 +595,7 @@ AlterSequence(ParseState *pstate, AlterSeqStmt *stmt)
/*
* Insert the modified tuple into the new storage file.
*/
- fill_seq_with_data(seqrel, newdatatuple);
+ fill_seq_with_data(seqrel, newdatatuple, true);
}
/* process OWNED BY if given */
@@ -766,8 +859,21 @@ nextval_internal(Oid relid, bool check_permissions)
* (Have to do that here, so we're outside the critical section)
*/
if (logit && RelationNeedsWAL(seqrel))
+ {
GetTopTransactionId();
+ /*
+ * Ensure we have a proper XID, which will be included in the XLOG
+ * record by XLogRecordAssemble. Otherwise the first nextval() in
+ * a subxact (without any preceding changes) would get XID 0,
+ * and it'd be impossible to decide which top xact it belongs to.
+ * It'd also trigger assert in DecodeSequence.
+ *
+ * XXX Not sure if this is the best solution.
+ */
+ GetCurrentTransactionId();
+ }
+
/* ready to change the on-disk (or really, in-buffer) tuple */
START_CRIT_SECTION();
@@ -803,6 +909,7 @@ nextval_internal(Oid relid, bool check_permissions)
seq->log_cnt = 0;
xlrec.node = seqrel->rd_node;
+ xlrec.created = false;
XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
XLogRegisterData((char *) seqdatatuple.t_data, seqdatatuple.t_len);
@@ -977,8 +1084,21 @@ do_setval(Oid relid, int64 next, bool iscalled)
/* check the comment above nextval_internal()'s equivalent call. */
if (RelationNeedsWAL(seqrel))
+ {
GetTopTransactionId();
+ /*
+ * Ensure we have a proper XID, which will be included in the XLOG
+ * record by XLogRecordAssemble. Otherwise the first nextval() in
+ * a subxact (without any preceding changes) would get XID 0,
+ * and it'd be impossible to decide which top xact it belongs to.
+ * It'd also trigger assert in DecodeSequence.
+ *
+ * XXX Not sure if this is the best solution.
+ */
+ GetCurrentTransactionId();
+ }
+
/* ready to change the on-disk (or really, in-buffer) tuple */
START_CRIT_SECTION();
@@ -999,6 +1119,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
XLogRegisterBuffer(0, buf, REGBUF_WILL_INIT);
xlrec.node = seqrel->rd_node;
+ xlrec.created = false;
+
XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
XLogRegisterData((char *) seqdatatuple.t_data, seqdatatuple.t_len);
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 75e195f286..bd27481cc5 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -47,6 +47,7 @@
#include "utils/syscache.h"
static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
static void check_duplicates_in_publist(List *publist, Datum *datums);
static List *merge_publications(List *oldpublist, List *newpublist, bool addpub, const char *subname);
static void ReportSlotConnectionError(List *rstates, Oid subid, char *slotname, char *err);
@@ -461,6 +462,7 @@ CreateSubscription(CreateSubscriptionStmt *stmt, bool isTopLevel)
char *err;
WalReceiverConn *wrconn;
List *tables;
+ List *sequences;
ListCell *lc;
char table_state;
@@ -499,6 +501,26 @@ CreateSubscription(CreateSubscriptionStmt *stmt, bool isTopLevel)
InvalidXLogRecPtr);
}
+ /*
+ * Get the sequence list from publisher and build local sequence
+ * status info.
+ */
+ sequences = fetch_sequence_list(wrconn, publications);
+ foreach(lc, sequences)
+ {
+ RangeVar *rv = (RangeVar *) lfirst(lc);
+ Oid relid;
+
+ relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+ /* Check for supported relkind. */
+ CheckSubscriptionRelkind(get_rel_relkind(relid),
+ rv->schemaname, rv->relname);
+
+ AddSubscriptionRelState(subid, relid, table_state,
+ InvalidXLogRecPtr);
+ }
+
/*
* If requested, create permanent slot for the subscription. We
* won't use the initial snapshot for anything, so no need to
@@ -646,6 +668,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data)
{
Oid relid = subrel_local_oids[off];
+ /* XXX ignore sequences - maybe do this in GetSubscriptionRelations? */
+ if (get_rel_relkind(relid) == RELKIND_SEQUENCE)
+ continue;
+
if (!bsearch(&relid, pubrel_local_oids,
list_length(pubrel_names), sizeof(Oid), oid_cmp))
{
@@ -737,6 +763,183 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data)
ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
}
}
+
+ /*
+ * XXX now do the same thing for sequences, maybe before the preceding
+ * block, or earlier?
+ */
+
+ /* Get the sequence list from publisher. */
+ pubrel_names = fetch_sequence_list(wrconn, sub->publications);
+
+ /* Get local relation list (tables and sequences). */
+ subrel_states = GetSubscriptionRelations(sub->oid);
+
+ /*
+ * Build qsorted array of local table oids for faster lookup. This can
+ * potentially contain all tables in the database so speed of lookup
+ * is important.
+ */
+ subrel_local_oids = palloc(list_length(subrel_states) * sizeof(Oid));
+ off = 0;
+ foreach(lc, subrel_states)
+ {
+ SubscriptionRelState *relstate = (SubscriptionRelState *) lfirst(lc);
+
+ subrel_local_oids[off++] = relstate->relid;
+ }
+ qsort(subrel_local_oids, list_length(subrel_states),
+ sizeof(Oid), oid_cmp);
+
+ /*
+ * Rels that we want to remove from subscription and drop any slots
+ * and origins corresponding to them.
+ */
+ sub_remove_rels = palloc(list_length(subrel_states) * sizeof(SubRemoveRels));
+
+ /*
+ * Walk over the remote sequences and try to match them to locally known
+ * sequences. If the sequence is not known locally, create a new state
+ * for it.
+ *
+ * Also builds an array of local oids of remote sequences for the next step.
+ */
+ off = 0;
+ pubrel_local_oids = palloc(list_length(pubrel_names) * sizeof(Oid));
+
+ foreach(lc, pubrel_names)
+ {
+ RangeVar *rv = (RangeVar *) lfirst(lc);
+ Oid relid;
+
+ relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+ /* Check for supported relkind. */
+ CheckSubscriptionRelkind(get_rel_relkind(relid),
+ rv->schemaname, rv->relname);
+
+ pubrel_local_oids[off++] = relid;
+
+ if (!bsearch(&relid, subrel_local_oids,
+ list_length(subrel_states), sizeof(Oid), oid_cmp))
+ {
+ AddSubscriptionRelState(sub->oid, relid,
+ copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
+ InvalidXLogRecPtr);
+ ereport(DEBUG1,
+ (errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
+ rv->schemaname, rv->relname, sub->name)));
+ }
+ }
+
+ /*
+ * Next remove state for sequences we should not care about anymore using
+ * the data we collected above
+ */
+ qsort(pubrel_local_oids, list_length(pubrel_names),
+ sizeof(Oid), oid_cmp);
+
+ remove_rel_len = 0;
+ for (off = 0; off < list_length(subrel_states); off++)
+ {
+ Oid relid = subrel_local_oids[off];
+
+ /* XXX ignore non-sequences - maybe do this in GetSubscriptionRelations? */
+ if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+ continue;
+
+ if (!bsearch(&relid, pubrel_local_oids,
+ list_length(pubrel_names), sizeof(Oid), oid_cmp))
+ {
+ char state;
+ XLogRecPtr statelsn;
+
+ /*
+ * Lock pg_subscription_rel with AccessExclusiveLock to
+ * prevent any race conditions with the apply worker
+ * re-launching workers at the same time this code is trying
+ * to remove those tables.
+ *
+ * Even if new worker for this particular rel is restarted it
+ * won't be able to make any progress as we hold exclusive
+ * lock on subscription_rel till the transaction end. It will
+ * simply exit as there is no corresponding rel entry.
+ *
+ * This locking also ensures that the state of rels won't
+ * change till we are done with this refresh operation.
+ */
+ if (!rel)
+ rel = table_open(SubscriptionRelRelationId, AccessExclusiveLock);
+
+ /* Last known rel state. */
+ state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
+
+ sub_remove_rels[remove_rel_len].relid = relid;
+ sub_remove_rels[remove_rel_len++].state = state;
+
+ RemoveSubscriptionRel(sub->oid, relid);
+
+ logicalrep_worker_stop(sub->oid, relid);
+
+ /*
+ * For READY state, we would have already dropped the
+ * tablesync origin.
+ */
+ if (state != SUBREL_STATE_READY)
+ {
+ char originname[NAMEDATALEN];
+
+ /*
+ * Drop the tablesync's origin tracking if exists.
+ *
+ * It is possible that the origin is not yet created for
+ * tablesync worker, this can happen for the states before
+ * SUBREL_STATE_FINISHEDCOPY. The apply worker can also
+ * concurrently try to drop the origin and by this time
+ * the origin might be already removed. For these reasons,
+ * passing missing_ok = true.
+ */
+ ReplicationOriginNameForTablesync(sub->oid, relid, originname,
+ sizeof(originname));
+ replorigin_drop_by_name(originname, true, false);
+ }
+
+ ereport(DEBUG1,
+ (errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
+ get_namespace_name(get_rel_namespace(relid)),
+ get_rel_name(relid),
+ sub->name)));
+ }
+ }
+
+ /*
+ * Drop the tablesync slots associated with removed sequences. This has
+ * to be at the end because otherwise if there is an error while doing
+ * the database operations we won't be able to rollback dropped slots.
+ */
+ for (off = 0; off < remove_rel_len; off++)
+ {
+ if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
+ sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+ {
+ char syncslotname[NAMEDATALEN] = {0};
+
+ /*
+ * For READY/SYNCDONE states we know the tablesync slot has
+ * already been dropped by the tablesync worker.
+ *
+ * For other states, there is no certainty, maybe the slot
+ * does not exist yet. Also, if we fail after removing some of
+ * the slots, next time, it will again try to drop already
+ * dropped slots and fail. For these reasons, we allow
+ * missing_ok = true for the drop.
+ */
+ ReplicationSlotNameForTablesync(sub->oid, sub_remove_rels[off].relid,
+ syncslotname, sizeof(syncslotname));
+ ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
+ }
+ }
+
}
PG_FINALLY();
{
@@ -1538,6 +1741,75 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
return tablelist;
}
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+ WalRcvExecResult *res;
+ StringInfoData cmd;
+ TupleTableSlot *slot;
+ Oid tableRow[2] = {TEXTOID, TEXTOID};
+ ListCell *lc;
+ bool first;
+ List *tablelist = NIL;
+
+ Assert(list_length(publications) > 0);
+
+ initStringInfo(&cmd);
+ appendStringInfoString(&cmd, "SELECT DISTINCT s.schemaname, s.sequencename\n"
+ " FROM pg_catalog.pg_publication_sequences s\n"
+ " WHERE s.pubname IN (");
+ first = true;
+ foreach(lc, publications)
+ {
+ char *pubname = strVal(lfirst(lc));
+
+ if (first)
+ first = false;
+ else
+ appendStringInfoString(&cmd, ", ");
+
+ appendStringInfoString(&cmd, quote_literal_cstr(pubname));
+ }
+ appendStringInfoChar(&cmd, ')');
+
+ res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+ pfree(cmd.data);
+
+ if (res->status != WALRCV_OK_TUPLES)
+ ereport(ERROR,
+ (errmsg("could not receive list of replicated tables from the publisher: %s",
+ res->err)));
+
+ /* Process the sequences. */
+ slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+ while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+ {
+ char *nspname;
+ char *relname;
+ bool isnull;
+ RangeVar *rv;
+
+ nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+ Assert(!isnull);
+ relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+ Assert(!isnull);
+
+ rv = makeRangeVar(nspname, relname, -1);
+ tablelist = lappend(tablelist, rv);
+
+ ExecClearTuple(slot);
+ }
+ ExecDropSingleTupleTableSlot(slot);
+
+ walrcv_clear_result(res);
+
+ return tablelist;
+}
+
/*
* This is to report the connection failure while dropping replication slots.
* Here, we report the WARNING for all tablesync slots so that user can drop
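(Not part of the patch, just to illustrate the subscription bits above.) With
two publications, the query assembled by fetch_sequence_list would look
roughly like this - pg_publication_sequences is the new catalog view added
elsewhere in the patch, the publication names are made up:

SELECT DISTINCT s.schemaname, s.sequencename
  FROM pg_catalog.pg_publication_sequences s
 WHERE s.pubname IN ('pub1', 'pub2');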
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index 1e285e0349..a1c7880be6 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -619,11 +619,11 @@ CheckSubscriptionRelkind(char relkind, const char *nspname,
errdetail("\"%s.%s\" is a foreign table.",
nspname, relname)));
- if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+ if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE && relkind != RELKIND_SEQUENCE)
ereport(ERROR,
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
errmsg("cannot use relation \"%s.%s\" as logical replication target",
nspname, relname),
- errdetail("\"%s.%s\" is not a table.",
+ errdetail("\"%s.%s\" is not a table or a sequence.",
nspname, relname)));
}
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index bd87f23784..86d92de46e 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -4833,8 +4833,10 @@ _copyAlterPublicationStmt(const AlterPublicationStmt *from)
COPY_STRING_FIELD(pubname);
COPY_NODE_FIELD(options);
COPY_NODE_FIELD(tables);
+ COPY_NODE_FIELD(sequences);
COPY_SCALAR_FIELD(for_all_tables);
- COPY_SCALAR_FIELD(tableAction);
+ COPY_SCALAR_FIELD(for_all_sequences);
+ COPY_SCALAR_FIELD(action);
return newnode;
}
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index dba3e6b31e..384ec10f9f 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -2302,8 +2302,10 @@ _equalAlterPublicationStmt(const AlterPublicationStmt *a,
COMPARE_STRING_FIELD(pubname);
COMPARE_NODE_FIELD(options);
COMPARE_NODE_FIELD(tables);
+ COMPARE_NODE_FIELD(sequences);
COMPARE_SCALAR_FIELD(for_all_tables);
- COMPARE_SCALAR_FIELD(tableAction);
+ COMPARE_SCALAR_FIELD(for_all_sequences);
+ COMPARE_SCALAR_FIELD(action);
return true;
}
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index eb24195438..d91dbd325e 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -434,6 +434,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
%type <node> group_by_item empty_grouping_set rollup_clause cube_clause
%type <node> grouping_sets_clause
%type <node> opt_publication_for_tables publication_for_tables
+/* %type <node> opt_publication_for_sequences publication_for_sequences */
%type <list> opt_fdw_options fdw_options
%type <defelt> fdw_option
@@ -9588,6 +9589,7 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
*****************************************************************************/
CreatePublicationStmt:
+ /* CREATE PUBLICATION name opt_publication_for_tables opt_publication_for_sequences opt_definition */
CREATE PUBLICATION name opt_publication_for_tables opt_definition
{
CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
@@ -9622,6 +9624,12 @@ publication_for_tables:
}
;
+/*
+ * FIXME
+ *
+ * opt_publication_for_sequences and publication_for_sequences should be
+ * copies for sequences
+ */
/*****************************************************************************
*
@@ -9633,6 +9641,12 @@ publication_for_tables:
*
* ALTER PUBLICATION name SET TABLE table [, table2]
*
+ * ALTER PUBLICATION name ADD SEQUENCE sequence [, sequence2]
+ *
+ * ALTER PUBLICATION name DROP SEQUENCE sequence [, sequence2]
+ *
+ * ALTER PUBLICATION name SET SEQUENCE sequence [, sequence2]
+ *
*****************************************************************************/
AlterPublicationStmt:
@@ -9648,7 +9662,7 @@ AlterPublicationStmt:
AlterPublicationStmt *n = makeNode(AlterPublicationStmt);
n->pubname = $3;
n->tables = $6;
- n->tableAction = DEFELEM_ADD;
+ n->action = DEFELEM_ADD;
$$ = (Node *)n;
}
| ALTER PUBLICATION name SET TABLE relation_expr_list
@@ -9656,7 +9670,7 @@ AlterPublicationStmt:
AlterPublicationStmt *n = makeNode(AlterPublicationStmt);
n->pubname = $3;
n->tables = $6;
- n->tableAction = DEFELEM_SET;
+ n->action = DEFELEM_SET;
$$ = (Node *)n;
}
| ALTER PUBLICATION name DROP TABLE relation_expr_list
@@ -9664,7 +9678,31 @@ AlterPublicationStmt:
AlterPublicationStmt *n = makeNode(AlterPublicationStmt);
n->pubname = $3;
n->tables = $6;
- n->tableAction = DEFELEM_DROP;
+ n->action = DEFELEM_DROP;
+ $$ = (Node *)n;
+ }
+ | ALTER PUBLICATION name ADD_P SEQUENCE relation_expr_list
+ {
+ AlterPublicationStmt *n = makeNode(AlterPublicationStmt);
+ n->pubname = $3;
+ n->sequences = $6;
+ n->action = DEFELEM_ADD;
+ $$ = (Node *)n;
+ }
+ | ALTER PUBLICATION name SET SEQUENCE relation_expr_list
+ {
+ AlterPublicationStmt *n = makeNode(AlterPublicationStmt);
+ n->pubname = $3;
+ n->sequences = $6;
+ n->action = DEFELEM_SET;
+ $$ = (Node *)n;
+ }
+ | ALTER PUBLICATION name DROP SEQUENCE relation_expr_list
+ {
+ AlterPublicationStmt *n = makeNode(AlterPublicationStmt);
+ n->pubname = $3;
+ n->sequences = $6;
+ n->action = DEFELEM_DROP;
$$ = (Node *)n;
}
;
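(Example only, publication/sequence names made up.) The new grammar rules
above allow things like:

CREATE PUBLICATION pub1 FOR TABLE t1;
ALTER PUBLICATION pub1 ADD SEQUENCE s1, s2;
ALTER PUBLICATION pub1 DROP SEQUENCE s2;

The FOR ALL SEQUENCES variant is still a FIXME, per the comment above.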
diff --git a/src/backend/replication/logical/decode.c b/src/backend/replication/logical/decode.c
index 453efc51e1..fb90fda48a 100644
--- a/src/backend/replication/logical/decode.c
+++ b/src/backend/replication/logical/decode.c
@@ -42,6 +42,7 @@
#include "replication/reorderbuffer.h"
#include "replication/snapbuild.h"
#include "storage/standby.h"
+#include "commands/sequence.h"
typedef struct XLogRecordBuffer
{
@@ -74,10 +75,11 @@ static void DecodeAbort(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
bool two_phase);
static void DecodePrepare(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
xl_xact_parsed_prepare *parsed);
-
+static void DecodeSequence(LogicalDecodingContext *ctx, XLogRecordBuffer *buf);
/* common function to decode tuples */
static void DecodeXLogTuple(char *data, Size len, ReorderBufferTupleBuf *tup);
+static void DecodeSeqTuple(char *data, Size len, ReorderBufferTupleBuf *tuple);
/* helper functions for decoding transactions */
static inline bool FilterPrepare(LogicalDecodingContext *ctx,
@@ -158,6 +160,10 @@ LogicalDecodingProcessRecord(LogicalDecodingContext *ctx, XLogReaderState *recor
DecodeLogicalMsgOp(ctx, &buf);
break;
+ case RM_SEQ_ID:
+ DecodeSequence(ctx, &buf);
+ break;
+
/*
* Rmgrs irrelevant for logical decoding; they describe stuff not
* represented in logical decoding. Add new rmgrs in rmgrlist.h's
@@ -173,7 +179,6 @@ LogicalDecodingProcessRecord(LogicalDecodingContext *ctx, XLogReaderState *recor
case RM_HASH_ID:
case RM_GIN_ID:
case RM_GIST_ID:
- case RM_SEQ_ID:
case RM_SPGIST_ID:
case RM_BRIN_ID:
case RM_COMMIT_TS_ID:
@@ -1313,3 +1318,125 @@ DecodeTXNNeedSkip(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
(txn_dbid != InvalidOid && txn_dbid != ctx->slot->data.database) ||
ctx->fast_forward || FilterByOrigin(ctx, origin_id));
}
+
+/*
+ * Decode Sequence Tuple
+ */
+static void
+DecodeSeqTuple(char *data, Size len, ReorderBufferTupleBuf *tuple)
+{
+ int datalen = len - sizeof(xl_seq_rec) - SizeofHeapTupleHeader;
+
+ Assert(datalen >= 0);
+
+ tuple->tuple.t_len = datalen + SizeofHeapTupleHeader;
+
+ ItemPointerSetInvalid(&tuple->tuple.t_self);
+
+ tuple->tuple.t_tableOid = InvalidOid;
+
+ memcpy(((char *) tuple->tuple.t_data),
+ data + sizeof(xl_seq_rec),
+ SizeofHeapTupleHeader);
+
+ memcpy(((char *) tuple->tuple.t_data) + SizeofHeapTupleHeader,
+ data + sizeof(xl_seq_rec) + SizeofHeapTupleHeader,
+ datalen);
+}
+
+/*
+ * Handle sequence decode
+ *
+ * Decoding sequences is a bit tricky, because while most sequence actions
+ * are non-transactional (not subject to rollback), some need to be handled
+ * as transactional.
+ *
+ * By default, a sequence increment is non-transactional - we must not queue
+ * it in a transaction as other changes, because the transaction might get
+ * rolled back and we'd discard the increment. The downstream would not be
+ * notified about the increment, which is wrong.
+ *
+ * On the other hand, the sequence may be created in a transaction. In this
+ * case we *should* queue the change like other changes in the transaction,
+ * because we don't want to send increments for an unknown sequence to the
+ * plugin - it might get confused about which sequence they relate to etc.
+ */
+static void
+DecodeSequence(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
+{
+ SnapBuild *builder = ctx->snapshot_builder;
+ ReorderBufferTupleBuf *tuplebuf;
+ RelFileNode target_node;
+ XLogReaderState *r = buf->record;
+ char *tupledata = NULL;
+ Size tuplelen;
+ Size datalen = 0;
+ TransactionId xid = XLogRecGetXid(r);
+ uint8 info = XLogRecGetInfo(buf->record) & ~XLR_INFO_MASK;
+ xl_seq_rec *xlrec;
+ Snapshot snapshot;
+ RepOriginId origin_id = XLogRecGetOrigin(r);
+ bool transactional;
+
+ /* only decode changes flagged with XLOG_SEQ_LOG */
+ if (info != XLOG_SEQ_LOG)
+ elog(ERROR, "unexpected RM_SEQ_ID record type: %u", info);
+
+ /*
+ * If we don't have a snapshot or we are just fast-forwarding, there is no
+ * point in decoding sequence changes.
+ */
+ if (SnapBuildCurrentState(builder) < SNAPBUILD_FULL_SNAPSHOT ||
+ ctx->fast_forward)
+ return;
+
+ /* only interested in our database */
+ XLogRecGetBlockTag(r, 0, &target_node, NULL, NULL);
+ if (target_node.dbNode != ctx->slot->data.database)
+ return;
+
+ /* output plugin doesn't look for this origin, no need to queue */
+ if (FilterByOrigin(ctx, XLogRecGetOrigin(r)))
+ return;
+
+ tupledata = XLogRecGetData(r);
+ datalen = XLogRecGetDataLen(r);
+ tuplelen = datalen - SizeOfHeapHeader - sizeof(xl_seq_rec);
+
+ /* extract the WAL record, with "created" flag */
+ xlrec = (xl_seq_rec *) XLogRecGetData(r);
+
+ /* XXX how could we have sequence change without data? */
+ if (!datalen || !tupledata)
+ return;
+
+ tuplebuf = ReorderBufferGetTupleBuf(ctx->reorder, tuplelen);
+ DecodeSeqTuple(tupledata, datalen, tuplebuf);
+
+ /*
+ * Should we handle the sequence increment as transactional or not?
+ *
+ * If the sequence was created in a still-running transaction, treat
+ * it as transactional and queue the increments. Otherwise it needs
+ * to be treated as non-transactional, in which case we send it to
+ * the plugin right away.
+ */
+ transactional = ReorderBufferSequenceIsTransactional(ctx->reorder,
+ target_node,
+ xlrec->created);
+
+ /* Skip the change if already processed (per the snapshot). */
+ if (transactional &&
+ !SnapBuildProcessChange(builder, xid, buf->origptr))
+ return;
+ else if (!transactional &&
+ (SnapBuildCurrentState(builder) != SNAPBUILD_CONSISTENT ||
+ SnapBuildXactNeedsSkip(builder, buf->origptr)))
+ return;
+
+ /* Queue the increment (or send immediately if not transactional). */
+ snapshot = SnapBuildGetOrBuildSnapshot(builder, xid);
+ ReorderBufferQueueSequence(ctx->reorder, xid, snapshot, buf->endptr,
+ origin_id, target_node, transactional,
+ xlrec->created, tuplebuf);
+}
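(Example only.) A simple way to exercise DecodeSequence is through the SQL
interface - the exact output depends on the test_decoding changes, which are
in a different part of the patch:

SELECT pg_create_logical_replication_slot('slot1', 'test_decoding');
CREATE SEQUENCE s;
SELECT nextval('s');
SELECT data FROM pg_logical_slot_get_changes('slot1', NULL, NULL);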
diff --git a/src/backend/replication/logical/logical.c b/src/backend/replication/logical/logical.c
index ffc6160e9f..66a4a4603a 100644
--- a/src/backend/replication/logical/logical.c
+++ b/src/backend/replication/logical/logical.c
@@ -73,6 +73,10 @@ static void truncate_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
static void message_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
XLogRecPtr message_lsn, bool transactional,
const char *prefix, Size message_size, const char *message);
+static void sequence_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called);
/* streaming callbacks */
static void stream_start_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
@@ -217,6 +221,7 @@ StartupDecodingContext(List *output_plugin_options,
ctx->reorder->apply_truncate = truncate_cb_wrapper;
ctx->reorder->commit = commit_cb_wrapper;
ctx->reorder->message = message_cb_wrapper;
+ ctx->reorder->sequence = sequence_cb_wrapper;
/*
* To support streaming, we require start/stop/abort/commit/change
@@ -1179,6 +1184,43 @@ message_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
error_context_stack = errcallback.previous;
}
+static void
+sequence_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ LogicalDecodingContext *ctx = cache->private_data;
+ LogicalErrorCallbackState state;
+ ErrorContextCallback errcallback;
+
+ Assert(!ctx->fast_forward);
+
+ if (ctx->callbacks.sequence_cb == NULL)
+ return;
+
+ /* Push callback + info on the error context stack */
+ state.ctx = ctx;
+ state.callback_name = "sequence";
+ state.report_location = sequence_lsn;
+ errcallback.callback = output_plugin_error_callback;
+ errcallback.arg = (void *) &state;
+ errcallback.previous = error_context_stack;
+ error_context_stack = &errcallback;
+
+ /* set output state */
+ ctx->accept_writes = true;
+ ctx->write_xid = txn != NULL ? txn->xid : InvalidTransactionId;
+ ctx->write_location = sequence_lsn;
+
+ /* do the actual work: call callback */
+ ctx->callbacks.sequence_cb(ctx, txn, sequence_lsn, rel, transactional,
+ created, last_value, log_cnt, is_called);
+
+ /* Pop the error context stack */
+ error_context_stack = errcallback.previous;
+}
+
static void
stream_start_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
XLogRecPtr first_lsn)
diff --git a/src/backend/replication/logical/proto.c b/src/backend/replication/logical/proto.c
index 1cf59e0fb0..f9e928f906 100644
--- a/src/backend/replication/logical/proto.c
+++ b/src/backend/replication/logical/proto.c
@@ -389,6 +389,58 @@ logicalrep_write_message(StringInfo out, TransactionId xid, XLogRecPtr lsn,
pq_sendbytes(out, message, sz);
}
+/*
+ * Write SEQUENCE to stream
+ */
+void
+logicalrep_write_sequence(StringInfo out, Relation rel, TransactionId xid,
+ XLogRecPtr lsn, bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ uint8 flags = 0;
+ char *relname;
+
+ pq_sendbyte(out, LOGICAL_REP_MSG_SEQUENCE);
+
+ /* transaction ID (if not valid, we're not streaming) */
+ if (TransactionIdIsValid(xid))
+ pq_sendint32(out, xid);
+
+ pq_sendint8(out, flags);
+ pq_sendint64(out, lsn);
+
+ logicalrep_write_namespace(out, RelationGetNamespace(rel));
+ relname = RelationGetRelationName(rel);
+ pq_sendstring(out, relname);
+
+ pq_sendint8(out, transactional);
+ pq_sendint8(out, created);
+ pq_sendint64(out, last_value);
+ pq_sendint64(out, log_cnt);
+ pq_sendint8(out, is_called);
+}
+
+/*
+ * Read SEQUENCE from the stream.
+ */
+void
+logicalrep_read_sequence(StringInfo in, LogicalRepSequence *seqdata)
+{
+ /* XXX skipping flags and lsn */
+ pq_getmsgint(in, 1);
+ pq_getmsgint64(in);
+
+ /* Read relation name from stream */
+ seqdata->nspname = pstrdup(logicalrep_read_namespace(in));
+ seqdata->seqname = pstrdup(pq_getmsgstring(in));
+
+ seqdata->transactional = pq_getmsgint(in, 1);
+ seqdata->created = pq_getmsgint(in, 1);
+ seqdata->last_value = pq_getmsgint64(in);
+ seqdata->log_cnt = pq_getmsgint64(in);
+ seqdata->is_called = pq_getmsgint(in, 1);
+}
+
/*
* Write relation description to the output stream.
*/
diff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c
index 19e96f3fd9..4b95586381 100644
--- a/src/backend/replication/logical/reorderbuffer.c
+++ b/src/backend/replication/logical/reorderbuffer.c
@@ -116,6 +116,13 @@ typedef struct ReorderBufferTXNByIdEnt
ReorderBufferTXN *txn;
} ReorderBufferTXNByIdEnt;
+/* entry for hash table we use to track sequences created in running xacts */
+typedef struct ReorderBufferSequenceEnt
+{
+ RelFileNode rnode;
+ TransactionId xid;
+} ReorderBufferSequenceEnt;
+
/* data structures for (relfilenode, ctid) => (cmin, cmax) mapping */
typedef struct ReorderBufferTupleCidKey
{
@@ -337,6 +344,14 @@ ReorderBufferAllocate(void)
buffer->by_txn = hash_create("ReorderBufferByXid", 1000, &hash_ctl,
HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+ /* hash table of sequences, mapping relfilenode to XID of transaction */
+ hash_ctl.keysize = sizeof(RelFileNode);
+ hash_ctl.entrysize = sizeof(ReorderBufferSequenceEnt);
+ hash_ctl.hcxt = buffer->context;
+
+ buffer->sequences = hash_create("ReorderBufferSequenceHash", 1000, &hash_ctl,
+ HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+
buffer->by_txn_last_xid = InvalidTransactionId;
buffer->by_txn_last_txn = NULL;
@@ -527,6 +542,13 @@ ReorderBufferReturnChange(ReorderBuffer *rb, ReorderBufferChange *change,
case REORDER_BUFFER_CHANGE_INTERNAL_COMMAND_ID:
case REORDER_BUFFER_CHANGE_INTERNAL_TUPLECID:
break;
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ if (change->data.sequence.tuple)
+ {
+ ReorderBufferReturnTupleBuf(rb, change->data.sequence.tuple);
+ change->data.sequence.tuple = NULL;
+ }
+ break;
}
pfree(change);
@@ -854,6 +876,212 @@ ReorderBufferQueueMessage(ReorderBuffer *rb, TransactionId xid,
}
}
+/*
+ * Should the sequence increment be treated as transactional?
+ *
+ * Returns true if this increment created the sequence, or if the sequence
+ * relfilenode is already tracked in the "sequences" hash (i.e. the sequence
+ * was created/altered by a transaction we're still decoding).
+ */
+bool
+ReorderBufferSequenceIsTransactional(ReorderBuffer *rb,
+ RelFileNode rnode, bool created)
+{
+ bool found = false;
+
+ if (created)
+ return true;
+
+ hash_search(rb->sequences,
+ (void *) &rnode,
+ HASH_FIND,
+ &found);
+
+ return found;
+}
+
+
+/*
+ * A transactional sequence increment is queued to be processed upon commit
+ * and a non-transactional increment gets processed immediately.
+ *
+ * A sequence increment may need either treatment. When the sequence was
+ * created (or altered) in a transaction that is still running, we treat
+ * the increment as transactional and queue it into that transaction.
+ * Otherwise we treat it as non-transactional, so that the increment is
+ * not lost if the transaction that generated it rolls back.
+ */
+void
+ReorderBufferQueueSequence(ReorderBuffer *rb, TransactionId xid,
+ Snapshot snapshot, XLogRecPtr lsn, RepOriginId origin_id,
+ RelFileNode rnode, bool transactional, bool created,
+ ReorderBufferTupleBuf *tuplebuf)
+{
+ /*
+ * Change needs to be handled as transactional, because the sequence was
+ * created in a transaction that is still running. In that case all the
+ * changes need to be queued in that transaction, we must not send them
+ * to the downstream until the transaction commits.
+ *
+ * There's a bit of a trouble with subtransactions - we can't queue it
+ * into the subxact, because it might be rolled back and we'd lose the
+ * increment. We need to queue it into the same (sub)xact that created
+ * the sequence, which is why we track the XID in the hash table.
+ */
+ if (transactional)
+ {
+ MemoryContext oldcontext;
+ ReorderBufferChange *change;
+
+ /* lookup sequence by relfilenode */
+ ReorderBufferSequenceEnt *ent;
+ bool found;
+
+ /* transactional changes require a transaction */
+ Assert(xid != InvalidTransactionId);
+
+ /* search the lookup table (we ignore the return value, found is enough) */
+ ent = hash_search(rb->sequences,
+ (void *) &rnode,
+ created ? HASH_ENTER : HASH_FIND,
+ &found);
+
+ /*
+ * If this is the "create" increment, we must not have found any
+ * pre-existing entry in the hash table (i.e. there must not be
+ * any conflicting sequence).
+ */
+ Assert(!(created && found));
+
+ /* But we must have either created or found an existing entry. */
+ Assert(created || found);
+
+ /*
+ * When creating the sequence, remember the XID of the transaction
+ * that created it.
+ */
+ if (created)
+ ent->xid = xid;
+
+ /* XXX Maybe check that we're still in the same top-level xact? */
+
+ /* OK, allocate and queue the change */
+ oldcontext = MemoryContextSwitchTo(rb->context);
+
+ change = ReorderBufferGetChange(rb);
+
+ change->action = REORDER_BUFFER_CHANGE_SEQUENCE;
+ change->origin_id = origin_id;
+
+ memcpy(&change->data.sequence.relnode, &rnode, sizeof(RelFileNode));
+
+ change->data.sequence.created = created;
+ change->data.sequence.tuple = tuplebuf;
+
+ /* add it to the same subxact that created the sequence */
+ ReorderBufferQueueChange(rb, ent->xid, lsn, change, false);
+
+ MemoryContextSwitchTo(oldcontext);
+ }
+ else
+ {
+ /*
+ * This increment is for a sequence that was not created in any
+ * running transaction, so we treat it as non-transactional and
+ * just send it to the output plugin directly.
+ */
+ ReorderBufferTXN *txn = NULL;
+ volatile Snapshot snapshot_now = snapshot;
+ bool using_subtxn;
+
+#ifdef USE_ASSERT_CHECKING
+ /* All "creates" have to be handled as transactional. */
+ Assert(!created);
+
+ /* Make sure the sequence is not in the hash table. */
+ {
+ bool found;
+ hash_search(rb->sequences,
+ (void *) &rnode,
+ HASH_FIND, &found);
+ Assert(!found);
+ }
+#endif
+
+ if (xid != InvalidTransactionId)
+ txn = ReorderBufferTXNByXid(rb, xid, true, NULL, lsn, true);
+
+ /* setup snapshot to allow catalog access */
+ SetupHistoricSnapshot(snapshot_now, NULL);
+
+ /*
+ * Decoding needs access to syscaches et al., which in turn use
+ * heavyweight locks and such. Thus we need to have enough state around to
+ * keep track of those. The easiest way is to simply use a transaction
+ * internally. That also allows us to easily enforce that nothing writes
+ * to the database by checking for xid assignments.
+ *
+ * When we're called via the SQL SRF there's already a transaction
+ * started, so start an explicit subtransaction there.
+ */
+ using_subtxn = IsTransactionOrTransactionBlock();
+
+ PG_TRY();
+ {
+ Relation relation;
+ HeapTuple tuple;
+ bool isnull;
+ int64 last_value, log_cnt;
+ bool is_called;
+ Oid reloid;
+
+ if (using_subtxn)
+ BeginInternalSubTransaction("sequence");
+ else
+ StartTransactionCommand();
+
+ reloid = RelidByRelfilenode(rnode.spcNode, rnode.relNode);
+
+ if (reloid == InvalidOid)
+ elog(ERROR, "could not map filenode \"%s\" to relation OID",
+ relpathperm(rnode,
+ MAIN_FORKNUM));
+
+ relation = RelationIdGetRelation(reloid);
+ tuple = &tuplebuf->tuple;
+
+ /*
+ * Extract the internal sequence values, describing the state.
+ *
+ * XXX Seems a bit strange to access it directly. Maybe there's
+ * a better / more correct way?
+ */
+ last_value = heap_getattr(tuple, 1, RelationGetDescr(relation), &isnull);
+ log_cnt = heap_getattr(tuple, 2, RelationGetDescr(relation), &isnull);
+ is_called = heap_getattr(tuple, 3, RelationGetDescr(relation), &isnull);
+
+ rb->sequence(rb, txn, lsn, relation, transactional, created,
+ last_value, log_cnt, is_called);
+
+ RelationClose(relation);
+
+ TeardownHistoricSnapshot(false);
+
+ AbortCurrentTransaction();
+
+ if (using_subtxn)
+ RollbackAndReleaseCurrentSubTransaction();
+ }
+ PG_CATCH();
+ {
+ TeardownHistoricSnapshot(true);
+
+ AbortCurrentTransaction();
+
+ if (using_subtxn)
+ RollbackAndReleaseCurrentSubTransaction();
+
+ PG_RE_THROW();
+ }
+ PG_END_TRY();
+ }
+}
+
/*
* AssertTXNLsnOrder
* Verify LSN ordering of transaction lists in the reorderbuffer
@@ -1530,6 +1758,31 @@ ReorderBufferCleanupTXN(ReorderBuffer *rb, ReorderBufferTXN *txn)
&found);
Assert(found);
+ /*
+ * Remove sequences created in this transaction (if any).
+ *
+ * There's no way to search by XID, so we simply do a seqscan of all
+ * the entries in the hash table. Hopefully there are only a couple
+ * entries in most cases - people generally don't create many new
+ * sequences over and over.
+ */
+ {
+ HASH_SEQ_STATUS scan_status;
+ ReorderBufferSequenceEnt *ent;
+
+ hash_seq_init(&scan_status, rb->sequences);
+ while ((ent = (ReorderBufferSequenceEnt *) hash_seq_search(&scan_status)) != NULL)
+ {
+ /* skip sequences not from this transaction */
+ if (ent->xid != txn->xid)
+ continue;
+
+ (void) hash_search(rb->sequences,
+ (void *) &(ent->rnode),
+ HASH_REMOVE, NULL);
+ }
+ }
+
/* remove entries spilled to disk */
if (rbtxn_is_serialized(txn))
ReorderBufferRestoreCleanup(rb, txn);
@@ -1945,6 +2198,39 @@ ReorderBufferApplyMessage(ReorderBuffer *rb, ReorderBufferTXN *txn,
change->data.msg.message);
}
+/*
+ * Helper function for ReorderBufferProcessTXN for applying sequences.
+ */
+static inline void
+ReorderBufferApplySequence(ReorderBuffer *rb, ReorderBufferTXN *txn,
+ Relation relation, ReorderBufferChange *change,
+ bool streaming)
+{
+ HeapTuple tuple;
+ bool isnull;
+ int64 last_value, log_cnt;
+ bool is_called;
+
+ /* FIXME support streaming */
+ Assert(!streaming);
+
+ tuple = &change->data.sequence.tuple->tuple;
+
+ /*
+ * Extract the internal sequence values, describing the state.
+ *
+ * XXX Seems a bit strange to access it directly. Maybe there's
+ * a better / more correct way?
+ */
+ last_value = heap_getattr(tuple, 1, RelationGetDescr(relation), &isnull);
+ log_cnt = heap_getattr(tuple, 2, RelationGetDescr(relation), &isnull);
+ is_called = heap_getattr(tuple, 3, RelationGetDescr(relation), &isnull);
+
+ rb->sequence(rb, txn, change->lsn, relation, true, /* gotta be transactional */
+ change->data.sequence.created,
+ last_value, log_cnt, is_called);
+}
+
/*
* Function to store the command id and snapshot at the end of the current
* stream so that we can reuse the same while sending the next stream.
@@ -2387,6 +2673,31 @@ ReorderBufferProcessTXN(ReorderBuffer *rb, ReorderBufferTXN *txn,
case REORDER_BUFFER_CHANGE_INTERNAL_TUPLECID:
elog(ERROR, "tuplecid value in changequeue");
break;
+
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ Assert(snapshot_now);
+
+ reloid = RelidByRelfilenode(change->data.sequence.relnode.spcNode,
+ change->data.sequence.relnode.relNode);
+
+ if (reloid == InvalidOid)
+ elog(ERROR, "could not map filenode \"%s\" to relation OID",
+ relpathperm(change->data.sequence.relnode,
+ MAIN_FORKNUM));
+
+ relation = RelationIdGetRelation(reloid);
+
+ if (!RelationIsValid(relation))
+ elog(ERROR, "could not open relation with OID %u (for filenode \"%s\")",
+ reloid,
+ relpathperm(change->data.sequence.relnode,
+ MAIN_FORKNUM));
+
+ if (RelationIsLogicallyLogged(relation))
+ ReorderBufferApplySequence(rb, txn, relation, change, streaming);
+
+ RelationClose(relation);
+ break;
}
}
@@ -3773,6 +4084,39 @@ ReorderBufferSerializeChange(ReorderBuffer *rb, ReorderBufferTXN *txn,
memcpy(data, change->data.truncate.relids, size);
data += size;
+ break;
+ }
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ {
+ char *data;
+ ReorderBufferTupleBuf *tup;
+ Size len = 0;
+
+ tup = change->data.sequence.tuple;
+
+ if (tup)
+ {
+ sz += sizeof(HeapTupleData);
+ len = tup->tuple.t_len;
+ sz += len;
+ }
+
+ /* make sure we have enough space */
+ ReorderBufferSerializeReserve(rb, sz);
+
+ data = ((char *) rb->outbuf) + sizeof(ReorderBufferDiskChange);
+ /* might have been reallocated above */
+ ondisk = (ReorderBufferDiskChange *) rb->outbuf;
+
+ if (len)
+ {
+ memcpy(data, &tup->tuple, sizeof(HeapTupleData));
+ data += sizeof(HeapTupleData);
+
+ memcpy(data, tup->tuple.t_data, len);
+ data += len;
+ }
+
break;
}
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
@@ -4037,6 +4381,22 @@ ReorderBufferChangeSize(ReorderBufferChange *change)
{
sz += sizeof(Oid) * change->data.truncate.nrelids;
+ break;
+ }
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ {
+ ReorderBufferTupleBuf *tup;
+ Size len = 0;
+
+ tup = change->data.sequence.tuple;
+
+ if (tup)
+ {
+ sz += sizeof(HeapTupleData);
+ len = tup->tuple.t_len;
+ sz += len;
+ }
+
break;
}
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
@@ -4338,6 +4698,30 @@ ReorderBufferRestoreChange(ReorderBuffer *rb, ReorderBufferTXN *txn,
break;
}
+
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ if (change->data.sequence.tuple)
+ {
+ uint32 tuplelen = ((HeapTuple) data)->t_len;
+
+ change->data.sequence.tuple =
+ ReorderBufferGetTupleBuf(rb, tuplelen - SizeofHeapTupleHeader);
+
+ /* restore ->tuple */
+ memcpy(&change->data.sequence.tuple->tuple, data,
+ sizeof(HeapTupleData));
+ data += sizeof(HeapTupleData);
+
+ /* reset t_data pointer into the new tuplebuf */
+ change->data.sequence.tuple->tuple.t_data =
+ ReorderBufferTupleBufData(change->data.sequence.tuple);
+
+ /* restore tuple data itself */
+ memcpy(change->data.sequence.tuple->tuple.t_data, data, tuplelen);
+ data += tuplelen;
+ }
+ break;
+
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT:
case REORDER_BUFFER_CHANGE_INTERNAL_COMMAND_ID:
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index cc50eb875b..d6f68e3079 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -99,6 +99,7 @@
#include "catalog/pg_subscription_rel.h"
#include "catalog/pg_type.h"
#include "commands/copy.h"
+#include "commands/sequence.h"
#include "miscadmin.h"
#include "parser/parse_relation.h"
#include "pgstat.h"
@@ -353,6 +354,12 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
*
* If the synchronization position is reached (SYNCDONE), then the table can
* be marked as READY and is no longer tracked.
+ *
+ * XXX This needs to handle sequences too - after AlterSubscription_refresh
+ * starts caring about sequences, GetSubscriptionNotReadyRelations won't
+ * return just tables, and we'll have to sync them here. Not sure it's worth
+ * creating a new "sync" worker per sequence, maybe we should just sync them
+ * in the current process (it's pretty light-weight).
*/
static void
process_syncing_tables_for_apply(XLogRecPtr current_lsn)
@@ -874,6 +881,99 @@ copy_table(Relation rel)
logicalrep_rel_close(relmapentry, NoLock);
}
+
+
+/*
+ * Fetch the current state (last_value, log_cnt, is_called) of a sequence
+ * from the publisher, using the sync worker's walreceiver connection.
+ */
+static void
+fetch_sequence_data(char *nspname, char *relname,
+ int64 *last_value, int64 *log_cnt, bool *is_called)
+{
+ WalRcvExecResult *res;
+ StringInfoData cmd;
+ TupleTableSlot *slot;
+ Oid tableRow[3] = {INT8OID, INT8OID, BOOLOID};
+
+ initStringInfo(&cmd);
+ appendStringInfo(&cmd, "SELECT last_value, log_cnt, is_called\n"
+ " FROM %s", quote_qualified_identifier(nspname, relname));
+
+ res = walrcv_exec(LogRepWorkerWalRcvConn, cmd.data, 3, tableRow);
+ pfree(cmd.data);
+
+ if (res->status != WALRCV_OK_TUPLES)
+ ereport(ERROR,
+ (errmsg("could not receive list of replicated tables from the publisher: %s",
+ res->err)));
+
+ /* Process the sequence. */
+ slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+ while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+ {
+ bool isnull;
+
+ *last_value = DatumGetInt64(slot_getattr(slot, 1, &isnull));
+ Assert(!isnull);
+
+ *log_cnt = DatumGetInt64(slot_getattr(slot, 2, &isnull));
+ Assert(!isnull);
+
+ *is_called = DatumGetBool(slot_getattr(slot, 3, &isnull));
+ Assert(!isnull);
+
+ ExecClearTuple(slot);
+ }
+ ExecDropSingleTupleTableSlot(slot);
+
+ walrcv_clear_result(res);
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Caller is responsible for locking the local relation.
+ */
+static void
+copy_sequence(Relation rel)
+{
+ LogicalRepRelMapEntry *relmapentry;
+ LogicalRepRelation lrel;
+ StringInfoData cmd;
+ int64 last_value = 0,
+ log_cnt = 0;
+ bool is_called = false;
+
+ /* Get the publisher relation info. */
+ fetch_remote_table_info(get_namespace_name(RelationGetNamespace(rel)),
+ RelationGetRelationName(rel), &lrel);
+
+ /* Put the relation into relmap. */
+ logicalrep_relmap_update(&lrel);
+
+ /* Map the publisher relation to local one. */
+ relmapentry = logicalrep_rel_open(lrel.remoteid, NoLock);
+ Assert(rel == relmapentry->localrel);
+
+ /* Start copy on the publisher. */
+ initStringInfo(&cmd);
+
+ Assert(lrel.relkind == RELKIND_SEQUENCE);
+
+ fetch_sequence_data(lrel.nspname, lrel.relname, &last_value, &log_cnt, &is_called);
+
+ elog(WARNING, "sequence %s info last_value %ld log_cnt %ld is_called %d",
+ quote_qualified_identifier(lrel.nspname, lrel.relname),
+ last_value, log_cnt, is_called);
+
+ ResetSequence2(RelationGetRelid(rel), last_value, log_cnt, is_called);
+
+ logicalrep_rel_close(relmapentry, NoLock);
+}
+
+
+
+
/*
* Determine the tablesync slot name.
*
@@ -1108,10 +1208,20 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
originname)));
}
- /* Now do the initial data copy */
- PushActiveSnapshot(GetTransactionSnapshot());
- copy_table(rel);
- PopActiveSnapshot();
+ if (get_rel_relkind(RelationGetRelid(rel)) == RELKIND_SEQUENCE)
+ {
+ /* Now do the initial sequence copy */
+ PushActiveSnapshot(GetTransactionSnapshot());
+ copy_sequence(rel);
+ PopActiveSnapshot();
+ }
+ else
+ {
+ /* Now do the initial data copy */
+ PushActiveSnapshot(GetTransactionSnapshot());
+ copy_table(rel);
+ PopActiveSnapshot();
+ }
res = walrcv_exec(LogRepWorkerWalRcvConn, "COMMIT", 0, NULL);
if (res->status != WALRCV_OK_COMMAND)
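(Example only, sequence name made up.) The initial sync is very simple - the
sync worker reads the current state of the sequence from the publisher with a
query like

SELECT last_value, log_cnt, is_called FROM public.s;

and applies it locally using ResetSequence2.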
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index bbb659dad0..e67bcce0a8 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -70,6 +70,7 @@
#include "catalog/pg_tablespace.h"
#include "commands/tablecmds.h"
#include "commands/tablespace.h"
+#include "commands/sequence.h"
#include "commands/trigger.h"
#include "executor/executor.h"
#include "executor/execPartition.h"
@@ -823,6 +824,57 @@ apply_handle_origin(StringInfo s)
errmsg_internal("ORIGIN message sent out of order")));
}
+/*
+ * Handle SEQUENCE message.
+ */
+static void
+apply_handle_sequence(StringInfo s)
+{
+ LogicalRepSequence seq;
+ Oid relid;
+
+ if (handle_streamed_transaction(LOGICAL_REP_MSG_SEQUENCE, s))
+ return;
+
+ logicalrep_read_sequence(s, &seq);
+
+ /*
+ * Transactional sequence updates have to arrive as part of a remote
+ * transaction, while non-transactional ones must not - for those there
+ * should not be any running (remote or local) transaction.
+ */
+ Assert((!seq.transactional) || in_remote_transaction);
+ Assert(!(!seq.transactional && in_remote_transaction));
+ Assert(!(!seq.transactional && IsTransactionState()));
+
+ /*
+ * Make sure we're in a transaction (needed by ResetSequence2). For
+ * non-transactional updates we're guaranteed to start a new one,
+ * and we'll commit it at the end.
+ */
+ if (!IsTransactionState())
+ {
+ StartTransactionCommand();
+ maybe_reread_subscription();
+ }
+
+ relid = RangeVarGetRelid(makeRangeVar(seq.nspname,
+ seq.seqname, -1),
+ RowExclusiveLock, false);
+
+ elog(WARNING, "applying sequence transactional %d created %d last_value %ld log_cnt %ld is_called %d",
+ seq.transactional, seq.created, seq.last_value, seq.log_cnt, seq.is_called);
+
+ /* apply the sequence change */
+ ResetSequence2(relid, seq.last_value, seq.log_cnt, seq.is_called);
+
+ /*
+ * Commit the transaction we started above (we only do this when not in
+ * a remote transaction, i.e. for non-transactional sequence updates).
+ */
+ if (!in_remote_transaction)
+ CommitTransactionCommand();
+}
+
/*
* Handle STREAM START message.
*/
@@ -2067,6 +2119,10 @@ apply_dispatch(StringInfo s)
*/
return;
+ case LOGICAL_REP_MSG_SEQUENCE:
+ apply_handle_sequence(s);
+ return;
+
case LOGICAL_REP_MSG_STREAM_START:
apply_handle_stream_start(s);
return;
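(Example only, connection string and names made up.) On the subscriber side
there's no new syntax - a regular subscription now also syncs and applies the
sequences included in the publication:

CREATE SUBSCRIPTION sub1
  CONNECTION 'host=publisher dbname=test'
  PUBLICATION pub1;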
diff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c
index 63f108f960..c5fc6b3793 100644
--- a/src/backend/replication/pgoutput/pgoutput.c
+++ b/src/backend/replication/pgoutput/pgoutput.c
@@ -49,6 +49,10 @@ static void pgoutput_message(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn, XLogRecPtr message_lsn,
bool transactional, const char *prefix,
Size sz, const char *message);
+static void pgoutput_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation rel, bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called);
static bool pgoutput_origin_filter(LogicalDecodingContext *ctx,
RepOriginId origin_id);
static void pgoutput_stream_start(struct LogicalDecodingContext *ctx,
@@ -144,6 +148,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->change_cb = pgoutput_change;
cb->truncate_cb = pgoutput_truncate;
cb->message_cb = pgoutput_message;
+ cb->sequence_cb = pgoutput_sequence;
cb->commit_cb = pgoutput_commit_txn;
cb->filter_by_origin_cb = pgoutput_origin_filter;
cb->shutdown_cb = pgoutput_shutdown;
@@ -166,11 +171,13 @@ parse_output_parameters(List *options, PGOutputData *data)
bool publication_names_given = false;
bool binary_option_given = false;
bool messages_option_given = false;
+ bool sequences_option_given = false;
bool streaming_given = false;
data->binary = false;
data->streaming = false;
data->messages = false;
+ data->sequences = true;
foreach(lc, options)
{
@@ -236,6 +243,16 @@ parse_output_parameters(List *options, PGOutputData *data)
data->messages = defGetBoolean(defel);
}
+ else if (strcmp(defel->defname, "sequences") == 0)
+ {
+ if (sequences_option_given)
+ ereport(ERROR,
+ (errcode(ERRCODE_SYNTAX_ERROR),
+ errmsg("conflicting or redundant options")));
+ sequences_option_given = true;
+
+ data->sequences = defGetBoolean(defel);
+ }
else if (strcmp(defel->defname, "streaming") == 0)
{
if (streaming_given)
@@ -756,6 +773,47 @@ pgoutput_message(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
OutputPluginWrite(ctx, true);
}
+static void
+pgoutput_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation rel, bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ PGOutputData *data = (PGOutputData *) ctx->output_plugin_private;
+ TransactionId xid = InvalidTransactionId;
+ RelationSyncEntry *relentry;
+
+ if (!data->sequences)
+ return;
+
+ if (!is_publishable_relation(rel))
+ return;
+
+ /* XXX handle (in_streaming) here */
+
+ relentry = get_rel_sync_entry(data, RelationGetRelid(rel));
+
+ /*
+ * First check the sequence filter.
+ *
+ * We handle just REORDER_BUFFER_CHANGE_SEQUENCE here.
+ */
+ if (!relentry->pubactions.pubsequence)
+ return;
+
+ OutputPluginPrepareWrite(ctx, true);
+ logicalrep_write_sequence(ctx->out,
+ rel,
+ xid,
+ sequence_lsn,
+ transactional,
+ created,
+ last_value,
+ log_cnt,
+ is_called);
+ OutputPluginWrite(ctx, true);
+}
+
/*
* Currently we always forward.
*/
@@ -1029,7 +1087,8 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
entry->streamed_txns = NIL;
entry->replicate_valid = false;
entry->pubactions.pubinsert = entry->pubactions.pubupdate =
- entry->pubactions.pubdelete = entry->pubactions.pubtruncate = false;
+ entry->pubactions.pubdelete = entry->pubactions.pubtruncate =
+ entry->pubactions.pubsequence = false;
entry->publish_as_relid = InvalidOid;
entry->map = NULL; /* will be set by maybe_send_schema() if needed */
}
@@ -1040,6 +1099,7 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
List *pubids = GetRelationPublications(relid);
ListCell *lc;
Oid publish_as_relid = relid;
+ bool is_sequence = (get_rel_relkind(relid) == RELKIND_SEQUENCE);
/* Reload publications if needed before use. */
if (!publications_valid)
@@ -1063,12 +1123,23 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
Publication *pub = lfirst(lc);
bool publish = false;
- if (pub->alltables)
+ if (pub->alltables && (!is_sequence))
{
publish = true;
if (pub->pubviaroot && am_partition)
publish_as_relid = llast_oid(get_partition_ancestors(relid));
}
+ else if (pub->allsequences && is_sequence)
+ {
+ publish = true;
+ }
+
+ /* if a sequence, just cross-check the list of publications */
+ if (!publish && is_sequence)
+ {
+ if (list_member_oid(pubids, pub->oid))
+ publish = true;
+ }
if (!publish)
{
@@ -1119,10 +1190,12 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
entry->pubactions.pubupdate |= pub->pubactions.pubupdate;
entry->pubactions.pubdelete |= pub->pubactions.pubdelete;
entry->pubactions.pubtruncate |= pub->pubactions.pubtruncate;
+ entry->pubactions.pubsequence |= pub->pubactions.pubsequence;
}
if (entry->pubactions.pubinsert && entry->pubactions.pubupdate &&
- entry->pubactions.pubdelete && entry->pubactions.pubtruncate)
+ entry->pubactions.pubdelete && entry->pubactions.pubtruncate &&
+ entry->pubactions.pubsequence)
break;
}
@@ -1267,5 +1340,6 @@ rel_sync_cache_publication_cb(Datum arg, int cacheid, uint32 hashvalue)
entry->pubactions.pubupdate = false;
entry->pubactions.pubdelete = false;
entry->pubactions.pubtruncate = false;
+ entry->pubactions.pubsequence = false;
}
}
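(Example only, slot and publication names made up, and I'm not sure yet this
is how the option should be exposed.) The new 'sequences' option defaults to
true, so built-in subscriptions get sequence changes automatically; a client
using pgoutput directly could turn it off, e.g.

START_REPLICATION SLOT "slot1" LOGICAL 0/0 (proto_version '1', publication_names 'pub1', sequences 'off');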
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index d55ae016d0..4841954671 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -5497,6 +5497,7 @@ GetRelationPublicationActions(Relation relation)
pubactions->pubupdate |= pubform->pubupdate;
pubactions->pubdelete |= pubform->pubdelete;
pubactions->pubtruncate |= pubform->pubtruncate;
+ pubactions->pubsequence |= pubform->pubsequence;
ReleaseSysCache(tup);
@@ -5505,7 +5506,8 @@ GetRelationPublicationActions(Relation relation)
* other publications.
*/
if (pubactions->pubinsert && pubactions->pubupdate &&
- pubactions->pubdelete && pubactions->pubtruncate)
+ pubactions->pubdelete && pubactions->pubtruncate &&
+ pubactions->pubsequence)
break;
}
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 38af5682f2..54fc91f26e 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1640,7 +1640,7 @@ psql_completion(const char *text, int start, int end)
/* ALTER PUBLICATION <name> */
else if (Matches("ALTER", "PUBLICATION", MatchAny))
- COMPLETE_WITH("ADD TABLE", "DROP TABLE", "OWNER TO", "RENAME TO", "SET");
+ COMPLETE_WITH("ADD TABLE", "DROP TABLE", "ADD SEQUENCE", "DROP SEQUENCE", "OWNER TO", "RENAME TO", "SET");
/* ALTER PUBLICATION <name> SET */
else if (Matches("ALTER", "PUBLICATION", MatchAny, "SET"))
COMPLETE_WITH("(", "TABLE");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index fde251fa4f..2ff062e0e2 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -11437,6 +11437,11 @@
provolatile => 's', prorettype => 'oid', proargtypes => 'text',
proallargtypes => '{text,oid}', proargmodes => '{i,o}',
proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_tables' },
+{ oid => '8000', descr => 'get OIDs of sequences in a publication',
+ proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+ provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+ proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+ proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
{ oid => '6121',
descr => 'returns whether a relation can be part of a publication',
proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 1b31fee9e3..d1c9a18b05 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
*/
bool puballtables;
+ /*
+ * indicates that this is special publication which should encompass all
+ * sequences in the database (except for the unlogged and temp ones)
+ */
+ bool puballsequences;
+
/* true if inserts are published */
bool pubinsert;
@@ -52,6 +58,9 @@ CATALOG(pg_publication,6104,PublicationRelationId)
/* true if truncates are published */
bool pubtruncate;
+ /* true if sequences are published */
+ bool pubsequence;
+
/* true if partition changes are published using root schema */
bool pubviaroot;
} FormData_pg_publication;
@@ -74,6 +83,7 @@ typedef struct PublicationActions
bool pubupdate;
bool pubdelete;
bool pubtruncate;
+ bool pubsequence;
} PublicationActions;
typedef struct Publication
@@ -81,6 +91,7 @@ typedef struct Publication
Oid oid;
char *name;
bool alltables;
+ bool allsequences;
bool pubviaroot;
PublicationActions pubactions;
} Publication;
@@ -109,6 +120,9 @@ extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
extern List *GetAllTablesPublications(void);
extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
+extern List *GetPublicationSequenceRelations(Oid pubid);
+
extern bool is_publishable_relation(Relation rel);
extern ObjectAddress publication_add_relation(Oid pubid, Relation targetrel,
bool if_not_exists);
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 40544dd4c7..c28e8695cb 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -48,6 +48,7 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
typedef struct xl_seq_rec
{
RelFileNode node;
+ bool created; /* is this a CREATE SEQUENCE */
/* SEQUENCE TUPLE DATA FOLLOWS AT THE END */
} xl_seq_rec;
@@ -59,6 +60,7 @@ extern ObjectAddress DefineSequence(ParseState *pstate, CreateSeqStmt *stmt);
extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
extern void DeleteSequenceTuple(Oid relid);
extern void ResetSequence(Oid seq_relid);
+extern void ResetSequence2(Oid seq_relid, int64 last_value, int64 log_cnt, bool is_called);
extern void ResetSequenceCaches(void);
extern void seq_redo(XLogReaderState *rptr);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index def9651b34..aa602deae5 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -3643,8 +3643,10 @@ typedef struct AlterPublicationStmt
/* parameters used for ALTER PUBLICATION ... ADD/DROP TABLE */
List *tables; /* List of tables to add/drop */
+ List *sequences; /* List of sequences to add/drop */
bool for_all_tables; /* Special publication for all tables in db */
- DefElemAction tableAction; /* What action to perform with the tables */
+ bool for_all_sequences; /* Special publication for all sequences in db */
+ DefElemAction action; /* What action to perform with the tables/sequences */
} AlterPublicationStmt;
typedef struct CreateSubscriptionStmt
diff --git a/src/include/replication/logicalproto.h b/src/include/replication/logicalproto.h
index 55b90c03ea..f3a7d18fca 100644
--- a/src/include/replication/logicalproto.h
+++ b/src/include/replication/logicalproto.h
@@ -55,6 +55,7 @@ typedef enum LogicalRepMsgType
LOGICAL_REP_MSG_RELATION = 'R',
LOGICAL_REP_MSG_TYPE = 'Y',
LOGICAL_REP_MSG_MESSAGE = 'M',
+ LOGICAL_REP_MSG_SEQUENCE = 'X', /* FIXME change */
LOGICAL_REP_MSG_STREAM_START = 'S',
LOGICAL_REP_MSG_STREAM_END = 'E',
LOGICAL_REP_MSG_STREAM_COMMIT = 'c',
@@ -107,6 +108,19 @@ typedef struct LogicalRepTyp
char *typname; /* name of the remote type */
} LogicalRepTyp;
+/* Sequence info */
+typedef struct LogicalRepSequence
+{
+ Oid remoteid; /* unique id of the remote sequence */
+ char *nspname; /* schema name of remote sequence */
+ char *seqname; /* name of the remote sequence */
+ bool transactional;
+ bool created;
+ int64 last_value;
+ int64 log_cnt; /* XXX probably not needed? */
+ bool is_called; /* XXX probably not needed? */
+} LogicalRepSequence;
+
/* Transaction info */
typedef struct LogicalRepBeginData
{
@@ -154,6 +168,12 @@ extern List *logicalrep_read_truncate(StringInfo in,
bool *cascade, bool *restart_seqs);
extern void logicalrep_write_message(StringInfo out, TransactionId xid, XLogRecPtr lsn,
bool transactional, const char *prefix, Size sz, const char *message);
+extern void logicalrep_write_sequence(StringInfo out, Relation rel,
+ TransactionId xid, XLogRecPtr lsn,
+ bool transactional, bool created,
+ int64 last_value, int64 log_cnt,
+ bool is_called);
+extern void logicalrep_read_sequence(StringInfo in, LogicalRepSequence *seqdata);
extern void logicalrep_write_rel(StringInfo out, TransactionId xid,
Relation rel);
extern LogicalRepRelation *logicalrep_read_rel(StringInfo in);
diff --git a/src/include/replication/output_plugin.h b/src/include/replication/output_plugin.h
index 810495ed0e..e3e1b03f32 100644
--- a/src/include/replication/output_plugin.h
+++ b/src/include/replication/output_plugin.h
@@ -88,6 +88,19 @@ typedef void (*LogicalDecodeMessageCB) (struct LogicalDecodingContext *ctx,
Size message_size,
const char *message);
+/*
+ * Called for logical decoding of sequence changes.
+ */
+typedef void (*LogicalDecodeSequenceCB) (struct LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional,
+ bool created,
+ int64 last_value,
+ int64 log_cnt,
+ bool is_called);
+
/*
* Filter changes by origin.
*/
@@ -219,6 +232,7 @@ typedef struct OutputPluginCallbacks
LogicalDecodeTruncateCB truncate_cb;
LogicalDecodeCommitCB commit_cb;
LogicalDecodeMessageCB message_cb;
+ LogicalDecodeSequenceCB sequence_cb;
LogicalDecodeFilterByOriginCB filter_by_origin_cb;
LogicalDecodeShutdownCB shutdown_cb;
diff --git a/src/include/replication/pgoutput.h b/src/include/replication/pgoutput.h
index 51e7c0348d..a5aec2928a 100644
--- a/src/include/replication/pgoutput.h
+++ b/src/include/replication/pgoutput.h
@@ -27,6 +27,7 @@ typedef struct PGOutputData
bool binary;
bool streaming;
bool messages;
+ bool sequences;
} PGOutputData;
#endif /* PGOUTPUT_H */
diff --git a/src/include/replication/reorderbuffer.h b/src/include/replication/reorderbuffer.h
index ba257d81b5..03fc68a25f 100644
--- a/src/include/replication/reorderbuffer.h
+++ b/src/include/replication/reorderbuffer.h
@@ -64,7 +64,8 @@ enum ReorderBufferChangeType
REORDER_BUFFER_CHANGE_INTERNAL_SPEC_INSERT,
REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM,
REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT,
- REORDER_BUFFER_CHANGE_TRUNCATE
+ REORDER_BUFFER_CHANGE_TRUNCATE,
+ REORDER_BUFFER_CHANGE_SEQUENCE
};
/* forward declaration */
@@ -158,6 +159,14 @@ typedef struct ReorderBufferChange
uint32 ninvalidations; /* Number of messages */
SharedInvalidationMessage *invalidations; /* invalidation message */
} inval;
+
+ /* Context data for Sequence changes */
+ struct
+ {
+ RelFileNode relnode;
+ bool created;
+ ReorderBufferTupleBuf *tuple;
+ } sequence;
} data;
/*
@@ -426,6 +435,15 @@ typedef void (*ReorderBufferMessageCB) (ReorderBuffer *rb,
const char *prefix, Size sz,
const char *message);
+/* sequence callback signature */
+typedef void (*ReorderBufferSequenceCB) (ReorderBuffer *rb,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional, bool created,
+ int64 last_value, int64 log_cnt,
+ bool is_called);
+
/* begin prepare callback signature */
typedef void (*ReorderBufferBeginPrepareCB) (ReorderBuffer *rb,
ReorderBufferTXN *txn);
@@ -507,6 +525,12 @@ struct ReorderBuffer
*/
HTAB *by_txn;
+ /*
+ * relfilenode => XID lookup table for sequences created in a transaction
+ * (also includes altered sequences, which assigns new relfilenode)
+ */
+ HTAB *sequences;
+
/*
* Transactions that could be a toplevel xact, ordered by LSN of the first
* record bearing that xid.
@@ -537,6 +561,7 @@ struct ReorderBuffer
ReorderBufferApplyTruncateCB apply_truncate;
ReorderBufferCommitCB commit;
ReorderBufferMessageCB message;
+ ReorderBufferSequenceCB sequence;
/*
* Callbacks to be called when streaming a transaction at prepare time.
@@ -631,6 +656,10 @@ void ReorderBufferQueueChange(ReorderBuffer *, TransactionId,
void ReorderBufferQueueMessage(ReorderBuffer *, TransactionId, Snapshot snapshot, XLogRecPtr lsn,
bool transactional, const char *prefix,
Size message_size, const char *message);
+void ReorderBufferQueueSequence(ReorderBuffer *rb, TransactionId xid,
+ Snapshot snapshot, XLogRecPtr lsn, RepOriginId origin_id,
+ RelFileNode rnode, bool transactional, bool created,
+ ReorderBufferTupleBuf *tuplebuf);
void ReorderBufferCommit(ReorderBuffer *, TransactionId,
XLogRecPtr commit_lsn, XLogRecPtr end_lsn,
TimestampTz commit_time, RepOriginId origin_id, XLogRecPtr origin_lsn);
@@ -678,4 +707,7 @@ void ReorderBufferSetRestartPoint(ReorderBuffer *, XLogRecPtr ptr);
void StartupReorderBuffer(void);
+bool ReorderBufferSequenceIsTransactional(ReorderBuffer *rb,
+ RelFileNode rnode, bool created);
+
#endif
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 63d6ab7a4e..2cbad49f2e 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -158,8 +158,8 @@ DROP TABLE testpub_parted1;
DROP PUBLICATION testpub_forparted, testpub_forparted1;
-- fail - view
CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_view;
-ERROR: "testpub_view" is not a table
-DETAIL: Only tables can be added to publications.
+ERROR: "testpub_view" is not a table or sequence
+DETAIL: Only tables and sequences can be added to publications.
SET client_min_messages = 'ERROR';
CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1, pub_test.testpub_nopk;
RESET client_min_messages;
@@ -180,8 +180,8 @@ Tables:
-- fail - view
ALTER PUBLICATION testpub_default ADD TABLE testpub_view;
-ERROR: "testpub_view" is not a table
-DETAIL: Only tables can be added to publications.
+ERROR: "testpub_view" is not a table or sequence
+DETAIL: Only tables and sequences can be added to publications.
ALTER PUBLICATION testpub_default ADD TABLE testpub_tbl1;
ALTER PUBLICATION testpub_default SET TABLE testpub_tbl1;
ALTER PUBLICATION testpub_default ADD TABLE pub_test.testpub_nopk;
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index e5ab11275d..393be871be 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1451,6 +1451,14 @@ pg_prepared_xacts| SELECT p.transaction,
FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+ n.nspname AS schemaname,
+ c.relname AS sequencename
+ FROM pg_publication p,
+ LATERAL pg_get_publication_sequences((p.pubname)::text) gpt(relid),
+ (pg_class c
+ JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+ WHERE (c.oid = gpt.relid);
pg_publication_tables| SELECT p.pubname,
n.nspname AS schemaname,
c.relname AS tablename
--
2.31.1
On Wed, Jun 23, 2021 at 7:55 PM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:
On 6/23/21 4:14 PM, Tomas Vondra wrote:
A rebased patch, addressing a minor bitrot due to 4daa140a2f5.
Meh, forgot to attach the patch as usual, of course ...
The patch does not apply on HEAD anymore; could you please rebase and post an
updated patch? I'm changing the status to "Waiting for Author".
Regards,
Vignesh
On 08.06.21 00:28, Tomas Vondra wrote:
new "sequence" publication action
---------------------------------
The publications now have a new "sequence" publication action, which is
enabled by default. This determines whether the publication decodes
sequences or not.
FOR ALL SEQUENCES
-----------------
It should be possible to create FOR ALL SEQUENCES publications, just
like we have FOR ALL TABLES. But this produces shift/reduce conflicts
in the grammar, and I didn't bother dealing with that. So for now it's
required to do ALTER PUBLICATION ... [ADD | DROP] SEQUENCE ...
I have been thinking about these DDL-level issues a bit. The most
common use case will be to have a bunch of tables with implicit
sequences, and you just want to replicate them from here to there
without too much effort. So ideally an implicit sequence should be
replicated by default if the table is part of a publication (unless
sequences are turned off by the publication option).
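(For illustration - with the patch as posted, the "publication option" here
is the new "sequence" publish action, so a hypothetical sketch with made-up
names would disable it by listing the other actions explicitly and omitting
"sequence":
CREATE PUBLICATION pub_tables_only FOR TABLE my_table
    WITH (publish = 'insert, update, delete, truncate');
)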
We already have support for things like that in
GetPublicationRelations(), where a partitioned table is expanded to
include the actual partitions. I think that logic could be reused. So
in general I would have GetPublicationRelations() include sequences and
don't have GetPublicationSequenceRelations() at all. Then sequences
could also be sent by pg_publication_tables(), maybe add a relkind
column. And then you also don't need so much duplicate DDL code, if you
just consider everything as a relation. For example, there doesn't seem
to be an actual need to have fetch_sequence_list() and subsequent
processing on the subscriber side. It does the same thing as
fetch_table_list(), so it might as well just all be one thing.
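(For reference, the patch as posted exposes the separate view being discussed,
so listing a publication's sequences currently looks roughly like this, with a
made-up publication name:
SELECT * FROM pg_publication_sequences WHERE pubname = 'my_publication';
)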
We do, however, probably need some checking that we don't replicate
tables to sequences or vice versa.
We probably also don't need a separate FOR ALL SEQUENCES option. What
users really want is a "for everything" option. We could think about
renaming or alternative syntax, but in principle I think FOR ALL TABLES
should include sequences by default.
Tests under src/test/subscription/ are needed.
I'm not sure why test_decoding needs a skip-sequences option. The
source code says it's for backward compatibility, but I don't see why we
need that.
On 7/20/21 5:30 PM, Peter Eisentraut wrote:
On 08.06.21 00:28, Tomas Vondra wrote:
new "sequence" publication action
---------------------------------
The publications now have a new "sequence" publication action, which is
enabled by default. This determines whether the publication decodes
sequences or not.
FOR ALL SEQUENCES
-----------------
It should be possible to create FOR ALL SEQUENCES publications, just
like we have FOR ALL TABLES. But this produces shift/reduce conflicts
in the grammar, and I didn't bother dealing with that. So for now it's
required to do ALTER PUBLICATION ... [ADD | DROP] SEQUENCE ...
I have been thinking about these DDL-level issues a bit. The most
common use case will be to have a bunch of tables with implicit
sequences, and you just want to replicate them from here to there
without too much effort. So ideally an implicit sequence should be
replicated by default if the table is part of a publication (unless
sequences are turned off by the publication option).
Agreed, that seems like a reasonable approach.
We already have support for things like that in
GetPublicationRelations(), where a partitioned table is expanded to
include the actual partitions. I think that logic could be reused. So
in general I would have GetPublicationRelations() include sequences and
don't have GetPublicationSequenceRelations() at all. Then sequences
could also be sent by pg_publication_tables(), maybe add a relkind
column. And then you also don't need so much duplicate DDL code, if you
just consider everything as a relation. For example, there doesn't seem
to be an actual need to have fetch_sequence_list() and subsequent
processing on the subscriber side. It does the same thing as
fetch_table_list(), so it might as well just all be one thing.
Not sure. I agree with replicating implicit sequences by default,
without having to add them to the publication. But I think we should
allow adding other sequences too, and I think some of this code and
differentiation from tables will be needed.
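(As a hypothetical sketch of that explicit case, using the ADD SEQUENCE
grammar added by the patch, names made up:
CREATE SEQUENCE standalone_seq;
ALTER PUBLICATION my_publication ADD SEQUENCE standalone_seq;
)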
FWIW I'm not claiming there are no duplicate parts - I've mostly
copy-pasted the table-handling code for sequences, and I'll look into
reusing some of it.
We do, however, probably need some checking that we don't replicate
tables to sequences or vice versa.
True. I haven't tried doing such silly things yet.
We probably also don't need a separate FOR ALL SEQUENCES option. What
users really want is a "for everything" option. We could think about
renaming or alternative syntax, but in principle I think FOR ALL TABLES
should include sequences by default.
Tests under src/test/subscription/ are needed.
Yeah, true. At the moment there are just tests in test_decoding, mostly
because the previous patch versions did not add support for sequences to
built-in replication. Will fix.
I'm not sure why test_decoding needs a skip-sequences option. The
source code says it's for backward compatibility, but I don't see why we
need that.
Hmmm, I'm a bit baffled by skip-sequences too. I think Cary added it to
limit changes in the existing test_decoding tests, while the misleading
comment about backwards compatibility comes from me.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Here's a rebased version of the patch, no other changes.
I think the crucial aspect of this that needs discussion/feedback the
most is the transactional vs. non-transactional behavior. All the other
questions are less important / cosmetic.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Attachments:
0001-Logical-decoding-replication-of-sequences-20210720.patch (text/x-patch; charset=UTF-8)
From f61f2cabf6221451af8f427a5d4b4f7f4583e618 Mon Sep 17 00:00:00 2001
From: Tomas Vondra <tomas@2ndquadrant.com>
Date: Tue, 20 Jul 2021 23:20:10 +0200
Subject: [PATCH] Logical decoding / replication of sequences
---
contrib/test_decoding/Makefile | 3 +-
contrib/test_decoding/expected/sequence.out | 327 +++++++++++++++
contrib/test_decoding/sql/sequence.sql | 119 ++++++
contrib/test_decoding/test_decoding.c | 40 ++
doc/src/sgml/catalogs.sgml | 71 ++++
doc/src/sgml/ref/alter_publication.sgml | 28 +-
doc/src/sgml/ref/alter_subscription.sgml | 6 +-
src/backend/catalog/pg_publication.c | 160 +++++++-
src/backend/catalog/system_views.sql | 10 +
src/backend/commands/publicationcmds.c | 232 ++++++++++-
src/backend/commands/sequence.c | 132 +++++-
src/backend/commands/subscriptioncmds.c | 272 +++++++++++++
src/backend/executor/execReplication.c | 4 +-
src/backend/nodes/copyfuncs.c | 4 +-
src/backend/nodes/equalfuncs.c | 4 +-
src/backend/parser/gram.y | 44 +-
src/backend/replication/logical/decode.c | 131 +++++-
src/backend/replication/logical/logical.c | 42 ++
src/backend/replication/logical/proto.c | 52 +++
.../replication/logical/reorderbuffer.c | 384 ++++++++++++++++++
src/backend/replication/logical/tablesync.c | 118 +++++-
src/backend/replication/logical/worker.c | 56 +++
src/backend/replication/pgoutput/pgoutput.c | 80 +++-
src/backend/utils/cache/relcache.c | 4 +-
src/bin/psql/tab-complete.c | 2 +-
src/include/catalog/pg_proc.dat | 5 +
src/include/catalog/pg_publication.h | 14 +
src/include/commands/sequence.h | 2 +
src/include/nodes/parsenodes.h | 4 +-
src/include/replication/logicalproto.h | 20 +
src/include/replication/output_plugin.h | 14 +
src/include/replication/pgoutput.h | 1 +
src/include/replication/reorderbuffer.h | 34 +-
src/test/regress/expected/publication.out | 8 +-
src/test/regress/expected/rules.out | 8 +
35 files changed, 2389 insertions(+), 46 deletions(-)
create mode 100644 contrib/test_decoding/expected/sequence.out
create mode 100644 contrib/test_decoding/sql/sequence.sql
diff --git a/contrib/test_decoding/Makefile b/contrib/test_decoding/Makefile
index 9a31e0b879..56ddc3abae 100644
--- a/contrib/test_decoding/Makefile
+++ b/contrib/test_decoding/Makefile
@@ -5,7 +5,8 @@ PGFILEDESC = "test_decoding - example of a logical decoding output plugin"
REGRESS = ddl xact rewrite toast permissions decoding_in_xact \
decoding_into_rel binary prepared replorigin time messages \
- spill slot truncate stream stats twophase twophase_stream
+ spill slot truncate stream stats twophase twophase_stream \
+ sequence
ISOLATION = mxact delayed_startup ondisk_startup concurrent_ddl_dml \
oldest_xmin snapshot_transfer subxact_without_top concurrent_stream \
twophase_snapshot
diff --git a/contrib/test_decoding/expected/sequence.out b/contrib/test_decoding/expected/sequence.out
new file mode 100644
index 0000000000..07f3e3e92c
--- /dev/null
+++ b/contrib/test_decoding/expected/sequence.out
@@ -0,0 +1,327 @@
+-- predictability
+SET synchronous_commit = on;
+SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'test_decoding');
+ ?column?
+----------
+ init
+(1 row)
+
+CREATE SEQUENCE test_sequence;
+-- test the sequence changes by several nextval() calls
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 1
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 2
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 4
+(1 row)
+
+-- test the sequence changes by several ALTER commands
+ALTER SEQUENCE test_sequence INCREMENT BY 10;
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 14
+(1 row)
+
+ALTER SEQUENCE test_sequence START WITH 3000;
+ALTER SEQUENCE test_sequence MAXVALUE 10000;
+ALTER SEQUENCE test_sequence RESTART WITH 4000;
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 4000
+(1 row)
+
+-- test the sequence changes by several setval() calls
+SELECT setval('test_sequence', 3500);
+ setval
+--------
+ 3500
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3510
+(1 row)
+
+SELECT setval('test_sequence', 3500, true);
+ setval
+--------
+ 3500
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3510
+(1 row)
+
+SELECT setval('test_sequence', 3500, false);
+ setval
+--------
+ 3500
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3500
+(1 row)
+
+-- show results and drop sequence
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+ data
+------------------------------------------------------------------------------------------------------
+ BEGIN
+ sequence: public.test_sequence transactional: 1 created: 1 last_value: 1, log_cnt: 0 is_called: 0
+ COMMIT
+ sequence: public.test_sequence transactional: 0 created: 0 last_value: 33, log_cnt: 0 is_called: 1
+ BEGIN
+ sequence: public.test_sequence transactional: 1 created: 1 last_value: 4, log_cnt: 0 is_called: 1
+ COMMIT
+ sequence: public.test_sequence transactional: 0 created: 0 last_value: 334, log_cnt: 0 is_called: 1
+ BEGIN
+ sequence: public.test_sequence transactional: 1 created: 1 last_value: 14, log_cnt: 32 is_called: 1
+ COMMIT
+ BEGIN
+ sequence: public.test_sequence transactional: 1 created: 1 last_value: 14, log_cnt: 0 is_called: 1
+ COMMIT
+ BEGIN
+ sequence: public.test_sequence transactional: 1 created: 1 last_value: 4000, log_cnt: 0 is_called: 0
+ COMMIT
+ sequence: public.test_sequence transactional: 0 created: 0 last_value: 4320, log_cnt: 0 is_called: 1
+ sequence: public.test_sequence transactional: 0 created: 0 last_value: 3500, log_cnt: 0 is_called: 1
+ sequence: public.test_sequence transactional: 0 created: 0 last_value: 3830, log_cnt: 0 is_called: 1
+ sequence: public.test_sequence transactional: 0 created: 0 last_value: 3500, log_cnt: 0 is_called: 1
+ sequence: public.test_sequence transactional: 0 created: 0 last_value: 3830, log_cnt: 0 is_called: 1
+ sequence: public.test_sequence transactional: 0 created: 0 last_value: 3500, log_cnt: 0 is_called: 0
+ sequence: public.test_sequence transactional: 0 created: 0 last_value: 3820, log_cnt: 0 is_called: 1
+(24 rows)
+
+DROP SEQUENCE test_sequence;
+-- rollback on sequence creation and update
+BEGIN;
+CREATE SEQUENCE test_sequence;
+CREATE TABLE test_table (a INT);
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 1
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 2
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3
+(1 row)
+
+SELECT setval('test_sequence', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3001
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3002
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3003
+(1 row)
+
+ALTER SEQUENCE test_sequence RESTART WITH 6000;
+INSERT INTO test_table VALUES( (SELECT nextval('test_sequence')) );
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 6001
+(1 row)
+
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '0');
+ data
+------
+(0 rows)
+
+-- rollback on table creation with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '0');
+ data
+------
+(0 rows)
+
+-- rollback on table with serial column
+CREATE TABLE test_table (a SERIAL, b INT);
+BEGIN;
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+-- check table and sequence values after rollback
+SELECT * from test_table_a_seq;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+ 3003 | 30 | t
+(1 row)
+
+SELECT nextval('test_table_a_seq');
+ nextval
+---------
+ 3004
+(1 row)
+
+DROP TABLE test_table;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+ data
+---------------------------------------------------------------------------------------------------------
+ BEGIN
+ sequence: public.test_table_a_seq transactional: 1 created: 1 last_value: 1, log_cnt: 0 is_called: 0
+ COMMIT
+ sequence: public.test_table_a_seq transactional: 0 created: 0 last_value: 33, log_cnt: 0 is_called: 1
+ sequence: public.test_table_a_seq transactional: 0 created: 0 last_value: 3000, log_cnt: 0 is_called: 1
+ sequence: public.test_table_a_seq transactional: 0 created: 0 last_value: 3033, log_cnt: 0 is_called: 1
+ BEGIN
+ COMMIT
+(8 rows)
+
+-- savepoint test on table with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+SAVEPOINT a;
+INSERT INTO test_table (b) VALUES (300);
+ROLLBACK TO SAVEPOINT a;
+DROP TABLE test_table;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+ data
+-------------------------------------------------------------------------------------------------------
+ BEGIN
+ sequence: public.test_table_a_seq transactional: 1 created: 1 last_value: 1, log_cnt: 0 is_called: 0
+ sequence: public.test_table_a_seq transactional: 1 created: 0 last_value: 33, log_cnt: 0 is_called: 1
+ table public.test_table: INSERT: a[integer]:1 b[integer]:100
+ table public.test_table: INSERT: a[integer]:2 b[integer]:200
+ COMMIT
+(6 rows)
+
+-- savepoint test on table with serial column
+BEGIN;
+CREATE SEQUENCE test_sequence;
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 1
+(1 row)
+
+SELECT setval('test_sequence', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3001
+(1 row)
+
+SAVEPOINT a;
+ALTER SEQUENCE test_sequence START WITH 7000;
+SELECT setval('test_sequence', 5000);
+ setval
+--------
+ 5000
+(1 row)
+
+ROLLBACK TO SAVEPOINT a;
+SELECT * FROM test_sequence;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+ 3001 | 32 | t
+(1 row)
+
+DROP SEQUENCE test_sequence;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+ data
+------------------------------------------------------------------------------------------------------
+ BEGIN
+ sequence: public.test_sequence transactional: 1 created: 1 last_value: 1, log_cnt: 0 is_called: 0
+ sequence: public.test_sequence transactional: 1 created: 0 last_value: 33, log_cnt: 0 is_called: 1
+ sequence: public.test_sequence transactional: 1 created: 0 last_value: 3000, log_cnt: 0 is_called: 1
+ sequence: public.test_sequence transactional: 1 created: 0 last_value: 3033, log_cnt: 0 is_called: 1
+ COMMIT
+(6 rows)
+
+SELECT pg_drop_replication_slot('regression_slot');
+ pg_drop_replication_slot
+--------------------------
+
+(1 row)
+
diff --git a/contrib/test_decoding/sql/sequence.sql b/contrib/test_decoding/sql/sequence.sql
new file mode 100644
index 0000000000..d8a34738f3
--- /dev/null
+++ b/contrib/test_decoding/sql/sequence.sql
@@ -0,0 +1,119 @@
+-- predictability
+SET synchronous_commit = on;
+SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'test_decoding');
+
+CREATE SEQUENCE test_sequence;
+
+-- test the sequence changes by several nextval() calls
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+
+-- test the sequence changes by several ALTER commands
+ALTER SEQUENCE test_sequence INCREMENT BY 10;
+SELECT nextval('test_sequence');
+
+ALTER SEQUENCE test_sequence START WITH 3000;
+ALTER SEQUENCE test_sequence MAXVALUE 10000;
+ALTER SEQUENCE test_sequence RESTART WITH 4000;
+SELECT nextval('test_sequence');
+
+-- test the sequence changes by several setval() calls
+SELECT setval('test_sequence', 3500);
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3500, true);
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3500, false);
+SELECT nextval('test_sequence');
+
+-- show results and drop sequence
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+DROP SEQUENCE test_sequence;
+
+-- rollback on sequence creation and update
+BEGIN;
+CREATE SEQUENCE test_sequence;
+CREATE TABLE test_table (a INT);
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3000);
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+ALTER SEQUENCE test_sequence RESTART WITH 6000;
+INSERT INTO test_table VALUES( (SELECT nextval('test_sequence')) );
+SELECT nextval('test_sequence');
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '0');
+
+-- rollback on table creation with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '0');
+
+-- rollback on table with serial column
+CREATE TABLE test_table (a SERIAL, b INT);
+
+BEGIN;
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+
+-- check table and sequence values after rollback
+SELECT * from test_table_a_seq;
+SELECT nextval('test_table_a_seq');
+
+DROP TABLE test_table;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+
+-- savepoint test on table with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+SAVEPOINT a;
+INSERT INTO test_table (b) VALUES (300);
+ROLLBACK TO SAVEPOINT a;
+DROP TABLE test_table;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+
+-- savepoint test on table with serial column
+BEGIN;
+CREATE SEQUENCE test_sequence;
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3000);
+SELECT nextval('test_sequence');
+SAVEPOINT a;
+ALTER SEQUENCE test_sequence START WITH 7000;
+SELECT setval('test_sequence', 5000);
+ROLLBACK TO SAVEPOINT a;
+SELECT * FROM test_sequence;
+DROP SEQUENCE test_sequence;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+
+SELECT pg_drop_replication_slot('regression_slot');
diff --git a/contrib/test_decoding/test_decoding.c b/contrib/test_decoding/test_decoding.c
index e5cd84e85e..54d734cb85 100644
--- a/contrib/test_decoding/test_decoding.c
+++ b/contrib/test_decoding/test_decoding.c
@@ -35,6 +35,7 @@ typedef struct
bool include_timestamp;
bool skip_empty_xacts;
bool only_local;
+ bool skip_sequences;
} TestDecodingData;
/*
@@ -76,6 +77,10 @@ static void pg_decode_message(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn, XLogRecPtr message_lsn,
bool transactional, const char *prefix,
Size sz, const char *message);
+static void pg_decode_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation rel, bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called);
static bool pg_decode_filter_prepare(LogicalDecodingContext *ctx,
TransactionId xid,
const char *gid);
@@ -141,6 +146,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->filter_by_origin_cb = pg_decode_filter;
cb->shutdown_cb = pg_decode_shutdown;
cb->message_cb = pg_decode_message;
+ cb->sequence_cb = pg_decode_sequence;
cb->filter_prepare_cb = pg_decode_filter_prepare;
cb->begin_prepare_cb = pg_decode_begin_prepare_txn;
cb->prepare_cb = pg_decode_prepare_txn;
@@ -175,6 +181,9 @@ pg_decode_startup(LogicalDecodingContext *ctx, OutputPluginOptions *opt,
data->skip_empty_xacts = false;
data->only_local = false;
+ /* skip sequences by default for backwards compatibility */
+ data->skip_sequences = true;
+
ctx->output_plugin_private = data;
opt->output_type = OUTPUT_PLUGIN_TEXTUAL_OUTPUT;
@@ -265,6 +274,17 @@ pg_decode_startup(LogicalDecodingContext *ctx, OutputPluginOptions *opt,
errmsg("could not parse value \"%s\" for parameter \"%s\"",
strVal(elem->arg), elem->defname)));
}
+ else if (strcmp(elem->defname, "skip-sequences") == 0)
+ {
+
+ if (elem->arg == NULL)
+ continue; /* true by default */
+ else if (!parse_bool(strVal(elem->arg), &data->skip_sequences))
+ ereport(ERROR,
+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ errmsg("could not parse value \"%s\" for parameter \"%s\"",
+ strVal(elem->arg), elem->defname)));
+ }
else
{
ereport(ERROR,
@@ -744,6 +764,26 @@ pg_decode_message(LogicalDecodingContext *ctx,
OutputPluginWrite(ctx, true);
}
+static void
+pg_decode_sequence(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ TestDecodingData *data = ctx->output_plugin_private;
+ char *nspname = get_namespace_name(RelationGetNamespace(rel));
+
+ /* return if requested to skip_sequences */
+ if (data->skip_sequences)
+ return;
+
+ OutputPluginPrepareWrite(ctx, true);
+ appendStringInfo(ctx->out, "sequence: %s.%s transactional: %d created: %d last_value: " INT64_FORMAT ", log_cnt: " INT64_FORMAT " is_called: %d",
+ nspname, RelationGetRelationName(rel),
+ transactional, created, last_value, log_cnt, is_called);
+ OutputPluginWrite(ctx, true);
+}
+
static void
pg_decode_stream_start(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn)
diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 2b2c70a26e..a342a54629 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -9430,6 +9430,11 @@ SCRAM-SHA-256$<replaceable><iteration count></replaceable>:<replaceable>&l
<entry>prepared transactions</entry>
</row>
+ <row>
+ <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+ <entry>publications and their associated sequences</entry>
+ </row>
+
<row>
<entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
<entry>publications and their associated tables</entry>
@@ -11264,6 +11269,72 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
</sect1>
+ <sect1 id="view-pg-publication-sequences">
+ <title><structname>pg_publication_sequences</structname></title>
+
+ <indexterm zone="view-pg-publication-sequences">
+ <primary>pg_publication_sequences</primary>
+ </indexterm>
+
+ <para>
+ The view <structname>pg_publication_sequences</structname> provides
+ information about the mapping between publications and the sequences they
+ contain. Unlike the underlying catalog
+ <link linkend="catalog-pg-publication-rel"><structname>pg_publication_rel</structname></link>,
+ this view expands
+ publications defined as <literal>FOR ALL SEQUENCES</literal>, so for such
+ publications there will be a row for each eligible sequence.
+ </para>
+
+ <table>
+ <title><structname>pg_publication_sequences</structname> Columns</title>
+ <tgroup cols="1">
+ <thead>
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ Column Type
+ </para>
+ <para>
+ Description
+ </para></entry>
+ </row>
+ </thead>
+
+ <tbody>
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>pubname</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+ </para>
+ <para>
+ Name of publication
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>schemaname</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+ </para>
+ <para>
+ Name of schema containing sequence
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>sequencename</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+ </para>
+ <para>
+ Name of sequence
+ </para></entry>
+ </row>
+ </tbody>
+ </tgroup>
+ </table>
+ </sect1>
+
<sect1 id="view-pg-publication-tables">
<title><structname>pg_publication_tables</structname></title>
diff --git a/doc/src/sgml/ref/alter_publication.sgml b/doc/src/sgml/ref/alter_publication.sgml
index faa114b2c6..c68a1573de 100644
--- a/doc/src/sgml/ref/alter_publication.sgml
+++ b/doc/src/sgml/ref/alter_publication.sgml
@@ -24,6 +24,9 @@ PostgreSQL documentation
ALTER PUBLICATION <replaceable class="parameter">name</replaceable> ADD TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [, ...]
ALTER PUBLICATION <replaceable class="parameter">name</replaceable> SET TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [, ...]
ALTER PUBLICATION <replaceable class="parameter">name</replaceable> DROP TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [, ...]
+ALTER PUBLICATION <replaceable class="parameter">name</replaceable> ADD SEQUENCE <replaceable class="parameter">sequence_name</replaceable> [ * ] [, ...]
+ALTER PUBLICATION <replaceable class="parameter">name</replaceable> SET SEQUENCE <replaceable class="parameter">sequence_name</replaceable> [ * ] [, ...]
+ALTER PUBLICATION <replaceable class="parameter">name</replaceable> DROP SEQUENCE <replaceable class="parameter">sequence_name</replaceable> [ * ] [, ...]
ALTER PUBLICATION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
ALTER PUBLICATION <replaceable class="parameter">name</replaceable> OWNER TO { <replaceable>new_owner</replaceable> | CURRENT_ROLE | CURRENT_USER | SESSION_USER }
ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <replaceable>new_name</replaceable>
@@ -50,7 +53,18 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
</para>
<para>
- The fourth variant of this command listed in the synopsis can change
+ The next three variants change which sequences are part of the publication.
+ The <literal>SET SEQUENCE</literal> clause will replace the list of sequences
+ in the publication with the specified one. The <literal>ADD SEQUENCE</literal>
+ and <literal>DROP SEQUENCE</literal> clauses will add and remove one or more
+ sequences from the publication. Note that adding sequences to a publication
+ that is already subscribed to will require a <literal>ALTER SUBSCRIPTION
+ ... REFRESH PUBLICATION</literal> action on the subscribing side in order
+ to become effective.
+ </para>
+
+ <para>
+ The seventh variant of this command listed in the synopsis can change
all of the publication properties specified in
<xref linkend="sql-createpublication"/>. Properties not mentioned in the
command retain their previous settings.
@@ -62,7 +76,8 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
<para>
You must own the publication to use <command>ALTER PUBLICATION</command>.
- Adding a table to a publication additionally requires owning that table.
+ Adding a table to a publication additionally requires owning that table,
+ and the same requirement applies to sequences.
To alter the owner, you must also be a direct or indirect member of the new
owning role. The new owner must have <literal>CREATE</literal> privilege on
the database. Also, the new owner of a <literal>FOR ALL TABLES</literal>
@@ -97,6 +112,15 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><replaceable class="parameter">sequence_name</replaceable></term>
+ <listitem>
+ <para>
+ Name of an existing sequence.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><literal>SET ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
<listitem>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index a6f994450d..ff1dcd3bfd 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -144,8 +144,8 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
<term><literal>REFRESH PUBLICATION</literal></term>
<listitem>
<para>
- Fetch missing table information from publisher. This will start
- replication of tables that were added to the subscribed-to publications
+ Fetch missing table and sequence information from publisher. This will start
+ replication of tables and sequences that were added to the subscribed-to publications
since the last invocation of <command>REFRESH PUBLICATION</command> or
since <command>CREATE SUBSCRIPTION</command>.
</para>
@@ -162,7 +162,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
Specifies whether the existing data in the publications that are
being subscribed to should be copied once the replication starts.
The default is <literal>true</literal>. (Previously subscribed
- tables are not copied.)
+ tables and sequences are not copied.)
</para>
</listitem>
</varlistentry>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 36bfff9706..79f45d892e 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -51,26 +51,27 @@ check_publication_add_relation(Relation targetrel)
{
/* Must be a regular or partitioned table */
if (RelationGetForm(targetrel)->relkind != RELKIND_RELATION &&
- RelationGetForm(targetrel)->relkind != RELKIND_PARTITIONED_TABLE)
+ RelationGetForm(targetrel)->relkind != RELKIND_PARTITIONED_TABLE &&
+ RelationGetForm(targetrel)->relkind != RELKIND_SEQUENCE)
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
- errmsg("\"%s\" is not a table",
+ errmsg("\"%s\" is not a table or sequence",
RelationGetRelationName(targetrel)),
- errdetail("Only tables can be added to publications.")));
+ errdetail("Only tables and sequences can be added to publications.")));
- /* Can't be system table */
+ /* Can't be system table/sequence */
if (IsCatalogRelation(targetrel))
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
- errmsg("\"%s\" is a system table",
+ errmsg("\"%s\" is a system table or sequence",
RelationGetRelationName(targetrel)),
- errdetail("System tables cannot be added to publications.")));
+ errdetail("System tables / sequences cannot be added to publications.")));
/* UNLOGGED and TEMP relations cannot be part of publication. */
if (!RelationIsPermanent(targetrel))
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
- errmsg("table \"%s\" cannot be replicated",
+ errmsg("table or sequence \"%s\" cannot be replicated",
RelationGetRelationName(targetrel)),
errdetail("Temporary and unlogged relations cannot be replicated.")));
}
@@ -98,7 +99,8 @@ static bool
is_publishable_class(Oid relid, Form_pg_class reltuple)
{
return (reltuple->relkind == RELKIND_RELATION ||
- reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+ reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+ reltuple->relkind == RELKIND_SEQUENCE) &&
!IsCatalogRelationOid(relid) &&
reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
relid >= FirstNormalObjectId;
@@ -271,6 +273,10 @@ GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
pubrel = (Form_pg_publication_rel) GETSTRUCT(tup);
+ /* skip sequences here */
+ if (get_rel_relkind(pubrel->prrelid) == RELKIND_SEQUENCE)
+ continue;
+
if (get_rel_relkind(pubrel->prrelid) == RELKIND_PARTITIONED_TABLE &&
pub_partopt != PUBLICATION_PART_ROOT)
{
@@ -304,6 +310,49 @@ GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
return result;
}
+/*
+ * Gets list of relation oids for a publication (sequences only).
+ *
+ * This should only be used for normal publications, the FOR ALL TABLES
+ * should use GetAllSequencesPublicationRelations().
+ */
+List *
+GetPublicationSequenceRelations(Oid pubid)
+{
+ List *result;
+ Relation pubrelsrel;
+ ScanKeyData scankey;
+ SysScanDesc scan;
+ HeapTuple tup;
+
+ /* Find all sequences associated with the publication. */
+ pubrelsrel = table_open(PublicationRelRelationId, AccessShareLock);
+
+ ScanKeyInit(&scankey,
+ Anum_pg_publication_rel_prpubid,
+ BTEqualStrategyNumber, F_OIDEQ,
+ ObjectIdGetDatum(pubid));
+
+ scan = systable_beginscan(pubrelsrel, PublicationRelPrrelidPrpubidIndexId,
+ true, NULL, 1, &scankey);
+
+ result = NIL;
+ while (HeapTupleIsValid(tup = systable_getnext(scan)))
+ {
+ Form_pg_publication_rel pubrel;
+
+ pubrel = (Form_pg_publication_rel) GETSTRUCT(tup);
+
+ if (get_rel_relkind(pubrel->prrelid) == RELKIND_SEQUENCE)
+ result = lappend_oid(result, pubrel->prrelid);
+ }
+
+ systable_endscan(scan);
+ table_close(pubrelsrel, AccessShareLock);
+
+ return result;
+}
+
/*
* Gets list of publication oids for publications marked as FOR ALL TABLES.
*/
@@ -404,6 +453,46 @@ GetAllTablesPublicationRelations(bool pubviaroot)
return result;
}
+/*
+ * Gets list of all sequences published by FOR ALL SEQUENCES publication(s).
+ *
+ * Scans pg_class for sequences and returns the publishable ones, i.e.
+ * permanent user sequences (system and temporary sequences are filtered
+ * out by is_publishable_class).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+ Relation classRel;
+ ScanKeyData key[1];
+ TableScanDesc scan;
+ HeapTuple tuple;
+ List *result = NIL;
+
+ classRel = table_open(RelationRelationId, AccessShareLock);
+
+ ScanKeyInit(&key[0],
+ Anum_pg_class_relkind,
+ BTEqualStrategyNumber, F_CHAREQ,
+ CharGetDatum(RELKIND_SEQUENCE));
+
+ scan = table_beginscan_catalog(classRel, 1, key);
+
+ while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+ {
+ Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+ Oid relid = relForm->oid;
+
+ if (is_publishable_class(relid, relForm))
+ result = lappend_oid(result, relid);
+ }
+
+ table_endscan(scan);
+
+ table_close(classRel, AccessShareLock);
+ return result;
+}
+
/*
* Get publication using oid
*
@@ -426,10 +515,12 @@ GetPublication(Oid pubid)
pub->oid = pubid;
pub->name = pstrdup(NameStr(pubform->pubname));
pub->alltables = pubform->puballtables;
+ pub->allsequences = pubform->puballsequences;
pub->pubactions.pubinsert = pubform->pubinsert;
pub->pubactions.pubupdate = pubform->pubupdate;
pub->pubactions.pubdelete = pubform->pubdelete;
pub->pubactions.pubtruncate = pubform->pubtruncate;
+ pub->pubactions.pubsequence = pubform->pubsequence;
pub->pubviaroot = pubform->pubviaroot;
ReleaseSysCache(tup);
@@ -555,3 +646,56 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
SRF_RETURN_DONE(funcctx);
}
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+ FuncCallContext *funcctx;
+ char *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+ Publication *publication;
+ List *sequences;
+
+ /* stuff done only on the first call of the function */
+ if (SRF_IS_FIRSTCALL())
+ {
+ MemoryContext oldcontext;
+
+ /* create a function context for cross-call persistence */
+ funcctx = SRF_FIRSTCALL_INIT();
+
+ /* switch to memory context appropriate for multiple function calls */
+ oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+ publication = GetPublicationByName(pubname, false);
+
+ /*
+ * For FOR ALL SEQUENCES publications fetch all publishable sequences;
+ * otherwise fetch just the sequences explicitly added to this
+ * publication.
+ */
+ if (publication->allsequences)
+ sequences = GetAllSequencesPublicationRelations();
+ else
+ sequences = GetPublicationSequenceRelations(publication->oid);
+
+ funcctx->user_fctx = (void *) sequences;
+
+ MemoryContextSwitchTo(oldcontext);
+ }
+
+ /* stuff done on every call of the function */
+ funcctx = SRF_PERCALL_SETUP();
+ sequences = (List *) funcctx->user_fctx;
+
+ if (funcctx->call_cntr < list_length(sequences))
+ {
+ Oid relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+ SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+ }
+
+ SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 55f6e3711d..9bec49bc3e 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -372,6 +372,16 @@ CREATE VIEW pg_publication_tables AS
pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
WHERE C.oid = GPT.relid;
+CREATE VIEW pg_publication_sequences AS
+ SELECT
+ P.pubname AS pubname,
+ N.nspname AS schemaname,
+ C.relname AS sequencename
+ FROM pg_publication P,
+ LATERAL pg_get_publication_sequences(P.pubname) GPT,
+ pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+ WHERE C.oid = GPT.relid;
+
CREATE VIEW pg_locks AS
SELECT * FROM pg_lock_status() AS L;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 8487eeb7e6..90ad83d4a1 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -16,6 +16,7 @@
#include "access/genam.h"
#include "access/htup_details.h"
+#include "access/relation.h"
#include "access/table.h"
#include "access/xact.h"
#include "catalog/catalog.h"
@@ -54,6 +55,12 @@ static void PublicationAddTables(Oid pubid, List *rels, bool if_not_exists,
AlterPublicationStmt *stmt);
static void PublicationDropTables(Oid pubid, List *rels, bool missing_ok);
+static List *OpenSequenceList(List *sequences);
+static void CloseSequenceList(List *rels);
+static void PublicationAddSequences(Oid pubid, List *rels, bool if_not_exists,
+ AlterPublicationStmt *stmt);
+static void PublicationDropSequences(Oid pubid, List *rels, bool missing_ok);
+
static void
parse_publication_options(ParseState *pstate,
List *options,
@@ -72,6 +79,7 @@ parse_publication_options(ParseState *pstate,
pubactions->pubupdate = true;
pubactions->pubdelete = true;
pubactions->pubtruncate = true;
+ pubactions->pubsequence = true;
*publish_via_partition_root = false;
/* Parse options */
@@ -96,6 +104,7 @@ parse_publication_options(ParseState *pstate,
pubactions->pubupdate = false;
pubactions->pubdelete = false;
pubactions->pubtruncate = false;
+ pubactions->pubsequence = false;
*publish_given = true;
publish = defGetString(defel);
@@ -118,6 +127,8 @@ parse_publication_options(ParseState *pstate,
pubactions->pubdelete = true;
else if (strcmp(publish_opt, "truncate") == 0)
pubactions->pubtruncate = true;
+ else if (strcmp(publish_opt, "sequence") == 0)
+ pubactions->pubsequence = true;
else
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
@@ -208,6 +219,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
BoolGetDatum(pubactions.pubdelete);
values[Anum_pg_publication_pubtruncate - 1] =
BoolGetDatum(pubactions.pubtruncate);
+ values[Anum_pg_publication_pubsequence - 1] =
+ BoolGetDatum(pubactions.pubsequence);
values[Anum_pg_publication_pubviaroot - 1] =
BoolGetDatum(publish_via_partition_root);
@@ -373,9 +386,9 @@ AlterPublicationTables(AlterPublicationStmt *stmt, Relation rel,
rels = OpenTableList(stmt->tables);
- if (stmt->tableAction == DEFELEM_ADD)
+ if (stmt->action == DEFELEM_ADD)
PublicationAddTables(pubid, rels, false, stmt);
- else if (stmt->tableAction == DEFELEM_DROP)
+ else if (stmt->action == DEFELEM_DROP)
PublicationDropTables(pubid, rels, false);
else /* DEFELEM_SET */
{
@@ -426,6 +439,82 @@ AlterPublicationTables(AlterPublicationStmt *stmt, Relation rel,
CloseTableList(rels);
}
+/*
+ * Add or remove sequence to/from publication.
+ */
+static void
+AlterPublicationSequences(AlterPublicationStmt *stmt, Relation rel,
+ HeapTuple tup)
+{
+ List *rels = NIL;
+ Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
+ Oid pubid = pubform->oid;
+
+ /* Check that user is allowed to manipulate the publication tables. */
+ if (pubform->puballsequences)
+ ereport(ERROR,
+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("publication \"%s\" is defined as FOR ALL SEQUENCES",
+ NameStr(pubform->pubname)),
+ errdetail("Sequences cannot be added to or dropped from FOR ALL TABLES publications.")));
+
+ Assert(list_length(stmt->sequences) > 0);
+
+ rels = OpenSequenceList(stmt->sequences);
+
+ if (stmt->action == DEFELEM_ADD)
+ PublicationAddSequences(pubid, rels, false, stmt);
+ else if (stmt->action == DEFELEM_DROP)
+ PublicationDropSequences(pubid, rels, false);
+ else /* DEFELEM_SET */
+ {
+ List *oldrelids = GetPublicationRelations(pubid,
+ PUBLICATION_PART_ROOT);
+ List *delrels = NIL;
+ ListCell *oldlc;
+
+ /* Calculate which relations to drop. */
+ foreach(oldlc, oldrelids)
+ {
+ Oid oldrelid = lfirst_oid(oldlc);
+ ListCell *newlc;
+ bool found = false;
+
+ foreach(newlc, rels)
+ {
+ Relation newrel = (Relation) lfirst(newlc);
+
+ if (RelationGetRelid(newrel) == oldrelid)
+ {
+ found = true;
+ break;
+ }
+ }
+
+ if (!found)
+ {
+ Relation oldrel = relation_open(oldrelid,
+ ShareUpdateExclusiveLock);
+
+ delrels = lappend(delrels, oldrel);
+ }
+ }
+
+ /* And drop them. */
+ PublicationDropSequences(pubid, delrels, true);
+
+ /*
+ * Don't bother calculating the difference for adding, we'll catch and
+ * skip existing ones when doing catalog update.
+ */
+ PublicationAddSequences(pubid, rels, true, stmt);
+
+ CloseSequenceList(delrels);
+ }
+
+ CloseSequenceList(rels);
+}
+
/*
* Alter the existing publication.
*
@@ -459,8 +548,12 @@ AlterPublication(ParseState *pstate, AlterPublicationStmt *stmt)
if (stmt->options)
AlterPublicationOptions(pstate, stmt, rel, tup);
- else
+ else if (stmt->tables)
AlterPublicationTables(stmt, rel, tup);
+ else if (stmt->sequences)
+ AlterPublicationSequences(stmt, rel, tup);
+ else
+ Assert(false);
/* Cleanup. */
heap_freetuple(tup);
@@ -665,6 +758,139 @@ PublicationDropTables(Oid pubid, List *rels, bool missing_ok)
}
}
+/*
+ * Open sequences specified by a RangeVar list.
+ * The returned sequence relations are locked in ShareUpdateExclusiveLock mode
+ * in order to add them to a publication.
+ */
+static List *
+OpenSequenceList(List *sequences)
+{
+ List *relids = NIL;
+ List *rels = NIL;
+ ListCell *lc;
+
+ /*
+ * Open, share-lock, and check all the explicitly-specified relations
+ */
+ foreach(lc, sequences)
+ {
+ RangeVar *rv = castNode(RangeVar, lfirst(lc));
+ Relation rel;
+ Oid myrelid;
+
+ /* Allow query cancel in case this takes a long time */
+ CHECK_FOR_INTERRUPTS();
+
+ rel = relation_openrv(rv, ShareUpdateExclusiveLock);
+ myrelid = RelationGetRelid(rel);
+
+ /*
+ * Filter out duplicates if user specifies "foo, foo".
+ *
+ * Note that this algorithm is known to not be very efficient (O(N^2))
+ * but given that it only works on list of sequences given to us by user
+ * it's deemed acceptable.
+ */
+ if (list_member_oid(relids, myrelid))
+ {
+ relation_close(rel, ShareUpdateExclusiveLock);
+ continue;
+ }
+
+ rels = lappend(rels, rel);
+ relids = lappend_oid(relids, myrelid);
+ }
+
+ list_free(relids);
+
+ return rels;
+}
+
+/*
+ * Close all relations in the list.
+ */
+static void
+CloseSequenceList(List *rels)
+{
+ ListCell *lc;
+
+ foreach(lc, rels)
+ {
+ Relation rel = (Relation) lfirst(lc);
+
+ relation_close(rel, NoLock);
+ }
+}
+
+/*
+ * Add listed sequences to the publication.
+ */
+static void
+PublicationAddSequences(Oid pubid, List *rels, bool if_not_exists,
+ AlterPublicationStmt *stmt)
+{
+ ListCell *lc;
+
+ Assert(!stmt || !stmt->for_all_sequences);
+
+ foreach(lc, rels)
+ {
+ Relation rel = (Relation) lfirst(lc);
+ ObjectAddress obj;
+
+ /* Must be owner of the table or superuser. */
+ if (!pg_class_ownercheck(RelationGetRelid(rel), GetUserId()))
+ aclcheck_error(ACLCHECK_NOT_OWNER, get_relkind_objtype(rel->rd_rel->relkind),
+ RelationGetRelationName(rel));
+
+ obj = publication_add_relation(pubid, rel, if_not_exists);
+ if (stmt)
+ {
+ EventTriggerCollectSimpleCommand(obj, InvalidObjectAddress,
+ (Node *) stmt);
+
+ InvokeObjectPostCreateHook(PublicationRelRelationId,
+ obj.objectId, 0);
+ }
+ }
+}
+
+/*
+ * Remove listed sequences from the publication.
+ */
+static void
+PublicationDropSequences(Oid pubid, List *rels, bool missing_ok)
+{
+ ObjectAddress obj;
+ ListCell *lc;
+ Oid prid;
+
+ foreach(lc, rels)
+ {
+ Relation rel = (Relation) lfirst(lc);
+ Oid relid = RelationGetRelid(rel);
+
+ prid = GetSysCacheOid2(PUBLICATIONRELMAP, Anum_pg_publication_rel_oid,
+ ObjectIdGetDatum(relid),
+ ObjectIdGetDatum(pubid));
+ if (!OidIsValid(prid))
+ {
+ if (missing_ok)
+ continue;
+
+ ereport(ERROR,
+ (errcode(ERRCODE_UNDEFINED_OBJECT),
+ errmsg("relation \"%s\" is not part of the publication",
+ RelationGetRelationName(rel))));
+ }
+
+ ObjectAddressSet(obj, PublicationRelRelationId, prid);
+ performDeletion(&obj, DROP_CASCADE, 0);
+ }
+}
+
+
/*
* Internal workhorse for changing a publication owner
*/
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 72bfdc07a4..2fc644c74c 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -94,7 +94,7 @@ static HTAB *seqhashtab = NULL; /* hash table for SeqTable items */
*/
static SeqTableData *last_used_seq = NULL;
-static void fill_seq_with_data(Relation rel, HeapTuple tuple);
+static void fill_seq_with_data(Relation rel, HeapTuple tuple, bool create);
static Relation lock_and_open_sequence(SeqTable seq);
static void create_seq_hashtable(void);
static void init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel);
@@ -222,7 +222,7 @@ DefineSequence(ParseState *pstate, CreateSeqStmt *seq)
/* now initialize the sequence's data */
tuple = heap_form_tuple(tupDesc, value, null);
- fill_seq_with_data(rel, tuple);
+ fill_seq_with_data(rel, tuple, true);
/* process OWNED BY if given */
if (owned_by)
@@ -327,7 +327,86 @@ ResetSequence(Oid seq_relid)
/*
* Insert the modified tuple into the new storage file.
*/
- fill_seq_with_data(seq_rel, tuple);
+ fill_seq_with_data(seq_rel, tuple, true);
+
+ /* Clear local cache so that we don't think we have cached numbers */
+ /* Note that we do not change the currval() state */
+ elm->cached = elm->last;
+
+ relation_close(seq_rel, NoLock);
+}
+
+/*
+ * Reset a sequence to its initial value.
+ *
+ * The change is made transactionally, so that on failure of the current
+ * transaction, the sequence will be restored to its previous state.
+ * We do that by creating a whole new relfilenode for the sequence; so this
+ * works much like the rewriting forms of ALTER TABLE.
+ *
+ * Caller is assumed to have acquired AccessExclusiveLock on the sequence,
+ * which must not be released until end of transaction. Caller is also
+ * responsible for permissions checking.
+ */
+void
+ResetSequence2(Oid seq_relid, int64 last_value, int64 log_cnt, bool is_called)
+{
+ Relation seq_rel;
+ SeqTable elm;
+ Form_pg_sequence_data seq;
+ Buffer buf;
+ HeapTupleData seqdatatuple;
+ HeapTuple tuple;
+
+ /*
+ * Read the old sequence. This does a bit more work than really
+ * necessary, but it's simple, and we do want to double-check that it's
+ * indeed a sequence.
+ */
+ init_sequence(seq_relid, &elm, &seq_rel);
+ (void) read_seq_tuple(seq_rel, &buf, &seqdatatuple);
+
+ /*
+ * Copy the existing sequence tuple.
+ */
+ tuple = heap_copytuple(&seqdatatuple);
+
+ /* Now we're done with the old page */
+ UnlockReleaseBuffer(buf);
+
+ /*
+ * Modify the copied tuple to execute the restart (compare the RESTART
+ * action in AlterSequence)
+ */
+ seq = (Form_pg_sequence_data) GETSTRUCT(tuple);
+ seq->last_value = last_value;
+ seq->is_called = is_called;
+ seq->log_cnt = log_cnt;
+
+ /*
+ * Create a new storage file for the sequence.
+ */
+ RelationSetNewRelfilenode(seq_rel, seq_rel->rd_rel->relpersistence);
+
+ /*
+ * Ensure sequence's relfrozenxid is at 0, since it won't contain any
+ * unfrozen XIDs. Same with relminmxid, since a sequence will never
+ * contain multixacts.
+ */
+ Assert(seq_rel->rd_rel->relfrozenxid == InvalidTransactionId);
+ Assert(seq_rel->rd_rel->relminmxid == InvalidMultiXactId);
+
+ /*
+ * Insert the modified tuple into the new storage file.
+ *
+ * XXX Maybe this should also use created=true, just like the other places
+ * calling fill_seq_with_data. That's probably needed for correct cascading
+ * replication.
+ *
+ * XXX That'd mean all fill_seq_with_data callers use created=true, making
+ * the parameter unnecessary.
+ */
+ fill_seq_with_data(seq_rel, tuple, false);
/* Clear local cache so that we don't think we have cached numbers */
/* Note that we do not change the currval() state */
@@ -340,7 +419,7 @@ ResetSequence(Oid seq_relid)
* Initialize a sequence's relation with the specified tuple as content
*/
static void
-fill_seq_with_data(Relation rel, HeapTuple tuple)
+fill_seq_with_data(Relation rel, HeapTuple tuple, bool create)
{
Buffer buf;
Page page;
@@ -378,8 +457,21 @@ fill_seq_with_data(Relation rel, HeapTuple tuple)
/* check the comment above nextval_internal()'s equivalent call. */
if (RelationNeedsWAL(rel))
+ {
GetTopTransactionId();
+ /*
+ * Ensure we have a proper XID, which will be included in the XLOG
+ * record by XLogRecordAssemble. Otherwise the first nextval() in
+ * a subxact (without any preceding changes) would get XID 0,
+ * and it'd be impossible to decide which top-level xact it belongs to.
+ * It'd also trigger an assert in DecodeSequence.
+ *
+ * XXX Not sure if this is the best solution.
+ */
+ GetCurrentTransactionId();
+ }
+
START_CRIT_SECTION();
MarkBufferDirty(buf);
@@ -399,6 +491,7 @@ fill_seq_with_data(Relation rel, HeapTuple tuple)
XLogRegisterBuffer(0, buf, REGBUF_WILL_INIT);
xlrec.node = rel->rd_node;
+ xlrec.created = create;
XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
XLogRegisterData((char *) tuple->t_data, tuple->t_len);
@@ -502,7 +595,7 @@ AlterSequence(ParseState *pstate, AlterSeqStmt *stmt)
/*
* Insert the modified tuple into the new storage file.
*/
- fill_seq_with_data(seqrel, newdatatuple);
+ fill_seq_with_data(seqrel, newdatatuple, true);
}
/* process OWNED BY if given */
@@ -766,8 +859,21 @@ nextval_internal(Oid relid, bool check_permissions)
* (Have to do that here, so we're outside the critical section)
*/
if (logit && RelationNeedsWAL(seqrel))
+ {
GetTopTransactionId();
+ /*
+ * Ensure we have a proper XID, which will be included in the XLOG
+ * record by XLogRecordAssemble. Otherwise the first nextval() in
+ * a subxact (without any preceding changes) would get XID 0,
+ * and it'd be impossible to decide which top-level xact it belongs to.
+ * It'd also trigger an assert in DecodeSequence.
+ *
+ * XXX Not sure if this is the best solution.
+ */
+ GetCurrentTransactionId();
+ }
+
/* ready to change the on-disk (or really, in-buffer) tuple */
START_CRIT_SECTION();
@@ -803,6 +909,7 @@ nextval_internal(Oid relid, bool check_permissions)
seq->log_cnt = 0;
xlrec.node = seqrel->rd_node;
+ xlrec.created = false;
XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
XLogRegisterData((char *) seqdatatuple.t_data, seqdatatuple.t_len);
@@ -977,8 +1084,21 @@ do_setval(Oid relid, int64 next, bool iscalled)
/* check the comment above nextval_internal()'s equivalent call. */
if (RelationNeedsWAL(seqrel))
+ {
GetTopTransactionId();
+ /*
+ * Ensure we have a proper XID, which will be included in the XLOG
+ * record by XLogRecordAssemble. Otherwise the first nextval() in
+ * a subxact (without any preceding changes) would get XID 0,
+ * and it'd be impossible to decide which top-level xact it belongs to.
+ * It'd also trigger an assert in DecodeSequence.
+ *
+ * XXX Not sure if this is the best solution.
+ */
+ GetCurrentTransactionId();
+ }
+
/* ready to change the on-disk (or really, in-buffer) tuple */
START_CRIT_SECTION();
@@ -999,6 +1119,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
XLogRegisterBuffer(0, buf, REGBUF_WILL_INIT);
xlrec.node = seqrel->rd_node;
+ xlrec.created = false;
+
XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
XLogRegisterData((char *) seqdatatuple.t_data, seqdatatuple.t_len);
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 22ae982328..2d03cf011e 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -84,6 +84,7 @@ typedef struct SubOpts
} SubOpts;
static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
static void check_duplicates_in_publist(List *publist, Datum *datums);
static List *merge_publications(List *oldpublist, List *newpublist, bool addpub, const char *subname);
static void ReportSlotConnectionError(List *rstates, Oid subid, char *slotname, char *err);
@@ -521,6 +522,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
char *err;
WalReceiverConn *wrconn;
List *tables;
+ List *sequences;
ListCell *lc;
char table_state;
@@ -559,6 +561,26 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
InvalidXLogRecPtr);
}
+ /*
+ * Get the sequence list from publisher and build local sequence
+ * status info.
+ */
+ sequences = fetch_sequence_list(wrconn, publications);
+ foreach(lc, sequences)
+ {
+ RangeVar *rv = (RangeVar *) lfirst(lc);
+ Oid relid;
+
+ relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+ /* Check for supported relkind. */
+ CheckSubscriptionRelkind(get_rel_relkind(relid),
+ rv->schemaname, rv->relname);
+
+ AddSubscriptionRelState(subid, relid, table_state,
+ InvalidXLogRecPtr);
+ }
+
/*
* If requested, create permanent slot for the subscription. We
* won't use the initial snapshot for anything, so no need to
@@ -731,6 +753,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data)
{
Oid relid = subrel_local_oids[off];
+ /* XXX ignore sequences - maybe do this in GetSubscriptionRelations? */
+ if (get_rel_relkind(relid) == RELKIND_SEQUENCE)
+ continue;
+
if (!bsearch(&relid, pubrel_local_oids,
list_length(pubrel_names), sizeof(Oid), oid_cmp))
{
@@ -822,6 +848,183 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data)
ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
}
}
+
+ /*
+ * XXX now do the same thing for sequences, maybe before the preceding
+ * block, or earlier?
+ */
+
+ /* Get the sequence list from publisher. */
+ pubrel_names = fetch_sequence_list(wrconn, sub->publications);
+
+ /* Get local relation list. */
+ subrel_states = GetSubscriptionRelations(sub->oid);
+
+ /*
+ * Build qsorted array of local relation oids for faster lookup. This can
+ * potentially contain all relations in the database so speed of lookup
+ * is important.
+ */
+ subrel_local_oids = palloc(list_length(subrel_states) * sizeof(Oid));
+ off = 0;
+ foreach(lc, subrel_states)
+ {
+ SubscriptionRelState *relstate = (SubscriptionRelState *) lfirst(lc);
+
+ subrel_local_oids[off++] = relstate->relid;
+ }
+ qsort(subrel_local_oids, list_length(subrel_states),
+ sizeof(Oid), oid_cmp);
+
+ /*
+ * Rels that we want to remove from subscription and drop any slots
+ * and origins corresponding to them.
+ */
+ sub_remove_rels = palloc(list_length(subrel_states) * sizeof(SubRemoveRels));
+
+ /*
+ * Walk over the remote sequences and try to match them to locally known
+ * sequences. If the sequence is not known locally, create a new state
+ * for it.
+ *
+ * Also builds array of local oids of remote sequences for the next step.
+ */
+ off = 0;
+ pubrel_local_oids = palloc(list_length(pubrel_names) * sizeof(Oid));
+
+ foreach(lc, pubrel_names)
+ {
+ RangeVar *rv = (RangeVar *) lfirst(lc);
+ Oid relid;
+
+ relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+ /* Check for supported relkind. */
+ CheckSubscriptionRelkind(get_rel_relkind(relid),
+ rv->schemaname, rv->relname);
+
+ pubrel_local_oids[off++] = relid;
+
+ if (!bsearch(&relid, subrel_local_oids,
+ list_length(subrel_states), sizeof(Oid), oid_cmp))
+ {
+ AddSubscriptionRelState(sub->oid, relid,
+ copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
+ InvalidXLogRecPtr);
+ ereport(DEBUG1,
+ (errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
+ rv->schemaname, rv->relname, sub->name)));
+ }
+ }
+
+ /*
+ * Next remove state for sequences we should not care about anymore using
+ * the data we collected above
+ */
+ qsort(pubrel_local_oids, list_length(pubrel_names),
+ sizeof(Oid), oid_cmp);
+
+ remove_rel_len = 0;
+ for (off = 0; off < list_length(subrel_states); off++)
+ {
+ Oid relid = subrel_local_oids[off];
+
+ /* XXX ignore non-sequences - maybe do this in GetSubscriptionRelations? */
+ if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+ continue;
+
+ if (!bsearch(&relid, pubrel_local_oids,
+ list_length(pubrel_names), sizeof(Oid), oid_cmp))
+ {
+ char state;
+ XLogRecPtr statelsn;
+
+ /*
+ * Lock pg_subscription_rel with AccessExclusiveLock to
+ * prevent any race conditions with the apply worker
+ * re-launching workers at the same time this code is trying
+ * to remove those tables.
+ *
+ * Even if new worker for this particular rel is restarted it
+ * won't be able to make any progress as we hold exclusive
+ * lock on subscription_rel till the transaction end. It will
+ * simply exit as there is no corresponding rel entry.
+ *
+ * This locking also ensures that the state of rels won't
+ * change till we are done with this refresh operation.
+ */
+ if (!rel)
+ rel = table_open(SubscriptionRelRelationId, AccessExclusiveLock);
+
+ /* Last known rel state. */
+ state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
+
+ sub_remove_rels[remove_rel_len].relid = relid;
+ sub_remove_rels[remove_rel_len++].state = state;
+
+ RemoveSubscriptionRel(sub->oid, relid);
+
+ logicalrep_worker_stop(sub->oid, relid);
+
+ /*
+ * For READY state, we would have already dropped the
+ * tablesync origin.
+ */
+ if (state != SUBREL_STATE_READY)
+ {
+ char originname[NAMEDATALEN];
+
+ /*
+ * Drop the tablesync's origin tracking if exists.
+ *
+ * It is possible that the origin is not yet created for
+ * tablesync worker, this can happen for the states before
+ * SUBREL_STATE_FINISHEDCOPY. The apply worker can also
+ * concurrently try to drop the origin and by this time
+ * the origin might be already removed. For these reasons,
+ * passing missing_ok = true.
+ */
+ ReplicationOriginNameForTablesync(sub->oid, relid, originname,
+ sizeof(originname));
+ replorigin_drop_by_name(originname, true, false);
+ }
+
+ ereport(DEBUG1,
+ (errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
+ get_namespace_name(get_rel_namespace(relid)),
+ get_rel_name(relid),
+ sub->name)));
+ }
+ }
+
+ /*
+ * Drop the tablesync slots associated with removed tables. This has
+ * to be at the end because otherwise if there is an error while doing
+ * the database operations we won't be able to rollback dropped slots.
+ */
+ for (off = 0; off < remove_rel_len; off++)
+ {
+ if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
+ sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+ {
+ char syncslotname[NAMEDATALEN] = {0};
+
+ /*
+ * For READY/SYNCDONE states we know the tablesync slot has
+ * already been dropped by the tablesync worker.
+ *
+ * For other states, there is no certainty, maybe the slot
+ * does not exist yet. Also, if we fail after removing some of
+ * the slots, next time, it will again try to drop already
+ * dropped slots and fail. For these reasons, we allow
+ * missing_ok = true for the drop.
+ */
+ ReplicationSlotNameForTablesync(sub->oid, sub_remove_rels[off].relid,
+ syncslotname, sizeof(syncslotname));
+ ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
+ }
+ }
+
}
PG_FINALLY();
{
@@ -1635,6 +1838,75 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
return tablelist;
}
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+ WalRcvExecResult *res;
+ StringInfoData cmd;
+ TupleTableSlot *slot;
+ Oid tableRow[2] = {TEXTOID, TEXTOID};
+ ListCell *lc;
+ bool first;
+ List *tablelist = NIL;
+
+ Assert(list_length(publications) > 0);
+
+ initStringInfo(&cmd);
+ appendStringInfoString(&cmd, "SELECT DISTINCT s.schemaname, s.sequencename\n"
+ " FROM pg_catalog.pg_publication_sequences s\n"
+ " WHERE s.pubname IN (");
+ first = true;
+ foreach(lc, publications)
+ {
+ char *pubname = strVal(lfirst(lc));
+
+ if (first)
+ first = false;
+ else
+ appendStringInfoString(&cmd, ", ");
+
+ appendStringInfoString(&cmd, quote_literal_cstr(pubname));
+ }
+ appendStringInfoChar(&cmd, ')');
+
+ res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+ pfree(cmd.data);
+
+ if (res->status != WALRCV_OK_TUPLES)
+ ereport(ERROR,
+ (errmsg("could not receive list of replicated tables from the publisher: %s",
+ res->err)));
+
+ /* Process sequences. */
+ slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+ while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+ {
+ char *nspname;
+ char *relname;
+ bool isnull;
+ RangeVar *rv;
+
+ nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+ Assert(!isnull);
+ relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+ Assert(!isnull);
+
+ rv = makeRangeVar(nspname, relname, -1);
+ tablelist = lappend(tablelist, rv);
+
+ ExecClearTuple(slot);
+ }
+ ExecDropSingleTupleTableSlot(slot);
+
+ walrcv_clear_result(res);
+
+ return tablelist;
+}
+
/*
* This is to report the connection failure while dropping replication slots.
* Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index 1e285e0349..a1c7880be6 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -619,11 +619,11 @@ CheckSubscriptionRelkind(char relkind, const char *nspname,
errdetail("\"%s.%s\" is a foreign table.",
nspname, relname)));
- if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+ if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE && relkind != RELKIND_SEQUENCE)
ereport(ERROR,
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
errmsg("cannot use relation \"%s.%s\" as logical replication target",
nspname, relname),
- errdetail("\"%s.%s\" is not a table.",
+ errdetail("\"%s.%s\" is not a table or a sequence.",
nspname, relname)));
}
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 9d4893c504..b8df89863d 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -4834,8 +4834,10 @@ _copyAlterPublicationStmt(const AlterPublicationStmt *from)
COPY_STRING_FIELD(pubname);
COPY_NODE_FIELD(options);
COPY_NODE_FIELD(tables);
+ COPY_NODE_FIELD(sequences);
COPY_SCALAR_FIELD(for_all_tables);
- COPY_SCALAR_FIELD(tableAction);
+ COPY_SCALAR_FIELD(for_all_sequences);
+ COPY_SCALAR_FIELD(action);
return newnode;
}
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index b9cc7b199c..9cce502876 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -2308,8 +2308,10 @@ _equalAlterPublicationStmt(const AlterPublicationStmt *a,
COMPARE_STRING_FIELD(pubname);
COMPARE_NODE_FIELD(options);
COMPARE_NODE_FIELD(tables);
+ COMPARE_NODE_FIELD(sequences);
COMPARE_SCALAR_FIELD(for_all_tables);
- COMPARE_SCALAR_FIELD(tableAction);
+ COMPARE_SCALAR_FIELD(for_all_sequences);
+ COMPARE_SCALAR_FIELD(action);
return true;
}
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 10da5c5c51..3926a7ee9f 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -434,6 +434,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
%type <node> group_by_item empty_grouping_set rollup_clause cube_clause
%type <node> grouping_sets_clause
%type <node> opt_publication_for_tables publication_for_tables
+/* %type <node> opt_publication_for_sequences publication_for_sequences */
%type <list> opt_fdw_options fdw_options
%type <defelt> fdw_option
@@ -9588,6 +9589,7 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
*****************************************************************************/
CreatePublicationStmt:
+ /* CREATE PUBLICATION name opt_publication_for_tables opt_publication_for_sequences opt_definition */
CREATE PUBLICATION name opt_publication_for_tables opt_definition
{
CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
@@ -9622,6 +9624,12 @@ publication_for_tables:
}
;
+/*
+ * FIXME
+ *
+ * opt_publication_for_sequences and publication_for_sequences should mirror
+ * opt_publication_for_tables and publication_for_tables, but for sequences
+ */
/*****************************************************************************
*
@@ -9633,6 +9641,12 @@ publication_for_tables:
*
* ALTER PUBLICATION name SET TABLE table [, table2]
*
+ * ALTER PUBLICATION name ADD SEQUENCE sequence [, sequence2]
+ *
+ * ALTER PUBLICATION name DROP SEQUENCE sequence [, sequence2]
+ *
+ * ALTER PUBLICATION name SET SEQUENCE sequence [, sequence2]
+ *
*****************************************************************************/
AlterPublicationStmt:
@@ -9648,7 +9662,7 @@ AlterPublicationStmt:
AlterPublicationStmt *n = makeNode(AlterPublicationStmt);
n->pubname = $3;
n->tables = $6;
- n->tableAction = DEFELEM_ADD;
+ n->action = DEFELEM_ADD;
$$ = (Node *)n;
}
| ALTER PUBLICATION name SET TABLE relation_expr_list
@@ -9656,7 +9670,7 @@ AlterPublicationStmt:
AlterPublicationStmt *n = makeNode(AlterPublicationStmt);
n->pubname = $3;
n->tables = $6;
- n->tableAction = DEFELEM_SET;
+ n->action = DEFELEM_SET;
$$ = (Node *)n;
}
| ALTER PUBLICATION name DROP TABLE relation_expr_list
@@ -9664,7 +9678,31 @@ AlterPublicationStmt:
AlterPublicationStmt *n = makeNode(AlterPublicationStmt);
n->pubname = $3;
n->tables = $6;
- n->tableAction = DEFELEM_DROP;
+ n->action = DEFELEM_DROP;
+ $$ = (Node *)n;
+ }
+ | ALTER PUBLICATION name ADD_P SEQUENCE relation_expr_list
+ {
+ AlterPublicationStmt *n = makeNode(AlterPublicationStmt);
+ n->pubname = $3;
+ n->sequences = $6;
+ n->action = DEFELEM_ADD;
+ $$ = (Node *)n;
+ }
+ | ALTER PUBLICATION name SET SEQUENCE relation_expr_list
+ {
+ AlterPublicationStmt *n = makeNode(AlterPublicationStmt);
+ n->pubname = $3;
+ n->sequences = $6;
+ n->action = DEFELEM_SET;
+ $$ = (Node *)n;
+ }
+ | ALTER PUBLICATION name DROP SEQUENCE relation_expr_list
+ {
+ AlterPublicationStmt *n = makeNode(AlterPublicationStmt);
+ n->pubname = $3;
+ n->sequences = $6;
+ n->action = DEFELEM_DROP;
$$ = (Node *)n;
}
;
diff --git a/src/backend/replication/logical/decode.c b/src/backend/replication/logical/decode.c
index 2874dc0612..98d0edefb0 100644
--- a/src/backend/replication/logical/decode.c
+++ b/src/backend/replication/logical/decode.c
@@ -42,6 +42,7 @@
#include "replication/reorderbuffer.h"
#include "replication/snapbuild.h"
#include "storage/standby.h"
+#include "commands/sequence.h"
typedef struct XLogRecordBuffer
{
@@ -74,10 +75,11 @@ static void DecodeAbort(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
bool two_phase);
static void DecodePrepare(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
xl_xact_parsed_prepare *parsed);
-
+static void DecodeSequence(LogicalDecodingContext *ctx, XLogRecordBuffer *buf);
/* common function to decode tuples */
static void DecodeXLogTuple(char *data, Size len, ReorderBufferTupleBuf *tup);
+static void DecodeSeqTuple(char *data, Size len, ReorderBufferTupleBuf *tuple);
/* helper functions for decoding transactions */
static inline bool FilterPrepare(LogicalDecodingContext *ctx,
@@ -158,6 +160,10 @@ LogicalDecodingProcessRecord(LogicalDecodingContext *ctx, XLogReaderState *recor
DecodeLogicalMsgOp(ctx, &buf);
break;
+ case RM_SEQ_ID:
+ DecodeSequence(ctx, &buf);
+ break;
+
/*
* Rmgrs irrelevant for logical decoding; they describe stuff not
* represented in logical decoding. Add new rmgrs in rmgrlist.h's
@@ -173,7 +179,6 @@ LogicalDecodingProcessRecord(LogicalDecodingContext *ctx, XLogReaderState *recor
case RM_HASH_ID:
case RM_GIN_ID:
case RM_GIST_ID:
- case RM_SEQ_ID:
case RM_SPGIST_ID:
case RM_BRIN_ID:
case RM_COMMIT_TS_ID:
@@ -1312,3 +1317,125 @@ DecodeTXNNeedSkip(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
(txn_dbid != InvalidOid && txn_dbid != ctx->slot->data.database) ||
ctx->fast_forward || FilterByOrigin(ctx, origin_id));
}
+
+/*
+ * Decode Sequence Tuple
+ */
+static void
+DecodeSeqTuple(char *data, Size len, ReorderBufferTupleBuf *tuple)
+{
+ int datalen = len - sizeof(xl_seq_rec) - SizeofHeapTupleHeader;
+
+ Assert(datalen >= 0);
+
+ tuple->tuple.t_len = datalen + SizeofHeapTupleHeader;
+
+ ItemPointerSetInvalid(&tuple->tuple.t_self);
+
+ tuple->tuple.t_tableOid = InvalidOid;
+
+ memcpy(((char *) tuple->tuple.t_data),
+ data + sizeof(xl_seq_rec),
+ SizeofHeapTupleHeader);
+
+ memcpy(((char *) tuple->tuple.t_data) + SizeofHeapTupleHeader,
+ data + sizeof(xl_seq_rec) + SizeofHeapTupleHeader,
+ datalen);
+}
+
+/*
+ * Handle sequence decode
+ *
+ * Decoding sequences is a bit tricky, because while most sequence actions
+ * are non-transactional (not subject to rollback), some need to be handled
+ * as transactional.
+ *
+ * By default, a sequence increment is non-transactional - we must not queue
+ * it in the transaction like other changes, because the transaction might
+ * get rolled back and we'd discard the increment. The downstream would then
+ * never be notified about the increment, which is wrong.
+ *
+ * On the other hand, the sequence may be created in a transaction. In this
+ * case we *should* queue the change like the other changes in the
+ * transaction, because we don't want to send increments for an unknown
+ * sequence to the plugin - it might get confused about which sequence they
+ * relate to, etc.
+ */
+static void
+DecodeSequence(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
+{
+ SnapBuild *builder = ctx->snapshot_builder;
+ ReorderBufferTupleBuf *tuplebuf;
+ RelFileNode target_node;
+ XLogReaderState *r = buf->record;
+ char *tupledata = NULL;
+ Size tuplelen;
+ Size datalen = 0;
+ TransactionId xid = XLogRecGetXid(r);
+ uint8 info = XLogRecGetInfo(buf->record) & ~XLR_INFO_MASK;
+ xl_seq_rec *xlrec;
+ Snapshot snapshot;
+ RepOriginId origin_id = XLogRecGetOrigin(r);
+ bool transactional;
+
+ /* only decode changes flagged with XLOG_SEQ_LOG */
+ if (info != XLOG_SEQ_LOG)
+ elog(ERROR, "unexpected RM_SEQ_ID record type: %u", info);
+
+ /*
+ * If we don't have a snapshot or we are just fast-forwarding, there is no
+ * point in decoding sequences.
+ */
+ if (SnapBuildCurrentState(builder) < SNAPBUILD_FULL_SNAPSHOT ||
+ ctx->fast_forward)
+ return;
+
+ /* only interested in our database */
+ XLogRecGetBlockTag(r, 0, &target_node, NULL, NULL);
+ if (target_node.dbNode != ctx->slot->data.database)
+ return;
+
+ /* output plugin doesn't look for this origin, no need to queue */
+ if (FilterByOrigin(ctx, XLogRecGetOrigin(r)))
+ return;
+
+ tupledata = XLogRecGetData(r);
+ datalen = XLogRecGetDataLen(r);
+ tuplelen = datalen - SizeOfHeapHeader - sizeof(xl_seq_rec);
+
+ /* extract the WAL record, with "created" flag */
+ xlrec = (xl_seq_rec *) XLogRecGetData(r);
+
+ /* XXX how could we have sequence change without data? */
+ if (!datalen || !tupledata)
+ return;
+
+ tuplebuf = ReorderBufferGetTupleBuf(ctx->reorder, tuplelen);
+ DecodeSeqTuple(tupledata, datalen, tuplebuf);
+
+ /*
+ * Should we handle the sequence increment as transactional or not?
+ *
+ * If the sequence was created in a still-running transaction, treat
+ * it as transactional and queue the increments. Otherwise it needs
+ * to be treated as non-transactional, in which case we send it to
+ * the plugin right away.
+ */
+ transactional = ReorderBufferSequenceIsTransactional(ctx->reorder,
+ target_node,
+ xlrec->created);
+
+ /* Skip the change if already processed (per the snapshot). */
+ if (transactional &&
+ !SnapBuildProcessChange(builder, xid, buf->origptr))
+ return;
+ else if (!transactional &&
+ (SnapBuildCurrentState(builder) != SNAPBUILD_CONSISTENT ||
+ SnapBuildXactNeedsSkip(builder, buf->origptr)))
+ return;
+
+ /* Queue the increment (or send immediately if not transactional). */
+ snapshot = SnapBuildGetOrBuildSnapshot(builder, xid);
+ ReorderBufferQueueSequence(ctx->reorder, xid, snapshot, buf->endptr,
+ origin_id, target_node, transactional,
+ xlrec->created, tuplebuf);
+}
diff --git a/src/backend/replication/logical/logical.c b/src/backend/replication/logical/logical.c
index d61ef4cfad..5f6e02fba3 100644
--- a/src/backend/replication/logical/logical.c
+++ b/src/backend/replication/logical/logical.c
@@ -73,6 +73,10 @@ static void truncate_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
static void message_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
XLogRecPtr message_lsn, bool transactional,
const char *prefix, Size message_size, const char *message);
+static void sequence_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called);
/* streaming callbacks */
static void stream_start_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
@@ -217,6 +221,7 @@ StartupDecodingContext(List *output_plugin_options,
ctx->reorder->apply_truncate = truncate_cb_wrapper;
ctx->reorder->commit = commit_cb_wrapper;
ctx->reorder->message = message_cb_wrapper;
+ ctx->reorder->sequence = sequence_cb_wrapper;
/*
* To support streaming, we require start/stop/abort/commit/change
@@ -1198,6 +1203,43 @@ message_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
error_context_stack = errcallback.previous;
}
+static void
+sequence_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ LogicalDecodingContext *ctx = cache->private_data;
+ LogicalErrorCallbackState state;
+ ErrorContextCallback errcallback;
+
+ Assert(!ctx->fast_forward);
+
+ if (ctx->callbacks.sequence_cb == NULL)
+ return;
+
+ /* Push callback + info on the error context stack */
+ state.ctx = ctx;
+ state.callback_name = "sequence";
+ state.report_location = sequence_lsn;
+ errcallback.callback = output_plugin_error_callback;
+ errcallback.arg = (void *) &state;
+ errcallback.previous = error_context_stack;
+ error_context_stack = &errcallback;
+
+ /* set output state */
+ ctx->accept_writes = true;
+ ctx->write_xid = txn != NULL ? txn->xid : InvalidTransactionId;
+ ctx->write_location = sequence_lsn;
+
+ /* do the actual work: call callback */
+ ctx->callbacks.sequence_cb(ctx, txn, sequence_lsn, rel, transactional,
+ created, last_value, log_cnt, is_called);
+
+ /* Pop the error context stack */
+ error_context_stack = errcallback.previous;
+}
+
static void
stream_start_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
XLogRecPtr first_lsn)
diff --git a/src/backend/replication/logical/proto.c b/src/backend/replication/logical/proto.c
index a245252529..abe006817e 100644
--- a/src/backend/replication/logical/proto.c
+++ b/src/backend/replication/logical/proto.c
@@ -600,6 +600,58 @@ logicalrep_write_message(StringInfo out, TransactionId xid, XLogRecPtr lsn,
pq_sendbytes(out, message, sz);
}
+/*
+ * Write SEQUENCE to stream
+ */
+void
+logicalrep_write_sequence(StringInfo out, Relation rel, TransactionId xid,
+ XLogRecPtr lsn, bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ uint8 flags = 0;
+ char *relname;
+
+ pq_sendbyte(out, LOGICAL_REP_MSG_SEQUENCE);
+
+ /* transaction ID (if not valid, we're not streaming) */
+ if (TransactionIdIsValid(xid))
+ pq_sendint32(out, xid);
+
+ pq_sendint8(out, flags);
+ pq_sendint64(out, lsn);
+
+ logicalrep_write_namespace(out, RelationGetNamespace(rel));
+ relname = RelationGetRelationName(rel);
+ pq_sendstring(out, relname);
+
+ pq_sendint8(out, transactional);
+ pq_sendint8(out, created);
+ pq_sendint64(out, last_value);
+ pq_sendint64(out, log_cnt);
+ pq_sendint8(out, is_called);
+}
+
+/*
+ * Read SEQUENCE from the stream.
+ */
+void
+logicalrep_read_sequence(StringInfo in, LogicalRepSequence *seqdata)
+{
+ /* XXX skipping flags and lsn */
+ pq_getmsgint(in, 1);
+ pq_getmsgint64(in);
+
+ /* Read relation name from stream */
+ seqdata->nspname = pstrdup(logicalrep_read_namespace(in));
+ seqdata->seqname = pstrdup(pq_getmsgstring(in));
+
+ seqdata->transactional = pq_getmsgint(in, 1);
+ seqdata->created = pq_getmsgint(in, 1);
+ seqdata->last_value = pq_getmsgint64(in);
+ seqdata->log_cnt = pq_getmsgint64(in);
+ seqdata->is_called = pq_getmsgint(in, 1);
+}
+
/*
* Write relation description to the output stream.
*/
diff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c
index 7378beb684..f99e45f22d 100644
--- a/src/backend/replication/logical/reorderbuffer.c
+++ b/src/backend/replication/logical/reorderbuffer.c
@@ -116,6 +116,13 @@ typedef struct ReorderBufferTXNByIdEnt
ReorderBufferTXN *txn;
} ReorderBufferTXNByIdEnt;
+/* entry for hash table we use to track sequences created in running xacts */
+typedef struct ReorderBufferSequenceEnt
+{
+ RelFileNode rnode;
+ TransactionId xid;
+} ReorderBufferSequenceEnt;
+
/* data structures for (relfilenode, ctid) => (cmin, cmax) mapping */
typedef struct ReorderBufferTupleCidKey
{
@@ -338,6 +345,14 @@ ReorderBufferAllocate(void)
buffer->by_txn = hash_create("ReorderBufferByXid", 1000, &hash_ctl,
HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+ /* hash table of sequences, mapping relfilenode to XID of transaction */
+ hash_ctl.keysize = sizeof(RelFileNode);
+ hash_ctl.entrysize = sizeof(ReorderBufferSequenceEnt);
+ hash_ctl.hcxt = buffer->context;
+
+ buffer->sequences = hash_create("ReorderBufferSequenceHash", 1000, &hash_ctl,
+ HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+
buffer->by_txn_last_xid = InvalidTransactionId;
buffer->by_txn_last_txn = NULL;
@@ -528,6 +543,13 @@ ReorderBufferReturnChange(ReorderBuffer *rb, ReorderBufferChange *change,
case REORDER_BUFFER_CHANGE_INTERNAL_COMMAND_ID:
case REORDER_BUFFER_CHANGE_INTERNAL_TUPLECID:
break;
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ if (change->data.sequence.tuple)
+ {
+ ReorderBufferReturnTupleBuf(rb, change->data.sequence.tuple);
+ change->data.sequence.tuple = NULL;
+ }
+ break;
}
pfree(change);
@@ -856,6 +878,212 @@ ReorderBufferQueueMessage(ReorderBuffer *rb, TransactionId xid,
}
}
+/*
+ * Treat the sequence increment as transactional?
+ */
+bool
+ReorderBufferSequenceIsTransactional(ReorderBuffer *rb,
+ RelFileNode rnode, bool created)
+{
+ bool found = false;
+
+ if (created)
+ return true;
+
+ hash_search(rb->sequences,
+ (void *) &rnode,
+ HASH_FIND,
+ &found);
+
+ return found;
+}
+
+
+/*
+ * A transactional sequence increment is queued to be processed upon commit
+ * and a non-transactional increment gets processed immediately.
+ *
+ * A sequence update may be either transactional or non-transactional. When
+ * the sequence was created in a still-running transaction, treat the update
+ * as transactional and queue the change in that transaction. Otherwise treat
+ * it as non-transactional, so that the increment is not lost even if the
+ * transaction that generated it rolls back.
+ */
+void
+ReorderBufferQueueSequence(ReorderBuffer *rb, TransactionId xid,
+ Snapshot snapshot, XLogRecPtr lsn, RepOriginId origin_id,
+ RelFileNode rnode, bool transactional, bool created,
+ ReorderBufferTupleBuf *tuplebuf)
+{
+ /*
+ * Change needs to be handled as transactional, because the sequence was
+ * created in a transaction that is still running. In that case all the
+ * changes need to be queued in that transaction, we must not send them
+ * to the downstream until the transaction commits.
+ *
+ * There's a bit of trouble with subtransactions - we can't queue it
+ * into the subxact, because it might be rolled back and we'd lose the
+ * increment. We need to queue it into the same (sub)xact that created
+ * the sequence, which is why we track the XID in the hash table.
+ */
+ if (transactional)
+ {
+ MemoryContext oldcontext;
+ ReorderBufferChange *change;
+
+ /* lookup sequence by relfilenode */
+ ReorderBufferSequenceEnt *ent;
+ bool found;
+
+ /* transactional changes require a transaction */
+ Assert(xid != InvalidTransactionId);
+
+ /* search the lookup table (creating a new entry when the sequence is created) */
+ ent = hash_search(rb->sequences,
+ (void *) &rnode,
+ created ? HASH_ENTER : HASH_FIND,
+ &found);
+
+ /*
+ * If this is the "create" increment, we must not have found any
+ * pre-existing entry in the hash table (i.e. there must not be
+ * any conflicting sequence).
+ */
+ Assert(!(created && found));
+
+ /* But we must have either created or found an existing entry. */
+ Assert(created || found);
+
+ /*
+ * When creating the sequence, remember the XID of the transaction
+ * that created it.
+ */
+ if (created)
+ ent->xid = xid;
+
+ /* XXX Maybe check that we're still in the same top-level xact? */
+
+ /* OK, allocate and queue the change */
+ oldcontext = MemoryContextSwitchTo(rb->context);
+
+ change = ReorderBufferGetChange(rb);
+
+ change->action = REORDER_BUFFER_CHANGE_SEQUENCE;
+ change->origin_id = origin_id;
+
+ memcpy(&change->data.sequence.relnode, &rnode, sizeof(RelFileNode));
+
+ change->data.sequence.created = created;
+ change->data.sequence.tuple = tuplebuf;
+
+ /* add it to the same subxact that created the sequence */
+ ReorderBufferQueueChange(rb, ent->xid, lsn, change, false);
+
+ MemoryContextSwitchTo(oldcontext);
+ }
+ else
+ {
+ /*
+ * This increment is for a sequence that was not created in any
+ * running transaction, so we treat it as non-transactional and
+ * just send it to the output plugin directly.
+ */
+ ReorderBufferTXN *txn = NULL;
+ volatile Snapshot snapshot_now = snapshot;
+ bool using_subtxn;
+
+#ifdef USE_ASSERT_CHECKING
+ /* All "creates" have to be handled as transactional. */
+ Assert(!created);
+
+ /* Make sure the sequence is not in the hash table. */
+ {
+ bool found;
+ hash_search(rb->sequences,
+ (void *) &rnode,
+ HASH_FIND, &found);
+ Assert(!found);
+ }
+#endif
+
+ if (xid != InvalidTransactionId)
+ txn = ReorderBufferTXNByXid(rb, xid, true, NULL, lsn, true);
+
+ /* setup snapshot to allow catalog access */
+ SetupHistoricSnapshot(snapshot_now, NULL);
+
+ /*
+ * Decoding needs access to syscaches et al., which in turn use
+ * heavyweight locks and such. Thus we need to have enough state around to
+ * keep track of those. The easiest way is to simply use a transaction
+ * internally. That also allows us to easily enforce that nothing writes
+ * to the database by checking for xid assignments.
+ *
+ * When we're called via the SQL SRF there's already a transaction
+ * started, so start an explicit subtransaction there.
+ */
+ using_subtxn = IsTransactionOrTransactionBlock();
+
+ PG_TRY();
+ {
+ Relation relation;
+ HeapTuple tuple;
+ bool isnull;
+ int64 last_value, log_cnt;
+ bool is_called;
+ Oid reloid;
+
+ if (using_subtxn)
+ BeginInternalSubTransaction("sequence");
+ else
+ StartTransactionCommand();
+
+ reloid = RelidByRelfilenode(rnode.spcNode, rnode.relNode);
+
+ if (reloid == InvalidOid)
+ elog(ERROR, "could not map filenode \"%s\" to relation OID",
+ relpathperm(rnode,
+ MAIN_FORKNUM));
+
+ relation = RelationIdGetRelation(reloid);
+ tuple = &tuplebuf->tuple;
+
+ /*
+ * Extract the internal sequence values, describing the state.
+ *
+ * XXX Seems a bit strange to access it directly. Maybe there's
+ * a better / more correct way?
+ */
+ last_value = DatumGetInt64(heap_getattr(tuple, 1, RelationGetDescr(relation), &isnull));
+ log_cnt = DatumGetInt64(heap_getattr(tuple, 2, RelationGetDescr(relation), &isnull));
+ is_called = DatumGetBool(heap_getattr(tuple, 3, RelationGetDescr(relation), &isnull));
+
+ rb->sequence(rb, txn, lsn, relation, transactional, created,
+ last_value, log_cnt, is_called);
+
+ RelationClose(relation);
+
+ TeardownHistoricSnapshot(false);
+
+ AbortCurrentTransaction();
+
+ if (using_subtxn)
+ RollbackAndReleaseCurrentSubTransaction();
+ }
+ PG_CATCH();
+ {
+ TeardownHistoricSnapshot(true);
+
+ AbortCurrentTransaction();
+
+ if (using_subtxn)
+ RollbackAndReleaseCurrentSubTransaction();
+
+ PG_RE_THROW();
+ }
+ PG_END_TRY();
+ }
+}
+
/*
* AssertTXNLsnOrder
* Verify LSN ordering of transaction lists in the reorderbuffer
@@ -1532,6 +1760,31 @@ ReorderBufferCleanupTXN(ReorderBuffer *rb, ReorderBufferTXN *txn)
&found);
Assert(found);
+ /*
+ * Remove sequences created in this transaction (if any).
+ *
+ * There's no way to search by XID, so we simply do a seqscan of all
+ * the entries in the hash table. Hopefully there are only a couple
+ * entries in most cases - people generally don't create many new
+ * sequences over and over.
+ */
+ {
+ HASH_SEQ_STATUS scan_status;
+ ReorderBufferSequenceEnt *ent;
+
+ hash_seq_init(&scan_status, rb->sequences);
+ while ((ent = (ReorderBufferSequenceEnt *) hash_seq_search(&scan_status)) != NULL)
+ {
+ /* skip sequences not from this transaction */
+ if (ent->xid != txn->xid)
+ continue;
+
+ (void) hash_search(rb->sequences,
+ (void *) &(ent->rnode),
+ HASH_REMOVE, NULL);
+ }
+ }
+
/* remove entries spilled to disk */
if (rbtxn_is_serialized(txn))
ReorderBufferRestoreCleanup(rb, txn);
@@ -1947,6 +2200,39 @@ ReorderBufferApplyMessage(ReorderBuffer *rb, ReorderBufferTXN *txn,
change->data.msg.message);
}
+/*
+ * Helper function for ReorderBufferProcessTXN for applying sequences.
+ */
+static inline void
+ReorderBufferApplySequence(ReorderBuffer *rb, ReorderBufferTXN *txn,
+ Relation relation, ReorderBufferChange *change,
+ bool streaming)
+{
+ HeapTuple tuple;
+ bool isnull;
+ int64 last_value, log_cnt;
+ bool is_called;
+
+ /* FIXME support streaming */
+ Assert(!streaming);
+
+ tuple = &change->data.sequence.tuple->tuple;
+
+ /*
+ * Extract the internal sequence values, describing the state.
+ *
+ * XXX Seems a bit strange to access it directly. Maybe there's
+ * a better / more correct way?
+ */
+ last_value = DatumGetInt64(heap_getattr(tuple, 1, RelationGetDescr(relation), &isnull));
+ log_cnt = DatumGetInt64(heap_getattr(tuple, 2, RelationGetDescr(relation), &isnull));
+ is_called = DatumGetBool(heap_getattr(tuple, 3, RelationGetDescr(relation), &isnull));
+
+ rb->sequence(rb, txn, change->lsn, relation, true, /* gotta be transactional */
+ change->data.sequence.created,
+ last_value, log_cnt, is_called);
+}
+
/*
* Function to store the command id and snapshot at the end of the current
* stream so that we can reuse the same while sending the next stream.
@@ -2389,6 +2675,31 @@ ReorderBufferProcessTXN(ReorderBuffer *rb, ReorderBufferTXN *txn,
case REORDER_BUFFER_CHANGE_INTERNAL_TUPLECID:
elog(ERROR, "tuplecid value in changequeue");
break;
+
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ Assert(snapshot_now);
+
+ reloid = RelidByRelfilenode(change->data.sequence.relnode.spcNode,
+ change->data.sequence.relnode.relNode);
+
+ if (reloid == InvalidOid)
+ elog(ERROR, "could not map filenode \"%s\" to relation OID",
+ relpathperm(change->data.sequence.relnode,
+ MAIN_FORKNUM));
+
+ relation = RelationIdGetRelation(reloid);
+
+ if (!RelationIsValid(relation))
+ elog(ERROR, "could not open relation with OID %u (for filenode \"%s\")",
+ reloid,
+ relpathperm(change->data.sequence.relnode,
+ MAIN_FORKNUM));
+
+ if (RelationIsLogicallyLogged(relation))
+ ReorderBufferApplySequence(rb, txn, relation, change, streaming);
+
+ RelationClose(relation);
+ break;
}
}
@@ -3776,6 +4087,39 @@ ReorderBufferSerializeChange(ReorderBuffer *rb, ReorderBufferTXN *txn,
memcpy(data, change->data.truncate.relids, size);
data += size;
+ break;
+ }
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ {
+ char *data;
+ ReorderBufferTupleBuf *tup;
+ Size len = 0;
+
+ tup = change->data.sequence.tuple;
+
+ if (tup)
+ {
+ sz += sizeof(HeapTupleData);
+ len = tup->tuple.t_len;
+ sz += len;
+ }
+
+ /* make sure we have enough space */
+ ReorderBufferSerializeReserve(rb, sz);
+
+ data = ((char *) rb->outbuf) + sizeof(ReorderBufferDiskChange);
+ /* might have been reallocated above */
+ ondisk = (ReorderBufferDiskChange *) rb->outbuf;
+
+ if (len)
+ {
+ memcpy(data, &tup->tuple, sizeof(HeapTupleData));
+ data += sizeof(HeapTupleData);
+
+ memcpy(data, tup->tuple.t_data, len);
+ data += len;
+ }
+
break;
}
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
@@ -4040,6 +4384,22 @@ ReorderBufferChangeSize(ReorderBufferChange *change)
{
sz += sizeof(Oid) * change->data.truncate.nrelids;
+ break;
+ }
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ {
+ ReorderBufferTupleBuf *tup;
+ Size len = 0;
+
+ tup = change->data.sequence.tuple;
+
+ if (tup)
+ {
+ sz += sizeof(HeapTupleData);
+ len = tup->tuple.t_len;
+ sz += len;
+ }
+
break;
}
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
@@ -4341,6 +4701,30 @@ ReorderBufferRestoreChange(ReorderBuffer *rb, ReorderBufferTXN *txn,
break;
}
+
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ if (change->data.sequence.tuple)
+ {
+ uint32 tuplelen = ((HeapTuple) data)->t_len;
+
+ change->data.sequence.tuple =
+ ReorderBufferGetTupleBuf(rb, tuplelen - SizeofHeapTupleHeader);
+
+ /* restore ->tuple */
+ memcpy(&change->data.sequence.tuple->tuple, data,
+ sizeof(HeapTupleData));
+ data += sizeof(HeapTupleData);
+
+ /* reset t_data pointer into the new tuplebuf */
+ change->data.sequence.tuple->tuple.t_data =
+ ReorderBufferTupleBufData(change->data.sequence.tuple);
+
+ /* restore tuple data itself */
+ memcpy(change->data.sequence.tuple->tuple.t_data, data, tuplelen);
+ data += tuplelen;
+ }
+ break;
+
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT:
case REORDER_BUFFER_CHANGE_INTERNAL_COMMAND_ID:
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index f07983a43c..24369a1522 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -100,6 +100,7 @@
#include "catalog/pg_subscription_rel.h"
#include "catalog/pg_type.h"
#include "commands/copy.h"
+#include "commands/sequence.h"
#include "miscadmin.h"
#include "parser/parse_relation.h"
#include "pgstat.h"
@@ -357,6 +358,12 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
*
* If the synchronization position is reached (SYNCDONE), then the table can
* be marked as READY and is no longer tracked.
+ *
+ * XXX This needs to handle sequences too - after AlterSubscription_refresh
+ * starts caring about sequences, GetSubscriptionNotReadyRelations won't
+ * return just tables, and we'll have to sync them here. Not sure it's worth
+ * creating a new "sync" worker per sequence, maybe we should just sync them
+ * in the current process (it's pretty light-weight).
*/
static void
process_syncing_tables_for_apply(XLogRecPtr current_lsn)
@@ -871,6 +878,99 @@ copy_table(Relation rel)
logicalrep_rel_close(relmapentry, NoLock);
}
+
+
+/*
+ * Fetch the current state of a sequence (last_value, log_cnt, is_called)
+ * from the publisher, by querying the sequence relation over the
+ * walreceiver connection.
+ */
+static void
+fetch_sequence_data(char *nspname, char *relname,
+ int64 *last_value, int64 *log_cnt, bool *is_called)
+{
+ WalRcvExecResult *res;
+ StringInfoData cmd;
+ TupleTableSlot *slot;
+ Oid tableRow[3] = {INT8OID, INT8OID, BOOLOID};
+
+ initStringInfo(&cmd);
+ appendStringInfo(&cmd, "SELECT last_value, log_cnt, is_called\n"
+ " FROM %s", quote_qualified_identifier(nspname, relname));
+
+ res = walrcv_exec(LogRepWorkerWalRcvConn, cmd.data, 3, tableRow);
+ pfree(cmd.data);
+
+ if (res->status != WALRCV_OK_TUPLES)
+ ereport(ERROR,
+ (errmsg("could not receive list of replicated tables from the publisher: %s",
+ res->err)));
+
+ /* Process the sequence. */
+ slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+ while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+ {
+ bool isnull;
+
+ *last_value = DatumGetInt64(slot_getattr(slot, 1, &isnull));
+ Assert(!isnull);
+
+ *log_cnt = DatumGetInt64(slot_getattr(slot, 2, &isnull));
+ Assert(!isnull);
+
+ *is_called = DatumGetBool(slot_getattr(slot, 3, &isnull));
+ Assert(!isnull);
+
+ ExecClearTuple(slot);
+ }
+ ExecDropSingleTupleTableSlot(slot);
+
+ walrcv_clear_result(res);
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Caller is responsible for locking the local relation.
+ */
+static void
+copy_sequence(Relation rel)
+{
+ LogicalRepRelMapEntry *relmapentry;
+ LogicalRepRelation lrel;
+ int64 last_value = 0,
+ log_cnt = 0;
+ bool is_called = false;
+
+ /* Get the publisher relation info. */
+ fetch_remote_table_info(get_namespace_name(RelationGetNamespace(rel)),
+ RelationGetRelationName(rel), &lrel);
+
+ /* Put the relation into relmap. */
+ logicalrep_relmap_update(&lrel);
+
+ /* Map the publisher relation to local one. */
+ relmapentry = logicalrep_rel_open(lrel.remoteid, NoLock);
+ Assert(rel == relmapentry->localrel);
+
+ Assert(lrel.relkind == RELKIND_SEQUENCE);
+
+ fetch_sequence_data(lrel.nspname, lrel.relname, &last_value, &log_cnt, &is_called);
+
+ elog(WARNING, "sequence %s info last_value %ld log_cnt %ld is_called %d",
+ quote_qualified_identifier(lrel.nspname, lrel.relname),
+ last_value, log_cnt, is_called);
+
+ ResetSequence2(RelationGetRelid(rel), last_value, log_cnt, is_called);
+
+ logicalrep_rel_close(relmapentry, NoLock);
+}
+
+
+
+
/*
* Determine the tablesync slot name.
*
@@ -1106,10 +1206,20 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
originname)));
}
- /* Now do the initial data copy */
- PushActiveSnapshot(GetTransactionSnapshot());
- copy_table(rel);
- PopActiveSnapshot();
+ if (get_rel_relkind(RelationGetRelid(rel)) == RELKIND_SEQUENCE)
+ {
+ /* Now do the initial sequence copy */
+ PushActiveSnapshot(GetTransactionSnapshot());
+ copy_sequence(rel);
+ PopActiveSnapshot();
+ }
+ else
+ {
+ /* Now do the initial data copy */
+ PushActiveSnapshot(GetTransactionSnapshot());
+ copy_table(rel);
+ PopActiveSnapshot();
+ }
res = walrcv_exec(LogRepWorkerWalRcvConn, "COMMIT", 0, NULL);
if (res->status != WALRCV_OK_COMMAND)
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index b9a7a7ffbb..1776b2d5f8 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -144,6 +144,7 @@
#include "catalog/pg_tablespace.h"
#include "commands/tablecmds.h"
#include "commands/tablespace.h"
+#include "commands/sequence.h"
#include "commands/trigger.h"
#include "executor/executor.h"
#include "executor/execPartition.h"
@@ -1060,6 +1061,57 @@ apply_handle_origin(StringInfo s)
errmsg_internal("ORIGIN message sent out of order")));
}
+/*
+ * Handle SEQUENCE message.
+ */
+static void
+apply_handle_sequence(StringInfo s)
+{
+ LogicalRepSequence seq;
+ Oid relid;
+
+ if (handle_streamed_transaction(LOGICAL_REP_MSG_SEQUENCE, s))
+ return;
+
+ logicalrep_read_sequence(s, &seq);
+
+ /*
+ * Transactional sequence updates must arrive as part of a remote
+ * transaction. Non-transactional updates must not - and there must not
+ * be any local transaction open either.
+ */
+ Assert((!seq.transactional) || in_remote_transaction);
+ Assert(!(!seq.transactional && in_remote_transaction));
+ Assert(!(!seq.transactional && IsTransactionState()));
+
+ /*
+ * Make sure we're in a transaction (needed by ResetSequence2). For
+ * non-transactional updates we're guaranteed to start a new one,
+ * and we'll commit it at the end.
+ */
+ if (!IsTransactionState())
+ {
+ StartTransactionCommand();
+ maybe_reread_subscription();
+ }
+
+ relid = RangeVarGetRelid(makeRangeVar(seq.nspname,
+ seq.seqname, -1),
+ RowExclusiveLock, false);
+
+ elog(WARNING, "applying sequence transactional %d created %d last_value %ld log_cnt %ld is_called %d",
+ seq.transactional, seq.created, seq.last_value, seq.log_cnt, seq.is_called);
+
+ /* apply the sequence change */
+ ResetSequence2(relid, seq.last_value, seq.log_cnt, seq.is_called);
+
+ /*
+ * Commit the transaction we started above (we only do this when not in
+ * a remote transaction, i.e. for non-transactional sequence updates).
+ */
+ if (!in_remote_transaction)
+ CommitTransactionCommand();
+}
+
/*
* Handle STREAM START message.
*/
@@ -2302,6 +2354,10 @@ apply_dispatch(StringInfo s)
*/
return;
+ case LOGICAL_REP_MSG_SEQUENCE:
+ apply_handle_sequence(s);
+ return;
+
case LOGICAL_REP_MSG_STREAM_START:
apply_handle_stream_start(s);
return;
diff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c
index e4314af13a..4366c275dc 100644
--- a/src/backend/replication/pgoutput/pgoutput.c
+++ b/src/backend/replication/pgoutput/pgoutput.c
@@ -49,6 +49,10 @@ static void pgoutput_message(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn, XLogRecPtr message_lsn,
bool transactional, const char *prefix,
Size sz, const char *message);
+static void pgoutput_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation rel, bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called);
static bool pgoutput_origin_filter(LogicalDecodingContext *ctx,
RepOriginId origin_id);
static void pgoutput_begin_prepare_txn(LogicalDecodingContext *ctx,
@@ -157,6 +161,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->change_cb = pgoutput_change;
cb->truncate_cb = pgoutput_truncate;
cb->message_cb = pgoutput_message;
+ cb->sequence_cb = pgoutput_sequence;
cb->commit_cb = pgoutput_commit_txn;
cb->begin_prepare_cb = pgoutput_begin_prepare_txn;
@@ -186,6 +191,7 @@ parse_output_parameters(List *options, PGOutputData *data)
bool publication_names_given = false;
bool binary_option_given = false;
bool messages_option_given = false;
+ bool sequences_option_given = false;
bool streaming_given = false;
bool two_phase_option_given = false;
@@ -193,6 +199,7 @@ parse_output_parameters(List *options, PGOutputData *data)
data->streaming = false;
data->messages = false;
data->two_phase = false;
+ data->sequences = true;
foreach(lc, options)
{
@@ -258,6 +265,16 @@ parse_output_parameters(List *options, PGOutputData *data)
data->messages = defGetBoolean(defel);
}
+ else if (strcmp(defel->defname, "sequences") == 0)
+ {
+ if (sequences_option_given)
+ ereport(ERROR,
+ (errcode(ERRCODE_SYNTAX_ERROR),
+ errmsg("conflicting or redundant options")));
+ sequences_option_given = true;
+
+ data->sequences = defGetBoolean(defel);
+ }
else if (strcmp(defel->defname, "streaming") == 0)
{
if (streaming_given)
@@ -865,6 +882,47 @@ pgoutput_message(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
OutputPluginWrite(ctx, true);
}
+static void
+pgoutput_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation rel, bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ PGOutputData *data = (PGOutputData *) ctx->output_plugin_private;
+ TransactionId xid = InvalidTransactionId;
+ RelationSyncEntry *relentry;
+
+ if (!data->sequences)
+ return;
+
+ if (!is_publishable_relation(rel))
+ return;
+
+ /* XXX handle (in_streaming) here */
+
+ relentry = get_rel_sync_entry(data, RelationGetRelid(rel));
+
+ /*
+ * First check the sequence filter.
+ *
+ * We handle just REORDER_BUFFER_CHANGE_SEQUENCE here.
+ */
+ if (!relentry->pubactions.pubsequence)
+ return;
+
+ OutputPluginPrepareWrite(ctx, true);
+ logicalrep_write_sequence(ctx->out,
+ rel,
+ xid,
+ sequence_lsn,
+ transactional,
+ created,
+ last_value,
+ log_cnt,
+ is_called);
+ OutputPluginWrite(ctx, true);
+}
+
/*
* Currently we always forward.
*/
@@ -1128,7 +1186,8 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
entry->streamed_txns = NIL;
entry->replicate_valid = false;
entry->pubactions.pubinsert = entry->pubactions.pubupdate =
- entry->pubactions.pubdelete = entry->pubactions.pubtruncate = false;
+ entry->pubactions.pubdelete = entry->pubactions.pubtruncate =
+ entry->pubactions.pubsequence = false;
entry->publish_as_relid = InvalidOid;
entry->map = NULL; /* will be set by maybe_send_schema() if
* needed */
@@ -1140,6 +1199,7 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
List *pubids = GetRelationPublications(relid);
ListCell *lc;
Oid publish_as_relid = relid;
+ bool is_sequence = (get_rel_relkind(relid) == RELKIND_SEQUENCE);
/* Reload publications if needed before use. */
if (!publications_valid)
@@ -1163,12 +1223,23 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
Publication *pub = lfirst(lc);
bool publish = false;
- if (pub->alltables)
+ if (pub->alltables && (!is_sequence))
{
publish = true;
if (pub->pubviaroot && am_partition)
publish_as_relid = llast_oid(get_partition_ancestors(relid));
}
+ else if (pub->allsequences && is_sequence)
+ {
+ publish = true;
+ }
+
+ /* if a sequence, just cross-check the list of publications */
+ if (!publish && is_sequence)
+ {
+ if (list_member_oid(pubids, pub->oid))
+ publish = true;
+ }
if (!publish)
{
@@ -1219,10 +1290,12 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
entry->pubactions.pubupdate |= pub->pubactions.pubupdate;
entry->pubactions.pubdelete |= pub->pubactions.pubdelete;
entry->pubactions.pubtruncate |= pub->pubactions.pubtruncate;
+ entry->pubactions.pubsequence |= pub->pubactions.pubsequence;
}
if (entry->pubactions.pubinsert && entry->pubactions.pubupdate &&
- entry->pubactions.pubdelete && entry->pubactions.pubtruncate)
+ entry->pubactions.pubdelete && entry->pubactions.pubtruncate &&
+ entry->pubactions.pubsequence)
break;
}
@@ -1367,6 +1440,7 @@ rel_sync_cache_publication_cb(Datum arg, int cacheid, uint32 hashvalue)
entry->pubactions.pubupdate = false;
entry->pubactions.pubdelete = false;
entry->pubactions.pubtruncate = false;
+ entry->pubactions.pubsequence = false;
}
}
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index 13d9994af3..3dd39054e6 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -5495,6 +5495,7 @@ GetRelationPublicationActions(Relation relation)
pubactions->pubupdate |= pubform->pubupdate;
pubactions->pubdelete |= pubform->pubdelete;
pubactions->pubtruncate |= pubform->pubtruncate;
+ pubactions->pubsequence |= pubform->pubsequence;
ReleaseSysCache(tup);
@@ -5503,7 +5504,8 @@ GetRelationPublicationActions(Relation relation)
* other publications.
*/
if (pubactions->pubinsert && pubactions->pubupdate &&
- pubactions->pubdelete && pubactions->pubtruncate)
+ pubactions->pubdelete && pubactions->pubtruncate &&
+ pubactions->pubsequence)
break;
}
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index d6bf725971..6f77354644 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1640,7 +1640,7 @@ psql_completion(const char *text, int start, int end)
/* ALTER PUBLICATION <name> */
else if (Matches("ALTER", "PUBLICATION", MatchAny))
- COMPLETE_WITH("ADD TABLE", "DROP TABLE", "OWNER TO", "RENAME TO", "SET");
+ COMPLETE_WITH("ADD TABLE", "DROP TABLE", "ADD SEQUENCE", "DROP SEQUENCE", "OWNER TO", "RENAME TO", "SET");
/* ALTER PUBLICATION <name> SET */
else if (Matches("ALTER", "PUBLICATION", MatchAny, "SET"))
COMPLETE_WITH("(", "TABLE");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 8cd0252082..8700a72dea 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -11445,6 +11445,11 @@
provolatile => 's', prorettype => 'oid', proargtypes => 'text',
proallargtypes => '{text,oid}', proargmodes => '{i,o}',
proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_tables' },
+{ oid => '8000', descr => 'get OIDs of sequences in a publication',
+ proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+ provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+ proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+ proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
{ oid => '6121',
descr => 'returns whether a relation can be part of a publication',
proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index f332bad4d4..150b8922cf 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
*/
bool puballtables;
+ /*
+ * indicates that this is a special publication which should encompass all
+ * sequences in the database (except for unlogged and temporary ones)
+ */
+ bool puballsequences;
+
/* true if inserts are published */
bool pubinsert;
@@ -52,6 +58,9 @@ CATALOG(pg_publication,6104,PublicationRelationId)
/* true if truncates are published */
bool pubtruncate;
+ /* true if sequences are published */
+ bool pubsequence;
+
/* true if partition changes are published using root schema */
bool pubviaroot;
} FormData_pg_publication;
@@ -72,6 +81,7 @@ typedef struct PublicationActions
bool pubupdate;
bool pubdelete;
bool pubtruncate;
+ bool pubsequence;
} PublicationActions;
typedef struct Publication
@@ -79,6 +89,7 @@ typedef struct Publication
Oid oid;
char *name;
bool alltables;
+ bool allsequences;
bool pubviaroot;
PublicationActions pubactions;
} Publication;
@@ -107,6 +118,9 @@ extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
extern List *GetAllTablesPublications(void);
extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
+extern List *GetPublicationSequenceRelations(Oid pubid);
+
extern bool is_publishable_relation(Relation rel);
extern ObjectAddress publication_add_relation(Oid pubid, Relation targetrel,
bool if_not_exists);
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 40544dd4c7..c28e8695cb 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -48,6 +48,7 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
typedef struct xl_seq_rec
{
RelFileNode node;
+ bool created; /* is this a CREATE SEQUENCE */
/* SEQUENCE TUPLE DATA FOLLOWS AT THE END */
} xl_seq_rec;
@@ -59,6 +60,7 @@ extern ObjectAddress DefineSequence(ParseState *pstate, CreateSeqStmt *stmt);
extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
extern void DeleteSequenceTuple(Oid relid);
extern void ResetSequence(Oid seq_relid);
+extern void ResetSequence2(Oid seq_relid, int64 last_value, int64 log_cnt, bool is_called);
extern void ResetSequenceCaches(void);
extern void seq_redo(XLogReaderState *rptr);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index def9651b34..aa602deae5 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -3643,8 +3643,10 @@ typedef struct AlterPublicationStmt
/* parameters used for ALTER PUBLICATION ... ADD/DROP TABLE */
List *tables; /* List of tables to add/drop */
+ List *sequences; /* List of sequences to add/drop */
bool for_all_tables; /* Special publication for all tables in db */
- DefElemAction tableAction; /* What action to perform with the tables */
+ bool for_all_sequences; /* Special publication for all sequences in db */
+ DefElemAction action; /* What action to perform with the tables/sequences */
} AlterPublicationStmt;
typedef struct CreateSubscriptionStmt
diff --git a/src/include/replication/logicalproto.h b/src/include/replication/logicalproto.h
index 63de90d94a..931bc0722a 100644
--- a/src/include/replication/logicalproto.h
+++ b/src/include/replication/logicalproto.h
@@ -60,6 +60,7 @@ typedef enum LogicalRepMsgType
LOGICAL_REP_MSG_RELATION = 'R',
LOGICAL_REP_MSG_TYPE = 'Y',
LOGICAL_REP_MSG_MESSAGE = 'M',
+ LOGICAL_REP_MSG_SEQUENCE = 'X', /* FIXME change */
LOGICAL_REP_MSG_BEGIN_PREPARE = 'b',
LOGICAL_REP_MSG_PREPARE = 'P',
LOGICAL_REP_MSG_COMMIT_PREPARED = 'K',
@@ -116,6 +117,19 @@ typedef struct LogicalRepTyp
char *typname; /* name of the remote type */
} LogicalRepTyp;
+/* Sequence info */
+typedef struct LogicalRepSequence
+{
+ Oid remoteid; /* unique id of the remote sequence */
+ char *nspname; /* schema name of remote sequence */
+ char *seqname; /* name of the remote sequence */
+ bool transactional;
+ bool created;
+ int64 last_value;
+ int64 log_cnt; /* XXX probably not needed? */
+ bool is_called; /* XXX probably not needed? */
+} LogicalRepSequence;
+
/* Transaction info */
typedef struct LogicalRepBeginData
{
@@ -223,6 +237,12 @@ extern List *logicalrep_read_truncate(StringInfo in,
bool *cascade, bool *restart_seqs);
extern void logicalrep_write_message(StringInfo out, TransactionId xid, XLogRecPtr lsn,
bool transactional, const char *prefix, Size sz, const char *message);
+extern void logicalrep_write_sequence(StringInfo out, Relation rel,
+ TransactionId xid, XLogRecPtr lsn,
+ bool transactional, bool created,
+ int64 last_value, int64 log_cnt,
+ bool is_called);
+extern void logicalrep_read_sequence(StringInfo in, LogicalRepSequence *seqdata);
extern void logicalrep_write_rel(StringInfo out, TransactionId xid,
Relation rel);
extern LogicalRepRelation *logicalrep_read_rel(StringInfo in);
diff --git a/src/include/replication/output_plugin.h b/src/include/replication/output_plugin.h
index 810495ed0e..e3e1b03f32 100644
--- a/src/include/replication/output_plugin.h
+++ b/src/include/replication/output_plugin.h
@@ -88,6 +88,19 @@ typedef void (*LogicalDecodeMessageCB) (struct LogicalDecodingContext *ctx,
Size message_size,
const char *message);
+/*
+ * Called for the generic logical decoding sequences.
+ */
+typedef void (*LogicalDecodeSequenceCB) (struct LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional,
+ bool created,
+ int64 last_value,
+ int64 log_cnt,
+ bool is_called);
+
/*
* Filter changes by origin.
*/
@@ -219,6 +232,7 @@ typedef struct OutputPluginCallbacks
LogicalDecodeTruncateCB truncate_cb;
LogicalDecodeCommitCB commit_cb;
LogicalDecodeMessageCB message_cb;
+ LogicalDecodeSequenceCB sequence_cb;
LogicalDecodeFilterByOriginCB filter_by_origin_cb;
LogicalDecodeShutdownCB shutdown_cb;
diff --git a/src/include/replication/pgoutput.h b/src/include/replication/pgoutput.h
index 0dc460fb70..97eb2a7ef1 100644
--- a/src/include/replication/pgoutput.h
+++ b/src/include/replication/pgoutput.h
@@ -28,6 +28,7 @@ typedef struct PGOutputData
bool streaming;
bool messages;
bool two_phase;
+ bool sequences;
} PGOutputData;
#endif /* PGOUTPUT_H */
diff --git a/src/include/replication/reorderbuffer.h b/src/include/replication/reorderbuffer.h
index 5b40ff75f7..4512ed679b 100644
--- a/src/include/replication/reorderbuffer.h
+++ b/src/include/replication/reorderbuffer.h
@@ -64,7 +64,8 @@ enum ReorderBufferChangeType
REORDER_BUFFER_CHANGE_INTERNAL_SPEC_INSERT,
REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM,
REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT,
- REORDER_BUFFER_CHANGE_TRUNCATE
+ REORDER_BUFFER_CHANGE_TRUNCATE,
+ REORDER_BUFFER_CHANGE_SEQUENCE
};
/* forward declaration */
@@ -158,6 +159,14 @@ typedef struct ReorderBufferChange
uint32 ninvalidations; /* Number of messages */
SharedInvalidationMessage *invalidations; /* invalidation message */
} inval;
+
+ /* Context data for Sequence changes */
+ struct
+ {
+ RelFileNode relnode;
+ bool created;
+ ReorderBufferTupleBuf *tuple;
+ } sequence;
} data;
/*
@@ -430,6 +439,15 @@ typedef void (*ReorderBufferMessageCB) (ReorderBuffer *rb,
const char *prefix, Size sz,
const char *message);
+/* sequence callback signature */
+typedef void (*ReorderBufferSequenceCB) (ReorderBuffer *rb,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional, bool created,
+ int64 last_value, int64 log_cnt,
+ bool is_called);
+
/* begin prepare callback signature */
typedef void (*ReorderBufferBeginPrepareCB) (ReorderBuffer *rb,
ReorderBufferTXN *txn);
@@ -511,6 +529,12 @@ struct ReorderBuffer
*/
HTAB *by_txn;
+ /*
+ * relfilenode => XID lookup table for sequences created in a transaction
+ * (also includes altered sequences, which assigns new relfilenode)
+ */
+ HTAB *sequences;
+
/*
* Transactions that could be a toplevel xact, ordered by LSN of the first
* record bearing that xid.
@@ -541,6 +565,7 @@ struct ReorderBuffer
ReorderBufferApplyTruncateCB apply_truncate;
ReorderBufferCommitCB commit;
ReorderBufferMessageCB message;
+ ReorderBufferSequenceCB sequence;
/*
* Callbacks to be called when streaming a transaction at prepare time.
@@ -635,6 +660,10 @@ void ReorderBufferQueueChange(ReorderBuffer *, TransactionId,
void ReorderBufferQueueMessage(ReorderBuffer *, TransactionId, Snapshot snapshot, XLogRecPtr lsn,
bool transactional, const char *prefix,
Size message_size, const char *message);
+void ReorderBufferQueueSequence(ReorderBuffer *rb, TransactionId xid,
+ Snapshot snapshot, XLogRecPtr lsn, RepOriginId origin_id,
+ RelFileNode rnode, bool transactional, bool created,
+ ReorderBufferTupleBuf *tuplebuf);
void ReorderBufferCommit(ReorderBuffer *, TransactionId,
XLogRecPtr commit_lsn, XLogRecPtr end_lsn,
TimestampTz commit_time, RepOriginId origin_id, XLogRecPtr origin_lsn);
@@ -682,4 +711,7 @@ void ReorderBufferSetRestartPoint(ReorderBuffer *, XLogRecPtr ptr);
void StartupReorderBuffer(void);
+bool ReorderBufferSequenceIsTransactional(ReorderBuffer *rb,
+ RelFileNode rnode, bool created);
+
#endif
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index b5b065a1b6..963ce8d1df 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -160,8 +160,8 @@ DROP TABLE testpub_parted1;
DROP PUBLICATION testpub_forparted, testpub_forparted1;
-- fail - view
CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_view;
-ERROR: "testpub_view" is not a table
-DETAIL: Only tables can be added to publications.
+ERROR: "testpub_view" is not a table or sequence
+DETAIL: Only tables and sequences can be added to publications.
SET client_min_messages = 'ERROR';
CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1, pub_test.testpub_nopk;
RESET client_min_messages;
@@ -182,8 +182,8 @@ Tables:
-- fail - view
ALTER PUBLICATION testpub_default ADD TABLE testpub_view;
-ERROR: "testpub_view" is not a table
-DETAIL: Only tables can be added to publications.
+ERROR: "testpub_view" is not a table or sequence
+DETAIL: Only tables and sequences can be added to publications.
ALTER PUBLICATION testpub_default ADD TABLE testpub_tbl1;
ALTER PUBLICATION testpub_default SET TABLE testpub_tbl1;
ALTER PUBLICATION testpub_default ADD TABLE pub_test.testpub_nopk;
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index e5ab11275d..393be871be 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1451,6 +1451,14 @@ pg_prepared_xacts| SELECT p.transaction,
FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+ n.nspname AS schemaname,
+ c.relname AS sequencename
+ FROM pg_publication p,
+ LATERAL pg_get_publication_sequences((p.pubname)::text) gpt(relid),
+ (pg_class c
+ JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+ WHERE (c.oid = gpt.relid);
pg_publication_tables| SELECT p.pubname,
n.nspname AS schemaname,
c.relname AS tablename
--
2.31.1
Hi,
Here's an updated version of this patch - rebased to current master,
and fixing some of the issues raised in Peter's review.
Mainly, this adds a TAP test to src/test/subscription, focusing on the
various situations with transactional and non-transactional behavior
(with subtransactions and the various ROLLBACK variants).
This new TAP test, however, uncovered an issue with wait_for_catchup(),
which uses pg_current_wal_lsn() to wait for all the changes to be
replicated. That does not work when the sequence gets advanced in a
transaction that is then aborted, e.g. like this:
BEGIN;
SELECT nextval('s') FROM generate_series(1,100);
ROLLBACK;
The root cause is that pg_current_wal_lsn() uses LogwrtResult.Write,
which is advanced by XLogFlush() - and we only call that in
RecordTransactionCommit(). That makes sense, because only committed
changes are "visible". But the non-transactional behavior changes this,
because now some of the changes from aborted transactions may need to be
replicated - which means wait_for_catchup() ends up not waiting for the
sequence change.
One option would be to add an XLogFlush() call to RecordTransactionAbort(),
but my guess is we omit that intentionally (even though aborts should be
fairly rare in most workloads).
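To make that option concrete, a minimal sketch (not part of the attached
patch) would flush whatever WAL the aborted transaction wrote, somewhere
near the end of RecordTransactionAbort():

    /*
     * Hypothetical change, for illustration only: flush the WAL produced
     * by the aborted transaction, so that pg_current_wal_lsn() covers any
     * non-transactional sequence increments it wrote and
     * wait_for_catchup() waits for them to be replicated.
     */
    if (!XLogRecPtrIsInvalid(XactLastRecEnd))
        XLogFlush(XactLastRecEnd);

Whether it's acceptable to pay an extra flush on every abort is exactly
the question above.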
I'm not entirely sure changing this (replicating changes from aborted
xacts) is a good idea. Maybe it'd be better to replicate this "lazily" -
instead of replicating the advances right away, we might remember which
sequences were advanced in the transaction, and then replicate the
current state of those sequences at commit time.
The idea is that if an increment is "invisible", we probably don't need
to replicate it; it's fine to replicate the next "visible" increment
instead. So, for example, given
BEGIN;
SELECT nextval('s');
ROLLBACK;
BEGIN;
SELECT nextval('s');
COMMIT;
we don't need to replicate the first change, but we need to replicate
the second one.
The trouble is we don't decode individual sequence advances, just those
that update the on-disk sequence tuple (so every 32 values or so). So we'd
need to remember the first increment, in a way that is (a) persistent
across restarts and (b) shared by all backends.
The other challenge seems to be the ordering of the changes - at the
moment we have no issues with this, because increments on the same
sequence are replicated immediately, in WAL order. But postponing them to
commit time would affect this ordering.
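Just to illustrate the "lazy" idea, here's a rough backend-local sketch
(all names are invented, and it deliberately ignores the requirement that
the bookkeeping be persistent and shared by all backends, so it's not a
workable design as-is - merely a way to visualize the bookkeeping):

    /* hypothetical sketch only - needs utils/hsearch.h, utils/memutils.h */
    static HTAB *seqs_advanced = NULL;  /* sequences advanced in this xact */

    static void
    remember_advanced_sequence(Oid seqrelid)
    {
        bool    found;

        if (seqs_advanced == NULL)
        {
            HASHCTL ctl;

            memset(&ctl, 0, sizeof(ctl));
            ctl.keysize = sizeof(Oid);
            ctl.entrysize = sizeof(Oid);
            ctl.hcxt = TopTransactionContext;
            seqs_advanced = hash_create("advanced sequences", 16, &ctl,
                                        HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
        }

        (void) hash_search(seqs_advanced, &seqrelid, HASH_ENTER, &found);
    }

    /* called from the pre-commit path */
    static void
    replicate_advanced_sequences(void)
    {
        HASH_SEQ_STATUS hstat;
        Oid        *seqrelid;

        if (seqs_advanced == NULL)
            return;

        hash_seq_init(&hstat, seqs_advanced);
        while ((seqrelid = (Oid *) hash_seq_search(&hstat)) != NULL)
            log_current_sequence_state(*seqrelid);  /* hypothetical helper */

        seqs_advanced = NULL;   /* freed along with TopTransactionContext */
    }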
I've also briefly looked at the code duplication - there are a couple of
functions in the patch that I shamelessly copy-pasted and tweaked to
handle sequences instead of tables:
publicationcmds.c
-----------------
1) OpenTableList/CloseTableList -> OpenSequenceList/CloseSequenceList
Trivial differences, trivial to get rid of - the only difference is
pretty much just table_open vs. relation_open (a rough consolidation
sketch follows below).
2) AlterPublicationTables -> AlterPublicationSequences
This is a bit more complicated, because the table variant also handles
inheritance (which I think does not apply to sequences). Other than
that, it just calls the functions from (1).
subscriptioncmds.c
------------------
1) fetch_table_list, fetch_sequence_list
Minimal differences, essentially just the catalog name.
2) AlterSubscription_refresh
A lot of duplication, but the code handling tables and sequences is
almost exactly the same and can be reused fairly easily (moved to a
separate function, called for tables and then sequences).
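For the OpenTableList/OpenSequenceList pair, the consolidation could look
roughly like this sketch (the function name and the relkind check are
mine, not from the patch); the table variant would additionally need the
inheritance/partitioning recursion that OpenTableList does today:

    /* hypothetical replacement for OpenTableList/OpenSequenceList */
    static List *
    OpenRelationList(List *rels, bool expect_sequences)
    {
        List       *relids = NIL;
        List       *result = NIL;
        ListCell   *lc;

        foreach(lc, rels)
        {
            RangeVar   *rv = castNode(RangeVar, lfirst(lc));
            Relation    rel;
            Oid         myrelid;

            /* allow query cancel in case this takes a long time */
            CHECK_FOR_INTERRUPTS();

            rel = relation_openrv(rv, ShareUpdateExclusiveLock);
            myrelid = RelationGetRelid(rel);

            if (expect_sequences &&
                rel->rd_rel->relkind != RELKIND_SEQUENCE)
                ereport(ERROR,
                        (errcode(ERRCODE_WRONG_OBJECT_TYPE),
                         errmsg("\"%s\" is not a sequence", rv->relname)));

            /* filter out duplicates if user specifies "foo, foo" */
            if (list_member_oid(relids, myrelid))
            {
                relation_close(rel, ShareUpdateExclusiveLock);
                continue;
            }

            result = lappend(result, rel);
            relids = lappend_oid(relids, myrelid);
        }

        list_free(relids);
        return result;
    }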
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Attachments:
0001-Logical-decoding-replication-of-sequences-20210729.patch (text/x-patch)
diff --git a/contrib/test_decoding/Makefile b/contrib/test_decoding/Makefile
index 9a31e0b879..56ddc3abae 100644
--- a/contrib/test_decoding/Makefile
+++ b/contrib/test_decoding/Makefile
@@ -5,7 +5,8 @@ PGFILEDESC = "test_decoding - example of a logical decoding output plugin"
REGRESS = ddl xact rewrite toast permissions decoding_in_xact \
decoding_into_rel binary prepared replorigin time messages \
- spill slot truncate stream stats twophase twophase_stream
+ spill slot truncate stream stats twophase twophase_stream \
+ sequence
ISOLATION = mxact delayed_startup ondisk_startup concurrent_ddl_dml \
oldest_xmin snapshot_transfer subxact_without_top concurrent_stream \
twophase_snapshot
diff --git a/contrib/test_decoding/expected/sequence.out b/contrib/test_decoding/expected/sequence.out
new file mode 100644
index 0000000000..07f3e3e92c
--- /dev/null
+++ b/contrib/test_decoding/expected/sequence.out
@@ -0,0 +1,327 @@
+-- predictability
+SET synchronous_commit = on;
+SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'test_decoding');
+ ?column?
+----------
+ init
+(1 row)
+
+CREATE SEQUENCE test_sequence;
+-- test the sequence changes by several nextval() calls
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 1
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 2
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 4
+(1 row)
+
+-- test the sequence changes by several ALTER commands
+ALTER SEQUENCE test_sequence INCREMENT BY 10;
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 14
+(1 row)
+
+ALTER SEQUENCE test_sequence START WITH 3000;
+ALTER SEQUENCE test_sequence MAXVALUE 10000;
+ALTER SEQUENCE test_sequence RESTART WITH 4000;
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 4000
+(1 row)
+
+-- test the sequence changes by several setval() calls
+SELECT setval('test_sequence', 3500);
+ setval
+--------
+ 3500
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3510
+(1 row)
+
+SELECT setval('test_sequence', 3500, true);
+ setval
+--------
+ 3500
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3510
+(1 row)
+
+SELECT setval('test_sequence', 3500, false);
+ setval
+--------
+ 3500
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3500
+(1 row)
+
+-- show results and drop sequence
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+ data
+------------------------------------------------------------------------------------------------------
+ BEGIN
+ sequence: public.test_sequence transactional: 1 created: 1 last_value: 1, log_cnt: 0 is_called: 0
+ COMMIT
+ sequence: public.test_sequence transactional: 0 created: 0 last_value: 33, log_cnt: 0 is_called: 1
+ BEGIN
+ sequence: public.test_sequence transactional: 1 created: 1 last_value: 4, log_cnt: 0 is_called: 1
+ COMMIT
+ sequence: public.test_sequence transactional: 0 created: 0 last_value: 334, log_cnt: 0 is_called: 1
+ BEGIN
+ sequence: public.test_sequence transactional: 1 created: 1 last_value: 14, log_cnt: 32 is_called: 1
+ COMMIT
+ BEGIN
+ sequence: public.test_sequence transactional: 1 created: 1 last_value: 14, log_cnt: 0 is_called: 1
+ COMMIT
+ BEGIN
+ sequence: public.test_sequence transactional: 1 created: 1 last_value: 4000, log_cnt: 0 is_called: 0
+ COMMIT
+ sequence: public.test_sequence transactional: 0 created: 0 last_value: 4320, log_cnt: 0 is_called: 1
+ sequence: public.test_sequence transactional: 0 created: 0 last_value: 3500, log_cnt: 0 is_called: 1
+ sequence: public.test_sequence transactional: 0 created: 0 last_value: 3830, log_cnt: 0 is_called: 1
+ sequence: public.test_sequence transactional: 0 created: 0 last_value: 3500, log_cnt: 0 is_called: 1
+ sequence: public.test_sequence transactional: 0 created: 0 last_value: 3830, log_cnt: 0 is_called: 1
+ sequence: public.test_sequence transactional: 0 created: 0 last_value: 3500, log_cnt: 0 is_called: 0
+ sequence: public.test_sequence transactional: 0 created: 0 last_value: 3820, log_cnt: 0 is_called: 1
+(24 rows)
+
+DROP SEQUENCE test_sequence;
+-- rollback on sequence creation and update
+BEGIN;
+CREATE SEQUENCE test_sequence;
+CREATE TABLE test_table (a INT);
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 1
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 2
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3
+(1 row)
+
+SELECT setval('test_sequence', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3001
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3002
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3003
+(1 row)
+
+ALTER SEQUENCE test_sequence RESTART WITH 6000;
+INSERT INTO test_table VALUES( (SELECT nextval('test_sequence')) );
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 6001
+(1 row)
+
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '0');
+ data
+------
+(0 rows)
+
+-- rollback on table creation with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '0');
+ data
+------
+(0 rows)
+
+-- rollback on table with serial column
+CREATE TABLE test_table (a SERIAL, b INT);
+BEGIN;
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+-- check table and sequence values after rollback
+SELECT * from test_table_a_seq;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+ 3003 | 30 | t
+(1 row)
+
+SELECT nextval('test_table_a_seq');
+ nextval
+---------
+ 3004
+(1 row)
+
+DROP TABLE test_table;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+ data
+---------------------------------------------------------------------------------------------------------
+ BEGIN
+ sequence: public.test_table_a_seq transactional: 1 created: 1 last_value: 1, log_cnt: 0 is_called: 0
+ COMMIT
+ sequence: public.test_table_a_seq transactional: 0 created: 0 last_value: 33, log_cnt: 0 is_called: 1
+ sequence: public.test_table_a_seq transactional: 0 created: 0 last_value: 3000, log_cnt: 0 is_called: 1
+ sequence: public.test_table_a_seq transactional: 0 created: 0 last_value: 3033, log_cnt: 0 is_called: 1
+ BEGIN
+ COMMIT
+(8 rows)
+
+-- savepoint test on table with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+SAVEPOINT a;
+INSERT INTO test_table (b) VALUES (300);
+ROLLBACK TO SAVEPOINT a;
+DROP TABLE test_table;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+ data
+-------------------------------------------------------------------------------------------------------
+ BEGIN
+ sequence: public.test_table_a_seq transactional: 1 created: 1 last_value: 1, log_cnt: 0 is_called: 0
+ sequence: public.test_table_a_seq transactional: 1 created: 0 last_value: 33, log_cnt: 0 is_called: 1
+ table public.test_table: INSERT: a[integer]:1 b[integer]:100
+ table public.test_table: INSERT: a[integer]:2 b[integer]:200
+ COMMIT
+(6 rows)
+
+-- savepoint test on sequence
+BEGIN;
+CREATE SEQUENCE test_sequence;
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 1
+(1 row)
+
+SELECT setval('test_sequence', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3001
+(1 row)
+
+SAVEPOINT a;
+ALTER SEQUENCE test_sequence START WITH 7000;
+SELECT setval('test_sequence', 5000);
+ setval
+--------
+ 5000
+(1 row)
+
+ROLLBACK TO SAVEPOINT a;
+SELECT * FROM test_sequence;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+ 3001 | 32 | t
+(1 row)
+
+DROP SEQUENCE test_sequence;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+ data
+------------------------------------------------------------------------------------------------------
+ BEGIN
+ sequence: public.test_sequence transactional: 1 created: 1 last_value: 1, log_cnt: 0 is_called: 0
+ sequence: public.test_sequence transactional: 1 created: 0 last_value: 33, log_cnt: 0 is_called: 1
+ sequence: public.test_sequence transactional: 1 created: 0 last_value: 3000, log_cnt: 0 is_called: 1
+ sequence: public.test_sequence transactional: 1 created: 0 last_value: 3033, log_cnt: 0 is_called: 1
+ COMMIT
+(6 rows)
+
+SELECT pg_drop_replication_slot('regression_slot');
+ pg_drop_replication_slot
+--------------------------
+
+(1 row)
+
diff --git a/contrib/test_decoding/sql/sequence.sql b/contrib/test_decoding/sql/sequence.sql
new file mode 100644
index 0000000000..d8a34738f3
--- /dev/null
+++ b/contrib/test_decoding/sql/sequence.sql
@@ -0,0 +1,119 @@
+-- predictability
+SET synchronous_commit = on;
+SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'test_decoding');
+
+CREATE SEQUENCE test_sequence;
+
+-- test the sequence changes by several nextval() calls
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+
+-- test the sequence changes by several ALTER commands
+ALTER SEQUENCE test_sequence INCREMENT BY 10;
+SELECT nextval('test_sequence');
+
+ALTER SEQUENCE test_sequence START WITH 3000;
+ALTER SEQUENCE test_sequence MAXVALUE 10000;
+ALTER SEQUENCE test_sequence RESTART WITH 4000;
+SELECT nextval('test_sequence');
+
+-- test the sequence changes by several setval() calls
+SELECT setval('test_sequence', 3500);
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3500, true);
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3500, false);
+SELECT nextval('test_sequence');
+
+-- show results and drop sequence
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+DROP SEQUENCE test_sequence;
+
+-- rollback on sequence creation and update
+BEGIN;
+CREATE SEQUENCE test_sequence;
+CREATE TABLE test_table (a INT);
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3000);
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+ALTER SEQUENCE test_sequence RESTART WITH 6000;
+INSERT INTO test_table VALUES( (SELECT nextval('test_sequence')) );
+SELECT nextval('test_sequence');
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '0');
+
+-- rollback on table creation with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '0');
+
+-- rollback on table with serial column
+CREATE TABLE test_table (a SERIAL, b INT);
+
+BEGIN;
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+
+-- check table and sequence values after rollback
+SELECT * from test_table_a_seq;
+SELECT nextval('test_table_a_seq');
+
+DROP TABLE test_table;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+
+-- savepoint test on table with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+SAVEPOINT a;
+INSERT INTO test_table (b) VALUES (300);
+ROLLBACK TO SAVEPOINT a;
+DROP TABLE test_table;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+
+-- savepoint test on sequence
+BEGIN;
+CREATE SEQUENCE test_sequence;
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3000);
+SELECT nextval('test_sequence');
+SAVEPOINT a;
+ALTER SEQUENCE test_sequence START WITH 7000;
+SELECT setval('test_sequence', 5000);
+ROLLBACK TO SAVEPOINT a;
+SELECT * FROM test_sequence;
+DROP SEQUENCE test_sequence;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+
+SELECT pg_drop_replication_slot('regression_slot');
diff --git a/contrib/test_decoding/test_decoding.c b/contrib/test_decoding/test_decoding.c
index e5cd84e85e..54d734cb85 100644
--- a/contrib/test_decoding/test_decoding.c
+++ b/contrib/test_decoding/test_decoding.c
@@ -35,6 +35,7 @@ typedef struct
bool include_timestamp;
bool skip_empty_xacts;
bool only_local;
+ bool skip_sequences;
} TestDecodingData;
/*
@@ -76,6 +77,10 @@ static void pg_decode_message(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn, XLogRecPtr message_lsn,
bool transactional, const char *prefix,
Size sz, const char *message);
+static void pg_decode_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation rel, bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called);
static bool pg_decode_filter_prepare(LogicalDecodingContext *ctx,
TransactionId xid,
const char *gid);
@@ -141,6 +146,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->filter_by_origin_cb = pg_decode_filter;
cb->shutdown_cb = pg_decode_shutdown;
cb->message_cb = pg_decode_message;
+ cb->sequence_cb = pg_decode_sequence;
cb->filter_prepare_cb = pg_decode_filter_prepare;
cb->begin_prepare_cb = pg_decode_begin_prepare_txn;
cb->prepare_cb = pg_decode_prepare_txn;
@@ -175,6 +181,9 @@ pg_decode_startup(LogicalDecodingContext *ctx, OutputPluginOptions *opt,
data->skip_empty_xacts = false;
data->only_local = false;
+ /* skip sequences by default for backwards compatibility */
+ data->skip_sequences = true;
+
ctx->output_plugin_private = data;
opt->output_type = OUTPUT_PLUGIN_TEXTUAL_OUTPUT;
@@ -265,6 +274,17 @@ pg_decode_startup(LogicalDecodingContext *ctx, OutputPluginOptions *opt,
errmsg("could not parse value \"%s\" for parameter \"%s\"",
strVal(elem->arg), elem->defname)));
}
+ else if (strcmp(elem->defname, "skip-sequences") == 0)
+ {
+
+ if (elem->arg == NULL)
+ continue; /* true by default */
+ else if (!parse_bool(strVal(elem->arg), &data->skip_sequences))
+ ereport(ERROR,
+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ errmsg("could not parse value \"%s\" for parameter \"%s\"",
+ strVal(elem->arg), elem->defname)));
+ }
else
{
ereport(ERROR,
@@ -744,6 +764,26 @@ pg_decode_message(LogicalDecodingContext *ctx,
OutputPluginWrite(ctx, true);
}
+static void
+pg_decode_sequence(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ TestDecodingData *data = ctx->output_plugin_private;
+ char *nspname = get_namespace_name(RelationGetNamespace(rel));
+
+ /* return if requested to skip_sequences */
+ if (data->skip_sequences)
+ return;
+
+ OutputPluginPrepareWrite(ctx, true);
+ appendStringInfo(ctx->out, "sequence: %s.%s transactional: %d created: %d last_value: " INT64_FORMAT ", log_cnt: " INT64_FORMAT " is_called: %d",
+ nspname, RelationGetRelationName(rel),
+ transactional, created, last_value, log_cnt, is_called);
+ OutputPluginWrite(ctx, true);
+}
+
static void
pg_decode_stream_start(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn)
diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 2b2c70a26e..a342a54629 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -9430,6 +9430,11 @@ SCRAM-SHA-256$<replaceable><iteration count></replaceable>:<replaceable>&l
<entry>prepared transactions</entry>
</row>
+ <row>
+ <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+ <entry>publications and their associated sequences</entry>
+ </row>
+
<row>
<entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
<entry>publications and their associated tables</entry>
@@ -11264,6 +11269,72 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
</sect1>
+ <sect1 id="view-pg-publication-sequences">
+ <title><structname>pg_publication_sequences</structname></title>
+
+ <indexterm zone="view-pg-publication-sequences">
+ <primary>pg_publication_sequences</primary>
+ </indexterm>
+
+ <para>
+ The view <structname>pg_publication_sequences</structname> provides
+ information about the mapping between publications and the sequences they
+ contain. Unlike the underlying catalog
+ <link linkend="catalog-pg-publication-rel"><structname>pg_publication_rel</structname></link>,
+ this view expands
+ publications defined as <literal>FOR ALL SEQUENCES</literal>, so for such
+ publications there will be a row for each eligible sequence.
+ </para>
+
+ <table>
+ <title><structname>pg_publication_sequences</structname> Columns</title>
+ <tgroup cols="1">
+ <thead>
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ Column Type
+ </para>
+ <para>
+ Description
+ </para></entry>
+ </row>
+ </thead>
+
+ <tbody>
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>pubname</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+ </para>
+ <para>
+ Name of publication
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>schemaname</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+ </para>
+ <para>
+ Name of schema containing sequence
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>sequencename</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+ </para>
+ <para>
+ Name of sequence
+ </para></entry>
+ </row>
+ </tbody>
+ </tgroup>
+ </table>
+ </sect1>
+
<sect1 id="view-pg-publication-tables">
<title><structname>pg_publication_tables</structname></title>
diff --git a/doc/src/sgml/ref/alter_publication.sgml b/doc/src/sgml/ref/alter_publication.sgml
index faa114b2c6..c68a1573de 100644
--- a/doc/src/sgml/ref/alter_publication.sgml
+++ b/doc/src/sgml/ref/alter_publication.sgml
@@ -24,6 +24,9 @@ PostgreSQL documentation
ALTER PUBLICATION <replaceable class="parameter">name</replaceable> ADD TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [, ...]
ALTER PUBLICATION <replaceable class="parameter">name</replaceable> SET TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [, ...]
ALTER PUBLICATION <replaceable class="parameter">name</replaceable> DROP TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [, ...]
+ALTER PUBLICATION <replaceable class="parameter">name</replaceable> ADD SEQUENCE <replaceable class="parameter">sequence_name</replaceable> [ * ] [, ...]
+ALTER PUBLICATION <replaceable class="parameter">name</replaceable> SET SEQUENCE <replaceable class="parameter">sequence_name</replaceable> [ * ] [, ...]
+ALTER PUBLICATION <replaceable class="parameter">name</replaceable> DROP SEQUENCE <replaceable class="parameter">sequence_name</replaceable> [ * ] [, ...]
ALTER PUBLICATION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
ALTER PUBLICATION <replaceable class="parameter">name</replaceable> OWNER TO { <replaceable>new_owner</replaceable> | CURRENT_ROLE | CURRENT_USER | SESSION_USER }
ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <replaceable>new_name</replaceable>
@@ -50,7 +53,18 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
</para>
<para>
- The fourth variant of this command listed in the synopsis can change
+ The next three variants change which sequences are part of the publication.
+ The <literal>SET SEQUENCE</literal> clause will replace the list of sequences
+ in the publication with the specified one. The <literal>ADD SEQUENCE</literal>
+ and <literal>DROP SEQUENCE</literal> clauses will add and remove one or more
+ sequences from the publication. Note that adding sequences to a publication
+ that is already subscribed to will require an <literal>ALTER SUBSCRIPTION
+ ... REFRESH PUBLICATION</literal> action on the subscribing side in order
+ to become effective.
+ </para>
+
+ <para>
+ The seventh variant of this command listed in the synopsis can change
all of the publication properties specified in
<xref linkend="sql-createpublication"/>. Properties not mentioned in the
command retain their previous settings.
@@ -62,7 +76,8 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
<para>
You must own the publication to use <command>ALTER PUBLICATION</command>.
- Adding a table to a publication additionally requires owning that table.
+ Adding a table to a publication additionally requires owning that table,
+ and the same requirement applies to sequences.
To alter the owner, you must also be a direct or indirect member of the new
owning role. The new owner must have <literal>CREATE</literal> privilege on
the database. Also, the new owner of a <literal>FOR ALL TABLES</literal>
@@ -97,6 +112,15 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><replaceable class="parameter">sequence_name</replaceable></term>
+ <listitem>
+ <para>
+ Name of an existing sequence.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><literal>SET ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
<listitem>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index a6f994450d..ff1dcd3bfd 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -144,8 +144,8 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
<term><literal>REFRESH PUBLICATION</literal></term>
<listitem>
<para>
- Fetch missing table information from publisher. This will start
- replication of tables that were added to the subscribed-to publications
+ Fetch missing table and sequence information from publisher. This will start
+ replication of tables and sequences that were added to the subscribed-to publications
since the last invocation of <command>REFRESH PUBLICATION</command> or
since <command>CREATE SUBSCRIPTION</command>.
</para>
@@ -162,7 +162,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
Specifies whether the existing data in the publications that are
being subscribed to should be copied once the replication starts.
The default is <literal>true</literal>. (Previously subscribed
- tables are not copied.)
+ tables and sequences are not copied.)
</para>
</listitem>
</varlistentry>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 2a2fe03c13..a66a5a2aec 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -51,7 +51,8 @@ check_publication_add_relation(Relation targetrel)
{
/* Must be a regular or partitioned table */
if (RelationGetForm(targetrel)->relkind != RELKIND_RELATION &&
- RelationGetForm(targetrel)->relkind != RELKIND_PARTITIONED_TABLE)
+ RelationGetForm(targetrel)->relkind != RELKIND_PARTITIONED_TABLE &&
+ RelationGetForm(targetrel)->relkind != RELKIND_SEQUENCE)
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
errmsg("cannot add relation \"%s\" to publication",
@@ -98,7 +99,8 @@ static bool
is_publishable_class(Oid relid, Form_pg_class reltuple)
{
return (reltuple->relkind == RELKIND_RELATION ||
- reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+ reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+ reltuple->relkind == RELKIND_SEQUENCE) &&
!IsCatalogRelationOid(relid) &&
reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
relid >= FirstNormalObjectId;
@@ -271,6 +273,10 @@ GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
pubrel = (Form_pg_publication_rel) GETSTRUCT(tup);
+ /* skip sequences here */
+ if (get_rel_relkind(pubrel->prrelid) == RELKIND_SEQUENCE)
+ continue;
+
if (get_rel_relkind(pubrel->prrelid) == RELKIND_PARTITIONED_TABLE &&
pub_partopt != PUBLICATION_PART_ROOT)
{
@@ -304,6 +310,49 @@ GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
return result;
}
+/*
+ * Gets list of relation oids for a publication (sequences only).
+ *
+ * This should only be used for normal publications; FOR ALL SEQUENCES
+ * publications should use GetAllSequencesPublicationRelations().
+ */
+List *
+GetPublicationSequenceRelations(Oid pubid)
+{
+ List *result;
+ Relation pubrelsrel;
+ ScanKeyData scankey;
+ SysScanDesc scan;
+ HeapTuple tup;
+
+ /* Find all sequences associated with the publication. */
+ pubrelsrel = table_open(PublicationRelRelationId, AccessShareLock);
+
+ ScanKeyInit(&scankey,
+ Anum_pg_publication_rel_prpubid,
+ BTEqualStrategyNumber, F_OIDEQ,
+ ObjectIdGetDatum(pubid));
+
+ scan = systable_beginscan(pubrelsrel, PublicationRelPrrelidPrpubidIndexId,
+ true, NULL, 1, &scankey);
+
+ result = NIL;
+ while (HeapTupleIsValid(tup = systable_getnext(scan)))
+ {
+ Form_pg_publication_rel pubrel;
+
+ pubrel = (Form_pg_publication_rel) GETSTRUCT(tup);
+
+ if (get_rel_relkind(pubrel->prrelid) == RELKIND_SEQUENCE)
+ result = lappend_oid(result, pubrel->prrelid);
+ }
+
+ systable_endscan(scan);
+ table_close(pubrelsrel, AccessShareLock);
+
+ return result;
+}
+
/*
* Gets list of publication oids for publications marked as FOR ALL TABLES.
*/
@@ -404,6 +453,46 @@ GetAllTablesPublicationRelations(bool pubviaroot)
return result;
}
+/*
+ * Gets list of all sequences published by FOR ALL SEQUENCES publication(s).
+ *
+ * Unlike tables, sequences have no partitioning to consider, so this simply
+ * returns all publishable sequences in the database.
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+ Relation classRel;
+ ScanKeyData key[1];
+ TableScanDesc scan;
+ HeapTuple tuple;
+ List *result = NIL;
+
+ classRel = table_open(RelationRelationId, AccessShareLock);
+
+ ScanKeyInit(&key[0],
+ Anum_pg_class_relkind,
+ BTEqualStrategyNumber, F_CHAREQ,
+ CharGetDatum(RELKIND_SEQUENCE));
+
+ scan = table_beginscan_catalog(classRel, 1, key);
+
+ while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+ {
+ Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+ Oid relid = relForm->oid;
+
+ if (is_publishable_class(relid, relForm))
+ result = lappend_oid(result, relid);
+ }
+
+ table_endscan(scan);
+
+ table_close(classRel, AccessShareLock);
+ return result;
+}
+
/*
* Get publication using oid
*
@@ -426,10 +515,12 @@ GetPublication(Oid pubid)
pub->oid = pubid;
pub->name = pstrdup(NameStr(pubform->pubname));
pub->alltables = pubform->puballtables;
+ pub->allsequences = pubform->puballsequences;
pub->pubactions.pubinsert = pubform->pubinsert;
pub->pubactions.pubupdate = pubform->pubupdate;
pub->pubactions.pubdelete = pubform->pubdelete;
pub->pubactions.pubtruncate = pubform->pubtruncate;
+ pub->pubactions.pubsequence = pubform->pubsequence;
pub->pubviaroot = pubform->pubviaroot;
ReleaseSysCache(tup);
@@ -555,3 +646,56 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
SRF_RETURN_DONE(funcctx);
}
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+ FuncCallContext *funcctx;
+ char *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+ Publication *publication;
+ List *sequences;
+
+ /* stuff done only on the first call of the function */
+ if (SRF_IS_FIRSTCALL())
+ {
+ MemoryContext oldcontext;
+
+ /* create a function context for cross-call persistence */
+ funcctx = SRF_FIRSTCALL_INIT();
+
+ /* switch to memory context appropriate for multiple function calls */
+ oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+ publication = GetPublicationByName(pubname, false);
+
+ /*
+ * For a FOR ALL SEQUENCES publication return all publishable
+ * sequences in the database, otherwise just the sequences
+ * explicitly added to the publication.
+ */
+ if (publication->allsequences)
+ sequences = GetAllSequencesPublicationRelations();
+ else
+ sequences = GetPublicationSequenceRelations(publication->oid);
+
+ funcctx->user_fctx = (void *) sequences;
+
+ MemoryContextSwitchTo(oldcontext);
+ }
+
+ /* stuff done on every call of the function */
+ funcctx = SRF_PERCALL_SETUP();
+ sequences = (List *) funcctx->user_fctx;
+
+ if (funcctx->call_cntr < list_length(sequences))
+ {
+ Oid relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+ SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+ }
+
+ SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 55f6e3711d..9bec49bc3e 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -372,6 +372,16 @@ CREATE VIEW pg_publication_tables AS
pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
WHERE C.oid = GPT.relid;
+CREATE VIEW pg_publication_sequences AS
+ SELECT
+ P.pubname AS pubname,
+ N.nspname AS schemaname,
+ C.relname AS sequencename
+ FROM pg_publication P,
+ LATERAL pg_get_publication_sequences(P.pubname) GPT,
+ pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+ WHERE C.oid = GPT.relid;
+
CREATE VIEW pg_locks AS
SELECT * FROM pg_lock_status() AS L;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 8487eeb7e6..90ad83d4a1 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -16,6 +16,7 @@
#include "access/genam.h"
#include "access/htup_details.h"
+#include "access/relation.h"
#include "access/table.h"
#include "access/xact.h"
#include "catalog/catalog.h"
@@ -54,6 +55,12 @@ static void PublicationAddTables(Oid pubid, List *rels, bool if_not_exists,
AlterPublicationStmt *stmt);
static void PublicationDropTables(Oid pubid, List *rels, bool missing_ok);
+static List *OpenSequenceList(List *sequences);
+static void CloseSequenceList(List *rels);
+static void PublicationAddSequences(Oid pubid, List *rels, bool if_not_exists,
+ AlterPublicationStmt *stmt);
+static void PublicationDropSequences(Oid pubid, List *rels, bool missing_ok);
+
static void
parse_publication_options(ParseState *pstate,
List *options,
@@ -72,6 +79,7 @@ parse_publication_options(ParseState *pstate,
pubactions->pubupdate = true;
pubactions->pubdelete = true;
pubactions->pubtruncate = true;
+ pubactions->pubsequence = true;
*publish_via_partition_root = false;
/* Parse options */
@@ -96,6 +104,7 @@ parse_publication_options(ParseState *pstate,
pubactions->pubupdate = false;
pubactions->pubdelete = false;
pubactions->pubtruncate = false;
+ pubactions->pubsequence = false;
*publish_given = true;
publish = defGetString(defel);
@@ -118,6 +127,8 @@ parse_publication_options(ParseState *pstate,
pubactions->pubdelete = true;
else if (strcmp(publish_opt, "truncate") == 0)
pubactions->pubtruncate = true;
+ else if (strcmp(publish_opt, "sequence") == 0)
+ pubactions->pubsequence = true;
else
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
@@ -208,6 +219,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
BoolGetDatum(pubactions.pubdelete);
values[Anum_pg_publication_pubtruncate - 1] =
BoolGetDatum(pubactions.pubtruncate);
+ values[Anum_pg_publication_pubsequence - 1] =
+ BoolGetDatum(pubactions.pubsequence);
values[Anum_pg_publication_pubviaroot - 1] =
BoolGetDatum(publish_via_partition_root);
@@ -373,9 +386,9 @@ AlterPublicationTables(AlterPublicationStmt *stmt, Relation rel,
rels = OpenTableList(stmt->tables);
- if (stmt->tableAction == DEFELEM_ADD)
+ if (stmt->action == DEFELEM_ADD)
PublicationAddTables(pubid, rels, false, stmt);
- else if (stmt->tableAction == DEFELEM_DROP)
+ else if (stmt->action == DEFELEM_DROP)
PublicationDropTables(pubid, rels, false);
else /* DEFELEM_SET */
{
@@ -426,6 +439,82 @@ AlterPublicationTables(AlterPublicationStmt *stmt, Relation rel,
CloseTableList(rels);
}
+/*
+ * Add or remove sequence to/from publication.
+ */
+static void
+AlterPublicationSequences(AlterPublicationStmt *stmt, Relation rel,
+ HeapTuple tup)
+{
+ List *rels = NIL;
+ Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
+ Oid pubid = pubform->oid;
+
+ /* Check that user is allowed to manipulate the publication sequences. */
+ if (pubform->puballsequences)
+ ereport(ERROR,
+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("publication \"%s\" is defined as FOR ALL SEQUENCES",
+ NameStr(pubform->pubname)),
+ errdetail("Sequences cannot be added to or dropped from FOR ALL TABLES publications.")));
+
+ Assert(list_length(stmt->sequences) > 0);
+
+ rels = OpenSequenceList(stmt->sequences);
+
+ if (stmt->action == DEFELEM_ADD)
+ PublicationAddSequences(pubid, rels, false, stmt);
+ else if (stmt->action == DEFELEM_DROP)
+ PublicationDropSequences(pubid, rels, false);
+ else /* DEFELEM_SET */
+ {
+ List *oldrelids = GetPublicationSequenceRelations(pubid);
+ List *delrels = NIL;
+ ListCell *oldlc;
+
+ /* Calculate which relations to drop. */
+ foreach(oldlc, oldrelids)
+ {
+ Oid oldrelid = lfirst_oid(oldlc);
+ ListCell *newlc;
+ bool found = false;
+
+ foreach(newlc, rels)
+ {
+ Relation newrel = (Relation) lfirst(newlc);
+
+ if (RelationGetRelid(newrel) == oldrelid)
+ {
+ found = true;
+ break;
+ }
+ }
+
+ if (!found)
+ {
+ Relation oldrel = relation_open(oldrelid,
+ ShareUpdateExclusiveLock);
+
+ delrels = lappend(delrels, oldrel);
+ }
+ }
+
+ /* And drop them. */
+ PublicationDropSequences(pubid, delrels, true);
+
+ /*
+ * Don't bother calculating the difference for adding, we'll catch and
+ * skip existing ones when doing catalog update.
+ */
+ PublicationAddSequences(pubid, rels, true, stmt);
+
+ CloseSequenceList(delrels);
+ }
+
+ CloseSequenceList(rels);
+}
+
/*
* Alter the existing publication.
*
@@ -459,8 +548,12 @@ AlterPublication(ParseState *pstate, AlterPublicationStmt *stmt)
if (stmt->options)
AlterPublicationOptions(pstate, stmt, rel, tup);
- else
+ else if (stmt->tables)
AlterPublicationTables(stmt, rel, tup);
+ else if (stmt->sequences)
+ AlterPublicationSequences(stmt, rel, tup);
+ else
+ Assert(false);
/* Cleanup. */
heap_freetuple(tup);
@@ -665,6 +758,139 @@ PublicationDropTables(Oid pubid, List *rels, bool missing_ok)
}
}
+/*
+ * Open relations specified by a RangeVar list.
+ * The returned sequences are locked in ShareUpdateExclusiveLock mode in order to
+ * add them to a publication.
+ */
+static List *
+OpenSequenceList(List *sequences)
+{
+ List *relids = NIL;
+ List *rels = NIL;
+ ListCell *lc;
+
+ /*
+ * Open, share-lock, and check all the explicitly-specified relations
+ */
+ foreach(lc, sequences)
+ {
+ RangeVar *rv = castNode(RangeVar, lfirst(lc));
+ Relation rel;
+ Oid myrelid;
+
+ /* Allow query cancel in case this takes a long time */
+ CHECK_FOR_INTERRUPTS();
+
+ rel = relation_openrv(rv, ShareUpdateExclusiveLock);
+ myrelid = RelationGetRelid(rel);
+
+ /*
+ * Filter out duplicates if user specifies "foo, foo".
+ *
+ * Note that this algorithm is known to not be very efficient (O(N^2))
+ * but given that it only works on list of sequences given to us by user
+ * it's deemed acceptable.
+ */
+ if (list_member_oid(relids, myrelid))
+ {
+ relation_close(rel, ShareUpdateExclusiveLock);
+ continue;
+ }
+
+ rels = lappend(rels, rel);
+ relids = lappend_oid(relids, myrelid);
+ }
+
+ list_free(relids);
+
+ return rels;
+}
+
+/*
+ * Close all relations in the list.
+ */
+static void
+CloseSequenceList(List *rels)
+{
+ ListCell *lc;
+
+ foreach(lc, rels)
+ {
+ Relation rel = (Relation) lfirst(lc);
+
+ relation_close(rel, NoLock);
+ }
+}
+
+/*
+ * Add listed sequences to the publication.
+ */
+static void
+PublicationAddSequences(Oid pubid, List *rels, bool if_not_exists,
+ AlterPublicationStmt *stmt)
+{
+ ListCell *lc;
+
+ Assert(!stmt || !stmt->for_all_sequences);
+
+ foreach(lc, rels)
+ {
+ Relation rel = (Relation) lfirst(lc);
+ ObjectAddress obj;
+
+ /* Must be owner of the sequence or superuser. */
+ if (!pg_class_ownercheck(RelationGetRelid(rel), GetUserId()))
+ aclcheck_error(ACLCHECK_NOT_OWNER, get_relkind_objtype(rel->rd_rel->relkind),
+ RelationGetRelationName(rel));
+
+ obj = publication_add_relation(pubid, rel, if_not_exists);
+ if (stmt)
+ {
+ EventTriggerCollectSimpleCommand(obj, InvalidObjectAddress,
+ (Node *) stmt);
+
+ InvokeObjectPostCreateHook(PublicationRelRelationId,
+ obj.objectId, 0);
+ }
+ }
+}
+
+/*
+ * Remove listed sequences from the publication.
+ */
+static void
+PublicationDropSequences(Oid pubid, List *rels, bool missing_ok)
+{
+ ObjectAddress obj;
+ ListCell *lc;
+ Oid prid;
+
+ foreach(lc, rels)
+ {
+ Relation rel = (Relation) lfirst(lc);
+ Oid relid = RelationGetRelid(rel);
+
+ prid = GetSysCacheOid2(PUBLICATIONRELMAP, Anum_pg_publication_rel_oid,
+ ObjectIdGetDatum(relid),
+ ObjectIdGetDatum(pubid));
+ if (!OidIsValid(prid))
+ {
+ if (missing_ok)
+ continue;
+
+ ereport(ERROR,
+ (errcode(ERRCODE_UNDEFINED_OBJECT),
+ errmsg("relation \"%s\" is not part of the publication",
+ RelationGetRelationName(rel))));
+ }
+
+ ObjectAddressSet(obj, PublicationRelRelationId, prid);
+ performDeletion(&obj, DROP_CASCADE, 0);
+ }
+}
+
+
/*
* Internal workhorse for changing a publication owner
*/
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 72bfdc07a4..2fc644c74c 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -94,7 +94,7 @@ static HTAB *seqhashtab = NULL; /* hash table for SeqTable items */
*/
static SeqTableData *last_used_seq = NULL;
-static void fill_seq_with_data(Relation rel, HeapTuple tuple);
+static void fill_seq_with_data(Relation rel, HeapTuple tuple, bool create);
static Relation lock_and_open_sequence(SeqTable seq);
static void create_seq_hashtable(void);
static void init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel);
@@ -222,7 +222,7 @@ DefineSequence(ParseState *pstate, CreateSeqStmt *seq)
/* now initialize the sequence's data */
tuple = heap_form_tuple(tupDesc, value, null);
- fill_seq_with_data(rel, tuple);
+ fill_seq_with_data(rel, tuple, true);
/* process OWNED BY if given */
if (owned_by)
@@ -327,7 +327,86 @@ ResetSequence(Oid seq_relid)
/*
* Insert the modified tuple into the new storage file.
*/
- fill_seq_with_data(seq_rel, tuple);
+ fill_seq_with_data(seq_rel, tuple, true);
+
+ /* Clear local cache so that we don't think we have cached numbers */
+ /* Note that we do not change the currval() state */
+ elm->cached = elm->last;
+
+ relation_close(seq_rel, NoLock);
+}
+
+/*
+ * Reset a sequence to its initial value.
+ *
+ * The change is made transactionally, so that on failure of the current
+ * transaction, the sequence will be restored to its previous state.
+ * We do that by creating a whole new relfilenode for the sequence; so this
+ * works much like the rewriting forms of ALTER TABLE.
+ *
+ * Caller is assumed to have acquired AccessExclusiveLock on the sequence,
+ * which must not be released until end of transaction. Caller is also
+ * responsible for permissions checking.
+ */
+void
+ResetSequence2(Oid seq_relid, int64 last_value, int64 log_cnt, bool is_called)
+{
+ Relation seq_rel;
+ SeqTable elm;
+ Form_pg_sequence_data seq;
+ Buffer buf;
+ HeapTupleData seqdatatuple;
+ HeapTuple tuple;
+
+ /*
+ * Read the old sequence. This does a bit more work than really
+ * necessary, but it's simple, and we do want to double-check that it's
+ * indeed a sequence.
+ */
+ init_sequence(seq_relid, &elm, &seq_rel);
+ (void) read_seq_tuple(seq_rel, &buf, &seqdatatuple);
+
+ /*
+ * Copy the existing sequence tuple.
+ */
+ tuple = heap_copytuple(&seqdatatuple);
+
+ /* Now we're done with the old page */
+ UnlockReleaseBuffer(buf);
+
+ /*
+ * Modify the copied tuple to execute the restart (compare the RESTART
+ * action in AlterSequence)
+ */
+ seq = (Form_pg_sequence_data) GETSTRUCT(tuple);
+ seq->last_value = last_value;
+ seq->is_called = is_called;
+ seq->log_cnt = log_cnt;
+
+ /*
+ * Create a new storage file for the sequence.
+ */
+ RelationSetNewRelfilenode(seq_rel, seq_rel->rd_rel->relpersistence);
+
+ /*
+ * Ensure sequence's relfrozenxid is at 0, since it won't contain any
+ * unfrozen XIDs. Same with relminmxid, since a sequence will never
+ * contain multixacts.
+ */
+ Assert(seq_rel->rd_rel->relfrozenxid == InvalidTransactionId);
+ Assert(seq_rel->rd_rel->relminmxid == InvalidMultiXactId);
+
+ /*
+ * Insert the modified tuple into the new storage file.
+ *
+ * XXX Maybe this should also use created=true, just like the other places
+ * calling fill_seq_with_data. That's probably needed for correct cascading
+ * replication.
+ *
+ * XXX That'd mean all fill_seq_with_data callers use created=true, making
+ * the parameter unnecessary.
+ */
+ fill_seq_with_data(seq_rel, tuple, false);
/* Clear local cache so that we don't think we have cached numbers */
/* Note that we do not change the currval() state */
@@ -340,7 +419,7 @@ ResetSequence(Oid seq_relid)
* Initialize a sequence's relation with the specified tuple as content
*/
static void
-fill_seq_with_data(Relation rel, HeapTuple tuple)
+fill_seq_with_data(Relation rel, HeapTuple tuple, bool create)
{
Buffer buf;
Page page;
@@ -378,8 +457,21 @@ fill_seq_with_data(Relation rel, HeapTuple tuple)
/* check the comment above nextval_internal()'s equivalent call. */
if (RelationNeedsWAL(rel))
+ {
GetTopTransactionId();
+ /*
+ * Ensure we have a proper XID, which will be included in the XLOG
+ * record by XLogRecordAssemble. Otherwise the first nextval() in
+ * a subxact (without any preceding changes) would get XID 0,
+ * and it'd be impossible to decide which top xact it belongs to.
+ * It'd also trigger assert in DecodeSequence.
+ *
+ * XXX Not sure if this is the best solution.
+ */
+ GetCurrentTransactionId();
+ }
+
START_CRIT_SECTION();
MarkBufferDirty(buf);
@@ -399,6 +491,7 @@ fill_seq_with_data(Relation rel, HeapTuple tuple)
XLogRegisterBuffer(0, buf, REGBUF_WILL_INIT);
xlrec.node = rel->rd_node;
+ xlrec.created = create;
XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
XLogRegisterData((char *) tuple->t_data, tuple->t_len);
@@ -502,7 +595,7 @@ AlterSequence(ParseState *pstate, AlterSeqStmt *stmt)
/*
* Insert the modified tuple into the new storage file.
*/
- fill_seq_with_data(seqrel, newdatatuple);
+ fill_seq_with_data(seqrel, newdatatuple, true);
}
/* process OWNED BY if given */
@@ -766,8 +859,21 @@ nextval_internal(Oid relid, bool check_permissions)
* (Have to do that here, so we're outside the critical section)
*/
if (logit && RelationNeedsWAL(seqrel))
+ {
GetTopTransactionId();
+ /*
+ * Ensure we have a proper XID, which will be included in the XLOG
+ * record by XLogRecordAssemble. Otherwise the first nextval() in
+ * a subxact (without any preceding changes) would get XID 0,
+ * and it'd be impossible to decide which top xact it belongs to.
+ * It'd also trigger assert in DecodeSequence.
+ *
+ * XXX Not sure if this is the best solution.
+ */
+ GetCurrentTransactionId();
+ }
+
/* ready to change the on-disk (or really, in-buffer) tuple */
START_CRIT_SECTION();
@@ -803,6 +909,7 @@ nextval_internal(Oid relid, bool check_permissions)
seq->log_cnt = 0;
xlrec.node = seqrel->rd_node;
+ xlrec.created = false;
XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
XLogRegisterData((char *) seqdatatuple.t_data, seqdatatuple.t_len);
@@ -977,8 +1084,21 @@ do_setval(Oid relid, int64 next, bool iscalled)
/* check the comment above nextval_internal()'s equivalent call. */
if (RelationNeedsWAL(seqrel))
+ {
GetTopTransactionId();
+ /*
+ * Ensure we have a proper XID, which will be included in the XLOG
+ * record by XLogRecordAssemble. Otherwise the first nextval() in
+ * a subxact (without any preceding changes) would get XID 0,
+ * and it'd be impossible to decide which top xact it belongs to.
+ * It'd also trigger assert in DecodeSequence.
+ *
+ * XXX Not sure if this is the best solution.
+ */
+ GetCurrentTransactionId();
+ }
+
/* ready to change the on-disk (or really, in-buffer) tuple */
START_CRIT_SECTION();
@@ -999,6 +1119,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
XLogRegisterBuffer(0, buf, REGBUF_WILL_INIT);
xlrec.node = seqrel->rd_node;
+ xlrec.created = false;
+
XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
XLogRegisterData((char *) seqdatatuple.t_data, seqdatatuple.t_len);
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 22ae982328..2d03cf011e 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -84,6 +84,7 @@ typedef struct SubOpts
} SubOpts;
static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
static void check_duplicates_in_publist(List *publist, Datum *datums);
static List *merge_publications(List *oldpublist, List *newpublist, bool addpub, const char *subname);
static void ReportSlotConnectionError(List *rstates, Oid subid, char *slotname, char *err);
@@ -521,6 +522,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
char *err;
WalReceiverConn *wrconn;
List *tables;
+ List *sequences;
ListCell *lc;
char table_state;
@@ -559,6 +561,26 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
InvalidXLogRecPtr);
}
+ /*
+ * Get the sequence list from publisher and build local sequence
+ * status info.
+ */
+ sequences = fetch_sequence_list(wrconn, publications);
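+ /*
+ * Sequences are tracked in pg_subscription_rel just like tables, using
+ * the same initial state, so the existing sync machinery handles them
+ * the same way it handles tables.
+ */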
+ foreach(lc, sequences)
+ {
+ RangeVar *rv = (RangeVar *) lfirst(lc);
+ Oid relid;
+
+ relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+ /* Check for supported relkind. */
+ CheckSubscriptionRelkind(get_rel_relkind(relid),
+ rv->schemaname, rv->relname);
+
+ AddSubscriptionRelState(subid, relid, table_state,
+ InvalidXLogRecPtr);
+ }
+
/*
* If requested, create permanent slot for the subscription. We
* won't use the initial snapshot for anything, so no need to
@@ -731,6 +753,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data)
{
Oid relid = subrel_local_oids[off];
+ /* XXX ignore sequences - maybe do this in GetSubscriptionRelations? */
+ if (get_rel_relkind(relid) == RELKIND_SEQUENCE)
+ continue;
+
if (!bsearch(&relid, pubrel_local_oids,
list_length(pubrel_names), sizeof(Oid), oid_cmp))
{
@@ -822,6 +848,183 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data)
ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
}
}
+
+ /*
+ * XXX now do the same thing for sequences, maybe before the preceding
+ * block, or earlier?
+ */
+
+ /* Get the sequence list from publisher. */
+ pubrel_names = fetch_sequence_list(wrconn, sub->publications);
+
+ /* Get the list of locally subscribed relations (tables and sequences). */
+ subrel_states = GetSubscriptionRelations(sub->oid);
+
+ /*
+ * Build qsorted array of local relation oids for faster lookup. This can
+ * potentially contain all relations in the database so speed of lookup
+ * is important.
+ */
+ subrel_local_oids = palloc(list_length(subrel_states) * sizeof(Oid));
+ off = 0;
+ foreach(lc, subrel_states)
+ {
+ SubscriptionRelState *relstate = (SubscriptionRelState *) lfirst(lc);
+
+ subrel_local_oids[off++] = relstate->relid;
+ }
+ qsort(subrel_local_oids, list_length(subrel_states),
+ sizeof(Oid), oid_cmp);
+
+ /*
+ * Rels that we want to remove from subscription and drop any slots
+ * and origins corresponding to them.
+ */
+ sub_remove_rels = palloc(list_length(subrel_states) * sizeof(SubRemoveRels));
+
+ /*
+ * Walk over the remote sequences and try to match them to locally known
+ * sequences. If the sequence is not known locally, create a new state
+ * for it.
+ *
+ * Also builds array of local oids of remote sequences for the next step.
+ */
+ off = 0;
+ pubrel_local_oids = palloc(list_length(pubrel_names) * sizeof(Oid));
+
+ foreach(lc, pubrel_names)
+ {
+ RangeVar *rv = (RangeVar *) lfirst(lc);
+ Oid relid;
+
+ relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+ /* Check for supported relkind. */
+ CheckSubscriptionRelkind(get_rel_relkind(relid),
+ rv->schemaname, rv->relname);
+
+ pubrel_local_oids[off++] = relid;
+
+ if (!bsearch(&relid, subrel_local_oids,
+ list_length(subrel_states), sizeof(Oid), oid_cmp))
+ {
+ AddSubscriptionRelState(sub->oid, relid,
+ copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
+ InvalidXLogRecPtr);
+ ereport(DEBUG1,
+ (errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
+ rv->schemaname, rv->relname, sub->name)));
+ }
+ }
+
+ /*
+ * Next remove state for sequences we should not care about anymore using
+ * the data we collected above
+ */
+ qsort(pubrel_local_oids, list_length(pubrel_names),
+ sizeof(Oid), oid_cmp);
+
+ remove_rel_len = 0;
+ for (off = 0; off < list_length(subrel_states); off++)
+ {
+ Oid relid = subrel_local_oids[off];
+
+ /* XXX ignore non-sequences - maybe do this in GetSubscriptionRelations? */
+ if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+ continue;
+
+ if (!bsearch(&relid, pubrel_local_oids,
+ list_length(pubrel_names), sizeof(Oid), oid_cmp))
+ {
+ char state;
+ XLogRecPtr statelsn;
+
+ /*
+ * Lock pg_subscription_rel with AccessExclusiveLock to
+ * prevent any race conditions with the apply worker
+ * re-launching workers at the same time this code is trying
+ * to remove those tables.
+ *
+ * Even if new worker for this particular rel is restarted it
+ * won't be able to make any progress as we hold exclusive
+ * lock on subscription_rel till the transaction end. It will
+ * simply exit as there is no corresponding rel entry.
+ *
+ * This locking also ensures that the state of rels won't
+ * change till we are done with this refresh operation.
+ */
+ if (!rel)
+ rel = table_open(SubscriptionRelRelationId, AccessExclusiveLock);
+
+ /* Last known rel state. */
+ state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
+
+ sub_remove_rels[remove_rel_len].relid = relid;
+ sub_remove_rels[remove_rel_len++].state = state;
+
+ RemoveSubscriptionRel(sub->oid, relid);
+
+ logicalrep_worker_stop(sub->oid, relid);
+
+ /*
+ * For READY state, we would have already dropped the
+ * tablesync origin.
+ */
+ if (state != SUBREL_STATE_READY)
+ {
+ char originname[NAMEDATALEN];
+
+ /*
+ * Drop the tablesync's origin tracking if exists.
+ *
+ * It is possible that the origin is not yet created for
+ * tablesync worker, this can happen for the states before
+ * SUBREL_STATE_FINISHEDCOPY. The apply worker can also
+ * concurrently try to drop the origin and by this time
+ * the origin might be already removed. For these reasons,
+ * passing missing_ok = true.
+ */
+ ReplicationOriginNameForTablesync(sub->oid, relid, originname,
+ sizeof(originname));
+ replorigin_drop_by_name(originname, true, false);
+ }
+
+ ereport(DEBUG1,
+ (errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
+ get_namespace_name(get_rel_namespace(relid)),
+ get_rel_name(relid),
+ sub->name)));
+ }
+ }
+
+ /*
+ * Drop the tablesync slots associated with removed sequences. This has
+ * to be at the end because otherwise if there is an error while doing
+ * the database operations we won't be able to rollback dropped slots.
+ */
+ for (off = 0; off < remove_rel_len; off++)
+ {
+ if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
+ sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+ {
+ char syncslotname[NAMEDATALEN] = {0};
+
+ /*
+ * For READY/SYNCDONE states we know the tablesync slot has
+ * already been dropped by the tablesync worker.
+ *
+ * For other states, there is no certainty, maybe the slot
+ * does not exist yet. Also, if we fail after removing some of
+ * the slots, next time, it will again try to drop already
+ * dropped slots and fail. For these reasons, we allow
+ * missing_ok = true for the drop.
+ */
+ ReplicationSlotNameForTablesync(sub->oid, sub_remove_rels[off].relid,
+ syncslotname, sizeof(syncslotname));
+ ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
+ }
+ }
+
}
PG_FINALLY();
{
@@ -1635,6 +1838,75 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
return tablelist;
}
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+ WalRcvExecResult *res;
+ StringInfoData cmd;
+ TupleTableSlot *slot;
+ Oid tableRow[2] = {TEXTOID, TEXTOID};
+ ListCell *lc;
+ bool first;
+ List *tablelist = NIL;
+
+ Assert(list_length(publications) > 0);
+
+ initStringInfo(&cmd);
+ appendStringInfoString(&cmd, "SELECT DISTINCT s.schemaname, s.sequencename\n"
+ " FROM pg_catalog.pg_publication_sequences s\n"
+ " WHERE s.pubname IN (");
+ first = true;
+ foreach(lc, publications)
+ {
+ char *pubname = strVal(lfirst(lc));
+
+ if (first)
+ first = false;
+ else
+ appendStringInfoString(&cmd, ", ");
+
+ appendStringInfoString(&cmd, quote_literal_cstr(pubname));
+ }
+ appendStringInfoChar(&cmd, ')');
+
+ res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+ pfree(cmd.data);
+
+ if (res->status != WALRCV_OK_TUPLES)
+ ereport(ERROR,
+ (errmsg("could not receive list of replicated tables from the publisher: %s",
+ res->err)));
+
+ /* Process the sequences. */
+ slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+ while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+ {
+ char *nspname;
+ char *relname;
+ bool isnull;
+ RangeVar *rv;
+
+ nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+ Assert(!isnull);
+ relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+ Assert(!isnull);
+
+ rv = makeRangeVar(nspname, relname, -1);
+ tablelist = lappend(tablelist, rv);
+
+ ExecClearTuple(slot);
+ }
+ ExecDropSingleTupleTableSlot(slot);
+
+ walrcv_clear_result(res);
+
+ return tablelist;
+}
+
/*
* This is to report the connection failure while dropping replication slots.
* Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index 574d7d27fd..6f0d42d551 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -608,7 +608,7 @@ void
CheckSubscriptionRelkind(char relkind, const char *nspname,
const char *relname)
{
- if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+ if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE && relkind != RELKIND_SEQUENCE)
ereport(ERROR,
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 29020c908e..04bb9a27a8 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -4834,8 +4834,10 @@ _copyAlterPublicationStmt(const AlterPublicationStmt *from)
COPY_STRING_FIELD(pubname);
COPY_NODE_FIELD(options);
COPY_NODE_FIELD(tables);
+ COPY_NODE_FIELD(sequences);
COPY_SCALAR_FIELD(for_all_tables);
- COPY_SCALAR_FIELD(tableAction);
+ COPY_SCALAR_FIELD(for_all_sequences);
+ COPY_SCALAR_FIELD(action);
return newnode;
}
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index 8a1762000c..ebf48f88f1 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -2308,8 +2308,10 @@ _equalAlterPublicationStmt(const AlterPublicationStmt *a,
COMPARE_STRING_FIELD(pubname);
COMPARE_NODE_FIELD(options);
COMPARE_NODE_FIELD(tables);
+ COMPARE_NODE_FIELD(sequences);
COMPARE_SCALAR_FIELD(for_all_tables);
- COMPARE_SCALAR_FIELD(tableAction);
+ COMPARE_SCALAR_FIELD(for_all_sequences);
+ COMPARE_SCALAR_FIELD(action);
return true;
}
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 39a2849eba..d42be08c0c 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -434,6 +434,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
%type <node> group_by_item empty_grouping_set rollup_clause cube_clause
%type <node> grouping_sets_clause
%type <node> opt_publication_for_tables publication_for_tables
+/* %type <node> opt_publication_for_sequences publication_for_sequences */
%type <list> opt_fdw_options fdw_options
%type <defelt> fdw_option
@@ -9596,6 +9597,7 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
*****************************************************************************/
CreatePublicationStmt:
+ /* CREATE PUBLICATION name opt_publication_for_tables opt_publication_for_sequences opt_definition */
CREATE PUBLICATION name opt_publication_for_tables opt_definition
{
CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
@@ -9630,6 +9632,12 @@ publication_for_tables:
}
;
+/*
+ * FIXME
+ *
+ * opt_publication_for_sequences and publication_for_sequences should be
+ * copies for sequences
+ */
/*****************************************************************************
*
@@ -9641,6 +9649,12 @@ publication_for_tables:
*
* ALTER PUBLICATION name SET TABLE table [, table2]
*
+ * ALTER PUBLICATION name ADD SEQUENCE sequence [, sequence2]
+ *
+ * ALTER PUBLICATION name DROP SEQUENCE sequence [, sequence2]
+ *
+ * ALTER PUBLICATION name SET SEQUENCE sequence [, sequence2]
+ *
*****************************************************************************/
AlterPublicationStmt:
@@ -9656,7 +9670,7 @@ AlterPublicationStmt:
AlterPublicationStmt *n = makeNode(AlterPublicationStmt);
n->pubname = $3;
n->tables = $6;
- n->tableAction = DEFELEM_ADD;
+ n->action = DEFELEM_ADD;
$$ = (Node *)n;
}
| ALTER PUBLICATION name SET TABLE relation_expr_list
@@ -9664,7 +9678,7 @@ AlterPublicationStmt:
AlterPublicationStmt *n = makeNode(AlterPublicationStmt);
n->pubname = $3;
n->tables = $6;
- n->tableAction = DEFELEM_SET;
+ n->action = DEFELEM_SET;
$$ = (Node *)n;
}
| ALTER PUBLICATION name DROP TABLE relation_expr_list
@@ -9672,7 +9686,31 @@ AlterPublicationStmt:
AlterPublicationStmt *n = makeNode(AlterPublicationStmt);
n->pubname = $3;
n->tables = $6;
- n->tableAction = DEFELEM_DROP;
+ n->action = DEFELEM_DROP;
+ $$ = (Node *)n;
+ }
+ | ALTER PUBLICATION name ADD_P SEQUENCE relation_expr_list
+ {
+ AlterPublicationStmt *n = makeNode(AlterPublicationStmt);
+ n->pubname = $3;
+ n->sequences = $6;
+ n->action = DEFELEM_ADD;
+ $$ = (Node *)n;
+ }
+ | ALTER PUBLICATION name SET SEQUENCE relation_expr_list
+ {
+ AlterPublicationStmt *n = makeNode(AlterPublicationStmt);
+ n->pubname = $3;
+ n->sequences = $6;
+ n->action = DEFELEM_SET;
+ $$ = (Node *)n;
+ }
+ | ALTER PUBLICATION name DROP SEQUENCE relation_expr_list
+ {
+ AlterPublicationStmt *n = makeNode(AlterPublicationStmt);
+ n->pubname = $3;
+ n->sequences = $6;
+ n->action = DEFELEM_DROP;
$$ = (Node *)n;
}
;
diff --git a/src/backend/replication/logical/decode.c b/src/backend/replication/logical/decode.c
index 2874dc0612..98d0edefb0 100644
--- a/src/backend/replication/logical/decode.c
+++ b/src/backend/replication/logical/decode.c
@@ -42,6 +42,7 @@
#include "replication/reorderbuffer.h"
#include "replication/snapbuild.h"
#include "storage/standby.h"
+#include "commands/sequence.h"
typedef struct XLogRecordBuffer
{
@@ -74,10 +75,11 @@ static void DecodeAbort(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
bool two_phase);
static void DecodePrepare(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
xl_xact_parsed_prepare *parsed);
-
+static void DecodeSequence(LogicalDecodingContext *ctx, XLogRecordBuffer *buf);
/* common function to decode tuples */
static void DecodeXLogTuple(char *data, Size len, ReorderBufferTupleBuf *tup);
+static void DecodeSeqTuple(char *data, Size len, ReorderBufferTupleBuf *tuple);
/* helper functions for decoding transactions */
static inline bool FilterPrepare(LogicalDecodingContext *ctx,
@@ -158,6 +160,10 @@ LogicalDecodingProcessRecord(LogicalDecodingContext *ctx, XLogReaderState *recor
DecodeLogicalMsgOp(ctx, &buf);
break;
+ case RM_SEQ_ID:
+ DecodeSequence(ctx, &buf);
+ break;
+
/*
* Rmgrs irrelevant for logical decoding; they describe stuff not
* represented in logical decoding. Add new rmgrs in rmgrlist.h's
@@ -173,7 +179,6 @@ LogicalDecodingProcessRecord(LogicalDecodingContext *ctx, XLogReaderState *recor
case RM_HASH_ID:
case RM_GIN_ID:
case RM_GIST_ID:
- case RM_SEQ_ID:
case RM_SPGIST_ID:
case RM_BRIN_ID:
case RM_COMMIT_TS_ID:
@@ -1312,3 +1317,125 @@ DecodeTXNNeedSkip(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
(txn_dbid != InvalidOid && txn_dbid != ctx->slot->data.database) ||
ctx->fast_forward || FilterByOrigin(ctx, origin_id));
}
+
+/*
+ * Decode Sequence Tuple
+ */
+static void
+DecodeSeqTuple(char *data, Size len, ReorderBufferTupleBuf *tuple)
+{
+ int datalen = len - sizeof(xl_seq_rec) - SizeofHeapTupleHeader;
+
+ Assert(datalen >= 0);
+
+ tuple->tuple.t_len = datalen + SizeofHeapTupleHeader;
+
+ ItemPointerSetInvalid(&tuple->tuple.t_self);
+
+ tuple->tuple.t_tableOid = InvalidOid;
+
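+ /* t_data already points into the tuplebuf (set by ReorderBufferGetTupleBuf) */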
+ memcpy(((char *) tuple->tuple.t_data),
+ data + sizeof(xl_seq_rec),
+ SizeofHeapTupleHeader);
+
+ memcpy(((char *) tuple->tuple.t_data) + SizeofHeapTupleHeader,
+ data + sizeof(xl_seq_rec) + SizeofHeapTupleHeader,
+ datalen);
+}
+
+/*
+ * Handle sequence decode
+ *
+ * Decoding sequences is a bit tricky, because while most sequence actions
+ * are non-transactional (not subject to rollback), some need to be handled
+ * as transactional.
+ *
+ * By default, a sequence increment is non-transactional - we must not queue
+ * it in a transaction like other changes, because the transaction might get
+ * rolled back and we'd discard the increment. The downstream would then
+ * never learn about the increment, which is wrong.
+ *
+ * On the other hand, the sequence may be created in a transaction. In that
+ * case we *should* queue the change like other changes in the transaction,
+ * because we don't want to send increments for an unknown sequence to the
+ * plugin - it might get confused about which sequence they belong to, etc.
+ */
+static void
+DecodeSequence(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
+{
+ SnapBuild *builder = ctx->snapshot_builder;
+ ReorderBufferTupleBuf *tuplebuf;
+ RelFileNode target_node;
+ XLogReaderState *r = buf->record;
+ char *tupledata = NULL;
+ Size tuplelen;
+ Size datalen = 0;
+ TransactionId xid = XLogRecGetXid(r);
+ uint8 info = XLogRecGetInfo(buf->record) & ~XLR_INFO_MASK;
+ xl_seq_rec *xlrec;
+ Snapshot snapshot;
+ RepOriginId origin_id = XLogRecGetOrigin(r);
+ bool transactional;
+
+ /* only decode changes flagged with XLOG_SEQ_LOG */
+ if (info != XLOG_SEQ_LOG)
+ elog(ERROR, "unexpected RM_SEQ_ID record type: %u", info);
+
+ /*
+ * If we don't have a snapshot or we are just fast-forwarding, there is no
+ * point in decoding sequence changes.
+ */
+ if (SnapBuildCurrentState(builder) < SNAPBUILD_FULL_SNAPSHOT ||
+ ctx->fast_forward)
+ return;
+
+ /* only interested in our database */
+ XLogRecGetBlockTag(r, 0, &target_node, NULL, NULL);
+ if (target_node.dbNode != ctx->slot->data.database)
+ return;
+
+ /* output plugin doesn't look for this origin, no need to queue */
+ if (FilterByOrigin(ctx, XLogRecGetOrigin(r)))
+ return;
+
+ tupledata = XLogRecGetData(r);
+ datalen = XLogRecGetDataLen(r);
+ tuplelen = datalen - SizeOfHeapHeader - sizeof(xl_seq_rec);
+
+ /* extract the WAL record, with "created" flag */
+ xlrec = (xl_seq_rec *) XLogRecGetData(r);
+
+ /* XXX how could we have sequence change without data? */
+ if (!datalen || !tupledata)
+ return;
+
+ tuplebuf = ReorderBufferGetTupleBuf(ctx->reorder, tuplelen);
+ DecodeSeqTuple(tupledata, datalen, tuplebuf);
+
+ /*
+ * Should we handle the sequence increment as transactional or not?
+ *
+ * If the sequence was created in a still-running transaction, treat
+ * it as transactional and queue the increments. Otherwise it needs
+ * to be treated as non-transactional, in which case we send it to
+ * the plugin right away.
+ */
+ transactional = ReorderBufferSequenceIsTransactional(ctx->reorder,
+ target_node,
+ xlrec->created);
+
+ /* Skip the change if already processed (per the snapshot). */
+ if (transactional &&
+ !SnapBuildProcessChange(builder, xid, buf->origptr))
+ return;
+ else if (!transactional &&
+ (SnapBuildCurrentState(builder) != SNAPBUILD_CONSISTENT ||
+ SnapBuildXactNeedsSkip(builder, buf->origptr)))
+ return;
+
+ /* Queue the increment (or send immediately if not transactional). */
+ snapshot = SnapBuildGetOrBuildSnapshot(builder, xid);
+ ReorderBufferQueueSequence(ctx->reorder, xid, snapshot, buf->endptr,
+ origin_id, target_node, transactional,
+ xlrec->created, tuplebuf);
+}
diff --git a/src/backend/replication/logical/logical.c b/src/backend/replication/logical/logical.c
index 8dcb564af8..a74dd75fde 100644
--- a/src/backend/replication/logical/logical.c
+++ b/src/backend/replication/logical/logical.c
@@ -73,6 +73,10 @@ static void truncate_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
static void message_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
XLogRecPtr message_lsn, bool transactional,
const char *prefix, Size message_size, const char *message);
+static void sequence_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called);
/* streaming callbacks */
static void stream_start_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
@@ -217,6 +221,7 @@ StartupDecodingContext(List *output_plugin_options,
ctx->reorder->apply_truncate = truncate_cb_wrapper;
ctx->reorder->commit = commit_cb_wrapper;
ctx->reorder->message = message_cb_wrapper;
+ ctx->reorder->sequence = sequence_cb_wrapper;
/*
* To support streaming, we require start/stop/abort/commit/change
@@ -1198,6 +1203,43 @@ message_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
error_context_stack = errcallback.previous;
}
+static void
+sequence_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ LogicalDecodingContext *ctx = cache->private_data;
+ LogicalErrorCallbackState state;
+ ErrorContextCallback errcallback;
+
+ Assert(!ctx->fast_forward);
+
+ if (ctx->callbacks.sequence_cb == NULL)
+ return;
+
+ /* Push callback + info on the error context stack */
+ state.ctx = ctx;
+ state.callback_name = "sequence";
+ state.report_location = sequence_lsn;
+ errcallback.callback = output_plugin_error_callback;
+ errcallback.arg = (void *) &state;
+ errcallback.previous = error_context_stack;
+ error_context_stack = &errcallback;
+
+ /* set output state */
+ ctx->accept_writes = true;
+ ctx->write_xid = txn != NULL ? txn->xid : InvalidTransactionId;
+ ctx->write_location = sequence_lsn;
+
+ /* do the actual work: call callback */
+ ctx->callbacks.sequence_cb(ctx, txn, sequence_lsn, rel, transactional,
+ created, last_value, log_cnt, is_called);
+
+ /* Pop the error context stack */
+ error_context_stack = errcallback.previous;
+}
+
static void
stream_start_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
XLogRecPtr first_lsn)
diff --git a/src/backend/replication/logical/proto.c b/src/backend/replication/logical/proto.c
index 2d774567e0..d42c749cfa 100644
--- a/src/backend/replication/logical/proto.c
+++ b/src/backend/replication/logical/proto.c
@@ -622,6 +622,58 @@ logicalrep_write_message(StringInfo out, TransactionId xid, XLogRecPtr lsn,
pq_sendbytes(out, message, sz);
}
+/*
+ * Write SEQUENCE to stream
+ */
+void
+logicalrep_write_sequence(StringInfo out, Relation rel, TransactionId xid,
+ XLogRecPtr lsn, bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ uint8 flags = 0;
+ char *relname;
+
+ pq_sendbyte(out, LOGICAL_REP_MSG_SEQUENCE);
+
+ /* transaction ID (if not valid, we're not streaming) */
+ if (TransactionIdIsValid(xid))
+ pq_sendint32(out, xid);
+
+ pq_sendint8(out, flags);
+ pq_sendint64(out, lsn);
+
+ logicalrep_write_namespace(out, RelationGetNamespace(rel));
+ relname = RelationGetRelationName(rel);
+ pq_sendstring(out, relname);
+
+ pq_sendint8(out, transactional);
+ pq_sendint8(out, created);
+ pq_sendint64(out, last_value);
+ pq_sendint64(out, log_cnt);
+ pq_sendint8(out, is_called);
+}
+
+/*
+ * Read SEQUENCE from the stream.
+ */
+void
+logicalrep_read_sequence(StringInfo in, LogicalRepSequence *seqdata)
+{
+ /* XXX skipping flags and lsn */
+ pq_getmsgint(in, 1);
+ pq_getmsgint64(in);
+
+ /* Read relation name from stream */
+ seqdata->nspname = pstrdup(logicalrep_read_namespace(in));
+ seqdata->seqname = pstrdup(pq_getmsgstring(in));
+
+ seqdata->transactional = pq_getmsgint(in, 1);
+ seqdata->created = pq_getmsgint(in, 1);
+ seqdata->last_value = pq_getmsgint64(in);
+ seqdata->log_cnt = pq_getmsgint64(in);
+ seqdata->is_called = pq_getmsgint(in, 1);
+}
+
/*
* Write relation description to the output stream.
*/
diff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c
index 7378beb684..f99e45f22d 100644
--- a/src/backend/replication/logical/reorderbuffer.c
+++ b/src/backend/replication/logical/reorderbuffer.c
@@ -116,6 +116,13 @@ typedef struct ReorderBufferTXNByIdEnt
ReorderBufferTXN *txn;
} ReorderBufferTXNByIdEnt;
+/* entry for hash table we use to track sequences created in running xacts */
+typedef struct ReorderBufferSequenceEnt
+{
+ RelFileNode rnode;
+ TransactionId xid;
+} ReorderBufferSequenceEnt;
+
/* data structures for (relfilenode, ctid) => (cmin, cmax) mapping */
typedef struct ReorderBufferTupleCidKey
{
@@ -338,6 +345,14 @@ ReorderBufferAllocate(void)
buffer->by_txn = hash_create("ReorderBufferByXid", 1000, &hash_ctl,
HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+ /* hash table of sequences, mapping relfilenode to XID of transaction */
+ hash_ctl.keysize = sizeof(RelFileNode);
+ hash_ctl.entrysize = sizeof(ReorderBufferSequenceEnt);
+ hash_ctl.hcxt = buffer->context;
+
+ buffer->sequences = hash_create("ReorderBufferSequenceHash", 1000, &hash_ctl,
+ HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+
buffer->by_txn_last_xid = InvalidTransactionId;
buffer->by_txn_last_txn = NULL;
@@ -528,6 +543,13 @@ ReorderBufferReturnChange(ReorderBuffer *rb, ReorderBufferChange *change,
case REORDER_BUFFER_CHANGE_INTERNAL_COMMAND_ID:
case REORDER_BUFFER_CHANGE_INTERNAL_TUPLECID:
break;
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ if (change->data.sequence.tuple)
+ {
+ ReorderBufferReturnTupleBuf(rb, change->data.sequence.tuple);
+ change->data.sequence.tuple = NULL;
+ }
+ break;
}
pfree(change);
@@ -856,6 +878,212 @@ ReorderBufferQueueMessage(ReorderBuffer *rb, TransactionId xid,
}
}
+/*
+ * Should the sequence increment be treated as transactional?
+ *
+ * It is, if the relfilenode was created by a transaction that is still being
+ * decoded (i.e. it's tracked in the rb->sequences hash), or if this very
+ * record created the relfilenode.
+ */
+bool
+ReorderBufferSequenceIsTransactional(ReorderBuffer *rb,
+ RelFileNode rnode, bool created)
+{
+ bool found = false;
+
+ if (created)
+ return true;
+
+ hash_search(rb->sequences,
+ (void *) &rnode,
+ HASH_FIND,
+ &found);
+
+ return found;
+}
+
+
+/*
+ * A transactional sequence increment is queued to be processed upon commit
+ * and a non-transactional increment gets processed immediately.
+ *
+ * A sequence update may be either transactional or non-transactional. If
+ * the sequence was created in a still-running transaction, treat the update
+ * as transactional and queue the change in that transaction. Otherwise treat
+ * it as non-transactional, so that the increment is not lost if the
+ * transaction rolls back.
+ */
+void
+ReorderBufferQueueSequence(ReorderBuffer *rb, TransactionId xid,
+ Snapshot snapshot, XLogRecPtr lsn, RepOriginId origin_id,
+ RelFileNode rnode, bool transactional, bool created,
+ ReorderBufferTupleBuf *tuplebuf)
+{
+ /*
+ * The change needs to be handled as transactional, because the sequence
+ * was created in a transaction that is still running. In that case all
+ * the changes need to be queued in that transaction; we must not send
+ * them downstream until the transaction commits.
+ *
+ * Subtransactions are a bit of a problem - we can't queue the increment
+ * into the current subxact, because that subxact might get rolled back
+ * and we'd lose the increment. Instead, we queue it into the (sub)xact
+ * that created the sequence, which is why we track that XID in the hash
+ * table.
+ */
+ if (transactional)
+ {
+ MemoryContext oldcontext;
+ ReorderBufferChange *change;
+
+ /* lookup sequence by relfilenode */
+ ReorderBufferSequenceEnt *ent;
+ bool found;
+
+ /* transactional changes require a transaction */
+ Assert(xid != InvalidTransactionId);
+
+ /* search the lookup table (we ignore the return value, found is enough) */
+ ent = hash_search(rb->sequences,
+ (void *) &rnode,
+ created ? HASH_ENTER : HASH_FIND,
+ &found);
+
+ /*
+ * If this is the "create" increment, we must not have found any
+ * pre-existing entry in the hash table (i.e. there must not be
+ * any conflicting sequence).
+ */
+ Assert(!(created && found));
+
+ /* But we must have either created or found an existing entry. */
+ Assert(created || found);
+
+ /*
+ * When creating the sequence, remember the XID of the transaction
+ * that created it.
+ */
+ if (created)
+ ent->xid = xid;
+
+ /* XXX Maybe check that we're still in the same top-level xact? */
+
+ /* OK, allocate and queue the change */
+ oldcontext = MemoryContextSwitchTo(rb->context);
+
+ change = ReorderBufferGetChange(rb);
+
+ change->action = REORDER_BUFFER_CHANGE_SEQUENCE;
+ change->origin_id = origin_id;
+
+ memcpy(&change->data.sequence.relnode, &rnode, sizeof(RelFileNode));
+
+ change->data.sequence.created = created;
+ change->data.sequence.tuple = tuplebuf;
+
+ /* add it to the same subxact that created the sequence */
+ ReorderBufferQueueChange(rb, ent->xid, lsn, change, false);
+
+ MemoryContextSwitchTo(oldcontext);
+ }
+ else
+ {
+ /*
+ * This increment is for a sequence that was not created in any
+ * running transaction, so we treat it as non-transactional and
+ * just send it to the output plugin directly.
+ */
+ ReorderBufferTXN *txn = NULL;
+ volatile Snapshot snapshot_now = snapshot;
+ bool using_subtxn;
+
+#ifdef USE_ASSERT_CHECKING
+ /* All "creates" have to be handled as transactional. */
+ Assert(!created);
+
+ /* Make sure the sequence is not in the hash table. */
+ {
+ bool found;
+ hash_search(rb->sequences,
+ (void *) &rnode,
+ HASH_FIND, &found);
+ Assert(!found);
+ }
+#endif
+
+ if (xid != InvalidTransactionId)
+ txn = ReorderBufferTXNByXid(rb, xid, true, NULL, lsn, true);
+
+ /* setup snapshot to allow catalog access */
+ SetupHistoricSnapshot(snapshot_now, NULL);
+
+ /*
+ * Decoding needs access to syscaches et al., which in turn use
+ * heavyweight locks and such. Thus we need to have enough state around to
+ * keep track of those. The easiest way is to simply use a transaction
+ * internally. That also allows us to easily enforce that nothing writes
+ * to the database by checking for xid assignments.
+ *
+ * When we're called via the SQL SRF there's already a transaction
+ * started, so start an explicit subtransaction there.
+ */
+ using_subtxn = IsTransactionOrTransactionBlock();
+
+ PG_TRY();
+ {
+ Relation relation;
+ HeapTuple tuple;
+ bool isnull;
+ int64 last_value, log_cnt;
+ bool is_called;
+ Oid reloid;
+
+ if (using_subtxn)
+ BeginInternalSubTransaction("sequence");
+ else
+ StartTransactionCommand();
+
+ reloid = RelidByRelfilenode(rnode.spcNode, rnode.relNode);
+
+ if (reloid == InvalidOid)
+ elog(ERROR, "could not map filenode \"%s\" to relation OID",
+ relpathperm(rnode,
+ MAIN_FORKNUM));
+
+ relation = RelationIdGetRelation(reloid);
+ tuple = &tuplebuf->tuple;
+
+ /*
+ * Extract the internal sequence values, describing the state.
+ *
+ * XXX Seems a bit strange to access it directly. Maybe there's
+ * a better / more correct way?
+ */
+ last_value = DatumGetInt64(heap_getattr(tuple, 1, RelationGetDescr(relation), &isnull));
+ log_cnt = DatumGetInt64(heap_getattr(tuple, 2, RelationGetDescr(relation), &isnull));
+ is_called = DatumGetBool(heap_getattr(tuple, 3, RelationGetDescr(relation), &isnull));
+
+ rb->sequence(rb, txn, lsn, relation, transactional, created,
+ last_value, log_cnt, is_called);
+
+ RelationClose(relation);
+
+ TeardownHistoricSnapshot(false);
+
+ AbortCurrentTransaction();
+
+ if (using_subtxn)
+ RollbackAndReleaseCurrentSubTransaction();
+ }
+ PG_CATCH();
+ {
+ TeardownHistoricSnapshot(true);
+
+ AbortCurrentTransaction();
+
+ if (using_subtxn)
+ RollbackAndReleaseCurrentSubTransaction();
+
+ PG_RE_THROW();
+ }
+ PG_END_TRY();
+ }
+}
+
/*
* AssertTXNLsnOrder
* Verify LSN ordering of transaction lists in the reorderbuffer
@@ -1532,6 +1760,31 @@ ReorderBufferCleanupTXN(ReorderBuffer *rb, ReorderBufferTXN *txn)
&found);
Assert(found);
+ /*
+ * Remove sequences created in this transaction (if any).
+ *
+ * There's no way to search by XID, so we simply do a seqscan of all
+ * the entries in the hash table. Hopefully there are only a couple
+ * entries in most cases - people generally don't create many new
+ * sequences over and over.
+ */
+ {
+ HASH_SEQ_STATUS scan_status;
+ ReorderBufferSequenceEnt *ent;
+
+ hash_seq_init(&scan_status, rb->sequences);
+ while ((ent = (ReorderBufferSequenceEnt *) hash_seq_search(&scan_status)) != NULL)
+ {
+ /* skip sequences not from this transaction */
+ if (ent->xid != txn->xid)
+ continue;
+
+ (void) hash_search(rb->sequences,
+ (void *) &(ent->rnode),
+ HASH_REMOVE, NULL);
+ }
+ }
+
/* remove entries spilled to disk */
if (rbtxn_is_serialized(txn))
ReorderBufferRestoreCleanup(rb, txn);
@@ -1947,6 +2200,39 @@ ReorderBufferApplyMessage(ReorderBuffer *rb, ReorderBufferTXN *txn,
change->data.msg.message);
}
+/*
+ * Helper function for ReorderBufferProcessTXN for applying sequences.
+ */
+static inline void
+ReorderBufferApplySequence(ReorderBuffer *rb, ReorderBufferTXN *txn,
+ Relation relation, ReorderBufferChange *change,
+ bool streaming)
+{
+ HeapTuple tuple;
+ bool isnull;
+ int64 last_value, log_cnt;
+ bool is_called;
+
+ /* FIXME support streaming */
+ Assert(!streaming);
+
+ tuple = &change->data.sequence.tuple->tuple;
+
+ /*
+ * Extract the internal sequence values, describing the state.
+ *
+ * XXX Seems a bit strange to access it directly. Maybe there's
+ * a better / more correct way?
+ */
+ last_value = DatumGetInt64(heap_getattr(tuple, 1, RelationGetDescr(relation), &isnull));
+ log_cnt = DatumGetInt64(heap_getattr(tuple, 2, RelationGetDescr(relation), &isnull));
+ is_called = DatumGetBool(heap_getattr(tuple, 3, RelationGetDescr(relation), &isnull));
+
+ rb->sequence(rb, txn, change->lsn, relation, true, /* gotta be transactional */
+ change->data.sequence.created,
+ last_value, log_cnt, is_called);
+}
+
/*
* Function to store the command id and snapshot at the end of the current
* stream so that we can reuse the same while sending the next stream.
@@ -2389,6 +2675,31 @@ ReorderBufferProcessTXN(ReorderBuffer *rb, ReorderBufferTXN *txn,
case REORDER_BUFFER_CHANGE_INTERNAL_TUPLECID:
elog(ERROR, "tuplecid value in changequeue");
break;
+
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ Assert(snapshot_now);
+
+ reloid = RelidByRelfilenode(change->data.sequence.relnode.spcNode,
+ change->data.sequence.relnode.relNode);
+
+ if (reloid == InvalidOid)
+ elog(ERROR, "could not map filenode \"%s\" to relation OID",
+ relpathperm(change->data.sequence.relnode,
+ MAIN_FORKNUM));
+
+ relation = RelationIdGetRelation(reloid);
+
+ if (!RelationIsValid(relation))
+ elog(ERROR, "could not open relation with OID %u (for filenode \"%s\")",
+ reloid,
+ relpathperm(change->data.sequence.relnode,
+ MAIN_FORKNUM));
+
+ if (RelationIsLogicallyLogged(relation))
+ ReorderBufferApplySequence(rb, txn, relation, change, streaming);
+
+ RelationClose(relation);
+ break;
}
}
@@ -3776,6 +4087,39 @@ ReorderBufferSerializeChange(ReorderBuffer *rb, ReorderBufferTXN *txn,
memcpy(data, change->data.truncate.relids, size);
data += size;
+ break;
+ }
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ {
+ char *data;
+ ReorderBufferTupleBuf *tup;
+ Size len = 0;
+
+ tup = change->data.sequence.tuple;
+
+ if (tup)
+ {
+ sz += sizeof(HeapTupleData);
+ len = tup->tuple.t_len;
+ sz += len;
+ }
+
+ /* make sure we have enough space */
+ ReorderBufferSerializeReserve(rb, sz);
+
+ data = ((char *) rb->outbuf) + sizeof(ReorderBufferDiskChange);
+ /* might have been reallocated above */
+ ondisk = (ReorderBufferDiskChange *) rb->outbuf;
+
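+ /*
+ * The sequence tuple (header and data) is stored right after the
+ * ReorderBufferDiskChange struct; the relfilenode and the "created"
+ * flag are serialized as part of the change struct itself.
+ */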
+ if (len)
+ {
+ memcpy(data, &tup->tuple, sizeof(HeapTupleData));
+ data += sizeof(HeapTupleData);
+
+ memcpy(data, tup->tuple.t_data, len);
+ data += len;
+ }
+
break;
}
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
@@ -4040,6 +4384,22 @@ ReorderBufferChangeSize(ReorderBufferChange *change)
{
sz += sizeof(Oid) * change->data.truncate.nrelids;
+ break;
+ }
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ {
+ ReorderBufferTupleBuf *tup;
+ Size len = 0;
+
+ tup = change->data.sequence.tuple;
+
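+ /* account for the sequence tuple kept in memory (header and data) */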
+ if (tup)
+ {
+ sz += sizeof(HeapTupleData);
+ len = tup->tuple.t_len;
+ sz += len;
+ }
+
break;
}
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
@@ -4341,6 +4701,30 @@ ReorderBufferRestoreChange(ReorderBuffer *rb, ReorderBufferTXN *txn,
break;
}
+
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ if (change->data.sequence.tuple)
+ {
+ uint32 tuplelen = ((HeapTuple) data)->t_len;
+
+ change->data.sequence.tuple =
+ ReorderBufferGetTupleBuf(rb, tuplelen - SizeofHeapTupleHeader);
+
+ /* restore ->tuple */
+ memcpy(&change->data.sequence.tuple->tuple, data,
+ sizeof(HeapTupleData));
+ data += sizeof(HeapTupleData);
+
+ /* reset t_data pointer into the new tuplebuf */
+ change->data.sequence.tuple->tuple.t_data =
+ ReorderBufferTupleBufData(change->data.sequence.tuple);
+
+ /* restore tuple data itself */
+ memcpy(change->data.sequence.tuple->tuple.t_data, data, tuplelen);
+ data += tuplelen;
+ }
+ break;
+
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT:
case REORDER_BUFFER_CHANGE_INTERNAL_COMMAND_ID:
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index f07983a43c..24369a1522 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -100,6 +100,7 @@
#include "catalog/pg_subscription_rel.h"
#include "catalog/pg_type.h"
#include "commands/copy.h"
+#include "commands/sequence.h"
#include "miscadmin.h"
#include "parser/parse_relation.h"
#include "pgstat.h"
@@ -357,6 +358,12 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
*
* If the synchronization position is reached (SYNCDONE), then the table can
* be marked as READY and is no longer tracked.
+ *
+ * XXX This needs to handle sequences too - after AlterSubscription_refresh
+ * starts caring about sequences, GetSubscriptionNotReadyRelations won't
+ * return just tables, and we'll have to sync them here. Not sure it's worth
+ * creating a new "sync" worker per sequence, maybe we should just sync them
+ * in the current process (it's pretty light-weight).
*/
static void
process_syncing_tables_for_apply(XLogRecPtr current_lsn)
@@ -871,6 +878,99 @@ copy_table(Relation rel)
logicalrep_rel_close(relmapentry, NoLock);
}
+
+
+/*
+ * Fetch the current state (last_value, log_cnt, is_called) of a sequence
+ * from the publisher, by reading the sequence relation over the walreceiver
+ * connection.
+ */
+static void
+fetch_sequence_data(char *nspname, char *relname,
+ int64 *last_value, int64 *log_cnt, bool *is_called)
+{
+ WalRcvExecResult *res;
+ StringInfoData cmd;
+ TupleTableSlot *slot;
+ Oid tableRow[3] = {INT8OID, INT8OID, BOOLOID};
+
+ initStringInfo(&cmd);
+ appendStringInfo(&cmd, "SELECT last_value, log_cnt, is_called\n"
+ " FROM %s", quote_qualified_identifier(nspname, relname));
+
+ res = walrcv_exec(LogRepWorkerWalRcvConn, cmd.data, 3, tableRow);
+ pfree(cmd.data);
+
+ if (res->status != WALRCV_OK_TUPLES)
+ ereport(ERROR,
+ (errmsg("could not receive list of replicated tables from the publisher: %s",
+ res->err)));
+
+ /* Process the sequence. */
+ slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+ while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+ {
+ bool isnull;
+
+ *last_value = DatumGetInt64(slot_getattr(slot, 1, &isnull));
+ Assert(!isnull);
+
+ *log_cnt = DatumGetInt64(slot_getattr(slot, 2, &isnull));
+ Assert(!isnull);
+
+ *is_called = DatumGetBool(slot_getattr(slot, 3, &isnull));
+ Assert(!isnull);
+
+ ExecClearTuple(slot);
+ }
+ ExecDropSingleTupleTableSlot(slot);
+
+ walrcv_clear_result(res);
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Caller is responsible for locking the local relation.
+ */
+static void
+copy_sequence(Relation rel)
+{
+ LogicalRepRelMapEntry *relmapentry;
+ LogicalRepRelation lrel;
+ int64 last_value = 0,
+ log_cnt = 0;
+ bool is_called = false;
+
+ /* Get the publisher relation info. */
+ fetch_remote_table_info(get_namespace_name(RelationGetNamespace(rel)),
+ RelationGetRelationName(rel), &lrel);
+
+ /* Put the relation into relmap. */
+ logicalrep_relmap_update(&lrel);
+
+ /* Map the publisher relation to local one. */
+ relmapentry = logicalrep_rel_open(lrel.remoteid, NoLock);
+ Assert(rel == relmapentry->localrel);
+
+ Assert(lrel.relkind == RELKIND_SEQUENCE);
+
+ fetch_sequence_data(lrel.nspname, lrel.relname, &last_value, &log_cnt, &is_called);
+
+ elog(WARNING, "sequence %s info last_value %ld log_cnt %ld is_called %d",
+ quote_qualified_identifier(lrel.nspname, lrel.relname),
+ last_value, log_cnt, is_called);
+
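+ /*
+ * Set the local sequence to the state received from the publisher. This
+ * creates a new relfilenode (see ResetSequence2), so the update is
+ * discarded if the sync transaction aborts.
+ */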
+ ResetSequence2(RelationGetRelid(rel), last_value, log_cnt, is_called);
+
+ logicalrep_rel_close(relmapentry, NoLock);
+}
+
+
+
+
/*
* Determine the tablesync slot name.
*
@@ -1106,10 +1206,20 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
originname)));
}
- /* Now do the initial data copy */
- PushActiveSnapshot(GetTransactionSnapshot());
- copy_table(rel);
- PopActiveSnapshot();
+ if (get_rel_relkind(RelationGetRelid(rel)) == RELKIND_SEQUENCE)
+ {
+ /* Now do the initial sequence copy */
+ PushActiveSnapshot(GetTransactionSnapshot());
+ copy_sequence(rel);
+ PopActiveSnapshot();
+ }
+ else
+ {
+ /* Now do the initial data copy */
+ PushActiveSnapshot(GetTransactionSnapshot());
+ copy_table(rel);
+ PopActiveSnapshot();
+ }
res = walrcv_exec(LogRepWorkerWalRcvConn, "COMMIT", 0, NULL);
if (res->status != WALRCV_OK_COMMAND)
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 3f499b11f7..f4dbf5ed71 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -144,6 +144,7 @@
#include "catalog/pg_tablespace.h"
#include "commands/tablecmds.h"
#include "commands/tablespace.h"
+#include "commands/sequence.h"
#include "commands/trigger.h"
#include "executor/executor.h"
#include "executor/execPartition.h"
@@ -1073,6 +1074,61 @@ apply_handle_origin(StringInfo s)
errmsg_internal("ORIGIN message sent out of order")));
}
+/*
+ * Handle SEQUENCE message.
+ */
+static void
+apply_handle_sequence(StringInfo s)
+{
+ LogicalRepSequence seq;
+ Oid relid;
+
+ if (handle_streamed_transaction(LOGICAL_REP_MSG_SEQUENCE, s))
+ return;
+
+ logicalrep_read_sequence(s, &seq);
+
+ /*
+ * Transactional sequence updates arrive as part of a remote transaction;
+ * non-transactional updates must not be part of one, and there must be
+ * no local transaction in progress either.
+ */
+ Assert((!seq.transactional) || in_remote_transaction);
+ Assert(!(!seq.transactional && in_remote_transaction));
+ Assert(!(!seq.transactional && IsTransactionState()));
+
+ /*
+ * Make sure we're in a transaction (needed by ResetSequence2). For
+ * non-transactional updates we're guaranteed to start a new one,
+ * and we'll commit it at the end.
+ */
+ if (!IsTransactionState())
+ {
+ StartTransactionCommand();
+ maybe_reread_subscription();
+ }
+
+ relid = RangeVarGetRelid(makeRangeVar(seq.nspname,
+ seq.seqname, -1),
+ RowExclusiveLock, false);
+
+ /* lock the sequence in AccessExclusiveLock, as expected by ResetSequence2 */
+ elog(WARNING, "locking sequence %d in exclusive mode", relid);
+ LockRelationOid(relid, AccessExclusiveLock);
+
+ elog(WARNING, "applying sequence %s.%s transactional %d created %d last_value %ld log_cnt %ld is_called %d",
+ seq.nspname, seq.seqname, seq.transactional, seq.created, seq.last_value, seq.log_cnt, seq.is_called);
+
+ /*
+ * Apply the sequence change. ResetSequence2 writes the new state into a
+ * new relfilenode, so for transactional updates the change is discarded
+ * if the remote transaction aborts.
+ */
+ ResetSequence2(relid, seq.last_value, seq.log_cnt, seq.is_called);
+
+ /*
+ * Commit the local transaction we started above (we only do this when not
+ * in a remote transaction, i.e. for non-transactional sequence updates).
+ */
+ if (!in_remote_transaction)
+ CommitTransactionCommand();
+}
+
/*
* Handle STREAM START message.
*/
@@ -2327,6 +2383,10 @@ apply_dispatch(StringInfo s)
*/
return;
+ case LOGICAL_REP_MSG_SEQUENCE:
+ apply_handle_sequence(s);
+ return;
+
case LOGICAL_REP_MSG_STREAM_START:
apply_handle_stream_start(s);
return;
diff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c
index e4314af13a..4366c275dc 100644
--- a/src/backend/replication/pgoutput/pgoutput.c
+++ b/src/backend/replication/pgoutput/pgoutput.c
@@ -49,6 +49,10 @@ static void pgoutput_message(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn, XLogRecPtr message_lsn,
bool transactional, const char *prefix,
Size sz, const char *message);
+static void pgoutput_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation rel, bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called);
static bool pgoutput_origin_filter(LogicalDecodingContext *ctx,
RepOriginId origin_id);
static void pgoutput_begin_prepare_txn(LogicalDecodingContext *ctx,
@@ -157,6 +161,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->change_cb = pgoutput_change;
cb->truncate_cb = pgoutput_truncate;
cb->message_cb = pgoutput_message;
+ cb->sequence_cb = pgoutput_sequence;
cb->commit_cb = pgoutput_commit_txn;
cb->begin_prepare_cb = pgoutput_begin_prepare_txn;
@@ -186,6 +191,7 @@ parse_output_parameters(List *options, PGOutputData *data)
bool publication_names_given = false;
bool binary_option_given = false;
bool messages_option_given = false;
+ bool sequences_option_given = false;
bool streaming_given = false;
bool two_phase_option_given = false;
@@ -193,6 +199,7 @@ parse_output_parameters(List *options, PGOutputData *data)
data->streaming = false;
data->messages = false;
data->two_phase = false;
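+ /* sequence changes are published by default (see the "sequences" option) */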
+ data->sequences = true;
foreach(lc, options)
{
@@ -258,6 +265,16 @@ parse_output_parameters(List *options, PGOutputData *data)
data->messages = defGetBoolean(defel);
}
+ else if (strcmp(defel->defname, "sequences") == 0)
+ {
+ if (sequences_option_given)
+ ereport(ERROR,
+ (errcode(ERRCODE_SYNTAX_ERROR),
+ errmsg("conflicting or redundant options")));
+ sequences_option_given = true;
+
+ data->sequences = defGetBoolean(defel);
+ }
else if (strcmp(defel->defname, "streaming") == 0)
{
if (streaming_given)
@@ -865,6 +882,47 @@ pgoutput_message(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
OutputPluginWrite(ctx, true);
}
+static void
+pgoutput_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation rel, bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ PGOutputData *data = (PGOutputData *) ctx->output_plugin_private;
+ TransactionId xid = InvalidTransactionId;
+ RelationSyncEntry *relentry;
+
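+ /* The subscriber may have disabled sequence replication (the "sequences" option). */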
+ if (!data->sequences)
+ return;
+
+ if (!is_publishable_relation(rel))
+ return;
+
+ /* XXX handle (in_streaming) here */
+
+ relentry = get_rel_sync_entry(data, RelationGetRelid(rel));
+
+ /*
+ * Check whether the sequence is published at all, i.e. whether any of
+ * the publications the relation belongs to has the "sequence" action
+ * enabled.
+ */
+ if (!relentry->pubactions.pubsequence)
+ return;
+
+ OutputPluginPrepareWrite(ctx, true);
+ logicalrep_write_sequence(ctx->out,
+ rel,
+ xid,
+ sequence_lsn,
+ transactional,
+ created,
+ last_value,
+ log_cnt,
+ is_called);
+ OutputPluginWrite(ctx, true);
+}
+
/*
* Currently we always forward.
*/
@@ -1128,7 +1186,8 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
entry->streamed_txns = NIL;
entry->replicate_valid = false;
entry->pubactions.pubinsert = entry->pubactions.pubupdate =
- entry->pubactions.pubdelete = entry->pubactions.pubtruncate = false;
+ entry->pubactions.pubdelete = entry->pubactions.pubtruncate =
+ entry->pubactions.pubsequence = false;
entry->publish_as_relid = InvalidOid;
entry->map = NULL; /* will be set by maybe_send_schema() if
* needed */
@@ -1140,6 +1199,7 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
List *pubids = GetRelationPublications(relid);
ListCell *lc;
Oid publish_as_relid = relid;
+ bool is_sequence = (get_rel_relkind(relid) == RELKIND_SEQUENCE);
/* Reload publications if needed before use. */
if (!publications_valid)
@@ -1163,12 +1223,23 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
Publication *pub = lfirst(lc);
bool publish = false;
- if (pub->alltables)
+ if (pub->alltables && (!is_sequence))
{
publish = true;
if (pub->pubviaroot && am_partition)
publish_as_relid = llast_oid(get_partition_ancestors(relid));
}
+ else if (pub->allsequences && is_sequence)
+ {
+ publish = true;
+ }
+
+ /* if a sequence, just cross-check the list of publications */
+ if (!publish && is_sequence)
+ {
+ if (list_member_oid(pubids, pub->oid))
+ publish = true;
+ }
if (!publish)
{
@@ -1219,10 +1290,12 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
entry->pubactions.pubupdate |= pub->pubactions.pubupdate;
entry->pubactions.pubdelete |= pub->pubactions.pubdelete;
entry->pubactions.pubtruncate |= pub->pubactions.pubtruncate;
+ entry->pubactions.pubsequence |= pub->pubactions.pubsequence;
}
if (entry->pubactions.pubinsert && entry->pubactions.pubupdate &&
- entry->pubactions.pubdelete && entry->pubactions.pubtruncate)
+ entry->pubactions.pubdelete && entry->pubactions.pubtruncate &&
+ entry->pubactions.pubsequence)
break;
}
@@ -1367,6 +1440,7 @@ rel_sync_cache_publication_cb(Datum arg, int cacheid, uint32 hashvalue)
entry->pubactions.pubupdate = false;
entry->pubactions.pubdelete = false;
entry->pubactions.pubtruncate = false;
+ entry->pubactions.pubsequence = false;
}
}
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index 13d9994af3..3dd39054e6 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -5495,6 +5495,7 @@ GetRelationPublicationActions(Relation relation)
pubactions->pubupdate |= pubform->pubupdate;
pubactions->pubdelete |= pubform->pubdelete;
pubactions->pubtruncate |= pubform->pubtruncate;
+ pubactions->pubsequence |= pubform->pubsequence;
ReleaseSysCache(tup);
@@ -5503,7 +5504,8 @@ GetRelationPublicationActions(Relation relation)
* other publications.
*/
if (pubactions->pubinsert && pubactions->pubupdate &&
- pubactions->pubdelete && pubactions->pubtruncate)
+ pubactions->pubdelete && pubactions->pubtruncate &&
+ pubactions->pubsequence)
break;
}
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 064892bade..fac6f9a563 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1640,7 +1640,7 @@ psql_completion(const char *text, int start, int end)
/* ALTER PUBLICATION <name> */
else if (Matches("ALTER", "PUBLICATION", MatchAny))
- COMPLETE_WITH("ADD TABLE", "DROP TABLE", "OWNER TO", "RENAME TO", "SET");
+ COMPLETE_WITH("ADD TABLE", "DROP TABLE", "ADD SEQUENCE", "DROP SEQUENCE", "OWNER TO", "RENAME TO", "SET");
/* ALTER PUBLICATION <name> SET */
else if (Matches("ALTER", "PUBLICATION", MatchAny, "SET"))
COMPLETE_WITH("(", "TABLE");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 8cd0252082..8700a72dea 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -11445,6 +11445,11 @@
provolatile => 's', prorettype => 'oid', proargtypes => 'text',
proallargtypes => '{text,oid}', proargmodes => '{i,o}',
proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_tables' },
+{ oid => '8000', descr => 'get OIDs of sequences in a publication',
+ proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+ provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+ proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+ proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
{ oid => '6121',
descr => 'returns whether a relation can be part of a publication',
proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index f332bad4d4..150b8922cf 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
*/
bool puballtables;
+ /*
+ * indicates that this is special publication which should encompass all
+ * sequences in the database (except for the unlogged and temp ones)
+ */
+ bool puballsequences;
+
/* true if inserts are published */
bool pubinsert;
@@ -52,6 +58,9 @@ CATALOG(pg_publication,6104,PublicationRelationId)
/* true if truncates are published */
bool pubtruncate;
+ /* true if sequences are published */
+ bool pubsequence;
+
/* true if partition changes are published using root schema */
bool pubviaroot;
} FormData_pg_publication;
@@ -72,6 +81,7 @@ typedef struct PublicationActions
bool pubupdate;
bool pubdelete;
bool pubtruncate;
+ bool pubsequence;
} PublicationActions;
typedef struct Publication
@@ -79,6 +89,7 @@ typedef struct Publication
Oid oid;
char *name;
bool alltables;
+ bool allsequences;
bool pubviaroot;
PublicationActions pubactions;
} Publication;
@@ -107,6 +118,9 @@ extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
extern List *GetAllTablesPublications(void);
extern List *GetAllTablesPublicationRelations(bool pubviaroot);
+extern List *GetAllSequencesPublicationRelations(void);
+extern List *GetPublicationSequenceRelations(Oid pubid);
+
extern bool is_publishable_relation(Relation rel);
extern ObjectAddress publication_add_relation(Oid pubid, Relation targetrel,
bool if_not_exists);
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 40544dd4c7..c28e8695cb 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -48,6 +48,7 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
typedef struct xl_seq_rec
{
RelFileNode node;
+ bool created; /* is this a CREATE SEQUENCE */
/* SEQUENCE TUPLE DATA FOLLOWS AT THE END */
} xl_seq_rec;
@@ -59,6 +60,7 @@ extern ObjectAddress DefineSequence(ParseState *pstate, CreateSeqStmt *stmt);
extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
extern void DeleteSequenceTuple(Oid relid);
extern void ResetSequence(Oid seq_relid);
+extern void ResetSequence2(Oid seq_relid, int64 last_value, int64 log_cnt, bool is_called);
extern void ResetSequenceCaches(void);
extern void seq_redo(XLogReaderState *rptr);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index e28248af32..6e76c40f98 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -3644,8 +3644,10 @@ typedef struct AlterPublicationStmt
/* parameters used for ALTER PUBLICATION ... ADD/DROP TABLE */
List *tables; /* List of tables to add/drop */
+ List *sequences; /* List of sequences to add/drop */
bool for_all_tables; /* Special publication for all tables in db */
- DefElemAction tableAction; /* What action to perform with the tables */
+ bool for_all_sequences; /* Special publication for all sequences in db */
+ DefElemAction action; /* What action to perform with the tables/sequences */
} AlterPublicationStmt;
typedef struct CreateSubscriptionStmt
diff --git a/src/include/replication/logicalproto.h b/src/include/replication/logicalproto.h
index 63de90d94a..931bc0722a 100644
--- a/src/include/replication/logicalproto.h
+++ b/src/include/replication/logicalproto.h
@@ -60,6 +60,7 @@ typedef enum LogicalRepMsgType
LOGICAL_REP_MSG_RELATION = 'R',
LOGICAL_REP_MSG_TYPE = 'Y',
LOGICAL_REP_MSG_MESSAGE = 'M',
+ LOGICAL_REP_MSG_SEQUENCE = 'X', /* FIXME change */
LOGICAL_REP_MSG_BEGIN_PREPARE = 'b',
LOGICAL_REP_MSG_PREPARE = 'P',
LOGICAL_REP_MSG_COMMIT_PREPARED = 'K',
@@ -116,6 +117,19 @@ typedef struct LogicalRepTyp
char *typname; /* name of the remote type */
} LogicalRepTyp;
+/* Sequence info */
+typedef struct LogicalRepSequence
+{
+ Oid remoteid; /* unique id of the remote sequence */
+ char *nspname; /* schema name of remote sequence */
+ char *seqname; /* name of the remote sequence */
+ bool transactional;
+ bool created;
+ int64 last_value;
+ int64 log_cnt; /* XXX probably not needed? */
+ bool is_called; /* XXX probably not needed? */
+} LogicalRepSequence;
+
/* Transaction info */
typedef struct LogicalRepBeginData
{
@@ -223,6 +237,12 @@ extern List *logicalrep_read_truncate(StringInfo in,
bool *cascade, bool *restart_seqs);
extern void logicalrep_write_message(StringInfo out, TransactionId xid, XLogRecPtr lsn,
bool transactional, const char *prefix, Size sz, const char *message);
+extern void logicalrep_write_sequence(StringInfo out, Relation rel,
+ TransactionId xid, XLogRecPtr lsn,
+ bool transactional, bool created,
+ int64 last_value, int64 log_cnt,
+ bool is_called);
+extern void logicalrep_read_sequence(StringInfo in, LogicalRepSequence *seqdata);
extern void logicalrep_write_rel(StringInfo out, TransactionId xid,
Relation rel);
extern LogicalRepRelation *logicalrep_read_rel(StringInfo in);
diff --git a/src/include/replication/output_plugin.h b/src/include/replication/output_plugin.h
index 810495ed0e..e3e1b03f32 100644
--- a/src/include/replication/output_plugin.h
+++ b/src/include/replication/output_plugin.h
@@ -88,6 +88,19 @@ typedef void (*LogicalDecodeMessageCB) (struct LogicalDecodingContext *ctx,
Size message_size,
const char *message);
+/*
+ * Called for logical decoding of sequence updates.
+ */
+typedef void (*LogicalDecodeSequenceCB) (struct LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional,
+ bool created,
+ int64 last_value,
+ int64 log_cnt,
+ bool is_called);
+
/*
* Filter changes by origin.
*/
@@ -219,6 +232,7 @@ typedef struct OutputPluginCallbacks
LogicalDecodeTruncateCB truncate_cb;
LogicalDecodeCommitCB commit_cb;
LogicalDecodeMessageCB message_cb;
+ LogicalDecodeSequenceCB sequence_cb;
LogicalDecodeFilterByOriginCB filter_by_origin_cb;
LogicalDecodeShutdownCB shutdown_cb;
diff --git a/src/include/replication/pgoutput.h b/src/include/replication/pgoutput.h
index 0dc460fb70..97eb2a7ef1 100644
--- a/src/include/replication/pgoutput.h
+++ b/src/include/replication/pgoutput.h
@@ -28,6 +28,7 @@ typedef struct PGOutputData
bool streaming;
bool messages;
bool two_phase;
+ bool sequences;
} PGOutputData;
#endif /* PGOUTPUT_H */
diff --git a/src/include/replication/reorderbuffer.h b/src/include/replication/reorderbuffer.h
index 5b40ff75f7..4512ed679b 100644
--- a/src/include/replication/reorderbuffer.h
+++ b/src/include/replication/reorderbuffer.h
@@ -64,7 +64,8 @@ enum ReorderBufferChangeType
REORDER_BUFFER_CHANGE_INTERNAL_SPEC_INSERT,
REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM,
REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT,
- REORDER_BUFFER_CHANGE_TRUNCATE
+ REORDER_BUFFER_CHANGE_TRUNCATE,
+ REORDER_BUFFER_CHANGE_SEQUENCE
};
/* forward declaration */
@@ -158,6 +159,14 @@ typedef struct ReorderBufferChange
uint32 ninvalidations; /* Number of messages */
SharedInvalidationMessage *invalidations; /* invalidation message */
} inval;
+
+ /* Context data for Sequence changes */
+ struct
+ {
+ RelFileNode relnode;
+ bool created;
+ ReorderBufferTupleBuf *tuple;
+ } sequence;
} data;
/*
@@ -430,6 +439,15 @@ typedef void (*ReorderBufferMessageCB) (ReorderBuffer *rb,
const char *prefix, Size sz,
const char *message);
+/* sequence callback signature */
+typedef void (*ReorderBufferSequenceCB) (ReorderBuffer *rb,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional, bool created,
+ int64 last_value, int64 log_cnt,
+ bool is_called);
+
/* begin prepare callback signature */
typedef void (*ReorderBufferBeginPrepareCB) (ReorderBuffer *rb,
ReorderBufferTXN *txn);
@@ -511,6 +529,12 @@ struct ReorderBuffer
*/
HTAB *by_txn;
+ /*
+ * relfilenode => XID lookup table for sequences created in a transaction
+ * (also includes altered sequences, which assigns new relfilenode)
+ */
+ HTAB *sequences;
+
/*
* Transactions that could be a toplevel xact, ordered by LSN of the first
* record bearing that xid.
@@ -541,6 +565,7 @@ struct ReorderBuffer
ReorderBufferApplyTruncateCB apply_truncate;
ReorderBufferCommitCB commit;
ReorderBufferMessageCB message;
+ ReorderBufferSequenceCB sequence;
/*
* Callbacks to be called when streaming a transaction at prepare time.
@@ -635,6 +660,10 @@ void ReorderBufferQueueChange(ReorderBuffer *, TransactionId,
void ReorderBufferQueueMessage(ReorderBuffer *, TransactionId, Snapshot snapshot, XLogRecPtr lsn,
bool transactional, const char *prefix,
Size message_size, const char *message);
+void ReorderBufferQueueSequence(ReorderBuffer *rb, TransactionId xid,
+ Snapshot snapshot, XLogRecPtr lsn, RepOriginId origin_id,
+ RelFileNode rnode, bool transactional, bool created,
+ ReorderBufferTupleBuf *tuplebuf);
void ReorderBufferCommit(ReorderBuffer *, TransactionId,
XLogRecPtr commit_lsn, XLogRecPtr end_lsn,
TimestampTz commit_time, RepOriginId origin_id, XLogRecPtr origin_lsn);
@@ -682,4 +711,7 @@ void ReorderBufferSetRestartPoint(ReorderBuffer *, XLogRecPtr ptr);
void StartupReorderBuffer(void);
+bool ReorderBufferSequenceIsTransactional(ReorderBuffer *rb,
+ RelFileNode rnode, bool created);
+
#endif
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index e5ab11275d..393be871be 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1451,6 +1451,14 @@ pg_prepared_xacts| SELECT p.transaction,
FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+ n.nspname AS schemaname,
+ c.relname AS sequencename
+ FROM pg_publication p,
+ LATERAL pg_get_publication_sequences((p.pubname)::text) gpt(relid),
+ (pg_class c
+ JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+ WHERE (c.oid = gpt.relid);
pg_publication_tables| SELECT p.pubname,
n.nspname AS schemaname,
c.relname AS tablename
diff --git a/src/test/subscription/t/023_sequences.pl b/src/test/subscription/t/023_sequences.pl
new file mode 100644
index 0000000000..1f7768acd6
--- /dev/null
+++ b/src/test/subscription/t/023_sequences.pl
@@ -0,0 +1,196 @@
+
+# Copyright (c) 2021, PostgreSQL Global Development Group
+
+# This tests that sequences are replicated correctly by logical replication
+use strict;
+use warnings;
+use PostgresNode;
+use TestLib;
+use Test::More tests => 6;
+
+# Initialize publisher node
+my $node_publisher = PostgresNode->new('publisher');
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->start;
+
+# Create subscriber node
+my $node_subscriber = PostgresNode->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Create some preexisting content on publisher
+my $ddl = qq(
+ CREATE SEQUENCE s;
+);
+
+# Setup structure on the publisher
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Create the same structure on subscriber, and an extra sequence that
+# we'll create on the publisher later
+$ddl = qq(
+ CREATE SEQUENCE s;
+ CREATE SEQUENCE s2;
+);
+
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Setup logical replication
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+ "CREATE PUBLICATION seq_pub");
+
+$node_publisher->safe_psql('postgres',
+ "ALTER PUBLICATION seq_pub ADD SEQUENCE s");
+
+$node_subscriber->safe_psql('postgres',
+ "CREATE SUBSCRIPTION seq_sub CONNECTION '$publisher_connstr' PUBLICATION seq_pub WITH (slot_name = seq_sub_slot)"
+);
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Wait for initial sync to finish as well
+my $synced_query =
+ "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('s', 'r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+ or die "Timed out while waiting for subscriber to synchronize data";
+
+# Insert initial test data
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ -- generate a number of values using the sequence
+ SELECT nextval('s') FROM generate_series(1,100);
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+my $result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s;
+));
+
+is( $result, '132|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+# advance the sequence in a rolled-back transaction - should be replicated
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ SELECT nextval('s') FROM generate_series(1,100);
+ ROLLBACK;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s;
+));
+
+is( $result, '231|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+# create a new sequence and roll it back - should not be replicated, due to
+# the transactional behavior
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ CREATE SEQUENCE s2;
+ ALTER PUBLICATION seq_pub ADD SEQUENCE s2;
+ SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '1|0|f',
+ 'check replicated sequence values on subscriber');
+
+
+# create a new sequence, advance it in a rolled-back subtransaction, but commit
+# the create - the advance should be replicated nevertheless
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ CREATE SEQUENCE s2;
+ ALTER PUBLICATION seq_pub ADD SEQUENCE s2;
+ SAVEPOINT sp1;
+ SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK TO sp1;
+ COMMIT;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Wait for sync of the second sequence we just added to finish
+$synced_query =
+ "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('s', 'r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+ or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '132|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+# advance the new sequence in a transaction, and roll it back - in this case
+# it should be replicated as the behavior is non-transactional
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '231|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+# advance the sequence in a subtransaction - the subtransaction gets rolled
+# back, but commit the main one - the changes should still be replicated
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ SAVEPOINT s1;
+ SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK TO s1;
+ COMMIT;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '330|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+$node_subscriber->stop('fast');
+$node_publisher->stop('fast');
On Tue, Jul 20, 2021 at 5:41 PM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:
I think the crucial aspect of this that needs discussion/feedback the
most is the transactional vs. non-transactional behavior. All the other
questions are less important / cosmetic.
Yeah, it seems really tricky to me to get this right. The hard part
is, I think, mostly figuring out what the right behavior really is.
DDL in PostgreSQL is transactional. Non-DDL operations on sequences
are non-transactional. If a given transaction does only one of those
things, it seems clear enough what to do, but when the same
(sub)transaction does both, it gets messy. I'd be tempted to think
about something like:
1. When a transaction performs only non-transactional operations on
sequences, they are emitted immediately.
2. If a transaction performs transactional operations on sequences,
the decoded operations acquire a dependency on the transaction and
cannot be emitted until that transaction is fully decoded. When commit
or abort of that XID is reached, emit the postponed non-transactional
operations at that point.
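To make rule 2 concrete, something like the following could exercise it
with test_decoding (purely illustrative - it assumes wal_level=logical,
the sequence support from the patch, and arbitrary names seq_slot / s3);
the point is that the nextval() increment would show up in the change
stream only once the commit is decoded:
CREATE SEQUENCE s3;
SELECT pg_create_logical_replication_slot('seq_slot', 'test_decoding');
BEGIN;
ALTER SEQUENCE s3 RESTART WITH 1000;  -- transactional operation on the sequence
SELECT nextval('s3');                 -- per rule 2, now depends on this XID
COMMIT;
-- the postponed increment would be emitted only after the commit above
SELECT data FROM pg_logical_slot_get_changes('seq_slot', NULL, NULL);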
I think this is similar to what you've designed, but I'm not sure that
it's exactly equivalent. I think in particular that it may be better
to insist that all of these operations are non-transactional and that
the debate is only about when they can be sent, rather than trying to
sort of convert them into an equivalent series of transactional
operations. That approach seems confusing especially in the case where
some (sub)transactions abort.
But this is just my $0.02.
--
Robert Haas
EDB: http://www.enterprisedb.com
On 30.07.21 20:26, Tomas Vondra wrote:
Here's an updated version of this patch - rebased to current master,
and fixing some of the issues raised in Peter's review.
This patch needs an update, as various conflicts have arisen now.
As was discussed before, it might be better to present a separate patch
for just the logical decoding part for now, since the replication and
DDL stuff has the potential to conflict heavily with other patches being
discussed right now. It looks like cutting this patch in two should be
doable easily.
I looked through the test cases in test_decoding again. It all looks
pretty sensible. If anyone can think of any other tricky or dubious
cases, we can add them there. It's easiest to discuss these things with
concrete test cases rather than in theory.
One slightly curious issue is that this can make sequence values go
backwards, when seen by the logical decoding consumer, like in the test
case:
+ BEGIN
+ sequence: public.test_sequence transactional: 1 created: 1 last_value:
1, log_cnt: 0 is_called: 0
+ COMMIT
+ sequence: public.test_sequence transactional: 0 created: 0 last_value:
33, log_cnt: 0 is_called: 1
+ BEGIN
+ sequence: public.test_sequence transactional: 1 created: 1 last_value:
4, log_cnt: 0 is_called: 1
+ COMMIT
+ sequence: public.test_sequence transactional: 0 created: 0 last_value:
334, log_cnt: 0 is_called: 1
I suppose that's okay, since it's not really the intention that someone
is concurrently consuming sequence values on the subscriber. Maybe
something for the future. Fixing that would require changing the way
transactional sequence DDL updates these values, so it's not directly
the job of the decoding to address this.
A small thing I found: Maybe the text that test_decoding produces for
sequences can be made to look more consistent with the one for tables.
For example, in
+ BEGIN
+ sequence: public.test_table_a_seq transactional: 1 created: 1
last_value: 1, log_cnt: 0 is_called: 0
+ sequence: public.test_table_a_seq transactional: 1 created: 0
last_value: 33, log_cnt: 0 is_called: 1
+ table public.test_table: INSERT: a[integer]:1 b[integer]:100
+ table public.test_table: INSERT: a[integer]:2 b[integer]:200
+ COMMIT
note how the punctuation is different.
Hi,
On 9/23/21 12:27 PM, Peter Eisentraut wrote:
On 30.07.21 20:26, Tomas Vondra wrote:
Here's an updated version of this patch - rebased to current master,
and fixing some of the issues raised in Peter's review.
This patch needs an update, as various conflicts have arisen now.
As was discussed before, it might be better to present a separate patch
for just the logical decoding part for now, since the replication and
DDL stuff has the potential to conflict heavily with other patches being
discussed right now. It looks like cutting this patch in two should be
doable easily.
Attached is the rebased patch, split into three parts:
1) basic decoding infrastructure (decoding, reorderbuffer etc.)
2) support for sequences in test_decoding
3) support for sequences in built-in replication (catalogs, syntax, ...)
The last part is the largest one - I'm sure we'll have discussions about
the grammar, adding sequences automatically, etc. But as you said, let's
focus on the first part, which deals with the required decoding stuff.
I've added a couple comments, explaining how we track sequences, why we
need the XID in nextval() etc. I've also added streaming support.
I looked through the test cases in test_decoding again. It all looks
pretty sensible. If anyone can think of any other tricky or dubious
cases, we can add them there. It's easiest to discuss these things with
concrete test cases rather than in theory.
One slightly curious issue is that this can make sequence values go
backwards, when seen by the logical decoding consumer, like in the test
case:
+ BEGIN
+ sequence: public.test_sequence transactional: 1 created: 1 last_value:
1, log_cnt: 0 is_called: 0
+ COMMIT
+ sequence: public.test_sequence transactional: 0 created: 0 last_value:
33, log_cnt: 0 is_called: 1
+ BEGIN
+ sequence: public.test_sequence transactional: 1 created: 1 last_value:
4, log_cnt: 0 is_called: 1
+ COMMIT
+ sequence: public.test_sequence transactional: 0 created: 0 last_value:
334, log_cnt: 0 is_called: 1
I suppose that's okay, since it's not really the intention that someone
is concurrently consuming sequence values on the subscriber. Maybe
something for the future. Fixing that would require changing the way
transactional sequence DDL updates these values, so it's not directly
the job of the decoding to address this.
Yeah, that's due to how ALTER SEQUENCE does things, and I agree redoing
that seems well out of scope for this patch. What seems a bit suspicious
is that some of the ALTER SEQUENCE changes have "created: 1" - it's
probably correct, though, because those ALTER SEQUENCE statements can be
rolled back, so we see it as if a new sequence is created. The flag name
might be a bit confusing, though.
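The new relfilenode is easy to see directly in pg_class, by the way, even
on unpatched PostgreSQL (demo_seq is just a throwaway name here):
CREATE SEQUENCE demo_seq;
SELECT relfilenode FROM pg_class WHERE relname = 'demo_seq';
ALTER SEQUENCE demo_seq RESTART WITH 1000;
SELECT relfilenode FROM pg_class WHERE relname = 'demo_seq';  -- should differ
DROP SEQUENCE demo_seq;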
A small thing I found: Maybe the text that test_decoding produces for
sequences can be made to look more consistent with the one for tables.
For example, in
+ BEGIN
+ sequence: public.test_table_a_seq transactional: 1 created: 1
last_value: 1, log_cnt: 0 is_called: 0
+ sequence: public.test_table_a_seq transactional: 1 created: 0
last_value: 33, log_cnt: 0 is_called: 1
+ table public.test_table: INSERT: a[integer]:1 b[integer]:100
+ table public.test_table: INSERT: a[integer]:2 b[integer]:200
+ COMMIT
note how the punctuation is different.
I did tweak this a bit, hopefully it's more consistent.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Attachments:
0001-Logical-decoding-of-sequences-20210924.patch (text/x-patch)
From 9b21a107dee280333e1f849134c22b2f4038d5b9 Mon Sep 17 00:00:00 2001
From: Tomas Vondra <tomas.vondra@postgresql.org>
Date: Fri, 24 Sep 2021 00:41:33 +0200
Subject: [PATCH 1/3] Logical decoding of sequences
This adds the infrastructure for logical decoding of sequences. Support
for built-in logical replication and test_decoding is added separately.
When decoding sequences, we differentiate between sequences created in
a (running) transaction, and sequences created in other (already
committed) transactions. Changes for sequences created in the same
top-level transaction are treated as "transactional" i.e. just like any
other change from that transaction (and discarded in case of a
rollback). Changes for sequences created earlier are treated as not
transactional - are processed immediately, as if performed outside any
transaction (and thus not rolled back).
This mixed behavior is necessary - sequences are non-transactional (e.g.
ROLLBACK does not undo the sequence increments). But for new sequences,
we need to handle them in a transactional way, because if we ever get
some DDL support, the sequence won't exist until the transaction gets
applied. So we need to ensure the increments don't happen until the
sequence gets created.
To differentiate which sequences are "old" and which were created in a
still-running transaction, we track sequences created in running
transactions in a hash table. Each sequence is identified by its
relfilenode, and at transaction commit we discard all sequences created
in that particular transaction. For each sequence we track the XID of
the (sub)transaction that created it, and we cleanup the sequences for
each subtransaction when it completes (commit/rollback).
We don't use the XID to check if it's the same top-level transaction.
It's enough to know it was created in an in-progress transaction, and we
know it must be the current one because otherwise it wouldn't see the
sequence object.
The main changes in this patch are:
1) ensure WAL-logging of all necessary info for sequence advances
We need to be able to associate the advance with an XID, but until now
a sequence advance might have XID 0 if it was the first thing the
transaction did. So we ensure the transaction has an XID assigned.
Note: This is needed because of subxacts. Even a record with XID 0 might
be for a sequence created in a different subxact of the same top-level xact.
Then there's a "created" flag added to the xl_seq_rec, which is
necessary to differentiate WAL for created/reset sequences.
2) decoding / queueing of sequences into ReorderBuffer
This is mostly copy-paste of existing code for other decoded events.
3) tracking sequences created in in-progress transactions
We use a simple hash table, indexed by relfilenode.
---
src/backend/commands/sequence.c | 83 +++-
src/backend/replication/logical/decode.c | 131 +++++-
src/backend/replication/logical/logical.c | 89 ++++
.../replication/logical/reorderbuffer.c | 424 ++++++++++++++++++
src/include/commands/sequence.h | 1 +
src/include/replication/output_plugin.h | 29 ++
src/include/replication/reorderbuffer.h | 44 +-
7 files changed, 793 insertions(+), 8 deletions(-)
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 72bfdc07a4..a98fcc2e97 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -94,7 +94,7 @@ static HTAB *seqhashtab = NULL; /* hash table for SeqTable items */
*/
static SeqTableData *last_used_seq = NULL;
-static void fill_seq_with_data(Relation rel, HeapTuple tuple);
+static void fill_seq_with_data(Relation rel, HeapTuple tuple, bool create);
static Relation lock_and_open_sequence(SeqTable seq);
static void create_seq_hashtable(void);
static void init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel);
@@ -222,7 +222,7 @@ DefineSequence(ParseState *pstate, CreateSeqStmt *seq)
/* now initialize the sequence's data */
tuple = heap_form_tuple(tupDesc, value, null);
- fill_seq_with_data(rel, tuple);
+ fill_seq_with_data(rel, tuple, true);
/* process OWNED BY if given */
if (owned_by)
@@ -327,7 +327,7 @@ ResetSequence(Oid seq_relid)
/*
* Insert the modified tuple into the new storage file.
*/
- fill_seq_with_data(seq_rel, tuple);
+ fill_seq_with_data(seq_rel, tuple, true);
/* Clear local cache so that we don't think we have cached numbers */
/* Note that we do not change the currval() state */
@@ -340,7 +340,7 @@ ResetSequence(Oid seq_relid)
* Initialize a sequence's relation with the specified tuple as content
*/
static void
-fill_seq_with_data(Relation rel, HeapTuple tuple)
+fill_seq_with_data(Relation rel, HeapTuple tuple, bool create)
{
Buffer buf;
Page page;
@@ -378,8 +378,31 @@ fill_seq_with_data(Relation rel, HeapTuple tuple)
/* check the comment above nextval_internal()'s equivalent call. */
if (RelationNeedsWAL(rel))
+ {
GetTopTransactionId();
+ /*
+ * Ensure we have a proper XID, which will be included in the XLOG
+ * record by XLogRecordAssemble. Otherwise the first nextval() in
+ * a subxact (without any preceding changes) would get XID 0,
+ * and it'd be impossible to decide which top xact it belongs to.
+ * It'd also trigger assert in DecodeSequence.
+ *
+ * XXX This might seem unnecessary, because if there's no XID the
+ * transaction couldn't have done anything important yet, e.g. it
+ * could not have created a sequence. But that's incorrect, as it
+ * ignores subtransactions. The current subtransaction might not
+ * have done anything yet (thus no XID), but an earlier one might
+ * have created the sequence.
+ *
+ * XXX Not sure if this is the best solution. Maybe do this only
+ * with wal_level=logical to minimize the overhead. OTOH advancing
+ * the sequence is likely followed by using the value(s) for some
+ * other activity, which assigns the XID.
+ */
+ GetCurrentTransactionId();
+ }
+
START_CRIT_SECTION();
MarkBufferDirty(buf);
@@ -399,6 +422,7 @@ fill_seq_with_data(Relation rel, HeapTuple tuple)
XLogRegisterBuffer(0, buf, REGBUF_WILL_INIT);
xlrec.node = rel->rd_node;
+ xlrec.created = create;
XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
XLogRegisterData((char *) tuple->t_data, tuple->t_len);
@@ -502,7 +526,7 @@ AlterSequence(ParseState *pstate, AlterSeqStmt *stmt)
/*
* Insert the modified tuple into the new storage file.
*/
- fill_seq_with_data(seqrel, newdatatuple);
+ fill_seq_with_data(seqrel, newdatatuple, true);
}
/* process OWNED BY if given */
@@ -766,8 +790,31 @@ nextval_internal(Oid relid, bool check_permissions)
* (Have to do that here, so we're outside the critical section)
*/
if (logit && RelationNeedsWAL(seqrel))
+ {
GetTopTransactionId();
+ /*
+ * Ensure we have a proper XID, which will be included in the XLOG
+ * record by XLogRecordAssemble. Otherwise the first nextval() in
+ * a subxact (without any preceding changes) would get XID 0,
+ * and it'd be impossible to decide which top xact it belongs to.
+ * It'd also trigger assert in DecodeSequence.
+ *
+ * XXX This might seem unnecessary, because if there's no XID the
+ * transaction couldn't have done anything important yet, e.g. it
+ * could not have created a sequence. But that's incorrect, as it
+ * ignores subtransactions. The current subtransaction might not
+ * have done anything yet (thus no XID), but an earlier one might
+ * have created the sequence.
+ *
+ * XXX Not sure if this is the best solution. Maybe do this only
+ * with wal_level=logical to minimize the overhead. OTOH advancing
+ * the sequence is likely followed by using the value(s) for some
+ * other activity, which assigns the XID.
+ */
+ GetCurrentTransactionId();
+ }
+
/* ready to change the on-disk (or really, in-buffer) tuple */
START_CRIT_SECTION();
@@ -803,6 +850,7 @@ nextval_internal(Oid relid, bool check_permissions)
seq->log_cnt = 0;
xlrec.node = seqrel->rd_node;
+ xlrec.created = false;
XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
XLogRegisterData((char *) seqdatatuple.t_data, seqdatatuple.t_len);
@@ -977,8 +1025,31 @@ do_setval(Oid relid, int64 next, bool iscalled)
/* check the comment above nextval_internal()'s equivalent call. */
if (RelationNeedsWAL(seqrel))
+ {
GetTopTransactionId();
+ /*
+ * Ensure we have a proper XID, which will be included in the XLOG
+ * record by XLogRecordAssemble. Otherwise the first nextval() in
+ * a subxact (without any preceding changes) would get XID 0,
+ * and it'd be impossible to decide which top xact it belongs to.
+ * It'd also trigger assert in DecodeSequence.
+ *
+ * XXX This might seem unnecessary, because if there's no XID the
+ * transaction couldn't have done anything important yet, e.g. it
+ * could not have created a sequence. But that's incorrect, as it
+ * ignores subtransactions. The current subtransaction might not
+ * have done anything yet (thus no XID), but an earlier one might
+ * have created the sequence.
+ *
+ * XXX Not sure if this is the best solution. Maybe do this only
+ * with wal_level=logical to minimize the overhead. OTOH advancing
+ * the sequence is likely followed by using the value(s) for some
+ * other activity, which assigns the XID.
+ */
+ GetCurrentTransactionId();
+ }
+
/* ready to change the on-disk (or really, in-buffer) tuple */
START_CRIT_SECTION();
@@ -999,6 +1070,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
XLogRegisterBuffer(0, buf, REGBUF_WILL_INIT);
xlrec.node = seqrel->rd_node;
+ xlrec.created = false;
+
XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
XLogRegisterData((char *) seqdatatuple.t_data, seqdatatuple.t_len);
diff --git a/src/backend/replication/logical/decode.c b/src/backend/replication/logical/decode.c
index 2874dc0612..98d0edefb0 100644
--- a/src/backend/replication/logical/decode.c
+++ b/src/backend/replication/logical/decode.c
@@ -42,6 +42,7 @@
#include "replication/reorderbuffer.h"
#include "replication/snapbuild.h"
#include "storage/standby.h"
+#include "commands/sequence.h"
typedef struct XLogRecordBuffer
{
@@ -74,10 +75,11 @@ static void DecodeAbort(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
bool two_phase);
static void DecodePrepare(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
xl_xact_parsed_prepare *parsed);
-
+static void DecodeSequence(LogicalDecodingContext *ctx, XLogRecordBuffer *buf);
/* common function to decode tuples */
static void DecodeXLogTuple(char *data, Size len, ReorderBufferTupleBuf *tup);
+static void DecodeSeqTuple(char *data, Size len, ReorderBufferTupleBuf *tuple);
/* helper functions for decoding transactions */
static inline bool FilterPrepare(LogicalDecodingContext *ctx,
@@ -158,6 +160,10 @@ LogicalDecodingProcessRecord(LogicalDecodingContext *ctx, XLogReaderState *recor
DecodeLogicalMsgOp(ctx, &buf);
break;
+ case RM_SEQ_ID:
+ DecodeSequence(ctx, &buf);
+ break;
+
/*
* Rmgrs irrelevant for logical decoding; they describe stuff not
* represented in logical decoding. Add new rmgrs in rmgrlist.h's
@@ -173,7 +179,6 @@ LogicalDecodingProcessRecord(LogicalDecodingContext *ctx, XLogReaderState *recor
case RM_HASH_ID:
case RM_GIN_ID:
case RM_GIST_ID:
- case RM_SEQ_ID:
case RM_SPGIST_ID:
case RM_BRIN_ID:
case RM_COMMIT_TS_ID:
@@ -1312,3 +1317,125 @@ DecodeTXNNeedSkip(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
(txn_dbid != InvalidOid && txn_dbid != ctx->slot->data.database) ||
ctx->fast_forward || FilterByOrigin(ctx, origin_id));
}
+
+/*
+ * Decode Sequence Tuple
+ */
+static void
+DecodeSeqTuple(char *data, Size len, ReorderBufferTupleBuf *tuple)
+{
+ int datalen = len - sizeof(xl_seq_rec) - SizeofHeapTupleHeader;
+
+ Assert(datalen >= 0);
+
+ tuple->tuple.t_len = datalen + SizeofHeapTupleHeader;
+
+ ItemPointerSetInvalid(&tuple->tuple.t_self);
+
+ tuple->tuple.t_tableOid = InvalidOid;
+
+ memcpy(((char *) tuple->tuple.t_data),
+ data + sizeof(xl_seq_rec),
+ SizeofHeapTupleHeader);
+
+ memcpy(((char *) tuple->tuple.t_data) + SizeofHeapTupleHeader,
+ data + sizeof(xl_seq_rec) + SizeofHeapTupleHeader,
+ datalen);
+}
+
+/*
+ * Handle sequence decode
+ *
+ * Decoding sequences is a bit tricky, because while most sequence actions
+ * are non-transactional (not subject to rollback), some need to be handled
+ * as transactional.
+ *
+ * By default, a sequence increment is non-transactional - we must not queue
+ * it in a transaction as other changes, because the transaction might get
+ * rolled back and we'd discard the increment. The downstream would not be
+ * notified about the increment, which is wrong.
+ *
+ * On the other hand, the sequence may be created in a transaction. In this
+ * case we *should* queue the change as other changes in the transaction,
+ * because we don't want to send the increments for an unknown sequence to the
+ * plugin - it might get confused about which sequence it's related to etc.
+ */
+static void
+DecodeSequence(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
+{
+ SnapBuild *builder = ctx->snapshot_builder;
+ ReorderBufferTupleBuf *tuplebuf;
+ RelFileNode target_node;
+ XLogReaderState *r = buf->record;
+ char *tupledata = NULL;
+ Size tuplelen;
+ Size datalen = 0;
+ TransactionId xid = XLogRecGetXid(r);
+ uint8 info = XLogRecGetInfo(buf->record) & ~XLR_INFO_MASK;
+ xl_seq_rec *xlrec;
+ Snapshot snapshot;
+ RepOriginId origin_id = XLogRecGetOrigin(r);
+ bool transactional;
+
+ /* only decode changes flagged with XLOG_SEQ_LOG */
+ if (info != XLOG_SEQ_LOG)
+ elog(ERROR, "unexpected RM_SEQ_ID record type: %u", info);
+
+ /*
+ * If we don't have snapshot or we are just fast-forwarding, there is no
+ * point in decoding messages.
+ */
+ if (SnapBuildCurrentState(builder) < SNAPBUILD_FULL_SNAPSHOT ||
+ ctx->fast_forward)
+ return;
+
+ /* only interested in our database */
+ XLogRecGetBlockTag(r, 0, &target_node, NULL, NULL);
+ if (target_node.dbNode != ctx->slot->data.database)
+ return;
+
+ /* output plugin doesn't look for this origin, no need to queue */
+ if (FilterByOrigin(ctx, XLogRecGetOrigin(r)))
+ return;
+
+ tupledata = XLogRecGetData(r);
+ datalen = XLogRecGetDataLen(r);
+ tuplelen = datalen - SizeOfHeapHeader - sizeof(xl_seq_rec);
+
+ /* extract the WAL record, with "created" flag */
+ xlrec = (xl_seq_rec *) XLogRecGetData(r);
+
+ /* XXX how could we have sequence change without data? */
+ if (!datalen || !tupledata)
+ return;
+
+ tuplebuf = ReorderBufferGetTupleBuf(ctx->reorder, tuplelen);
+ DecodeSeqTuple(tupledata, datalen, tuplebuf);
+
+ /*
+ * Should we handle the sequence increment as transactional or not?
+ *
+ * If the sequence was created in a still-running transaction, treat
+ * it as transactional and queue the increments. Otherwise it needs
+ * to be treated as non-transactional, in which case we send it to
+ * the plugin right away.
+ */
+ transactional = ReorderBufferSequenceIsTransactional(ctx->reorder,
+ target_node,
+ xlrec->created);
+
+ /* Skip the change if already processed (per the snapshot). */
+ if (transactional &&
+ !SnapBuildProcessChange(builder, xid, buf->origptr))
+ return;
+ else if (!transactional &&
+ (SnapBuildCurrentState(builder) != SNAPBUILD_CONSISTENT ||
+ SnapBuildXactNeedsSkip(builder, buf->origptr)))
+ return;
+
+ /* Queue the increment (or send immediately if not transactional). */
+ snapshot = SnapBuildGetOrBuildSnapshot(builder, xid);
+ ReorderBufferQueueSequence(ctx->reorder, xid, snapshot, buf->endptr,
+ origin_id, target_node, transactional,
+ xlrec->created, tuplebuf);
+}
diff --git a/src/backend/replication/logical/logical.c b/src/backend/replication/logical/logical.c
index aae0ae5b8a..6e7a2b7144 100644
--- a/src/backend/replication/logical/logical.c
+++ b/src/backend/replication/logical/logical.c
@@ -73,6 +73,10 @@ static void truncate_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
static void message_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
XLogRecPtr message_lsn, bool transactional,
const char *prefix, Size message_size, const char *message);
+static void sequence_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called);
/* streaming callbacks */
static void stream_start_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
@@ -90,6 +94,10 @@ static void stream_change_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn
static void stream_message_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
XLogRecPtr message_lsn, bool transactional,
const char *prefix, Size message_size, const char *message);
+static void stream_sequence_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called);
static void stream_truncate_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
int nrelations, Relation relations[], ReorderBufferChange *change);
@@ -217,6 +225,7 @@ StartupDecodingContext(List *output_plugin_options,
ctx->reorder->apply_truncate = truncate_cb_wrapper;
ctx->reorder->commit = commit_cb_wrapper;
ctx->reorder->message = message_cb_wrapper;
+ ctx->reorder->sequence = sequence_cb_wrapper;
/*
* To support streaming, we require start/stop/abort/commit/change
@@ -233,6 +242,7 @@ StartupDecodingContext(List *output_plugin_options,
(ctx->callbacks.stream_commit_cb != NULL) ||
(ctx->callbacks.stream_change_cb != NULL) ||
(ctx->callbacks.stream_message_cb != NULL) ||
+ (ctx->callbacks.stream_sequence_cb != NULL) ||
(ctx->callbacks.stream_truncate_cb != NULL);
/*
@@ -250,6 +260,7 @@ StartupDecodingContext(List *output_plugin_options,
ctx->reorder->stream_commit = stream_commit_cb_wrapper;
ctx->reorder->stream_change = stream_change_cb_wrapper;
ctx->reorder->stream_message = stream_message_cb_wrapper;
+ ctx->reorder->stream_sequence = stream_sequence_cb_wrapper;
ctx->reorder->stream_truncate = stream_truncate_cb_wrapper;
@@ -1202,6 +1213,43 @@ message_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
error_context_stack = errcallback.previous;
}
+static void
+sequence_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ LogicalDecodingContext *ctx = cache->private_data;
+ LogicalErrorCallbackState state;
+ ErrorContextCallback errcallback;
+
+ Assert(!ctx->fast_forward);
+
+ if (ctx->callbacks.sequence_cb == NULL)
+ return;
+
+ /* Push callback + info on the error context stack */
+ state.ctx = ctx;
+ state.callback_name = "sequence";
+ state.report_location = sequence_lsn;
+ errcallback.callback = output_plugin_error_callback;
+ errcallback.arg = (void *) &state;
+ errcallback.previous = error_context_stack;
+ error_context_stack = &errcallback;
+
+ /* set output state */
+ ctx->accept_writes = true;
+ ctx->write_xid = txn != NULL ? txn->xid : InvalidTransactionId;
+ ctx->write_location = sequence_lsn;
+
+ /* do the actual work: call callback */
+ ctx->callbacks.sequence_cb(ctx, txn, sequence_lsn, rel, transactional,
+ created, last_value, log_cnt, is_called);
+
+ /* Pop the error context stack */
+ error_context_stack = errcallback.previous;
+}
+
static void
stream_start_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
XLogRecPtr first_lsn)
@@ -1507,6 +1555,47 @@ stream_message_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
error_context_stack = errcallback.previous;
}
+static void
+stream_sequence_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ LogicalDecodingContext *ctx = cache->private_data;
+ LogicalErrorCallbackState state;
+ ErrorContextCallback errcallback;
+
+ Assert(!ctx->fast_forward);
+
+ /* We're only supposed to call this when streaming is supported. */
+ Assert(ctx->streaming);
+
+ /* this callback is optional */
+ if (ctx->callbacks.stream_sequence_cb == NULL)
+ return;
+
+ /* Push callback + info on the error context stack */
+ state.ctx = ctx;
+ state.callback_name = "stream_sequence";
+ state.report_location = sequence_lsn;
+ errcallback.callback = output_plugin_error_callback;
+ errcallback.arg = (void *) &state;
+ errcallback.previous = error_context_stack;
+ error_context_stack = &errcallback;
+
+ /* set output state */
+ ctx->accept_writes = true;
+ ctx->write_xid = txn != NULL ? txn->xid : InvalidTransactionId;
+ ctx->write_location = sequence_lsn;
+
+ /* do the actual work: call callback */
+ ctx->callbacks.stream_sequence_cb(ctx, txn, sequence_lsn, rel, transactional,
+ created, last_value, log_cnt, is_called);
+
+ /* Pop the error context stack */
+ error_context_stack = errcallback.previous;
+}
+
static void
stream_truncate_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
int nrelations, Relation relations[],
diff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c
index 46e66608cf..434926459f 100644
--- a/src/backend/replication/logical/reorderbuffer.c
+++ b/src/backend/replication/logical/reorderbuffer.c
@@ -77,6 +77,35 @@
* a bit more memory to the oldest subtransactions, because it's likely
* they are the source for the next sequence of changes.
*
+ * When decoding sequences, we differentiate between sequences created
+ * in a (running) transaction, and sequences created in other (already
+ * committed) transactions. Changes for sequences created in the same
+ * top-level transaction are treated as "transactional" i.e. just like
+ * any other change from that transaction (and discarded in case of a
+ * rollback). Changes for sequences created earlier are treated as not
+ * transactional - are processed immediately, as if performed outside
+ * any transaction (and thus not rolled back).
+ *
+ * This mixed behavior is necessary - sequences are non-transactional
+ * (e.g. ROLLBACK does not undo the sequence increments). But for new
+ * sequences, we need to handle them in a transactional way, because if
+ * we ever get some DDL support, the sequence won't exist until the
+ * transaction gets applied. So we need to ensure the increments don't
+ * happen until the sequence gets created.
+ *
+ * To differentiate which sequences are "old" and which were created
+ * in a still-running transaction, we track sequences created in running
+ * transactions in a hash table. Each sequence is identified by its
+ * relfilenode, and at transaction commit we discard all sequences
+ * created in that particular transaction. For each sequence we track
+ * the XID of the (sub)transaction that created it, and we cleanup the
+ * sequences for each subtransaction when it completes (commit/rollback).
+ *
+ * We don't use the XID to check if it's the same top-level transaction.
+ * It's enough to know it was created in an in-progress transaction,
+ * and we know it must be the current one because otherwise it wouldn't
+ * see the sequence object.
+ *
* -------------------------------------------------------------------------
*/
#include "postgres.h"
@@ -116,6 +145,13 @@ typedef struct ReorderBufferTXNByIdEnt
ReorderBufferTXN *txn;
} ReorderBufferTXNByIdEnt;
+/* entry for hash table we use to track sequences created in running xacts */
+typedef struct ReorderBufferSequenceEnt
+{
+ RelFileNode rnode;
+ TransactionId xid;
+} ReorderBufferSequenceEnt;
+
/* data structures for (relfilenode, ctid) => (cmin, cmax) mapping */
typedef struct ReorderBufferTupleCidKey
{
@@ -339,6 +375,14 @@ ReorderBufferAllocate(void)
buffer->by_txn = hash_create("ReorderBufferByXid", 1000, &hash_ctl,
HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+ /* hash table of sequences, mapping relfilenode to XID of transaction */
+ hash_ctl.keysize = sizeof(RelFileNode);
+ hash_ctl.entrysize = sizeof(ReorderBufferSequenceEnt);
+ hash_ctl.hcxt = buffer->context;
+
+ buffer->sequences = hash_create("ReorderBufferSequenceHash", 1000, &hash_ctl,
+ HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+
buffer->by_txn_last_xid = InvalidTransactionId;
buffer->by_txn_last_txn = NULL;
@@ -525,6 +569,13 @@ ReorderBufferReturnChange(ReorderBuffer *rb, ReorderBufferChange *change,
change->data.truncate.relids = NULL;
}
break;
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ if (change->data.sequence.tuple)
+ {
+ ReorderBufferReturnTupleBuf(rb, change->data.sequence.tuple);
+ change->data.sequence.tuple = NULL;
+ }
+ break;
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT:
case REORDER_BUFFER_CHANGE_INTERNAL_COMMAND_ID:
@@ -859,6 +910,242 @@ ReorderBufferQueueMessage(ReorderBuffer *rb, TransactionId xid,
}
}
+/*
+ * Treat the sequence increment as transactional?
+ *
+ * The hash table tracks all sequences created in in-progress transactions,
+ * so we simply do a lookup (the sequence is identified by relfilenode). If
+ * we find a match, the increment should be handled as transactional.
+ */
+bool
+ReorderBufferSequenceIsTransactional(ReorderBuffer *rb,
+ RelFileNode rnode, bool created)
+{
+ bool found = false;
+
+ if (created)
+ return true;
+
+ hash_search(rb->sequences,
+ (void *) &rnode,
+ HASH_FIND,
+ &found);
+
+ return found;
+}
+
+/*
+ * Cleanup sequences created in in-progress transactions.
+ *
+ * There's no way to search by XID, so we simply do a seqscan of all
+ * the entries in the hash table. Hopefully there are only a couple
+ * entries in most cases - people generally don't create many new
+ * sequences over and over.
+ */
+static void
+ReorderBufferSequenceCleanup(ReorderBuffer *rb, TransactionId xid)
+{
+ HASH_SEQ_STATUS scan_status;
+ ReorderBufferSequenceEnt *ent;
+
+ hash_seq_init(&scan_status, rb->sequences);
+ while ((ent = (ReorderBufferSequenceEnt *) hash_seq_search(&scan_status)) != NULL)
+ {
+ /* skip sequences not from this transaction */
+ if (ent->xid != xid)
+ continue;
+
+ (void) hash_search(rb->sequences,
+ (void *) &(ent->rnode),
+ HASH_REMOVE, NULL);
+ }
+}
+
+/*
+ * A transactional sequence increment is queued to be processed upon commit
+ * and a non-transactional increment gets processed immediately.
+ *
+ * A sequence update may be either transactional or non-transactional. When
+ * created in a running transaction, treat it as transactional and queue
+ * the change in it. Otherwise treat it as non-transactional, so that we
+ * don't forget the increment in case of a rollback.
+ */
+void
+ReorderBufferQueueSequence(ReorderBuffer *rb, TransactionId xid,
+ Snapshot snapshot, XLogRecPtr lsn, RepOriginId origin_id,
+ RelFileNode rnode, bool transactional, bool created,
+ ReorderBufferTupleBuf *tuplebuf)
+{
+ /*
+ * Change needs to be handled as transactional, because the sequence was
+ * created in a transaction that is still running. In that case all the
+ * changes need to be queued in that transaction, we must not send them
+ * to the downstream until the transaction commits.
+ *
+ * There's a bit of a trouble with subtransactions - we can't queue it
+ * into the subxact, because it might be rolled back and we'd lose the
+ * increment. We need to queue it into the same (sub)xact that created
+ * the sequence, which is why we track the XID in the hash table.
+ */
+ if (transactional)
+ {
+ MemoryContext oldcontext;
+ ReorderBufferChange *change;
+
+ /* lookup sequence by relfilenode */
+ ReorderBufferSequenceEnt *ent;
+ bool found;
+
+ /* transactional changes require a transaction */
+ Assert(xid != InvalidTransactionId);
+
+ /* search the lookup table (enter a new entry if this is the create) */
+ ent = hash_search(rb->sequences,
+ (void *) &rnode,
+ created ? HASH_ENTER : HASH_FIND,
+ &found);
+
+ /*
+ * If this is the "create" increment, we must not have found any
+ * pre-existing entry in the hash table (i.e. there must not be
+ * any conflicting sequence).
+ */
+ Assert(!(created && found));
+
+ /* But we must have either created or found an existing entry. */
+ Assert(created || found);
+
+ /*
+ * When creating the sequence, remember the XID of the transaction
+ * that created it.
+ */
+ if (created)
+ ent->xid = xid;
+
+ /* XXX Maybe check that we're still in the same top-level xact? */
+
+ /* OK, allocate and queue the change */
+ oldcontext = MemoryContextSwitchTo(rb->context);
+
+ change = ReorderBufferGetChange(rb);
+
+ change->action = REORDER_BUFFER_CHANGE_SEQUENCE;
+ change->origin_id = origin_id;
+
+ memcpy(&change->data.sequence.relnode, &rnode, sizeof(RelFileNode));
+
+ change->data.sequence.created = created;
+ change->data.sequence.tuple = tuplebuf;
+
+ /* add it to the same subxact that created the sequence */
+ ReorderBufferQueueChange(rb, ent->xid, lsn, change, false);
+
+ MemoryContextSwitchTo(oldcontext);
+ }
+ else
+ {
+ /*
+ * This increment is for a sequence that was not created in any
+ * running transaction, so we treat it as non-transactional and
+ * just send it to the output plugin directly.
+ */
+ ReorderBufferTXN *txn = NULL;
+ volatile Snapshot snapshot_now = snapshot;
+ bool using_subtxn;
+
+#ifdef USE_ASSERT_CHECKING
+ /* All "creates" have to be handled as transactional. */
+ Assert(!created);
+
+ /* Make sure the sequence is not in the hash table. */
+ {
+ bool found;
+ hash_search(rb->sequences,
+ (void *) &rnode,
+ HASH_FIND, &found);
+ Assert(!found);
+ }
+#endif
+
+ if (xid != InvalidTransactionId)
+ txn = ReorderBufferTXNByXid(rb, xid, true, NULL, lsn, true);
+
+ /* setup snapshot to allow catalog access */
+ SetupHistoricSnapshot(snapshot_now, NULL);
+
+ /*
+ * Decoding needs access to syscaches et al., which in turn use
+ * heavyweight locks and such. Thus we need to have enough state around to
+ * keep track of those. The easiest way is to simply use a transaction
+ * internally. That also allows us to easily enforce that nothing writes
+ * to the database by checking for xid assignments.
+ *
+ * When we're called via the SQL SRF there's already a transaction
+ * started, so start an explicit subtransaction there.
+ */
+ using_subtxn = IsTransactionOrTransactionBlock();
+
+ PG_TRY();
+ {
+ Relation relation;
+ HeapTuple tuple;
+ bool isnull;
+ int64 last_value, log_cnt;
+ bool is_called;
+ Oid reloid;
+
+ if (using_subtxn)
+ BeginInternalSubTransaction("sequence");
+ else
+ StartTransactionCommand();
+
+ reloid = RelidByRelfilenode(rnode.spcNode, rnode.relNode);
+
+ if (reloid == InvalidOid)
+ elog(ERROR, "could not map filenode \"%s\" to relation OID",
+ relpathperm(rnode,
+ MAIN_FORKNUM));
+
+ relation = RelationIdGetRelation(reloid);
+ tuple = &tuplebuf->tuple;
+
+ /*
+ * Extract the internal sequence values, describing the state.
+ *
+ * XXX Seems a bit strange to access it directly. Maybe there's
+ * a better / more correct way?
+ */
+ last_value = heap_getattr(tuple, 1, RelationGetDescr(relation), &isnull);
+ log_cnt = heap_getattr(tuple, 2, RelationGetDescr(relation), &isnull);
+ is_called = heap_getattr(tuple, 3, RelationGetDescr(relation), &isnull);
+
+ rb->sequence(rb, txn, lsn, relation, transactional, created,
+ last_value, log_cnt, is_called);
+
+ RelationClose(relation);
+
+ TeardownHistoricSnapshot(false);
+
+ AbortCurrentTransaction();
+
+ if (using_subtxn)
+ RollbackAndReleaseCurrentSubTransaction();
+ }
+ PG_CATCH();
+ {
+ TeardownHistoricSnapshot(true);
+
+ AbortCurrentTransaction();
+
+ if (using_subtxn)
+ RollbackAndReleaseCurrentSubTransaction();
+
+ PG_RE_THROW();
+ }
+ PG_END_TRY();
+ }
+}
+
/*
* AssertTXNLsnOrder
* Verify LSN ordering of transaction lists in the reorderbuffer
@@ -1535,6 +1822,9 @@ ReorderBufferCleanupTXN(ReorderBuffer *rb, ReorderBufferTXN *txn)
&found);
Assert(found);
+ /* Remove sequences created in this transaction (if any). */
+ ReorderBufferSequenceCleanup(rb, txn->xid);
+
/* remove entries spilled to disk */
if (rbtxn_is_serialized(txn))
ReorderBufferRestoreCleanup(rb, txn);
@@ -1950,6 +2240,42 @@ ReorderBufferApplyMessage(ReorderBuffer *rb, ReorderBufferTXN *txn,
change->data.msg.message);
}
+/*
+ * Helper function for ReorderBufferProcessTXN for applying sequences.
+ */
+static inline void
+ReorderBufferApplySequence(ReorderBuffer *rb, ReorderBufferTXN *txn,
+ Relation relation, ReorderBufferChange *change,
+ bool streaming)
+{
+ HeapTuple tuple;
+ bool isnull;
+ int64 last_value, log_cnt;
+ bool is_called;
+
+ tuple = &change->data.sequence.tuple->tuple;
+
+ /*
+ * Extract the internal sequence values, describing the state.
+ *
+ * XXX Seems a bit strange to access it directly. Maybe there's
+ * a better / more correct way?
+ */
+ last_value = heap_getattr(tuple, 1, RelationGetDescr(relation), &isnull);
+ log_cnt = heap_getattr(tuple, 2, RelationGetDescr(relation), &isnull);
+ is_called = heap_getattr(tuple, 3, RelationGetDescr(relation), &isnull);
+
+ /* Only ever called from ReorderBufferProcessTXN, so transactional. */
+ if (streaming)
+ rb->stream_sequence(rb, txn, change->lsn, relation, true,
+ change->data.sequence.created,
+ last_value, log_cnt, is_called);
+ else
+ rb->sequence(rb, txn, change->lsn, relation, true,
+ change->data.sequence.created,
+ last_value, log_cnt, is_called);
+}
+
/*
* Function to store the command id and snapshot at the end of the current
* stream so that we can reuse the same while sending the next stream.
@@ -2392,6 +2718,31 @@ ReorderBufferProcessTXN(ReorderBuffer *rb, ReorderBufferTXN *txn,
case REORDER_BUFFER_CHANGE_INTERNAL_TUPLECID:
elog(ERROR, "tuplecid value in changequeue");
break;
+
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ Assert(snapshot_now);
+
+ reloid = RelidByRelfilenode(change->data.sequence.relnode.spcNode,
+ change->data.sequence.relnode.relNode);
+
+ if (reloid == InvalidOid)
+ elog(ERROR, "could not map filenode \"%s\" to relation OID",
+ relpathperm(change->data.sequence.relnode,
+ MAIN_FORKNUM));
+
+ relation = RelationIdGetRelation(reloid);
+
+ if (!RelationIsValid(relation))
+ elog(ERROR, "could not open relation with OID %u (for filenode \"%s\")",
+ reloid,
+ relpathperm(change->data.sequence.relnode,
+ MAIN_FORKNUM));
+
+ if (RelationIsLogicallyLogged(relation))
+ ReorderBufferApplySequence(rb, txn, relation, change, streaming);
+
+ RelationClose(relation);
+ break;
}
}
@@ -3776,6 +4127,39 @@ ReorderBufferSerializeChange(ReorderBuffer *rb, ReorderBufferTXN *txn,
memcpy(data, change->data.truncate.relids, size);
data += size;
+ break;
+ }
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ {
+ char *data;
+ ReorderBufferTupleBuf *tup;
+ Size len = 0;
+
+ tup = change->data.sequence.tuple;
+
+ if (tup)
+ {
+ sz += sizeof(HeapTupleData);
+ len = tup->tuple.t_len;
+ sz += len;
+ }
+
+ /* make sure we have enough space */
+ ReorderBufferSerializeReserve(rb, sz);
+
+ data = ((char *) rb->outbuf) + sizeof(ReorderBufferDiskChange);
+ /* might have been reallocated above */
+ ondisk = (ReorderBufferDiskChange *) rb->outbuf;
+
+ if (len)
+ {
+ memcpy(data, &tup->tuple, sizeof(HeapTupleData));
+ data += sizeof(HeapTupleData);
+
+ memcpy(data, tup->tuple.t_data, len);
+ data += len;
+ }
+
break;
}
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
@@ -4040,6 +4424,22 @@ ReorderBufferChangeSize(ReorderBufferChange *change)
{
sz += sizeof(Oid) * change->data.truncate.nrelids;
+ break;
+ }
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ {
+ ReorderBufferTupleBuf *tup;
+ Size len = 0;
+
+ tup = change->data.sequence.tuple;
+
+ if (tup)
+ {
+ sz += sizeof(HeapTupleData);
+ len = tup->tuple.t_len;
+ sz += len;
+ }
+
break;
}
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
@@ -4341,6 +4741,30 @@ ReorderBufferRestoreChange(ReorderBuffer *rb, ReorderBufferTXN *txn,
break;
}
+
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ if (change->data.sequence.tuple)
+ {
+ uint32 tuplelen = ((HeapTuple) data)->t_len;
+
+ change->data.sequence.tuple =
+ ReorderBufferGetTupleBuf(rb, tuplelen - SizeofHeapTupleHeader);
+
+ /* restore ->tuple */
+ memcpy(&change->data.sequence.tuple->tuple, data,
+ sizeof(HeapTupleData));
+ data += sizeof(HeapTupleData);
+
+ /* reset t_data pointer into the new tuplebuf */
+ change->data.sequence.tuple->tuple.t_data =
+ ReorderBufferTupleBufData(change->data.sequence.tuple);
+
+ /* restore tuple data itself */
+ memcpy(change->data.sequence.tuple->tuple.t_data, data, tuplelen);
+ data += tuplelen;
+ }
+ break;
+
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT:
case REORDER_BUFFER_CHANGE_INTERNAL_COMMAND_ID:
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 40544dd4c7..5919fb90ee 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -48,6 +48,7 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
typedef struct xl_seq_rec
{
RelFileNode node;
+ bool created; /* is this a CREATE SEQUENCE */
/* SEQUENCE TUPLE DATA FOLLOWS AT THE END */
} xl_seq_rec;
diff --git a/src/include/replication/output_plugin.h b/src/include/replication/output_plugin.h
index 810495ed0e..57bc13f11c 100644
--- a/src/include/replication/output_plugin.h
+++ b/src/include/replication/output_plugin.h
@@ -88,6 +88,19 @@ typedef void (*LogicalDecodeMessageCB) (struct LogicalDecodingContext *ctx,
Size message_size,
const char *message);
+/*
+ * Called for logical decoding of sequence changes.
+ */
+typedef void (*LogicalDecodeSequenceCB) (struct LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional,
+ bool created,
+ int64 last_value,
+ int64 log_cnt,
+ bool is_called);
+
/*
* Filter changes by origin.
*/
@@ -199,6 +212,20 @@ typedef void (*LogicalDecodeStreamMessageCB) (struct LogicalDecodingContext *ctx
Size message_size,
const char *message);
+/*
+ * Callback for streaming sequence changes from in-progress transactions.
+ */
+typedef void (*LogicalDecodeStreamSequenceCB) (struct LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional,
+ bool created,
+ int64 last_value,
+ int64 log_cnt,
+ bool is_called);
+
/*
* Callback for streaming truncates from in-progress transactions.
*/
@@ -219,6 +246,7 @@ typedef struct OutputPluginCallbacks
LogicalDecodeTruncateCB truncate_cb;
LogicalDecodeCommitCB commit_cb;
LogicalDecodeMessageCB message_cb;
+ LogicalDecodeSequenceCB sequence_cb;
LogicalDecodeFilterByOriginCB filter_by_origin_cb;
LogicalDecodeShutdownCB shutdown_cb;
@@ -237,6 +265,7 @@ typedef struct OutputPluginCallbacks
LogicalDecodeStreamCommitCB stream_commit_cb;
LogicalDecodeStreamChangeCB stream_change_cb;
LogicalDecodeStreamMessageCB stream_message_cb;
+ LogicalDecodeStreamSequenceCB stream_sequence_cb;
LogicalDecodeStreamTruncateCB stream_truncate_cb;
} OutputPluginCallbacks;
diff --git a/src/include/replication/reorderbuffer.h b/src/include/replication/reorderbuffer.h
index 5b40ff75f7..63e6ed037b 100644
--- a/src/include/replication/reorderbuffer.h
+++ b/src/include/replication/reorderbuffer.h
@@ -64,7 +64,8 @@ enum ReorderBufferChangeType
REORDER_BUFFER_CHANGE_INTERNAL_SPEC_INSERT,
REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM,
REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT,
- REORDER_BUFFER_CHANGE_TRUNCATE
+ REORDER_BUFFER_CHANGE_TRUNCATE,
+ REORDER_BUFFER_CHANGE_SEQUENCE
};
/* forward declaration */
@@ -158,6 +159,14 @@ typedef struct ReorderBufferChange
uint32 ninvalidations; /* Number of messages */
SharedInvalidationMessage *invalidations; /* invalidation message */
} inval;
+
+ /* Context data for Sequence changes */
+ struct
+ {
+ RelFileNode relnode;
+ bool created;
+ ReorderBufferTupleBuf *tuple;
+ } sequence;
} data;
/*
@@ -430,6 +439,15 @@ typedef void (*ReorderBufferMessageCB) (ReorderBuffer *rb,
const char *prefix, Size sz,
const char *message);
+/* sequence callback signature */
+typedef void (*ReorderBufferSequenceCB) (ReorderBuffer *rb,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional, bool created,
+ int64 last_value, int64 log_cnt,
+ bool is_called);
+
/* begin prepare callback signature */
typedef void (*ReorderBufferBeginPrepareCB) (ReorderBuffer *rb,
ReorderBufferTXN *txn);
@@ -496,6 +514,15 @@ typedef void (*ReorderBufferStreamMessageCB) (
const char *prefix, Size sz,
const char *message);
+/* stream sequence callback signature */
+typedef void (*ReorderBufferStreamSequenceCB) (ReorderBuffer *rb,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional, bool created,
+ int64 last_value, int64 log_cnt,
+ bool is_called);
+
/* stream truncate callback signature */
typedef void (*ReorderBufferStreamTruncateCB) (
ReorderBuffer *rb,
@@ -511,6 +538,12 @@ struct ReorderBuffer
*/
HTAB *by_txn;
+ /*
+ * relfilenode => XID lookup table for sequences created in a transaction
+ * (also includes altered sequences, which get a new relfilenode)
+ */
+ HTAB *sequences;
+
/*
* Transactions that could be a toplevel xact, ordered by LSN of the first
* record bearing that xid.
@@ -541,6 +574,7 @@ struct ReorderBuffer
ReorderBufferApplyTruncateCB apply_truncate;
ReorderBufferCommitCB commit;
ReorderBufferMessageCB message;
+ ReorderBufferSequenceCB sequence;
/*
* Callbacks to be called when streaming a transaction at prepare time.
@@ -560,6 +594,7 @@ struct ReorderBuffer
ReorderBufferStreamCommitCB stream_commit;
ReorderBufferStreamChangeCB stream_change;
ReorderBufferStreamMessageCB stream_message;
+ ReorderBufferStreamSequenceCB stream_sequence;
ReorderBufferStreamTruncateCB stream_truncate;
/*
@@ -635,6 +670,10 @@ void ReorderBufferQueueChange(ReorderBuffer *, TransactionId,
void ReorderBufferQueueMessage(ReorderBuffer *, TransactionId, Snapshot snapshot, XLogRecPtr lsn,
bool transactional, const char *prefix,
Size message_size, const char *message);
+void ReorderBufferQueueSequence(ReorderBuffer *rb, TransactionId xid,
+ Snapshot snapshot, XLogRecPtr lsn, RepOriginId origin_id,
+ RelFileNode rnode, bool transactional, bool created,
+ ReorderBufferTupleBuf *tuplebuf);
void ReorderBufferCommit(ReorderBuffer *, TransactionId,
XLogRecPtr commit_lsn, XLogRecPtr end_lsn,
TimestampTz commit_time, RepOriginId origin_id, XLogRecPtr origin_lsn);
@@ -682,4 +721,7 @@ void ReorderBufferSetRestartPoint(ReorderBuffer *, XLogRecPtr ptr);
void StartupReorderBuffer(void);
+bool ReorderBufferSequenceIsTransactional(ReorderBuffer *rb,
+ RelFileNode rnode, bool created);
+
#endif
--
2.31.1
Attachment: 0002-Add-support-for-decoding-sequences-to-test_-20210924.patch (text/x-patch)
From e8f96eafc45a91b6d21aa1c46b6eaed547d7a89d Mon Sep 17 00:00:00 2001
From: Tomas Vondra <tomas.vondra@postgresql.org>
Date: Fri, 24 Sep 2021 00:42:04 +0200
Subject: [PATCH 2/3] Add support for decoding sequences to test_decoding
---
contrib/test_decoding/Makefile | 3 +-
contrib/test_decoding/expected/sequence.out | 327 ++++++++++++++++++++
contrib/test_decoding/sql/sequence.sql | 119 +++++++
contrib/test_decoding/test_decoding.c | 69 +++++
4 files changed, 517 insertions(+), 1 deletion(-)
create mode 100644 contrib/test_decoding/expected/sequence.out
create mode 100644 contrib/test_decoding/sql/sequence.sql
diff --git a/contrib/test_decoding/Makefile b/contrib/test_decoding/Makefile
index 9a31e0b879..56ddc3abae 100644
--- a/contrib/test_decoding/Makefile
+++ b/contrib/test_decoding/Makefile
@@ -5,7 +5,8 @@ PGFILEDESC = "test_decoding - example of a logical decoding output plugin"
REGRESS = ddl xact rewrite toast permissions decoding_in_xact \
decoding_into_rel binary prepared replorigin time messages \
- spill slot truncate stream stats twophase twophase_stream
+ spill slot truncate stream stats twophase twophase_stream \
+ sequence
ISOLATION = mxact delayed_startup ondisk_startup concurrent_ddl_dml \
oldest_xmin snapshot_transfer subxact_without_top concurrent_stream \
twophase_snapshot
diff --git a/contrib/test_decoding/expected/sequence.out b/contrib/test_decoding/expected/sequence.out
new file mode 100644
index 0000000000..beecc3e0c8
--- /dev/null
+++ b/contrib/test_decoding/expected/sequence.out
@@ -0,0 +1,327 @@
+-- predictability
+SET synchronous_commit = on;
+SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'test_decoding');
+ ?column?
+----------
+ init
+(1 row)
+
+CREATE SEQUENCE test_sequence;
+-- test the sequence changes by several nextval() calls
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 1
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 2
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 4
+(1 row)
+
+-- test the sequence changes by several ALTER commands
+ALTER SEQUENCE test_sequence INCREMENT BY 10;
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 14
+(1 row)
+
+ALTER SEQUENCE test_sequence START WITH 3000;
+ALTER SEQUENCE test_sequence MAXVALUE 10000;
+ALTER SEQUENCE test_sequence RESTART WITH 4000;
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 4000
+(1 row)
+
+-- test the sequence changes by several setval() calls
+SELECT setval('test_sequence', 3500);
+ setval
+--------
+ 3500
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3510
+(1 row)
+
+SELECT setval('test_sequence', 3500, true);
+ setval
+--------
+ 3500
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3510
+(1 row)
+
+SELECT setval('test_sequence', 3500, false);
+ setval
+--------
+ 3500
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3500
+(1 row)
+
+-- show results and drop sequence
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+ data
+------------------------------------------------------------------------------------------------
+ BEGIN
+ sequence public.test_sequence: transactional:1 created:1 last_value:1 log_cnt:0 is_called:0
+ COMMIT
+ sequence public.test_sequence: transactional:0 created:0 last_value:33 log_cnt:0 is_called:1
+ BEGIN
+ sequence public.test_sequence: transactional:1 created:1 last_value:4 log_cnt:0 is_called:1
+ COMMIT
+ sequence public.test_sequence: transactional:0 created:0 last_value:334 log_cnt:0 is_called:1
+ BEGIN
+ sequence public.test_sequence: transactional:1 created:1 last_value:14 log_cnt:32 is_called:1
+ COMMIT
+ BEGIN
+ sequence public.test_sequence: transactional:1 created:1 last_value:14 log_cnt:0 is_called:1
+ COMMIT
+ BEGIN
+ sequence public.test_sequence: transactional:1 created:1 last_value:4000 log_cnt:0 is_called:0
+ COMMIT
+ sequence public.test_sequence: transactional:0 created:0 last_value:4320 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:0 created:0 last_value:3500 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:0 created:0 last_value:3830 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:0 created:0 last_value:3500 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:0 created:0 last_value:3830 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:0 created:0 last_value:3500 log_cnt:0 is_called:0
+ sequence public.test_sequence: transactional:0 created:0 last_value:3820 log_cnt:0 is_called:1
+(24 rows)
+
+DROP SEQUENCE test_sequence;
+-- rollback on sequence creation and update
+BEGIN;
+CREATE SEQUENCE test_sequence;
+CREATE TABLE test_table (a INT);
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 1
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 2
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3
+(1 row)
+
+SELECT setval('test_sequence', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3001
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3002
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3003
+(1 row)
+
+ALTER SEQUENCE test_sequence RESTART WITH 6000;
+INSERT INTO test_table VALUES( (SELECT nextval('test_sequence')) );
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 6001
+(1 row)
+
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '0');
+ data
+------
+(0 rows)
+
+-- rollback on table creation with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '0');
+ data
+------
+(0 rows)
+
+-- rollback on table with serial column
+CREATE TABLE test_table (a SERIAL, b INT);
+BEGIN;
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+-- check table and sequence values after rollback
+SELECT * from test_table_a_seq;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+ 3003 | 30 | t
+(1 row)
+
+SELECT nextval('test_table_a_seq');
+ nextval
+---------
+ 3004
+(1 row)
+
+DROP TABLE test_table;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+ data
+---------------------------------------------------------------------------------------------------
+ BEGIN
+ sequence public.test_table_a_seq: transactional:1 created:1 last_value:1 log_cnt:0 is_called:0
+ COMMIT
+ sequence public.test_table_a_seq: transactional:0 created:0 last_value:33 log_cnt:0 is_called:1
+ sequence public.test_table_a_seq: transactional:0 created:0 last_value:3000 log_cnt:0 is_called:1
+ sequence public.test_table_a_seq: transactional:0 created:0 last_value:3033 log_cnt:0 is_called:1
+ BEGIN
+ COMMIT
+(8 rows)
+
+-- savepoint test on table with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+SAVEPOINT a;
+INSERT INTO test_table (b) VALUES (300);
+ROLLBACK TO SAVEPOINT a;
+DROP TABLE test_table;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+ data
+-------------------------------------------------------------------------------------------------
+ BEGIN
+ sequence public.test_table_a_seq: transactional:1 created:1 last_value:1 log_cnt:0 is_called:0
+ sequence public.test_table_a_seq: transactional:1 created:0 last_value:33 log_cnt:0 is_called:1
+ table public.test_table: INSERT: a[integer]:1 b[integer]:100
+ table public.test_table: INSERT: a[integer]:2 b[integer]:200
+ COMMIT
+(6 rows)
+
+-- savepoint test on sequence
+BEGIN;
+CREATE SEQUENCE test_sequence;
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 1
+(1 row)
+
+SELECT setval('test_sequence', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3001
+(1 row)
+
+SAVEPOINT a;
+ALTER SEQUENCE test_sequence START WITH 7000;
+SELECT setval('test_sequence', 5000);
+ setval
+--------
+ 5000
+(1 row)
+
+ROLLBACK TO SAVEPOINT a;
+SELECT * FROM test_sequence;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+ 3001 | 32 | t
+(1 row)
+
+DROP SEQUENCE test_sequence;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+ data
+------------------------------------------------------------------------------------------------
+ BEGIN
+ sequence public.test_sequence: transactional:1 created:1 last_value:1 log_cnt:0 is_called:0
+ sequence public.test_sequence: transactional:1 created:0 last_value:33 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:1 created:0 last_value:3000 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:1 created:0 last_value:3033 log_cnt:0 is_called:1
+ COMMIT
+(6 rows)
+
+SELECT pg_drop_replication_slot('regression_slot');
+ pg_drop_replication_slot
+--------------------------
+
+(1 row)
+
diff --git a/contrib/test_decoding/sql/sequence.sql b/contrib/test_decoding/sql/sequence.sql
new file mode 100644
index 0000000000..d8a34738f3
--- /dev/null
+++ b/contrib/test_decoding/sql/sequence.sql
@@ -0,0 +1,119 @@
+-- predictability
+SET synchronous_commit = on;
+SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'test_decoding');
+
+CREATE SEQUENCE test_sequence;
+
+-- test the sequence changes by several nextval() calls
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+
+-- test the sequence changes by several ALTER commands
+ALTER SEQUENCE test_sequence INCREMENT BY 10;
+SELECT nextval('test_sequence');
+
+ALTER SEQUENCE test_sequence START WITH 3000;
+ALTER SEQUENCE test_sequence MAXVALUE 10000;
+ALTER SEQUENCE test_sequence RESTART WITH 4000;
+SELECT nextval('test_sequence');
+
+-- test the sequence changes by several setval() calls
+SELECT setval('test_sequence', 3500);
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3500, true);
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3500, false);
+SELECT nextval('test_sequence');
+
+-- show results and drop sequence
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+DROP SEQUENCE test_sequence;
+
+-- rollback on sequence creation and update
+BEGIN;
+CREATE SEQUENCE test_sequence;
+CREATE TABLE test_table (a INT);
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3000);
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+ALTER SEQUENCE test_sequence RESTART WITH 6000;
+INSERT INTO test_table VALUES( (SELECT nextval('test_sequence')) );
+SELECT nextval('test_sequence');
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '0');
+
+-- rollback on table creation with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '0');
+
+-- rollback on table with serial column
+CREATE TABLE test_table (a SERIAL, b INT);
+
+BEGIN;
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+
+-- check table and sequence values after rollback
+SELECT * from test_table_a_seq;
+SELECT nextval('test_table_a_seq');
+
+DROP TABLE test_table;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+
+-- savepoint test on table with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+SAVEPOINT a;
+INSERT INTO test_table (b) VALUES (300);
+ROLLBACK TO SAVEPOINT a;
+DROP TABLE test_table;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+
+-- savepoint test on sequence
+BEGIN;
+CREATE SEQUENCE test_sequence;
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3000);
+SELECT nextval('test_sequence');
+SAVEPOINT a;
+ALTER SEQUENCE test_sequence START WITH 7000;
+SELECT setval('test_sequence', 5000);
+ROLLBACK TO SAVEPOINT a;
+SELECT * FROM test_sequence;
+DROP SEQUENCE test_sequence;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+
+SELECT pg_drop_replication_slot('regression_slot');
diff --git a/contrib/test_decoding/test_decoding.c b/contrib/test_decoding/test_decoding.c
index e5cd84e85e..45765d299e 100644
--- a/contrib/test_decoding/test_decoding.c
+++ b/contrib/test_decoding/test_decoding.c
@@ -35,6 +35,7 @@ typedef struct
bool include_timestamp;
bool skip_empty_xacts;
bool only_local;
+ bool skip_sequences;
} TestDecodingData;
/*
@@ -76,6 +77,10 @@ static void pg_decode_message(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn, XLogRecPtr message_lsn,
bool transactional, const char *prefix,
Size sz, const char *message);
+static void pg_decode_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation rel, bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called);
static bool pg_decode_filter_prepare(LogicalDecodingContext *ctx,
TransactionId xid,
const char *gid);
@@ -116,6 +121,10 @@ static void pg_decode_stream_message(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn, XLogRecPtr message_lsn,
bool transactional, const char *prefix,
Size sz, const char *message);
+static void pg_decode_stream_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation rel, bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called);
static void pg_decode_stream_truncate(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn,
int nrelations, Relation relations[],
@@ -141,6 +150,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->filter_by_origin_cb = pg_decode_filter;
cb->shutdown_cb = pg_decode_shutdown;
cb->message_cb = pg_decode_message;
+ cb->sequence_cb = pg_decode_sequence;
cb->filter_prepare_cb = pg_decode_filter_prepare;
cb->begin_prepare_cb = pg_decode_begin_prepare_txn;
cb->prepare_cb = pg_decode_prepare_txn;
@@ -153,6 +163,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->stream_commit_cb = pg_decode_stream_commit;
cb->stream_change_cb = pg_decode_stream_change;
cb->stream_message_cb = pg_decode_stream_message;
+ cb->stream_sequence_cb = pg_decode_stream_sequence;
cb->stream_truncate_cb = pg_decode_stream_truncate;
}
@@ -175,6 +186,9 @@ pg_decode_startup(LogicalDecodingContext *ctx, OutputPluginOptions *opt,
data->skip_empty_xacts = false;
data->only_local = false;
+ /* skip sequences by default for backwards compatibility */
+ data->skip_sequences = true;
+
ctx->output_plugin_private = data;
opt->output_type = OUTPUT_PLUGIN_TEXTUAL_OUTPUT;
@@ -265,6 +279,17 @@ pg_decode_startup(LogicalDecodingContext *ctx, OutputPluginOptions *opt,
errmsg("could not parse value \"%s\" for parameter \"%s\"",
strVal(elem->arg), elem->defname)));
}
+ else if (strcmp(elem->defname, "skip-sequences") == 0)
+ {
+
+ if (elem->arg == NULL)
+ continue; /* true by default */
+ else if (!parse_bool(strVal(elem->arg), &data->skip_sequences))
+ ereport(ERROR,
+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ errmsg("could not parse value \"%s\" for parameter \"%s\"",
+ strVal(elem->arg), elem->defname)));
+ }
else
{
ereport(ERROR,
@@ -744,6 +769,28 @@ pg_decode_message(LogicalDecodingContext *ctx,
OutputPluginWrite(ctx, true);
}
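+/*
+ * Print a decoded sequence change, unless disabled by the skip-sequences
+ * option.
+ */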
+static void
+pg_decode_sequence(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ TestDecodingData *data = ctx->output_plugin_private;
+
+ /* skip sequence output if requested */
+ if (data->skip_sequences)
+ return;
+
+ OutputPluginPrepareWrite(ctx, true);
+ appendStringInfoString(ctx->out, "sequence ");
+ appendStringInfoString(ctx->out,
+ quote_qualified_identifier(get_namespace_name(get_rel_namespace(RelationGetRelid(rel))),
+ RelationGetRelationName(rel)));
+ appendStringInfo(ctx->out, ": transactional:%d created:%d last_value:" INT64_FORMAT " log_cnt:" INT64_FORMAT " is_called:%d",
+ transactional, created, last_value, log_cnt, is_called);
+ OutputPluginWrite(ctx, true);
+}
+
static void
pg_decode_stream_start(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn)
@@ -943,6 +990,28 @@ pg_decode_stream_message(LogicalDecodingContext *ctx,
OutputPluginWrite(ctx, true);
}
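+/*
+ * Print a decoded sequence change in streaming mode, unless disabled by the
+ * skip-sequences option.
+ */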
+static void
+pg_decode_stream_sequence(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ TestDecodingData *data = ctx->output_plugin_private;
+
+ /* skip sequence output if requested */
+ if (data->skip_sequences)
+ return;
+
+ OutputPluginPrepareWrite(ctx, true);
+ appendStringInfoString(ctx->out, "streaming sequence ");
+ appendStringInfoString(ctx->out,
+ quote_qualified_identifier(get_namespace_name(get_rel_namespace(RelationGetRelid(rel))),
+ RelationGetRelationName(rel)));
+ appendStringInfo(ctx->out, ": transactional:%d created:%d last_value:" INT64_FORMAT " log_cnt:" INT64_FORMAT " is_called:%d",
+ transactional, created, last_value, log_cnt, is_called);
+ OutputPluginWrite(ctx, true);
+}
+
/*
* In streaming mode, we don't display the detailed information of Truncate.
* See pg_decode_stream_change.
--
2.31.1
Attachment: 0003-Add-support-for-decoding-sequences-to-built-20210924.patch (text/x-patch)
From d50a5ca52bce39afe0ca88a5c02703d7778db5be Mon Sep 17 00:00:00 2001
From: Tomas Vondra <tomas.vondra@postgresql.org>
Date: Fri, 24 Sep 2021 00:43:22 +0200
Subject: [PATCH 3/3] Add support for decoding sequences to built-in
replication
---
doc/src/sgml/catalogs.sgml | 71 +++++
doc/src/sgml/ref/alter_publication.sgml | 28 +-
doc/src/sgml/ref/alter_subscription.sgml | 4 +-
src/backend/catalog/pg_publication.c | 149 ++++++++++-
src/backend/catalog/system_views.sql | 10 +
src/backend/commands/publicationcmds.c | 237 ++++++++++++++++-
src/backend/commands/sequence.c | 79 ++++++
src/backend/commands/subscriptioncmds.c | 272 ++++++++++++++++++++
src/backend/executor/execReplication.c | 2 +-
src/backend/nodes/copyfuncs.c | 4 +-
src/backend/nodes/equalfuncs.c | 4 +-
src/backend/parser/gram.y | 43 +++-
src/backend/replication/logical/proto.c | 54 ++++
src/backend/replication/logical/tablesync.c | 118 ++++++++-
src/backend/replication/logical/worker.c | 60 +++++
src/backend/replication/pgoutput/pgoutput.c | 86 ++++++-
src/backend/utils/cache/relcache.c | 4 +-
src/bin/psql/tab-complete.c | 2 +-
src/include/catalog/pg_proc.dat | 5 +
src/include/catalog/pg_publication.h | 14 +
src/include/commands/sequence.h | 1 +
src/include/nodes/parsenodes.h | 4 +-
src/include/replication/logicalproto.h | 20 ++
src/include/replication/pgoutput.h | 1 +
src/test/regress/expected/rules.out | 8 +
src/test/subscription/t/023_sequences.pl | 196 ++++++++++++++
26 files changed, 1451 insertions(+), 25 deletions(-)
create mode 100644 src/test/subscription/t/023_sequences.pl
diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 2f0def9b19..1287804cf8 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -9430,6 +9430,11 @@ SCRAM-SHA-256$<replaceable><iteration count></replaceable>:<replaceable>&l
<entry>prepared transactions</entry>
</row>
+ <row>
+ <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+ <entry>publications and their associated sequences</entry>
+ </row>
+
<row>
<entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
<entry>publications and their associated tables</entry>
@@ -11264,6 +11269,72 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
</sect1>
+ <sect1 id="view-pg-publication-sequences">
+ <title><structname>pg_publication_sequences</structname></title>
+
+ <indexterm zone="view-pg-publication-sequences">
+ <primary>pg_publication_sequences</primary>
+ </indexterm>
+
+ <para>
+ The view <structname>pg_publication_sequences</structname> provides
+ information about the mapping between publications and the sequences they
+ contain. Unlike the underlying catalog
+ <link linkend="catalog-pg-publication-rel"><structname>pg_publication_rel</structname></link>,
+ this view expands
+ publications defined as <literal>FOR ALL SEQUENCES</literal>, so for such
+ publications there will be a row for each eligible sequence.
+ </para>
+
+ <table>
+ <title><structname>pg_publication_sequences</structname> Columns</title>
+ <tgroup cols="1">
+ <thead>
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ Column Type
+ </para>
+ <para>
+ Description
+ </para></entry>
+ </row>
+ </thead>
+
+ <tbody>
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>pubname</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+ </para>
+ <para>
+ Name of publication
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>schemaname</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+ </para>
+ <para>
+ Name of schema containing sequence
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>sequencename</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+ </para>
+ <para>
+ Name of sequence
+ </para></entry>
+ </row>
+ </tbody>
+ </tgroup>
+ </table>
+ </sect1>
+
<sect1 id="view-pg-publication-tables">
<title><structname>pg_publication_tables</structname></title>
diff --git a/doc/src/sgml/ref/alter_publication.sgml b/doc/src/sgml/ref/alter_publication.sgml
index faa114b2c6..c68a1573de 100644
--- a/doc/src/sgml/ref/alter_publication.sgml
+++ b/doc/src/sgml/ref/alter_publication.sgml
@@ -24,6 +24,9 @@ PostgreSQL documentation
ALTER PUBLICATION <replaceable class="parameter">name</replaceable> ADD TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [, ...]
ALTER PUBLICATION <replaceable class="parameter">name</replaceable> SET TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [, ...]
ALTER PUBLICATION <replaceable class="parameter">name</replaceable> DROP TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [, ...]
+ALTER PUBLICATION <replaceable class="parameter">name</replaceable> ADD SEQUENCE <replaceable class="parameter">sequence_name</replaceable> [ * ] [, ...]
+ALTER PUBLICATION <replaceable class="parameter">name</replaceable> SET SEQUENCE <replaceable class="parameter">sequence_name</replaceable> [ * ] [, ...]
+ALTER PUBLICATION <replaceable class="parameter">name</replaceable> DROP SEQUENCE <replaceable class="parameter">sequence_name</replaceable> [ * ] [, ...]
ALTER PUBLICATION <replaceable class="parameter">name</replaceable> SET ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )
ALTER PUBLICATION <replaceable class="parameter">name</replaceable> OWNER TO { <replaceable>new_owner</replaceable> | CURRENT_ROLE | CURRENT_USER | SESSION_USER }
ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <replaceable>new_name</replaceable>
@@ -50,7 +53,18 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
</para>
<para>
- The fourth variant of this command listed in the synopsis can change
+ The next three variants change which sequences are part of the publication.
+ The <literal>SET SEQUENCE</literal> clause will replace the list of sequences
+ in the publication with the specified one. The <literal>ADD SEQUENCE</literal>
+ and <literal>DROP SEQUENCE</literal> clauses will add and remove one or more
+ sequences from the publication. Note that adding sequences to a publication
+ that is already subscribed to will require an <literal>ALTER SUBSCRIPTION
+ ... REFRESH PUBLICATION</literal> action on the subscribing side in order
+ to become effective.
+ </para>
+
+ <para>
+ The seventh variant of this command listed in the synopsis can change
all of the publication properties specified in
<xref linkend="sql-createpublication"/>. Properties not mentioned in the
command retain their previous settings.
@@ -62,7 +76,8 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
<para>
You must own the publication to use <command>ALTER PUBLICATION</command>.
- Adding a table to a publication additionally requires owning that table.
+ Adding a table to a publication additionally requires owning that table,
+ and the same requirement applies to sequences.
To alter the owner, you must also be a direct or indirect member of the new
owning role. The new owner must have <literal>CREATE</literal> privilege on
the database. Also, the new owner of a <literal>FOR ALL TABLES</literal>
@@ -97,6 +112,15 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><replaceable class="parameter">sequence_name</replaceable></term>
+ <listitem>
+ <para>
+ Name of an existing sequence.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><literal>SET ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
<listitem>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index bec5e9c483..1f9cf6e649 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -146,7 +146,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
<listitem>
<para>
Fetch missing table information from publisher. This will start
- replication of tables that were added to the subscribed-to publications
+ replication of tables and sequences that were added to the subscribed-to publications
since <command>CREATE SUBSCRIPTION</command> or
the last invocation of <command>REFRESH PUBLICATION</command>.
</para>
@@ -163,7 +163,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
Specifies whether to copy pre-existing data in the publications
that are being subscribed to when the replication starts.
The default is <literal>true</literal>. (Previously-subscribed
- tables are not copied.)
+ tables and sequences are not copied.)
</para>
</listitem>
</varlistentry>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 9cd0c82f93..0502df1454 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -52,7 +52,8 @@ check_publication_add_relation(Relation targetrel)
{
/* Must be a regular or partitioned table */
if (RelationGetForm(targetrel)->relkind != RELKIND_RELATION &&
- RelationGetForm(targetrel)->relkind != RELKIND_PARTITIONED_TABLE)
+ RelationGetForm(targetrel)->relkind != RELKIND_PARTITIONED_TABLE &&
+ RelationGetForm(targetrel)->relkind != RELKIND_SEQUENCE)
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
errmsg("cannot add relation \"%s\" to publication",
@@ -99,7 +100,8 @@ static bool
is_publishable_class(Oid relid, Form_pg_class reltuple)
{
return (reltuple->relkind == RELKIND_RELATION ||
- reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+ reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+ reltuple->relkind == RELKIND_SEQUENCE) &&
!IsCatalogRelationOid(relid) &&
reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
relid >= FirstNormalObjectId;
@@ -318,6 +320,11 @@ GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
Form_pg_publication_rel pubrel;
pubrel = (Form_pg_publication_rel) GETSTRUCT(tup);
+
+ /* skip sequences here */
+ if (get_rel_relkind(pubrel->prrelid) == RELKIND_SEQUENCE)
+ continue;
+
result = GetPubPartitionOptionRelations(result, pub_partopt,
pubrel->prrelid);
}
@@ -328,6 +335,49 @@ GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
return result;
}
+/*
+ * Gets list of relation oids for a publication (sequences only).
+ *
+ * This should only be used for normal publications; FOR ALL SEQUENCES
+ * publications should use GetAllSequencesPublicationRelations().
+ */
+List *
+GetPublicationSequenceRelations(Oid pubid)
+{
+ List *result;
+ Relation pubrelsrel;
+ ScanKeyData scankey;
+ SysScanDesc scan;
+ HeapTuple tup;
+
+ /* Find all sequences associated with the publication. */
+ pubrelsrel = table_open(PublicationRelRelationId, AccessShareLock);
+
+ ScanKeyInit(&scankey,
+ Anum_pg_publication_rel_prpubid,
+ BTEqualStrategyNumber, F_OIDEQ,
+ ObjectIdGetDatum(pubid));
+
+ scan = systable_beginscan(pubrelsrel, PublicationRelPrrelidPrpubidIndexId,
+ true, NULL, 1, &scankey);
+
+ result = NIL;
+ while (HeapTupleIsValid(tup = systable_getnext(scan)))
+ {
+ Form_pg_publication_rel pubrel;
+
+ pubrel = (Form_pg_publication_rel) GETSTRUCT(tup);
+
+ if (get_rel_relkind(pubrel->prrelid) == RELKIND_SEQUENCE)
+ result = lappend_oid(result, pubrel->prrelid);
+ }
+
+ systable_endscan(scan);
+ table_close(pubrelsrel, AccessShareLock);
+
+ return result;
+}
+
/*
* Gets list of publication oids for publications marked as FOR ALL TABLES.
*/
@@ -428,6 +478,46 @@ GetAllTablesPublicationRelations(bool pubviaroot)
return result;
}
+/*
+ * Gets list of all sequences that can be published by a FOR ALL SEQUENCES
+ * publication.
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+ Relation classRel;
+ ScanKeyData key[1];
+ TableScanDesc scan;
+ HeapTuple tuple;
+ List *result = NIL;
+
+ classRel = table_open(RelationRelationId, AccessShareLock);
+
+ ScanKeyInit(&key[0],
+ Anum_pg_class_relkind,
+ BTEqualStrategyNumber, F_CHAREQ,
+ CharGetDatum(RELKIND_SEQUENCE));
+
+ scan = table_beginscan_catalog(classRel, 1, key);
+
+ while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+ {
+ Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+ Oid relid = relForm->oid;
+
+ if (is_publishable_class(relid, relForm))
+ result = lappend_oid(result, relid);
+ }
+
+ table_endscan(scan);
+
+ table_close(classRel, AccessShareLock);
+ return result;
+}
+
/*
* Get publication using oid
*
@@ -450,10 +540,12 @@ GetPublication(Oid pubid)
pub->oid = pubid;
pub->name = pstrdup(NameStr(pubform->pubname));
pub->alltables = pubform->puballtables;
+ pub->allsequences = pubform->puballsequences;
pub->pubactions.pubinsert = pubform->pubinsert;
pub->pubactions.pubupdate = pubform->pubupdate;
pub->pubactions.pubdelete = pubform->pubdelete;
pub->pubactions.pubtruncate = pubform->pubtruncate;
+ pub->pubactions.pubsequence = pubform->pubsequence;
pub->pubviaroot = pubform->pubviaroot;
ReleaseSysCache(tup);
@@ -579,3 +671,56 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
SRF_RETURN_DONE(funcctx);
}
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+ FuncCallContext *funcctx;
+ char *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+ Publication *publication;
+ List *sequences;
+
+ /* stuff done only on the first call of the function */
+ if (SRF_IS_FIRSTCALL())
+ {
+ MemoryContext oldcontext;
+
+ /* create a function context for cross-call persistence */
+ funcctx = SRF_FIRSTCALL_INIT();
+
+ /* switch to memory context appropriate for multiple function calls */
+ oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+ publication = GetPublicationByName(pubname, false);
+
+ /*
+ * For a FOR ALL SEQUENCES publication fetch all sequences, otherwise
+ * fetch only the sequences explicitly added to the publication.
+ */
+ if (publication->allsequences)
+ sequences = GetAllSequencesPublicationRelations();
+ else
+ sequences = GetPublicationSequenceRelations(publication->oid);
+
+ funcctx->user_fctx = (void *) sequences;
+
+ MemoryContextSwitchTo(oldcontext);
+ }
+
+ /* stuff done on every call of the function */
+ funcctx = SRF_PERCALL_SETUP();
+ sequences = (List *) funcctx->user_fctx;
+
+ if (funcctx->call_cntr < list_length(sequences))
+ {
+ Oid relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+ SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+ }
+
+ SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 55f6e3711d..9bec49bc3e 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -372,6 +372,16 @@ CREATE VIEW pg_publication_tables AS
pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
WHERE C.oid = GPT.relid;
+CREATE VIEW pg_publication_sequences AS
+ SELECT
+ P.pubname AS pubname,
+ N.nspname AS schemaname,
+ C.relname AS sequencename
+ FROM pg_publication P,
+ LATERAL pg_get_publication_sequences(P.pubname) GPT,
+ pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+ WHERE C.oid = GPT.relid;
+
CREATE VIEW pg_locks AS
SELECT * FROM pg_lock_status() AS L;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 9c7f91611d..602226caef 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -16,6 +16,7 @@
#include "access/genam.h"
#include "access/htup_details.h"
+#include "access/relation.h"
#include "access/table.h"
#include "access/xact.h"
#include "catalog/catalog.h"
@@ -51,6 +52,12 @@ static void PublicationAddTables(Oid pubid, List *rels, bool if_not_exists,
AlterPublicationStmt *stmt);
static void PublicationDropTables(Oid pubid, List *rels, bool missing_ok);
+static List *OpenSequenceList(List *sequences);
+static void CloseSequenceList(List *rels);
+static void PublicationAddSequences(Oid pubid, List *rels, bool if_not_exists,
+ AlterPublicationStmt *stmt);
+static void PublicationDropSequences(Oid pubid, List *rels, bool missing_ok);
+
static void
parse_publication_options(ParseState *pstate,
List *options,
@@ -69,6 +76,7 @@ parse_publication_options(ParseState *pstate,
pubactions->pubupdate = true;
pubactions->pubdelete = true;
pubactions->pubtruncate = true;
+ pubactions->pubsequence = true;
*publish_via_partition_root = false;
/* Parse options */
@@ -93,6 +101,7 @@ parse_publication_options(ParseState *pstate,
pubactions->pubupdate = false;
pubactions->pubdelete = false;
pubactions->pubtruncate = false;
+ pubactions->pubsequence = false;
*publish_given = true;
publish = defGetString(defel);
@@ -115,6 +124,8 @@ parse_publication_options(ParseState *pstate,
pubactions->pubdelete = true;
else if (strcmp(publish_opt, "truncate") == 0)
pubactions->pubtruncate = true;
+ else if (strcmp(publish_opt, "sequence") == 0)
+ pubactions->pubsequence = true;
else
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
@@ -205,6 +216,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
BoolGetDatum(pubactions.pubdelete);
values[Anum_pg_publication_pubtruncate - 1] =
BoolGetDatum(pubactions.pubtruncate);
+ values[Anum_pg_publication_pubsequence - 1] =
+ BoolGetDatum(pubactions.pubsequence);
values[Anum_pg_publication_pubviaroot - 1] =
BoolGetDatum(publish_via_partition_root);
@@ -380,9 +393,9 @@ AlterPublicationTables(AlterPublicationStmt *stmt, Relation rel,
rels = OpenTableList(stmt->tables);
- if (stmt->tableAction == DEFELEM_ADD)
+ if (stmt->action == DEFELEM_ADD)
PublicationAddTables(pubid, rels, false, stmt);
- else if (stmt->tableAction == DEFELEM_DROP)
+ else if (stmt->action == DEFELEM_DROP)
PublicationDropTables(pubid, rels, false);
else /* DEFELEM_SET */
{
@@ -440,6 +453,82 @@ AlterPublicationTables(AlterPublicationStmt *stmt, Relation rel,
CloseTableList(rels);
}
+/*
+ * Add or remove sequence to/from publication.
+ */
+static void
+AlterPublicationSequences(AlterPublicationStmt *stmt, Relation rel,
+ HeapTuple tup)
+{
+ List *rels = NIL;
+ Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
+ Oid pubid = pubform->oid;
+
+ /* Check that user is allowed to manipulate the publication sequences. */
+ if (pubform->puballsequences)
+ ereport(ERROR,
+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("publication \"%s\" is defined as FOR ALL SEQUENCES",
+ NameStr(pubform->pubname)),
+ errdetail("Sequences cannot be added to or dropped from FOR ALL TABLES publications.")));
+
+ Assert(list_length(stmt->sequences) > 0);
+
+ rels = OpenSequenceList(stmt->sequences);
+
+ if (stmt->action == DEFELEM_ADD)
+ PublicationAddSequences(pubid, rels, false, stmt);
+ else if (stmt->action == DEFELEM_DROP)
+ PublicationDropSequences(pubid, rels, false);
+ else /* DEFELEM_SET */
+ {
+ List *oldrelids = GetPublicationSequenceRelations(pubid);
+ List *delrels = NIL;
+ ListCell *oldlc;
+
+ /* Calculate which relations to drop. */
+ foreach(oldlc, oldrelids)
+ {
+ Oid oldrelid = lfirst_oid(oldlc);
+ ListCell *newlc;
+ bool found = false;
+
+ foreach(newlc, rels)
+ {
+ Relation newrel = (Relation) lfirst(newlc);
+
+ if (RelationGetRelid(newrel) == oldrelid)
+ {
+ found = true;
+ break;
+ }
+ }
+
+ if (!found)
+ {
+ Relation oldrel = relation_open(oldrelid,
+ ShareUpdateExclusiveLock);
+
+ delrels = lappend(delrels, oldrel);
+ }
+ }
+
+ /* And drop them. */
+ PublicationDropSequences(pubid, delrels, true);
+
+ /*
+ * Don't bother calculating the difference for adding, we'll catch and
+ * skip existing ones when doing catalog update.
+ */
+ PublicationAddSequences(pubid, rels, true, stmt);
+
+ CloseSequenceList(delrels);
+ }
+
+ CloseSequenceList(rels);
+}
+
/*
* Alter the existing publication.
*
@@ -473,8 +562,12 @@ AlterPublication(ParseState *pstate, AlterPublicationStmt *stmt)
if (stmt->options)
AlterPublicationOptions(pstate, stmt, rel, tup);
- else
+ else if (stmt->tables)
AlterPublicationTables(stmt, rel, tup);
+ else if (stmt->sequences)
+ AlterPublicationSequences(stmt, rel, tup);
+ else
+ Assert(false);
/* Cleanup. */
heap_freetuple(tup);
@@ -727,6 +820,144 @@ PublicationDropTables(Oid pubid, List *rels, bool missing_ok)
}
}
+/*
+ * Open sequences specified by a PublicationTable list.
+ * In the returned list of PublicationRelInfo, sequences are locked
+ * in ShareUpdateExclusiveLock mode in order to add them to a publication.
+ */
+static List *
+OpenSequenceList(List *sequences)
+{
+ List *relids = NIL;
+ List *rels = NIL;
+ ListCell *lc;
+
+ /*
+ * Open, share-lock, and check all the explicitly-specified relations
+ */
+ foreach(lc, sequences)
+ {
+ PublicationTable *s = lfirst_node(PublicationTable, lc);
+ Relation rel;
+ Oid myrelid;
+ PublicationRelInfo *pub_rel;
+
+ /* Allow query cancel in case this takes a long time */
+ CHECK_FOR_INTERRUPTS();
+
+ rel = table_openrv(s->relation, ShareUpdateExclusiveLock);
+ myrelid = RelationGetRelid(rel);
+
+ /*
+ * Filter out duplicates if user specifies "foo, foo".
+ *
+ * Note that this algorithm is not very efficient (O(N^2)), but since it
+ * only operates on the list of sequences given by the user, that is
+ * deemed acceptable.
+ */
+ if (list_member_oid(relids, myrelid))
+ {
+ table_close(rel, ShareUpdateExclusiveLock);
+ continue;
+ }
+
+ pub_rel = palloc(sizeof(PublicationRelInfo));
+ pub_rel->relation = rel;
+ rels = lappend(rels, pub_rel);
+ relids = lappend_oid(relids, myrelid);
+ }
+
+ list_free(relids);
+
+ return rels;
+}
+
+/*
+ * Close all relations in the list.
+ */
+static void
+CloseSequenceList(List *rels)
+{
+ ListCell *lc;
+
+ foreach(lc, rels)
+ {
+ PublicationRelInfo *pub_rel;
+
+ pub_rel = (PublicationRelInfo *) lfirst(lc);
+ table_close(pub_rel->relation, NoLock);
+ }
+}
+
+/*
+ * Add listed sequences to the publication.
+ */
+static void
+PublicationAddSequences(Oid pubid, List *rels, bool if_not_exists,
+ AlterPublicationStmt *stmt)
+{
+ ListCell *lc;
+
+ Assert(!stmt || !stmt->for_all_sequences);
+
+ foreach(lc, rels)
+ {
+ PublicationRelInfo *pub_rel = (PublicationRelInfo *) lfirst(lc);
+ Relation rel = pub_rel->relation;
+ ObjectAddress obj;
+
+ /* Must be owner of the sequence or superuser. */
+ if (!pg_class_ownercheck(RelationGetRelid(rel), GetUserId()))
+ aclcheck_error(ACLCHECK_NOT_OWNER, get_relkind_objtype(rel->rd_rel->relkind),
+ RelationGetRelationName(rel));
+
+ obj = publication_add_relation(pubid, pub_rel, if_not_exists);
+ if (stmt)
+ {
+ EventTriggerCollectSimpleCommand(obj, InvalidObjectAddress,
+ (Node *) stmt);
+
+ InvokeObjectPostCreateHook(PublicationRelRelationId,
+ obj.objectId, 0);
+ }
+ }
+}
+
+/*
+ * Remove listed sequences from the publication.
+ */
+static void
+PublicationDropSequences(Oid pubid, List *rels, bool missing_ok)
+{
+ ObjectAddress obj;
+ ListCell *lc;
+ Oid prid;
+
+ foreach(lc, rels)
+ {
+ Relation rel = (Relation) lfirst(lc);
+ Oid relid = RelationGetRelid(rel);
+
+ prid = GetSysCacheOid2(PUBLICATIONRELMAP, Anum_pg_publication_rel_oid,
+ ObjectIdGetDatum(relid),
+ ObjectIdGetDatum(pubid));
+ if (!OidIsValid(prid))
+ {
+ if (missing_ok)
+ continue;
+
+ ereport(ERROR,
+ (errcode(ERRCODE_UNDEFINED_OBJECT),
+ errmsg("relation \"%s\" is not part of the publication",
+ RelationGetRelationName(rel))));
+ }
+
+ ObjectAddressSet(obj, PublicationRelRelationId, prid);
+ performDeletion(&obj, DROP_CASCADE, 0);
+ }
+}
+
+
/*
* Internal workhorse for changing a publication owner
*/
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index a98fcc2e97..93a213919d 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -336,6 +336,85 @@ ResetSequence(Oid seq_relid)
relation_close(seq_rel, NoLock);
}
+/*
+ * Set a sequence to the given state (last_value, log_cnt, is_called).
+ *
+ * The change is made transactionally, so that on failure of the current
+ * transaction, the sequence will be restored to its previous state.
+ * We do that by creating a whole new relfilenode for the sequence; so this
+ * works much like the rewriting forms of ALTER TABLE.
+ *
+ * Caller is assumed to have acquired AccessExclusiveLock on the sequence,
+ * which must not be released until end of transaction. Caller is also
+ * responsible for permissions checking.
+ */
+void
+ResetSequence2(Oid seq_relid, int64 last_value, int64 log_cnt, bool is_called)
+{
+ Relation seq_rel;
+ SeqTable elm;
+ Form_pg_sequence_data seq;
+ Buffer buf;
+ HeapTupleData seqdatatuple;
+ HeapTuple tuple;
+
+ /*
+ * Read the old sequence. This does a bit more work than really
+ * necessary, but it's simple, and we do want to double-check that it's
+ * indeed a sequence.
+ */
+ init_sequence(seq_relid, &elm, &seq_rel);
+ (void) read_seq_tuple(seq_rel, &buf, &seqdatatuple);
+
+ /*
+ * Copy the existing sequence tuple.
+ */
+ tuple = heap_copytuple(&seqdatatuple);
+
+ /* Now we're done with the old page */
+ UnlockReleaseBuffer(buf);
+
+ /*
+ * Modify the copied tuple to execute the restart (compare the RESTART
+ * action in AlterSequence)
+ */
+ seq = (Form_pg_sequence_data) GETSTRUCT(tuple);
+ seq->last_value = last_value;
+ seq->is_called = is_called;
+ seq->log_cnt = log_cnt;
+
+ /*
+ * Create a new storage file for the sequence.
+ */
+ RelationSetNewRelfilenode(seq_rel, seq_rel->rd_rel->relpersistence);
+
+ /*
+ * Ensure sequence's relfrozenxid is at 0, since it won't contain any
+ * unfrozen XIDs. Same with relminmxid, since a sequence will never
+ * contain multixacts.
+ */
+ Assert(seq_rel->rd_rel->relfrozenxid == InvalidTransactionId);
+ Assert(seq_rel->rd_rel->relminmxid == InvalidMultiXactId);
+
+ /*
+ * Insert the modified tuple into the new storage file.
+ *
+ * XXX Maybe this should also use created=true, just like the other places
+ * calling fill_seq_with_data. That's probably needed for correct cascading
+ * replication.
+ *
+ * XXX That'd mean all fill_seq_with_data callers use created=true, making
+ * the parameter unnecessary.
+ */
+ fill_seq_with_data(seq_rel, tuple, false);
+
+ /* Clear local cache so that we don't think we have cached numbers */
+ /* Note that we do not change the currval() state */
+ elm->cached = elm->last;
+
+ relation_close(seq_rel, NoLock);
+}
+
/*
* Initialize a sequence's relation with the specified tuple as content
*/
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index c47ba26369..5086526904 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -84,6 +84,7 @@ typedef struct SubOpts
} SubOpts;
static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
static void check_duplicates_in_publist(List *publist, Datum *datums);
static List *merge_publications(List *oldpublist, List *newpublist, bool addpub, const char *subname);
static void ReportSlotConnectionError(List *rstates, Oid subid, char *slotname, char *err);
@@ -502,6 +503,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
char *err;
WalReceiverConn *wrconn;
List *tables;
+ List *sequences;
ListCell *lc;
char table_state;
@@ -540,6 +542,26 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
InvalidXLogRecPtr);
}
+ /*
+ * Get the sequence list from publisher and build local sequence
+ * status info.
+ */
+ sequences = fetch_sequence_list(wrconn, publications);
+ foreach(lc, sequences)
+ {
+ RangeVar *rv = (RangeVar *) lfirst(lc);
+ Oid relid;
+
+ relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+ /* Check for supported relkind. */
+ CheckSubscriptionRelkind(get_rel_relkind(relid),
+ rv->schemaname, rv->relname);
+
+ AddSubscriptionRelState(subid, relid, table_state,
+ InvalidXLogRecPtr);
+ }
+
/*
* If requested, create permanent slot for the subscription. We
* won't use the initial snapshot for anything, so no need to
@@ -712,6 +734,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data)
{
Oid relid = subrel_local_oids[off];
+ /* XXX ignore sequences - maybe do this in GetSubscriptionRelations? */
+ if (get_rel_relkind(relid) == RELKIND_SEQUENCE)
+ continue;
+
if (!bsearch(&relid, pubrel_local_oids,
list_length(pubrel_names), sizeof(Oid), oid_cmp))
{
@@ -803,6 +829,183 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data)
ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
}
}
+
+ /*
+ * XXX now do the same thing for sequences, maybe before the preceding
+ * block, or earlier?
+ */
+
+ /* Get the sequence list from publisher. */
+ pubrel_names = fetch_sequence_list(wrconn, sub->publications);
+
+ /* Get local table list. */
+ subrel_states = GetSubscriptionRelations(sub->oid);
+
+ /*
+ * Build qsorted array of local table oids for faster lookup. This can
+ * potentially contain all tables in the database so speed of lookup
+ * is important.
+ */
+ subrel_local_oids = palloc(list_length(subrel_states) * sizeof(Oid));
+ off = 0;
+ foreach(lc, subrel_states)
+ {
+ SubscriptionRelState *relstate = (SubscriptionRelState *) lfirst(lc);
+
+ subrel_local_oids[off++] = relstate->relid;
+ }
+ qsort(subrel_local_oids, list_length(subrel_states),
+ sizeof(Oid), oid_cmp);
+
+ /*
+ * Rels that we want to remove from subscription and drop any slots
+ * and origins corresponding to them.
+ */
+ sub_remove_rels = palloc(list_length(subrel_states) * sizeof(SubRemoveRels));
+
+ /*
+ * Walk over the remote tables and try to match them to locally known
+ * tables. If the table is not known locally create a new state for
+ * it.
+ *
+ * Also builds array of local oids of remote tables for the next step.
+ */
+ off = 0;
+ pubrel_local_oids = palloc(list_length(pubrel_names) * sizeof(Oid));
+
+ foreach(lc, pubrel_names)
+ {
+ RangeVar *rv = (RangeVar *) lfirst(lc);
+ Oid relid;
+
+ relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+ /* Check for supported relkind. */
+ CheckSubscriptionRelkind(get_rel_relkind(relid),
+ rv->schemaname, rv->relname);
+
+ pubrel_local_oids[off++] = relid;
+
+ if (!bsearch(&relid, subrel_local_oids,
+ list_length(subrel_states), sizeof(Oid), oid_cmp))
+ {
+ AddSubscriptionRelState(sub->oid, relid,
+ copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
+ InvalidXLogRecPtr);
+ ereport(DEBUG1,
+ (errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
+ rv->schemaname, rv->relname, sub->name)));
+ }
+ }
+
+ /*
+ * Next remove state for tables we should not care about anymore using
+ * the data we collected above
+ */
+ qsort(pubrel_local_oids, list_length(pubrel_names),
+ sizeof(Oid), oid_cmp);
+
+ remove_rel_len = 0;
+ for (off = 0; off < list_length(subrel_states); off++)
+ {
+ Oid relid = subrel_local_oids[off];
+
+ /* XXX ignore non-sequences - maybe do this in GetSubscriptionRelations? */
+ if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+ continue;
+
+ if (!bsearch(&relid, pubrel_local_oids,
+ list_length(pubrel_names), sizeof(Oid), oid_cmp))
+ {
+ char state;
+ XLogRecPtr statelsn;
+
+ /*
+ * Lock pg_subscription_rel with AccessExclusiveLock to
+ * prevent any race conditions with the apply worker
+ * re-launching workers at the same time this code is trying
+ * to remove those tables.
+ *
+ * Even if new worker for this particular rel is restarted it
+ * won't be able to make any progress as we hold exclusive
+ * lock on subscription_rel till the transaction end. It will
+ * simply exit as there is no corresponding rel entry.
+ *
+ * This locking also ensures that the state of rels won't
+ * change till we are done with this refresh operation.
+ */
+ if (!rel)
+ rel = table_open(SubscriptionRelRelationId, AccessExclusiveLock);
+
+ /* Last known rel state. */
+ state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
+
+ sub_remove_rels[remove_rel_len].relid = relid;
+ sub_remove_rels[remove_rel_len++].state = state;
+
+ RemoveSubscriptionRel(sub->oid, relid);
+
+ logicalrep_worker_stop(sub->oid, relid);
+
+ /*
+ * For READY state, we would have already dropped the
+ * tablesync origin.
+ */
+ if (state != SUBREL_STATE_READY)
+ {
+ char originname[NAMEDATALEN];
+
+ /*
+ * Drop the tablesync's origin tracking if exists.
+ *
+ * It is possible that the origin is not yet created for
+ * tablesync worker, this can happen for the states before
+ * SUBREL_STATE_FINISHEDCOPY. The apply worker can also
+ * concurrently try to drop the origin and by this time
+ * the origin might be already removed. For these reasons,
+ * passing missing_ok = true.
+ */
+ ReplicationOriginNameForTablesync(sub->oid, relid, originname,
+ sizeof(originname));
+ replorigin_drop_by_name(originname, true, false);
+ }
+
+ ereport(DEBUG1,
+ (errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
+ get_namespace_name(get_rel_namespace(relid)),
+ get_rel_name(relid),
+ sub->name)));
+ }
+ }
+
+ /*
+ * Drop the tablesync slots associated with removed tables. This has
+ * to be at the end because otherwise if there is an error while doing
+ * the database operations we won't be able to rollback dropped slots.
+ */
+ for (off = 0; off < remove_rel_len; off++)
+ {
+ if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
+ sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+ {
+ char syncslotname[NAMEDATALEN] = {0};
+
+ /*
+ * For READY/SYNCDONE states we know the tablesync slot has
+ * already been dropped by the tablesync worker.
+ *
+ * For other states, there is no certainty, maybe the slot
+ * does not exist yet. Also, if we fail after removing some of
+ * the slots, next time, it will again try to drop already
+ * dropped slots and fail. For these reasons, we allow
+ * missing_ok = true for the drop.
+ */
+ ReplicationSlotNameForTablesync(sub->oid, sub_remove_rels[off].relid,
+ syncslotname, sizeof(syncslotname));
+ ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
+ }
+ }
+
}
PG_FINALLY();
{
@@ -1607,6 +1810,75 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
return tablelist;
}
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+ WalRcvExecResult *res;
+ StringInfoData cmd;
+ TupleTableSlot *slot;
+ Oid tableRow[2] = {TEXTOID, TEXTOID};
+ ListCell *lc;
+ bool first;
+ List *tablelist = NIL;
+
+ Assert(list_length(publications) > 0);
+
+ initStringInfo(&cmd);
+ appendStringInfoString(&cmd, "SELECT DISTINCT s.schemaname, s.sequencename\n"
+ " FROM pg_catalog.pg_publication_sequences s\n"
+ " WHERE s.pubname IN (");
+ first = true;
+ foreach(lc, publications)
+ {
+ char *pubname = strVal(lfirst(lc));
+
+ if (first)
+ first = false;
+ else
+ appendStringInfoString(&cmd, ", ");
+
+ appendStringInfoString(&cmd, quote_literal_cstr(pubname));
+ }
+ appendStringInfoChar(&cmd, ')');
+
+ res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+ pfree(cmd.data);
+
+ if (res->status != WALRCV_OK_TUPLES)
+ ereport(ERROR,
+ (errmsg("could not receive list of replicated tables from the publisher: %s",
+ res->err)));
+
+ /* Process tables. */
+ slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+ while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+ {
+ char *nspname;
+ char *relname;
+ bool isnull;
+ RangeVar *rv;
+
+ nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+ Assert(!isnull);
+ relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+ Assert(!isnull);
+
+ rv = makeRangeVar(nspname, relname, -1);
+ tablelist = lappend(tablelist, rv);
+
+ ExecClearTuple(slot);
+ }
+ ExecDropSingleTupleTableSlot(slot);
+
+ walrcv_clear_result(res);
+
+ return tablelist;
+}
+
/*
* This is to report the connection failure while dropping replication slots.
* Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index 574d7d27fd..6f0d42d551 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -608,7 +608,7 @@ void
CheckSubscriptionRelkind(char relkind, const char *nspname,
const char *relname)
{
- if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+ if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE && relkind != RELKIND_SEQUENCE)
ereport(ERROR,
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 228387eaee..39b40646a5 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -4831,8 +4831,10 @@ _copyAlterPublicationStmt(const AlterPublicationStmt *from)
COPY_STRING_FIELD(pubname);
COPY_NODE_FIELD(options);
COPY_NODE_FIELD(tables);
+ COPY_NODE_FIELD(sequences);
COPY_SCALAR_FIELD(for_all_tables);
- COPY_SCALAR_FIELD(tableAction);
+ COPY_SCALAR_FIELD(for_all_sequences);
+ COPY_SCALAR_FIELD(action);
return newnode;
}
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index 800f588b5c..77c906f615 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -2315,8 +2315,10 @@ _equalAlterPublicationStmt(const AlterPublicationStmt *a,
COMPARE_STRING_FIELD(pubname);
COMPARE_NODE_FIELD(options);
COMPARE_NODE_FIELD(tables);
+ COMPARE_NODE_FIELD(sequences);
COMPARE_SCALAR_FIELD(for_all_tables);
- COMPARE_SCALAR_FIELD(tableAction);
+ COMPARE_SCALAR_FIELD(for_all_sequences);
+ COMPARE_SCALAR_FIELD(action);
return true;
}
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index e3068a374e..a686d897c8 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -9596,6 +9596,7 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
*****************************************************************************/
CreatePublicationStmt:
+ /* CREATE PUBLICATION name opt_publication_for_tables opt_publication_for_sequences opt_definition */
CREATE PUBLICATION name opt_publication_for_tables opt_definition
{
CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
@@ -9670,7 +9671,7 @@ AlterPublicationStmt:
AlterPublicationStmt *n = makeNode(AlterPublicationStmt);
n->pubname = $3;
n->tables = $6;
- n->tableAction = DEFELEM_ADD;
+ n->action = DEFELEM_ADD;
$$ = (Node *)n;
}
| ALTER PUBLICATION name SET TABLE publication_table_list
@@ -9678,7 +9679,7 @@ AlterPublicationStmt:
AlterPublicationStmt *n = makeNode(AlterPublicationStmt);
n->pubname = $3;
n->tables = $6;
- n->tableAction = DEFELEM_SET;
+ n->action = DEFELEM_SET;
$$ = (Node *)n;
}
| ALTER PUBLICATION name DROP TABLE publication_table_list
@@ -9686,7 +9687,31 @@ AlterPublicationStmt:
AlterPublicationStmt *n = makeNode(AlterPublicationStmt);
n->pubname = $3;
n->tables = $6;
- n->tableAction = DEFELEM_DROP;
+ n->action = DEFELEM_DROP;
+ $$ = (Node *)n;
+ }
+ | ALTER PUBLICATION name ADD_P SEQUENCE publication_table_list
+ {
+ AlterPublicationStmt *n = makeNode(AlterPublicationStmt);
+ n->pubname = $3;
+ n->sequences = $6;
+ n->action = DEFELEM_ADD;
+ $$ = (Node *)n;
+ }
+ | ALTER PUBLICATION name SET SEQUENCE publication_table_list
+ {
+ AlterPublicationStmt *n = makeNode(AlterPublicationStmt);
+ n->pubname = $3;
+ n->sequences = $6;
+ n->action = DEFELEM_SET;
+ $$ = (Node *)n;
+ }
+ | ALTER PUBLICATION name DROP SEQUENCE publication_table_list
+ {
+ AlterPublicationStmt *n = makeNode(AlterPublicationStmt);
+ n->pubname = $3;
+ n->sequences = $6;
+ n->action = DEFELEM_DROP;
$$ = (Node *)n;
}
;
@@ -9935,6 +9960,12 @@ UnlistenStmt:
}
;
+/*
+ * FIXME
+ *
+ * opt_publication_for_sequences and publication_for_sequences should be
+ * copies for sequences
+ */
/*****************************************************************************
*
@@ -9943,6 +9974,12 @@ UnlistenStmt:
* BEGIN / COMMIT / ROLLBACK
* (also older versions END / ABORT)
*
+ * ALTER PUBLICATION name ADD SEQUENCE sequence [, sequence2]
+ *
+ * ALTER PUBLICATION name DROP SEQUENCE sequence [, sequence2]
+ *
+ * ALTER PUBLICATION name SET SEQUENCE sequence [, sequence2]
+ *
*****************************************************************************/
TransactionStmt:
diff --git a/src/backend/replication/logical/proto.c b/src/backend/replication/logical/proto.c
index 9f5bf4b639..9f53b4c47b 100644
--- a/src/backend/replication/logical/proto.c
+++ b/src/backend/replication/logical/proto.c
@@ -647,6 +647,58 @@ logicalrep_write_message(StringInfo out, TransactionId xid, XLogRecPtr lsn,
pq_sendbytes(out, message, sz);
}
+/*
+ * Write SEQUENCE to stream
+ */
+void
+logicalrep_write_sequence(StringInfo out, Relation rel, TransactionId xid,
+ XLogRecPtr lsn, bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ uint8 flags = 0;
+ char *relname;
+
+ pq_sendbyte(out, LOGICAL_REP_MSG_SEQUENCE);
+
+ /* transaction ID (if not valid, we're not streaming) */
+ if (TransactionIdIsValid(xid))
+ pq_sendint32(out, xid);
+
+ pq_sendint8(out, flags);
+ pq_sendint64(out, lsn);
+
+ logicalrep_write_namespace(out, RelationGetNamespace(rel));
+ relname = RelationGetRelationName(rel);
+ pq_sendstring(out, relname);
+
+ pq_sendint8(out, transactional);
+ pq_sendint8(out, created);
+ pq_sendint64(out, last_value);
+ pq_sendint64(out, log_cnt);
+ pq_sendint8(out, is_called);
+}
+
+/*
+ * Read SEQUENCE from the stream.
+ */
+void
+logicalrep_read_sequence(StringInfo in, LogicalRepSequence *seqdata)
+{
+ /* XXX skipping flags and lsn */
+ pq_getmsgint(in, 1);
+ pq_getmsgint64(in);
+
+ /* Read relation name from stream */
+ seqdata->nspname = pstrdup(logicalrep_read_namespace(in));
+ seqdata->seqname = pstrdup(pq_getmsgstring(in));
+
+ seqdata->transactional = pq_getmsgint(in, 1);
+ seqdata->created = pq_getmsgint(in, 1);
+ seqdata->last_value = pq_getmsgint64(in);
+ seqdata->log_cnt = pq_getmsgint64(in);
+ seqdata->is_called = pq_getmsgint(in, 1);
+}
+
/*
* Write relation description to the output stream.
*/
@@ -1203,6 +1255,8 @@ logicalrep_message_type(LogicalRepMsgType action)
return "STREAM ABORT";
case LOGICAL_REP_MSG_STREAM_PREPARE:
return "STREAM PREPARE";
+ case LOGICAL_REP_MSG_SEQUENCE:
+ return "SEQUENCE";
}
elog(ERROR, "invalid logical replication message type \"%c\"", action);
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index f07983a43c..24369a1522 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -100,6 +100,7 @@
#include "catalog/pg_subscription_rel.h"
#include "catalog/pg_type.h"
#include "commands/copy.h"
+#include "commands/sequence.h"
#include "miscadmin.h"
#include "parser/parse_relation.h"
#include "pgstat.h"
@@ -357,6 +358,12 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
*
* If the synchronization position is reached (SYNCDONE), then the table can
* be marked as READY and is no longer tracked.
+ *
+ * XXX This needs to handle sequences too - after AlterSubscription_refresh
+ * starts caring about sequences, GetSubscriptionNotReadyRelations won't
+ * return just tables, and we'll have to sync them here. Not sure it's worth
+ * creating a new "sync" worker per sequence, maybe we should just sync them
+ * in the current process (it's pretty light-weight).
*/
static void
process_syncing_tables_for_apply(XLogRecPtr current_lsn)
@@ -871,6 +878,99 @@ copy_table(Relation rel)
logicalrep_rel_close(relmapentry, NoLock);
}
+
+
+/*
+ * FIXME add comment
+ */
+static void
+fetch_sequence_data(char *nspname, char *relname,
+ int64 *last_value, int64 *log_cnt, bool *is_called)
+{
+ WalRcvExecResult *res;
+ StringInfoData cmd;
+ TupleTableSlot *slot;
+ Oid tableRow[3] = {INT8OID, INT8OID, BOOLOID};
+
+ initStringInfo(&cmd);
+ appendStringInfo(&cmd, "SELECT last_value, log_cnt, is_called\n"
+ " FROM %s", quote_qualified_identifier(nspname, relname));
+
+ res = walrcv_exec(LogRepWorkerWalRcvConn, cmd.data, 3, tableRow);
+ pfree(cmd.data);
+
+ if (res->status != WALRCV_OK_TUPLES)
+ ereport(ERROR,
+ (errmsg("could not receive list of replicated tables from the publisher: %s",
+ res->err)));
+
+ /* Process the sequence. */
+ slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+ while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+ {
+ bool isnull;
+
+ *last_value = DatumGetInt64(slot_getattr(slot, 1, &isnull));
+ Assert(!isnull);
+
+ *log_cnt = DatumGetInt64(slot_getattr(slot, 2, &isnull));
+ Assert(!isnull);
+
+ *is_called = DatumGetBool(slot_getattr(slot, 3, &isnull));
+ Assert(!isnull);
+
+ ExecClearTuple(slot);
+ }
+ ExecDropSingleTupleTableSlot(slot);
+
+ walrcv_clear_result(res);
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Caller is responsible for locking the local relation.
+ */
+static void
+copy_sequence(Relation rel)
+{
+ LogicalRepRelMapEntry *relmapentry;
+ LogicalRepRelation lrel;
+ StringInfoData cmd;
+ int64 last_value = 0,
+ log_cnt = 0;
+ bool is_called = 0;
+
+ /* Get the publisher relation info. */
+ fetch_remote_table_info(get_namespace_name(RelationGetNamespace(rel)),
+ RelationGetRelationName(rel), &lrel);
+
+ /* Put the relation into relmap. */
+ logicalrep_relmap_update(&lrel);
+
+ /* Map the publisher relation to local one. */
+ relmapentry = logicalrep_rel_open(lrel.remoteid, NoLock);
+ Assert(rel == relmapentry->localrel);
+
+ /* Start copy on the publisher. */
+ initStringInfo(&cmd);
+
+ Assert(lrel.relkind == RELKIND_SEQUENCE);
+
+ fetch_sequence_data(lrel.nspname, lrel.relname, &last_value, &log_cnt, &is_called);
+
+ elog(WARNING, "sequence %s info last_value %ld log_cnt %ld is_called %d",
+ quote_qualified_identifier(lrel.nspname, lrel.relname),
+ last_value, log_cnt, is_called);
+
+ ResetSequence2(RelationGetRelid(rel), last_value, log_cnt, is_called);
+
+ logicalrep_rel_close(relmapentry, NoLock);
+}
+
+
+
+
/*
* Determine the tablesync slot name.
*
@@ -1106,10 +1206,20 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
originname)));
}
- /* Now do the initial data copy */
- PushActiveSnapshot(GetTransactionSnapshot());
- copy_table(rel);
- PopActiveSnapshot();
+ if (get_rel_relkind(RelationGetRelid(rel)) == RELKIND_SEQUENCE)
+ {
+ /* Now do the initial sequence copy */
+ PushActiveSnapshot(GetTransactionSnapshot());
+ copy_sequence(rel);
+ PopActiveSnapshot();
+ }
+ else
+ {
+ /* Now do the initial data copy */
+ PushActiveSnapshot(GetTransactionSnapshot());
+ copy_table(rel);
+ PopActiveSnapshot();
+ }
res = walrcv_exec(LogRepWorkerWalRcvConn, "COMMIT", 0, NULL);
if (res->status != WALRCV_OK_COMMAND)
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 8d96c926b4..5ffd513eae 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -144,6 +144,7 @@
#include "catalog/pg_tablespace.h"
#include "commands/tablecmds.h"
#include "commands/tablespace.h"
+#include "commands/sequence.h"
#include "commands/trigger.h"
#include "executor/executor.h"
#include "executor/execPartition.h"
@@ -1091,6 +1092,61 @@ apply_handle_origin(StringInfo s)
errmsg_internal("ORIGIN message sent out of order")));
}
+/*
+ * Handle SEQUENCE message.
+ */
+static void
+apply_handle_sequence(StringInfo s)
+{
+ LogicalRepSequence seq;
+ Oid relid;
+
+ if (handle_streamed_transaction(LOGICAL_REP_MSG_SEQUENCE, s))
+ return;
+
+ logicalrep_read_sequence(s, &seq);
+
+ /*
+ * Non-transactional sequence updates should not be part of a remote
+ * transaction. There should not be any running transaction.
+ */
+ Assert((!seq.transactional) || in_remote_transaction);
+ Assert(!(!seq.transactional && in_remote_transaction));
+ Assert(!(!seq.transactional && IsTransactionState()));
+
+ /*
+ * Make sure we're in a transaction (needed by ResetSequence2). For
+ * non-transactional updates we're guaranteed to start a new one,
+ * and we'll commit it at the end.
+ */
+ if (!IsTransactionState())
+ {
+ StartTransactionCommand();
+ maybe_reread_subscription();
+ }
+
+ relid = RangeVarGetRelid(makeRangeVar(seq.nspname,
+ seq.seqname, -1),
+ RowExclusiveLock, false);
+
+ /* lock the sequence in AccessExclusiveLock, as expected by ResetSequence2 */
+ elog(WARNING, "locking sequence %d in exclusive mode", relid);
+ LockRelationOid(relid, AccessExclusiveLock);
+
+ elog(WARNING, "applying sequence %s.%s transactional %d created %d last_value %ld log_cnt %ld is_called %d",
+ seq.nspname, seq.seqname, seq.transactional, seq.created, seq.last_value, seq.log_cnt, seq.is_called);
+
+ /* apply the sequence change */
+ ResetSequence2(relid, seq.last_value, seq.log_cnt, seq.is_called);
+
+ /*
+ * Commit the per-stream transaction (we only do this when not in
+ * remote transaction, i.e. for non-transactional sequence updates).
+ */
+ if (!in_remote_transaction)
+ CommitTransactionCommand();
+}
+
/*
* Handle STREAM START message.
*/
@@ -2371,6 +2427,10 @@ apply_dispatch(StringInfo s)
*/
break;
+ case LOGICAL_REP_MSG_SEQUENCE:
+ apply_handle_sequence(s);
+ return;
+
case LOGICAL_REP_MSG_STREAM_START:
apply_handle_stream_start(s);
break;
diff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c
index 14d737fd93..287d60f91f 100644
--- a/src/backend/replication/pgoutput/pgoutput.c
+++ b/src/backend/replication/pgoutput/pgoutput.c
@@ -49,6 +49,10 @@ static void pgoutput_message(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn, XLogRecPtr message_lsn,
bool transactional, const char *prefix,
Size sz, const char *message);
+static void pgoutput_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation rel, bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called);
static bool pgoutput_origin_filter(LogicalDecodingContext *ctx,
RepOriginId origin_id);
static void pgoutput_begin_prepare_txn(LogicalDecodingContext *ctx,
@@ -159,6 +163,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->change_cb = pgoutput_change;
cb->truncate_cb = pgoutput_truncate;
cb->message_cb = pgoutput_message;
+ cb->sequence_cb = pgoutput_sequence;
cb->commit_cb = pgoutput_commit_txn;
cb->begin_prepare_cb = pgoutput_begin_prepare_txn;
@@ -175,6 +180,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->stream_commit_cb = pgoutput_stream_commit;
cb->stream_change_cb = pgoutput_change;
cb->stream_message_cb = pgoutput_message;
+ cb->stream_sequence_cb = pgoutput_sequence;
cb->stream_truncate_cb = pgoutput_truncate;
/* transaction streaming - two-phase commit */
cb->stream_prepare_cb = pgoutput_stream_prepare_txn;
@@ -188,6 +194,7 @@ parse_output_parameters(List *options, PGOutputData *data)
bool publication_names_given = false;
bool binary_option_given = false;
bool messages_option_given = false;
+ bool sequences_option_given = false;
bool streaming_given = false;
bool two_phase_option_given = false;
@@ -195,6 +202,7 @@ parse_output_parameters(List *options, PGOutputData *data)
data->streaming = false;
data->messages = false;
data->two_phase = false;
+ data->sequences = true;
foreach(lc, options)
{
@@ -260,6 +268,16 @@ parse_output_parameters(List *options, PGOutputData *data)
data->messages = defGetBoolean(defel);
}
+ else if (strcmp(defel->defname, "sequences") == 0)
+ {
+ if (sequences_option_given)
+ ereport(ERROR,
+ (errcode(ERRCODE_SYNTAX_ERROR),
+ errmsg("conflicting or redundant options")));
+ sequences_option_given = true;
+
+ data->sequences = defGetBoolean(defel);
+ }
else if (strcmp(defel->defname, "streaming") == 0)
{
if (streaming_given)
@@ -856,6 +874,52 @@ pgoutput_message(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
OutputPluginWrite(ctx, true);
}
+static void
+pgoutput_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation rel, bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ PGOutputData *data = (PGOutputData *) ctx->output_plugin_private;
+ TransactionId xid = InvalidTransactionId;
+ RelationSyncEntry *relentry;
+
+ if (!data->sequences)
+ return;
+
+ if (!is_publishable_relation(rel))
+ return;
+
+ /*
+ * Remember the xid for the message in streaming mode. See
+ * pgoutput_change.
+ */
+ if (in_streaming)
+ xid = txn->xid;
+
+ relentry = get_rel_sync_entry(data, RelationGetRelid(rel));
+
+ /*
+ * First check the sequence filter.
+ *
+ * We handle just REORDER_BUFFER_CHANGE_SEQUENCE here.
+ */
+ if (!relentry->pubactions.pubsequence)
+ return;
+
+ OutputPluginPrepareWrite(ctx, true);
+ logicalrep_write_sequence(ctx->out,
+ rel,
+ xid,
+ sequence_lsn,
+ transactional,
+ created,
+ last_value,
+ log_cnt,
+ is_called);
+ OutputPluginWrite(ctx, true);
+}
+
/*
* Currently we always forward.
*/
@@ -1137,7 +1201,8 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
entry->streamed_txns = NIL;
entry->replicate_valid = false;
entry->pubactions.pubinsert = entry->pubactions.pubupdate =
- entry->pubactions.pubdelete = entry->pubactions.pubtruncate = false;
+ entry->pubactions.pubdelete = entry->pubactions.pubtruncate =
+ entry->pubactions.pubsequence = false;
entry->publish_as_relid = InvalidOid;
entry->map = NULL; /* will be set by maybe_send_schema() if
* needed */
@@ -1149,6 +1214,7 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
List *pubids = GetRelationPublications(relid);
ListCell *lc;
Oid publish_as_relid = relid;
+ bool is_sequence = (get_rel_relkind(relid) == RELKIND_SEQUENCE);
/* Reload publications if needed before use. */
if (!publications_valid)
@@ -1172,12 +1238,23 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
Publication *pub = lfirst(lc);
bool publish = false;
- if (pub->alltables)
+ if (pub->alltables && (!is_sequence))
{
publish = true;
if (pub->pubviaroot && am_partition)
publish_as_relid = llast_oid(get_partition_ancestors(relid));
}
+ else if (pub->allsequences && is_sequence)
+ {
+ publish = true;
+ }
+
+ /* if a sequence, just cross-check the list of publications */
+ if (!publish && is_sequence)
+ {
+ if (list_member_oid(pubids, pub->oid))
+ publish = true;
+ }
if (!publish)
{
@@ -1228,10 +1305,12 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
entry->pubactions.pubupdate |= pub->pubactions.pubupdate;
entry->pubactions.pubdelete |= pub->pubactions.pubdelete;
entry->pubactions.pubtruncate |= pub->pubactions.pubtruncate;
+ entry->pubactions.pubsequence |= pub->pubactions.pubsequence;
}
if (entry->pubactions.pubinsert && entry->pubactions.pubupdate &&
- entry->pubactions.pubdelete && entry->pubactions.pubtruncate)
+ entry->pubactions.pubdelete && entry->pubactions.pubtruncate &&
+ entry->pubactions.pubsequence)
break;
}
@@ -1376,6 +1455,7 @@ rel_sync_cache_publication_cb(Datum arg, int cacheid, uint32 hashvalue)
entry->pubactions.pubupdate = false;
entry->pubactions.pubdelete = false;
entry->pubactions.pubtruncate = false;
+ entry->pubactions.pubsequence = false;
}
}
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index 13d9994af3..3dd39054e6 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -5495,6 +5495,7 @@ GetRelationPublicationActions(Relation relation)
pubactions->pubupdate |= pubform->pubupdate;
pubactions->pubdelete |= pubform->pubdelete;
pubactions->pubtruncate |= pubform->pubtruncate;
+ pubactions->pubsequence |= pubform->pubsequence;
ReleaseSysCache(tup);
@@ -5503,7 +5504,8 @@ GetRelationPublicationActions(Relation relation)
* other publications.
*/
if (pubactions->pubinsert && pubactions->pubupdate &&
- pubactions->pubdelete && pubactions->pubtruncate)
+ pubactions->pubdelete && pubactions->pubtruncate &&
+ pubactions->pubsequence)
break;
}
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 5cd5838668..875882177d 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1644,7 +1644,7 @@ psql_completion(const char *text, int start, int end)
/* ALTER PUBLICATION <name> */
else if (Matches("ALTER", "PUBLICATION", MatchAny))
- COMPLETE_WITH("ADD TABLE", "DROP TABLE", "OWNER TO", "RENAME TO", "SET");
+ COMPLETE_WITH("ADD TABLE", "DROP TABLE", "ADD SEQUENCE", "DROP SEQUENCE", "OWNER TO", "RENAME TO", "SET");
/* ALTER PUBLICATION <name> SET */
else if (Matches("ALTER", "PUBLICATION", MatchAny, "SET"))
COMPLETE_WITH("(", "TABLE");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index d068d6532e..476205ae9d 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -11509,6 +11509,11 @@
provolatile => 's', prorettype => 'oid', proargtypes => 'text',
proallargtypes => '{text,oid}', proargmodes => '{i,o}',
proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_tables' },
+{ oid => '8000', descr => 'get OIDs of sequences in a publication',
+ proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+ provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+ proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+ proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
{ oid => '6121',
descr => 'returns whether a relation can be part of a publication',
proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 82f2536c65..3ed613b455 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
*/
bool puballtables;
+ /*
+ * indicates that this is special publication which should encompass all
+ * sequences in the database (except for the unlogged and temp ones)
+ */
+ bool puballsequences;
+
/* true if inserts are published */
bool pubinsert;
@@ -52,6 +58,9 @@ CATALOG(pg_publication,6104,PublicationRelationId)
/* true if truncates are published */
bool pubtruncate;
+ /* true if sequences are published */
+ bool pubsequence;
+
/* true if partition changes are published using root schema */
bool pubviaroot;
} FormData_pg_publication;
@@ -72,6 +81,7 @@ typedef struct PublicationActions
bool pubupdate;
bool pubdelete;
bool pubtruncate;
+ bool pubsequence;
} PublicationActions;
typedef struct Publication
@@ -79,6 +89,7 @@ typedef struct Publication
Oid oid;
char *name;
bool alltables;
+ bool allsequences;
bool pubviaroot;
PublicationActions pubactions;
} Publication;
@@ -115,6 +126,9 @@ extern List *GetPubPartitionOptionRelations(List *result,
PublicationPartOpt pub_partopt,
Oid relid);
+extern List *GetAllSequencesPublicationRelations(void);
+extern List *GetPublicationSequenceRelations(Oid pubid);
+
extern bool is_publishable_relation(Relation rel);
extern ObjectAddress publication_add_relation(Oid pubid, PublicationRelInfo *targetrel,
bool if_not_exists);
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 5919fb90ee..c28e8695cb 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -60,6 +60,7 @@ extern ObjectAddress DefineSequence(ParseState *pstate, CreateSeqStmt *stmt);
extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
extern void DeleteSequenceTuple(Oid relid);
extern void ResetSequence(Oid seq_relid);
+extern void ResetSequence2(Oid seq_relid, int64 last_value, int64 log_cnt, bool is_called);
extern void ResetSequenceCaches(void);
extern void seq_redo(XLogReaderState *rptr);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 3138877553..d00b18ac47 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -3661,8 +3661,10 @@ typedef struct AlterPublicationStmt
/* parameters used for ALTER PUBLICATION ... ADD/DROP TABLE */
List *tables; /* List of tables to add/drop */
+ List *sequences; /* List of sequences to add/drop */
bool for_all_tables; /* Special publication for all tables in db */
- DefElemAction tableAction; /* What action to perform with the tables */
+ bool for_all_sequences; /* Special publication for all sequences in db */
+ DefElemAction action; /* What action to perform with the tables/sequences */
} AlterPublicationStmt;
typedef struct CreateSubscriptionStmt
diff --git a/src/include/replication/logicalproto.h b/src/include/replication/logicalproto.h
index 83741dcf42..d468c3ccfb 100644
--- a/src/include/replication/logicalproto.h
+++ b/src/include/replication/logicalproto.h
@@ -60,6 +60,7 @@ typedef enum LogicalRepMsgType
LOGICAL_REP_MSG_RELATION = 'R',
LOGICAL_REP_MSG_TYPE = 'Y',
LOGICAL_REP_MSG_MESSAGE = 'M',
+ LOGICAL_REP_MSG_SEQUENCE = 'X', /* FIXME change */
LOGICAL_REP_MSG_BEGIN_PREPARE = 'b',
LOGICAL_REP_MSG_PREPARE = 'P',
LOGICAL_REP_MSG_COMMIT_PREPARED = 'K',
@@ -117,6 +118,19 @@ typedef struct LogicalRepTyp
char *typname; /* name of the remote type */
} LogicalRepTyp;
+/* Sequence info */
+typedef struct LogicalRepSequence
+{
+ Oid remoteid; /* unique id of the remote sequence */
+ char *nspname; /* schema name of remote sequence */
+ char *seqname; /* name of the remote sequence */
+ bool transactional;
+ bool created;
+ int64 last_value;
+ int64 log_cnt; /* XXX probably not needed? */
+ bool is_called; /* XXX probably not needed? */
+} LogicalRepSequence;
+
/* Transaction info */
typedef struct LogicalRepBeginData
{
@@ -227,6 +241,12 @@ extern List *logicalrep_read_truncate(StringInfo in,
bool *cascade, bool *restart_seqs);
extern void logicalrep_write_message(StringInfo out, TransactionId xid, XLogRecPtr lsn,
bool transactional, const char *prefix, Size sz, const char *message);
+extern void logicalrep_write_sequence(StringInfo out, Relation rel,
+ TransactionId xid, XLogRecPtr lsn,
+ bool transactional, bool created,
+ int64 last_value, int64 log_cnt,
+ bool is_called);
+extern void logicalrep_read_sequence(StringInfo in, LogicalRepSequence *seqdata);
extern void logicalrep_write_rel(StringInfo out, TransactionId xid,
Relation rel);
extern LogicalRepRelation *logicalrep_read_rel(StringInfo in);
diff --git a/src/include/replication/pgoutput.h b/src/include/replication/pgoutput.h
index 0dc460fb70..97eb2a7ef1 100644
--- a/src/include/replication/pgoutput.h
+++ b/src/include/replication/pgoutput.h
@@ -28,6 +28,7 @@ typedef struct PGOutputData
bool streaming;
bool messages;
bool two_phase;
+ bool sequences;
} PGOutputData;
#endif /* PGOUTPUT_H */
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 2fa00a3c29..b32a1e6729 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1451,6 +1451,14 @@ pg_prepared_xacts| SELECT p.transaction,
FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+ n.nspname AS schemaname,
+ c.relname AS sequencename
+ FROM pg_publication p,
+ LATERAL pg_get_publication_sequences((p.pubname)::text) gpt(relid),
+ (pg_class c
+ JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+ WHERE (c.oid = gpt.relid);
pg_publication_tables| SELECT p.pubname,
n.nspname AS schemaname,
c.relname AS tablename
diff --git a/src/test/subscription/t/023_sequences.pl b/src/test/subscription/t/023_sequences.pl
new file mode 100644
index 0000000000..1f7768acd6
--- /dev/null
+++ b/src/test/subscription/t/023_sequences.pl
@@ -0,0 +1,196 @@
+
+# Copyright (c) 2021, PostgreSQL Global Development Group
+
+# This tests that sequences are replicated correctly by logical replication
+use strict;
+use warnings;
+use PostgresNode;
+use TestLib;
+use Test::More tests => 6;
+
+# Initialize publisher node
+my $node_publisher = PostgresNode->new('publisher');
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->start;
+
+# Create subscriber node
+my $node_subscriber = PostgresNode->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Create some preexisting content on publisher
+my $ddl = qq(
+ CREATE SEQUENCE s;
+);
+
+# Setup structure on the publisher
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Create the same structure on subscriber, and an extra sequence that
+# we'll create on the publisher later
+$ddl = qq(
+ CREATE SEQUENCE s;
+ CREATE SEQUENCE s2;
+);
+
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Setup logical replication
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+ "CREATE PUBLICATION seq_pub");
+
+$node_publisher->safe_psql('postgres',
+ "ALTER PUBLICATION seq_pub ADD SEQUENCE s");
+
+$node_subscriber->safe_psql('postgres',
+ "CREATE SUBSCRIPTION seq_sub CONNECTION '$publisher_connstr' PUBLICATION seq_pub WITH (slot_name = seq_sub_slot)"
+);
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Wait for initial sync to finish as well
+my $synced_query =
+ "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('s', 'r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+ or die "Timed out while waiting for subscriber to synchronize data";
+
+# Insert initial test data
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ -- generate a number of values using the sequence
+ SELECT nextval('s') FROM generate_series(1,100);
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+my $result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s;
+));
+
+is( $result, '132|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+# advance the sequence in a rolled-back transaction - should be replicated
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ SELECT nextval('s') FROM generate_series(1,100);
+ ROLLBACK;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s;
+));
+
+is( $result, '231|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+# create a new sequence and roll it back - should not be replicated, due to
+# the transactional behavior
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ CREATE SEQUENCE s2;
+ ALTER PUBLICATION seq_pub ADD SEQUENCE s2;
+ SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '1|0|f',
+ 'check replicated sequence values on subscriber');
+
+
+# create a new sequence, advance it in a rolled-back transaction, but commit
+# the create - the advance should be replicated nevertheless
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ CREATE SEQUENCE s2;
+ ALTER PUBLICATION seq_pub ADD SEQUENCE s2;
+ SAVEPOINT sp1;
+ SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK TO sp1;
+ COMMIT;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Wait for sync of the second sequence we just added to finish
+$synced_query =
+ "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('s', 'r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+ or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '132|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+# advance the new sequence in a transaction, and roll it back - in this case
+# it should be replicated as the behavior is non-transactional
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '231|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+# advance the sequence in a subtransaction - the subtransaction gets rolled
+# back, but commit the main one - the changes should still be replicated
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ SAVEPOINT s1;
+ SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK TO s1;
+ COMMIT;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '330|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+$node_subscriber->stop('fast');
+$node_publisher->stop('fast');
--
2.31.1
Just a note for some design decisions
1) By default, sequences are treated non-transactionally, i.e. sent to the output plugin right away.
If our aim is just to make sure that all user-visible data in
*transactional* tables is consistent with the sequence state, then one
much simplified approach to this could be to track the results of
nextval() calls in a transaction and, at COMMIT, put the latest sequence
value in WAL (or just track the sequences affected and put the latest
sequence state in WAL at commit, which needs an extra read of the sequence
but protects against race conditions with parallel transactions which get
rolled back later).
This avoids sending redundant changes for multiple nextval() calls
(like loading a million-row table with a sequence-generated id column).
And one can argue that we can safely ignore anything in ROLLBACKED
sequences. This is assuming that even if we did advance the sequence
past the last value sent by the latest COMMITTED transaction, it does
not matter for database consistency.
It can matter if customers just call nextval() in rolled-back
transactions and somehow expect these values to be replicated, based on
reasoning along the lines of "sequences are not transactional - so
rollbacks should not matter".
Or we may get away with most in-detail sequence tracking on the source
if we just keep track of the xmin of the sequence and send the
sequence info over at commit if it == current_transaction_id ?
-----
Hannu Krosing
Google Cloud - We have a long list of planned contributions and we are hiring.
Contact me if interested.
On Fri, Sep 24, 2021 at 9:16 PM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:
Hi,
On 9/23/21 12:27 PM, Peter Eisentraut wrote:
On 30.07.21 20:26, Tomas Vondra wrote:
Here's an updated version of this patch - rebased to current master,
and fixing some of the issues raised in Peter's review.

This patch needs an update, as various conflicts have arisen now.

As was discussed before, it might be better to present a separate patch
for just the logical decoding part for now, since the replication and
DDL stuff has the potential to conflict heavily with other patches being
discussed right now. It looks like cutting this patch in two should be
doable easily.

Attached is the rebased patch, split into three parts:

1) basic decoding infrastructure (decoding, reorderbuffer etc.)
2) support for sequences in test_decoding
3) support for sequences in built-in replication (catalogs, syntax, ...)

The last part is the largest one - I'm sure we'll have discussions about
the grammar, adding sequences automatically, etc. But as you said, let's
focus on the first part, which deals with the required decoding stuff.

I've added a couple comments, explaining how we track sequences, why we
need the XID in nextval() etc. I've also added streaming support.

I looked through the test cases in test_decoding again. It all looks
pretty sensible. If anyone can think of any other tricky or dubious
cases, we can add them there. It's easiest to discuss these things with
concrete test cases rather than in theory.

One slightly curious issue is that this can make sequence values go
backwards, when seen by the logical decoding consumer, like in the test
case:

+ BEGIN
+ sequence: public.test_sequence transactional: 1 created: 1 last_value: 1, log_cnt: 0 is_called: 0
+ COMMIT
+ sequence: public.test_sequence transactional: 0 created: 0 last_value: 33, log_cnt: 0 is_called: 1
+ BEGIN
+ sequence: public.test_sequence transactional: 1 created: 1 last_value: 4, log_cnt: 0 is_called: 1
+ COMMIT
+ sequence: public.test_sequence transactional: 0 created: 0 last_value: 334, log_cnt: 0 is_called: 1

I suppose that's okay, since it's not really the intention that someone
is concurrently consuming sequence values on the subscriber. Maybe
something for the future. Fixing that would require changing the way
transactional sequence DDL updates these values, so it's not directly
the job of the decoding to address this.

Yeah, that's due to how ALTER SEQUENCE does things, and I agree redoing
that seems well out of scope for this patch. What seems a bit suspicious
is that some of the ALTER SEQUENCE changes have "created: 1" - it's
probably correct, though, because those ALTER SEQUENCE statements can be
rolled back, so we see it as if a new sequence is created. The flag name
might be a bit confusing, though.

A small thing I found: Maybe the text that test_decoding produces for
sequences can be made to look more consistent with the one for tables.
For example, in

+ BEGIN
+ sequence: public.test_table_a_seq transactional: 1 created: 1 last_value: 1, log_cnt: 0 is_called: 0
+ sequence: public.test_table_a_seq transactional: 1 created: 0 last_value: 33, log_cnt: 0 is_called: 1
+ table public.test_table: INSERT: a[integer]:1 b[integer]:100
+ table public.test_table: INSERT: a[integer]:2 b[integer]:200
+ COMMIT

note how the punctuation is different.

I did tweak this a bit, hopefully it's more consistent.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 9/25/21 22:05, Hannu Krosing wrote:
Just a note for some design decisions
1) By default, sequences are treated non-transactionally, i.e. sent to the output plugin right away.
If our aim is just to make sure that all user-visible data in
*transactional* tables is consistent with the sequence state, then one
much simplified approach to this could be to track the results of
nextval() calls in a transaction and, at COMMIT, put the latest sequence
value in WAL (or just track the sequences affected and put the latest
sequence state in WAL at commit, which needs an extra read of the sequence
but protects against race conditions with parallel transactions which get
rolled back later).
Not sure. TBH I feel rather uneasy about adding more stuff in COMMIT.
This avoids sending redundant changes for multiple nextval() calls
(like loading a million-row table with a sequence-generated id column).
Yeah, it'd be nice to optimize this a bit, somehow. But I'd bet
it's a negligible amount of data / changes, compared to the table.
And one can argue that we can safely ignore anything in ROLLBACKED
sequences. This is assuming that even if we did advance the sequence
past the last value sent by the latest COMMITTED transaction, it does
not matter for database consistency.
I don't think we can ignore aborted (ROLLBACK) transactions, in the
sense that you can't just discard the increments. Imagine you have this
sequence of transactions:
BEGIN;
SELECT nextval('s'); -- allocates new chunk of values
ROLLBACK;
BEGIN;
SELECT nextval('s'); -- returns one of the cached values
COMMIT;
If you ignore the aborted transaction, then the sequence increment won't
be replicated -- but that's wrong, because the user now has a visible
sequence value from that chunk.
So I guess we'd have to maintain a cache of sequences incremented in the
current session, do nothing in aborted transactions (i.e. keep the
contents but don't log anything) and log/reset at commit.
I wonder if multiple sessions make this even more problematic (e.g. due
to a session just disconnecting mid-transaction, without writing the
abort record at all). But AFAICS that's not an issue, because the other
session has a separate cache for the sequence.
It can matter if customers just call nextval() in rolled-back
transactions and somehow expect these values to be replicated, based on
reasoning along the lines of "sequences are not transactional - so
rollbacks should not matter".
I don't think we guarantee anything for data in transactions that did
not commit, so this seems like a non-issue. I.e. we don't need to go out
of our way to guarantee something we never promised.
Or we may get away with most in-detail sequence tracking on the source
if we just keep track of the xmin of the sequence and send the
sequence info over at commit if it == current_transaction_id ?
Not sure I understand this proposal. Can you explain?
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Hi,
I've spent a bit of time exploring the alternative approach outlined by
Hannu, i.e. tracking sequences accessed by the transaction, and logging
the final state just once at COMMIT. Attached is an experimental version
of the patch series doing that - 0001 does the original approach
(decoding the sequence updates from WAL) and then 0002 reworks it to
this alternative solution. The 0003 and 0004 stay mostly the same,
except for minor fixes. Some of the tests in 0003/0004 fail, because
0002 changes the semantics in various ways (more about that later).
The original approach (0001) may seem complex at first, but in principle
it just decodes changes to the sequence relation, and either stashes
them into the transaction (just like other changes) or applies them right
away. I'd say that's the most complicated part - deciding whether the
change is transactional or not.
0002 reworks that so that it doesn't decode the existing WAL records,
but tracks sequences which have been modified (updated on-disk state)
and then accessed in the current transaction. And then at COMMIT time we
write a new WAL message with info about the sequence.
I realized we already cache sequences for each session - seqhashtab in
sequence.c. It doesn't have any concept of a transaction, but it seems
fairly easy to make that possible. I did this by adding two flags
- needs_log - means the seesion advanced the sequence (on disk)
- touched - true if the current xact called nextval() etc.
The idea is that what matters is updates to on-disk state, so whenever
we do that we set needs_log. But it only matters when the changes are
made visible in a committed transaction. Consider for example this:
BEGIN;
SELECT nextval('s') FROM generate_series(1,10000) s(i);
ROLLBACK;
SELECT nextval('s');
The first nextval() call certainly sets both flags to true, at least for
default sequences caching 32 values. But the values are not confirmed to
the user because of the rollback - this resets the 'touched' flag, but
leaves 'needs_log' set to true.
And then the next nextval() - which may easily be just from cache - sets
touched=true again, and logs the sequence state at (implicit) commit.
Which resets both flags again.
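
To make the intended transitions easier to follow, here is a minimal
standalone C model of the two flags (just an illustration of the
behavior described above, with made-up names for the struct and
helpers - it is not code from the 0002 patch):

/*
 * Toy model of the two per-sequence flags:
 * needs_log - the on-disk sequence state advanced since the last log
 * touched   - the current transaction handed out a value
 */
#include <stdbool.h>
#include <stdio.h>

typedef struct SeqCacheEntry
{
	const char *name;
	bool		needs_log;
	bool		touched;
	long		last_value;
} SeqCacheEntry;

/* nextval(): always sets touched; sets needs_log only when it writes
 * a new on-disk state (i.e. the 32-value cache was exhausted) */
static void
model_nextval(SeqCacheEntry *seq, bool advances_disk_state)
{
	seq->last_value++;
	seq->touched = true;
	if (advances_disk_state)
		seq->needs_log = true;
}

/* COMMIT: log the state only if both flags are set, then reset both */
static void
model_commit(SeqCacheEntry *seq)
{
	if (seq->needs_log && seq->touched)
		printf("COMMIT: log state of \"%s\" (last_value %ld)\n",
			   seq->name, seq->last_value);
	seq->needs_log = false;
	seq->touched = false;
}

/* ROLLBACK: log nothing; reset touched but keep needs_log */
static void
model_rollback(SeqCacheEntry *seq)
{
	seq->touched = false;
}

int
main(void)
{
	SeqCacheEntry s = {"s", false, false, 0};

	/* BEGIN; SELECT nextval('s') ...; ROLLBACK; -- wrote on-disk state */
	model_nextval(&s, true);
	model_rollback(&s);		/* nothing logged, needs_log stays true */

	/* SELECT nextval('s'); -- served from cache, implicit commit logs */
	model_nextval(&s, false);
	model_commit(&s);

	return 0;
}
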
The logging/cleanup happens in AtEOXact_Sequences() which gets called
before commit/abort. This walks all cached sequences and writes the
state for those with both flags true (or resets the flag on abort).
The cache also keeps info about the last "sequence state" in the
session, which is then used when writing it into WAL.
To write the sequence state into WAL, I've added a new WAL record
xl_logical_sequence to RM_LOGICALMSG_ID, next to the xl_logical_message.
It's a bit arbitrary, maybe it should be part of RM_SEQ_ID, but it does
the trick. I don't think this is the main issue and it's easy enough to
move it elsewhere if needed.
So, that seems fairly straight-forward and it may reduce the number of
replication messages for large transactions. Unfortunately, it's not
much simpler compared to the first approach - the amount of code is
about the same, and there's a bunch of other issues.
The main issue seems to be about ordering. Consider multiple sessions
all advancing the sequence. With the "old" approach this was naturally
ordered - the order in which the increments were written to WAL made
sense. But the sessions may advance the sequences in one order and then
commit in a different order, which mixes the updates. Consider for
example this scenario with two concurrent transactions:
T1: nextval('s') -> allocates values [1,32]
T2: nextval('s') -> allocates values [33,64]
T2: commit -> logs [33,64]
T1: commit -> logs [1,32]
The result is that the sequence on the replica diverged because it replayed
the increments in the opposite order.
I can think of two ways to fix this. Firstly, we could "merge" the
increments in some smart way, e.g. by discarding values considered
"stale" (like decrements). But that seems pretty fragile, because the
sequence may be altered in various ways, reset, etc. And it seems more
like transferring responsibility to someone else instead of actually
solving the issue.
The other fix is simply reading the current sequence state from disk at
commit and logging that (instead of the values cached from the last
increment). But I'm rather skeptical about doing such things right
before COMMIT.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Attachments:
0004-Add-support-for-decoding-sequences-to-built-20211118.patch (text/x-patch)
From 24c30875c9b024b830a03b780fedbcd130395c25 Mon Sep 17 00:00:00 2001
From: Tomas Vondra <tomas.vondra@postgresql.org>
Date: Fri, 19 Nov 2021 01:12:23 +0100
Subject: [PATCH 4/4] Add support for decoding sequences to built-in
replication
---
doc/src/sgml/catalogs.sgml | 71 ++++
doc/src/sgml/ref/alter_publication.sgml | 24 +-
doc/src/sgml/ref/alter_subscription.sgml | 4 +-
src/backend/catalog/pg_publication.c | 149 ++++++++-
src/backend/catalog/system_views.sql | 10 +
src/backend/commands/publicationcmds.c | 350 +++++++++++++++++++-
src/backend/commands/sequence.c | 79 +++++
src/backend/commands/subscriptioncmds.c | 272 +++++++++++++++
src/backend/executor/execReplication.c | 2 +-
src/backend/nodes/copyfuncs.c | 1 +
src/backend/nodes/equalfuncs.c | 1 +
src/backend/parser/gram.y | 32 ++
src/backend/replication/logical/proto.c | 54 +++
src/backend/replication/logical/tablesync.c | 118 ++++++-
src/backend/replication/logical/worker.c | 60 ++++
src/backend/replication/pgoutput/pgoutput.c | 86 ++++-
src/backend/utils/cache/relcache.c | 4 +-
src/bin/psql/tab-complete.c | 6 +-
src/include/catalog/pg_proc.dat | 5 +
src/include/catalog/pg_publication.h | 14 +
src/include/commands/sequence.h | 1 +
src/include/nodes/parsenodes.h | 6 +
src/include/replication/logicalproto.h | 20 ++
src/include/replication/pgoutput.h | 1 +
src/test/regress/expected/rules.out | 8 +
src/test/subscription/t/023_sequences.pl | 196 +++++++++++
26 files changed, 1542 insertions(+), 32 deletions(-)
create mode 100644 src/test/subscription/t/023_sequences.pl
diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index c1d11be73f..37b9b52b29 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -9498,6 +9498,11 @@ SCRAM-SHA-256$<replaceable><iteration count></replaceable>:<replaceable>&l
<entry>prepared transactions</entry>
</row>
+ <row>
+ <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+ <entry>publications and their associated sequences</entry>
+ </row>
+
<row>
<entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
<entry>publications and their associated tables</entry>
@@ -11333,6 +11338,72 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
</sect1>
+ <sect1 id="view-pg-publication-sequences">
+ <title><structname>pg_publication_sequences</structname></title>
+
+ <indexterm zone="view-pg-publication-sequences">
+ <primary>pg_publication_sequences</primary>
+ </indexterm>
+
+ <para>
+ The view <structname>pg_publication_sequences</structname> provides
+ information about the mapping between publications and the sequences they
+ contain. Unlike the underlying catalog
+ <link linkend="catalog-pg-publication-rel"><structname>pg_publication_rel</structname></link>,
+ this view expands
+ publications defined as <literal>FOR ALL SEQUENCES</literal>, so for such
+ publications there will be a row for each eligible sequence.
+ </para>
+
+ <table>
+ <title><structname>pg_publication_sequences</structname> Columns</title>
+ <tgroup cols="1">
+ <thead>
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ Column Type
+ </para>
+ <para>
+ Description
+ </para></entry>
+ </row>
+ </thead>
+
+ <tbody>
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>pubname</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+ </para>
+ <para>
+ Name of publication
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>schemaname</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+ </para>
+ <para>
+ Name of schema containing sequence
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>sequencename</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+ </para>
+ <para>
+ Name of sequence
+ </para></entry>
+ </row>
+ </tbody>
+ </tgroup>
+ </table>
+ </sect1>
+
<sect1 id="view-pg-publication-tables">
<title><structname>pg_publication_tables</structname></title>
diff --git a/doc/src/sgml/ref/alter_publication.sgml b/doc/src/sgml/ref/alter_publication.sgml
index bb4ef5e5e2..4d166ad3f9 100644
--- a/doc/src/sgml/ref/alter_publication.sgml
+++ b/doc/src/sgml/ref/alter_publication.sgml
@@ -31,7 +31,9 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
<phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [, ... ]
+ SEQUENCE <replaceable class="parameter">sequence_name</replaceable> [ * ] [, ... ]
ALL TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+ ALL SEQUENCES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
</synopsis>
</refsynopsisdiv>
@@ -56,7 +58,18 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
</para>
<para>
- The fourth variant of this command listed in the synopsis can change
+ The next three variants change which sequences are part of the publication.
+ The <literal>SET SEQUENCE</literal> clause will replace the list of sequences
+ in the publication with the specified one. The <literal>ADD SEQUENCE</literal>
+ and <literal>DROP SEQUENCE</literal> clauses will add and remove one or more
+ sequences from the publication. Note that adding sequences to a publication
+ that is already subscribed to will require an <literal>ALTER SUBSCRIPTION
+ ... REFRESH PUBLICATION</literal> action on the subscribing side in order
+ to become effective.
+ </para>
+
+ <para>
+ The seventh variant of this command listed in the synopsis can change
all of the publication properties specified in
<xref linkend="sql-createpublication"/>. Properties not mentioned in the
command retain their previous settings.
@@ -123,6 +136,15 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><replaceable class="parameter">sequence_name</replaceable></term>
+ <listitem>
+ <para>
+ Name of an existing sequence.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><literal>SET ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
<listitem>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 0b027cc346..8f28cf03f4 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -147,7 +147,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
<listitem>
<para>
Fetch missing table information from publisher. This will start
- replication of tables that were added to the subscribed-to publications
+ replication of tables and sequences that were added to the subscribed-to publications
since <command>CREATE SUBSCRIPTION</command> or
the last invocation of <command>REFRESH PUBLICATION</command>.
</para>
@@ -164,7 +164,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
Specifies whether to copy pre-existing data in the publications
that are being subscribed to when the replication starts.
The default is <literal>true</literal>. (Previously-subscribed
- tables are not copied.)
+ tables and sequences are not copied.)
</para>
</listitem>
</varlistentry>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 63579b2f82..2d5bc73a96 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -54,7 +54,8 @@ check_publication_add_relation(Relation targetrel)
{
/* Must be a regular or partitioned table */
if (RelationGetForm(targetrel)->relkind != RELKIND_RELATION &&
- RelationGetForm(targetrel)->relkind != RELKIND_PARTITIONED_TABLE)
+ RelationGetForm(targetrel)->relkind != RELKIND_PARTITIONED_TABLE &&
+ RelationGetForm(targetrel)->relkind != RELKIND_SEQUENCE)
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
errmsg("cannot add relation \"%s\" to publication",
@@ -131,7 +132,8 @@ static bool
is_publishable_class(Oid relid, Form_pg_class reltuple)
{
return (reltuple->relkind == RELKIND_RELATION ||
- reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+ reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+ reltuple->relkind == RELKIND_SEQUENCE) &&
!IsCatalogRelationOid(relid) &&
reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
relid >= FirstNormalObjectId;
@@ -480,6 +482,11 @@ GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
Form_pg_publication_rel pubrel;
pubrel = (Form_pg_publication_rel) GETSTRUCT(tup);
+
+ /* skip sequences here */
+ if (get_rel_relkind(pubrel->prrelid) == RELKIND_SEQUENCE)
+ continue;
+
result = GetPubPartitionOptionRelations(result, pub_partopt,
pubrel->prrelid);
}
@@ -490,6 +497,49 @@ GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
return result;
}
+/*
+ * Gets list of relation oids for a publication (sequences only).
+ *
+ * This should only be used for normal publications; FOR ALL SEQUENCES
+ * publications should use GetAllSequencesPublicationRelations().
+ */
+List *
+GetPublicationSequenceRelations(Oid pubid)
+{
+ List *result;
+ Relation pubrelsrel;
+ ScanKeyData scankey;
+ SysScanDesc scan;
+ HeapTuple tup;
+
+ /* Find all sequences associated with the publication. */
+ pubrelsrel = table_open(PublicationRelRelationId, AccessShareLock);
+
+ ScanKeyInit(&scankey,
+ Anum_pg_publication_rel_prpubid,
+ BTEqualStrategyNumber, F_OIDEQ,
+ ObjectIdGetDatum(pubid));
+
+ scan = systable_beginscan(pubrelsrel, PublicationRelPrrelidPrpubidIndexId,
+ true, NULL, 1, &scankey);
+
+ result = NIL;
+ while (HeapTupleIsValid(tup = systable_getnext(scan)))
+ {
+ Form_pg_publication_rel pubrel;
+
+ pubrel = (Form_pg_publication_rel) GETSTRUCT(tup);
+
+ if (get_rel_relkind(pubrel->prrelid) == RELKIND_SEQUENCE)
+ result = lappend_oid(result, pubrel->prrelid);
+ }
+
+ systable_endscan(scan);
+ table_close(pubrelsrel, AccessShareLock);
+
+ return result;
+}
+
/*
* Gets list of publication oids for publications marked as FOR ALL TABLES.
*/
@@ -735,6 +785,46 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
return result;
}
+/*
+ * Gets list of all sequences published by FOR ALL SEQUENCES publication(s).
+ *
+ * Unlike for tables, there is no partitioning to consider, so we simply
+ * return all publishable sequences in the database (i.e. excluding
+ * catalog, temporary and unlogged sequences).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+ Relation classRel;
+ ScanKeyData key[1];
+ TableScanDesc scan;
+ HeapTuple tuple;
+ List *result = NIL;
+
+ classRel = table_open(RelationRelationId, AccessShareLock);
+
+ ScanKeyInit(&key[0],
+ Anum_pg_class_relkind,
+ BTEqualStrategyNumber, F_CHAREQ,
+ CharGetDatum(RELKIND_SEQUENCE));
+
+ scan = table_beginscan_catalog(classRel, 1, key);
+
+ while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+ {
+ Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+ Oid relid = relForm->oid;
+
+ if (is_publishable_class(relid, relForm))
+ result = lappend_oid(result, relid);
+ }
+
+ table_endscan(scan);
+
+ table_close(classRel, AccessShareLock);
+ return result;
+}
+
/*
* Get publication using oid
*
@@ -757,10 +847,12 @@ GetPublication(Oid pubid)
pub->oid = pubid;
pub->name = pstrdup(NameStr(pubform->pubname));
pub->alltables = pubform->puballtables;
+ pub->allsequences = pubform->puballsequences;
pub->pubactions.pubinsert = pubform->pubinsert;
pub->pubactions.pubupdate = pubform->pubupdate;
pub->pubactions.pubdelete = pubform->pubdelete;
pub->pubactions.pubtruncate = pubform->pubtruncate;
+ pub->pubactions.pubsequence = pubform->pubsequence;
pub->pubviaroot = pubform->pubviaroot;
ReleaseSysCache(tup);
@@ -915,3 +1007,56 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
SRF_RETURN_DONE(funcctx);
}
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+ FuncCallContext *funcctx;
+ char *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+ Publication *publication;
+ List *sequences;
+
+ /* stuff done only on the first call of the function */
+ if (SRF_IS_FIRSTCALL())
+ {
+ MemoryContext oldcontext;
+
+ /* create a function context for cross-call persistence */
+ funcctx = SRF_FIRSTCALL_INIT();
+
+ /* switch to memory context appropriate for multiple function calls */
+ oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+ publication = GetPublicationByName(pubname, false);
+
+ /*
+ * For a FOR ALL SEQUENCES publication we return all publishable
+ * sequences in the database, otherwise just the sequences explicitly
+ * added to the publication.
+ */
+ if (publication->allsequences)
+ sequences = GetAllSequencesPublicationRelations();
+ else
+ sequences = GetPublicationSequenceRelations(publication->oid);
+
+ funcctx->user_fctx = (void *) sequences;
+
+ MemoryContextSwitchTo(oldcontext);
+ }
+
+ /* stuff done on every call of the function */
+ funcctx = SRF_PERCALL_SETUP();
+ sequences = (List *) funcctx->user_fctx;
+
+ if (funcctx->call_cntr < list_length(sequences))
+ {
+ Oid relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+ SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+ }
+
+ SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index eb560955cd..2c12d373c1 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -372,6 +372,16 @@ CREATE VIEW pg_publication_tables AS
pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
WHERE C.oid = GPT.relid;
+CREATE VIEW pg_publication_sequences AS
+ SELECT
+ P.pubname AS pubname,
+ N.nspname AS schemaname,
+ C.relname AS sequencename
+ FROM pg_publication P,
+ LATERAL pg_get_publication_sequences(P.pubname) GPT,
+ pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+ WHERE C.oid = GPT.relid;
+
CREATE VIEW pg_locks AS
SELECT * FROM pg_lock_status() AS L;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 7d4a0e95f6..e8dfb1a068 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -16,6 +16,7 @@
#include "access/genam.h"
#include "access/htup_details.h"
+#include "access/relation.h"
#include "access/table.h"
#include "access/xact.h"
#include "catalog/catalog.h"
@@ -59,6 +60,12 @@ static void PublicationAddSchemas(Oid pubid, List *schemas, bool if_not_exists,
AlterPublicationStmt *stmt);
static void PublicationDropSchemas(Oid pubid, List *schemas, bool missing_ok);
+static List *OpenSequenceList(List *sequences);
+static void CloseSequenceList(List *rels);
+static void PublicationAddSequences(Oid pubid, List *rels, bool if_not_exists,
+ AlterPublicationStmt *stmt);
+static void PublicationDropSequences(Oid pubid, List *rels, bool missing_ok);
+
static void
parse_publication_options(ParseState *pstate,
List *options,
@@ -77,6 +84,7 @@ parse_publication_options(ParseState *pstate,
pubactions->pubupdate = true;
pubactions->pubdelete = true;
pubactions->pubtruncate = true;
+ pubactions->pubsequence = true;
*publish_via_partition_root = false;
/* Parse options */
@@ -101,6 +109,7 @@ parse_publication_options(ParseState *pstate,
pubactions->pubupdate = false;
pubactions->pubdelete = false;
pubactions->pubtruncate = false;
+ pubactions->pubsequence = false;
*publish_given = true;
publish = defGetString(defel);
@@ -123,6 +132,8 @@ parse_publication_options(ParseState *pstate,
pubactions->pubdelete = true;
else if (strcmp(publish_opt, "truncate") == 0)
pubactions->pubtruncate = true;
+ else if (strcmp(publish_opt, "sequence") == 0)
+ pubactions->pubsequence = true;
else
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
@@ -149,7 +160,9 @@ parse_publication_options(ParseState *pstate,
*/
static void
ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
- List **rels, List **schemas)
+ List **tables, List **sequences,
+ List **tables_schemas, List **sequences_schemas,
+ List **schemas)
{
ListCell *cell;
PublicationObjSpec *pubobj;
@@ -167,12 +180,23 @@ ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
switch (pubobj->pubobjtype)
{
case PUBLICATIONOBJ_TABLE:
- *rels = lappend(*rels, pubobj->pubtable);
+ *tables = lappend(*tables, pubobj->pubtable);
+ break;
+ case PUBLICATIONOBJ_SEQUENCE:
+ *sequences = lappend(*sequences, pubobj->pubtable);
break;
case PUBLICATIONOBJ_TABLE_IN_SCHEMA:
schemaid = get_namespace_oid(pubobj->name, false);
/* Filter out duplicates if user specifies "sch1, sch1" */
+ *tables_schemas = list_append_unique_oid(*tables_schemas, schemaid);
+ *schemas = list_append_unique_oid(*schemas, schemaid);
+ break;
+ case PUBLICATIONOBJ_SEQUENCE_IN_SCHEMA:
+ schemaid = get_namespace_oid(pubobj->name, false);
+
+ /* Filter out duplicates if user specifies "sch1, sch1" */
+ *sequences_schemas = list_append_unique_oid(*sequences_schemas, schemaid);
*schemas = list_append_unique_oid(*schemas, schemaid);
break;
case PUBLICATIONOBJ_TABLE_IN_CUR_SCHEMA:
@@ -186,6 +210,21 @@ ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
list_free(search_path);
/* Filter out duplicates if user specifies "sch1, sch1" */
+ *tables_schemas = list_append_unique_oid(*tables_schemas, schemaid);
+ *schemas = list_append_unique_oid(*schemas, schemaid);
+ break;
+ case PUBLICATIONOBJ_SEQUENCE_IN_CUR_SCHEMA:
+ search_path = fetch_search_path(false);
+ if (search_path == NIL) /* nothing valid in search_path? */
+ ereport(ERROR,
+ errcode(ERRCODE_UNDEFINED_SCHEMA),
+ errmsg("no schema has been selected for CURRENT_SCHEMA"));
+
+ schemaid = linitial_oid(search_path);
+ list_free(search_path);
+
+ /* Filter out duplicates if user specifies "sch1, sch1" */
+ *sequences_schemas = list_append_unique_oid(*sequences_schemas, schemaid);
*schemas = list_append_unique_oid(*schemas, schemaid);
break;
default:
@@ -251,7 +290,10 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
bool publish_via_partition_root_given;
bool publish_via_partition_root;
AclResult aclresult;
- List *relations = NIL;
+ List *tables = NIL;
+ List *sequences = NIL;
+ List *tables_schemaidlist = NIL;
+ List *sequences_schemaidlist = NIL;
List *schemaidlist = NIL;
/* must have CREATE privilege on database */
@@ -306,6 +348,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
BoolGetDatum(pubactions.pubdelete);
values[Anum_pg_publication_pubtruncate - 1] =
BoolGetDatum(pubactions.pubtruncate);
+ values[Anum_pg_publication_pubsequence - 1] =
+ BoolGetDatum(pubactions.pubsequence);
values[Anum_pg_publication_pubviaroot - 1] =
BoolGetDatum(publish_via_partition_root);
@@ -330,26 +374,40 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
}
else
{
- ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
+ ObjectsInPublicationToOids(stmt->pubobjects, pstate,
+ &tables, &sequences,
+ &tables_schemaidlist,
+ &sequences_schemaidlist,
&schemaidlist);
/* FOR ALL TABLES IN SCHEMA requires superuser */
- if (list_length(schemaidlist) > 0 && !superuser())
+ if (list_length(tables_schemaidlist) > 0 && !superuser())
ereport(ERROR,
errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
errmsg("must be superuser to create FOR ALL TABLES IN SCHEMA publication"));
- if (list_length(relations) > 0)
+ if (list_length(tables) > 0)
{
List *rels;
- rels = OpenTableList(relations);
- CheckObjSchemaNotAlreadyInPublication(rels, schemaidlist,
+ rels = OpenTableList(tables);
+ CheckObjSchemaNotAlreadyInPublication(rels, tables_schemaidlist,
PUBLICATIONOBJ_TABLE);
PublicationAddTables(puboid, rels, true, NULL);
CloseTableList(rels);
}
+ if (list_length(sequences) > 0)
+ {
+ List *rels;
+
+ rels = OpenSequenceList(sequences);
+ CheckObjSchemaNotAlreadyInPublication(rels, sequences_schemaidlist,
+ PUBLICATIONOBJ_SEQUENCE);
+ PublicationAddSequences(puboid, rels, true, NULL);
+ CloseSequenceList(rels);
+ }
+
if (list_length(schemaidlist) > 0)
{
/*
@@ -651,12 +709,13 @@ AlterPublicationSchemas(AlterPublicationStmt *stmt,
*/
static void
CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
- List *tables, List *schemaidlist)
+ List *tables, List *tables_schemaidlist,
+ List *sequences, List *sequences_schemaidlist)
{
Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
if ((stmt->action == DEFELEM_ADD || stmt->action == DEFELEM_SET) &&
- schemaidlist && !superuser())
+ (tables_schemaidlist || sequences_schemaidlist) && !superuser())
ereport(ERROR,
(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
errmsg("must be superuser to add or set schemas")));
@@ -665,13 +724,24 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
* Check that user is allowed to manipulate the publication tables in
* schema
*/
- if (schemaidlist && pubform->puballtables)
+ if (tables_schemaidlist && pubform->puballtables)
ereport(ERROR,
(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
errmsg("publication \"%s\" is defined as FOR ALL TABLES",
NameStr(pubform->pubname)),
errdetail("Tables from schema cannot be added to, dropped from, or set on FOR ALL TABLES publications.")));
+ /*
+ * Check that user is allowed to manipulate the publication sequences in
+ * schema
+ */
+ if (sequences_schemaidlist && pubform->puballsequences)
+ ereport(ERROR,
+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("publication \"%s\" is defined as FOR ALL SEQUENCES",
+ NameStr(pubform->pubname)),
+ errdetail("Sequences from schema cannot be added to, dropped from, or set on FOR ALL SEQUENCES publications.")));
+
/* Check that user is allowed to manipulate the publication tables. */
if (tables && pubform->puballtables)
ereport(ERROR,
@@ -679,6 +749,108 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
errmsg("publication \"%s\" is defined as FOR ALL TABLES",
NameStr(pubform->pubname)),
errdetail("Tables cannot be added to or dropped from FOR ALL TABLES publications.")));
+
+ /* Check that user is allowed to manipulate the publication sequences. */
+ if (sequences && pubform->puballsequences)
+ ereport(ERROR,
+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("publication \"%s\" is defined as FOR ALL SEQUENCES",
+ NameStr(pubform->pubname)),
+ errdetail("Sequences cannot be added to or dropped from FOR ALL SEQUENCES publications.")));
+}
+
+/*
+ * Add or remove sequence to/from publication.
+ */
+static void
+AlterPublicationSequences(AlterPublicationStmt *stmt, HeapTuple tup,
+ List *sequences, List *schemaidlist)
+{
+ List *rels = NIL;
+ Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
+ Oid pubid = pubform->oid;
+
+ /*
+ * It is quite possible that for the SET case the user has not specified
+ * any sequences, in which case we need to remove all the existing ones.
+ */
+ if (!sequences && stmt->action != DEFELEM_SET)
+ return;
+
+ rels = OpenSequenceList(sequences);
+
+ if (stmt->action == DEFELEM_ADD)
+ {
+ List *schemas = NIL;
+
+ /*
+ * Check if the relation is member of the existing schema in the
+ * publication or member of the schema list specified.
+ */
+ schemas = list_concat_copy(schemaidlist, GetPublicationSchemas(pubid));
+ CheckObjSchemaNotAlreadyInPublication(rels, schemas,
+ PUBLICATIONOBJ_SEQUENCE);
+ PublicationAddSequences(pubid, rels, false, stmt);
+ }
+ else if (stmt->action == DEFELEM_DROP)
+ PublicationDropSequences(pubid, rels, false);
+ else /* DEFELEM_SET */
+ {
+ List *oldrelids =
+ GetPublicationSequenceRelations(pubid);
+ List *delrels = NIL;
+ ListCell *oldlc;
+
+ CheckObjSchemaNotAlreadyInPublication(rels, schemaidlist,
+ PUBLICATIONOBJ_SEQUENCE);
+
+ /* Calculate which relations to drop. */
+ foreach(oldlc, oldrelids)
+ {
+ Oid oldrelid = lfirst_oid(oldlc);
+ ListCell *newlc;
+ bool found = false;
+
+ foreach(newlc, rels)
+ {
+ PublicationRelInfo *newpubrel;
+
+ newpubrel = (PublicationRelInfo *) lfirst(newlc);
+ if (RelationGetRelid(newpubrel->relation) == oldrelid)
+ {
+ found = true;
+ break;
+ }
+ }
+ /* Not yet in the list, open it and add to the list */
+ if (!found)
+ {
+ Relation oldrel;
+ PublicationRelInfo *pubrel;
+
+ /* Wrap relation into PublicationRelInfo */
+ oldrel = table_open(oldrelid, ShareUpdateExclusiveLock);
+
+ pubrel = palloc(sizeof(PublicationRelInfo));
+ pubrel->relation = oldrel;
+
+ delrels = lappend(delrels, pubrel);
+ }
+ }
+
+ /* And drop them. */
+ PublicationDropSequences(pubid, delrels, true);
+
+ /*
+ * Don't bother calculating the difference for adding, we'll catch and
+ * skip existing ones when doing catalog update.
+ */
+ PublicationAddSequences(pubid, rels, true, stmt);
+
+ CloseSequenceList(delrels);
+ }
+
+ CloseSequenceList(rels);
}
/*
@@ -716,13 +888,21 @@ AlterPublication(ParseState *pstate, AlterPublicationStmt *stmt)
AlterPublicationOptions(pstate, stmt, rel, tup);
else
{
- List *relations = NIL;
+ List *tables = NIL;
+ List *sequences = NIL;
+ List *tables_schemaidlist = NIL;
+ List *sequences_schemaidlist = NIL;
List *schemaidlist = NIL;
- ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
+ ObjectsInPublicationToOids(stmt->pubobjects, pstate,
+ &tables, &sequences,
+ &tables_schemaidlist,
+ &sequences_schemaidlist,
&schemaidlist);
- CheckAlterPublication(stmt, tup, relations, schemaidlist);
+ CheckAlterPublication(stmt, tup,
+ tables, tables_schemaidlist,
+ sequences, sequences_schemaidlist);
/*
* Lock the publication so nobody else can do anything with it. This
@@ -747,7 +927,9 @@ AlterPublication(ParseState *pstate, AlterPublicationStmt *stmt)
errmsg("publication \"%s\" does not exist",
stmt->pubname));
- AlterPublicationTables(stmt, tup, relations, schemaidlist);
+ AlterPublicationTables(stmt, tup, tables, tables_schemaidlist);
+ AlterPublicationSequences(stmt, tup, sequences, sequences_schemaidlist);
+
AlterPublicationSchemas(stmt, tup, schemaidlist);
}
@@ -1155,6 +1337,144 @@ PublicationDropSchemas(Oid pubid, List *schemas, bool missing_ok)
}
}
+/*
+ * Open sequences specified by a PublicationTable list.
+ * In the returned list of PublicationRelInfo, sequences are locked
+ * in ShareUpdateExclusiveLock mode in order to add them to a publication.
+ */
+static List *
+OpenSequenceList(List *sequences)
+{
+ List *relids = NIL;
+ List *rels = NIL;
+ ListCell *lc;
+
+ /*
+ * Open, share-lock, and check all the explicitly-specified relations
+ */
+ foreach(lc, sequences)
+ {
+ PublicationTable *s = lfirst_node(PublicationTable, lc);
+ Relation rel;
+ Oid myrelid;
+ PublicationRelInfo *pub_rel;
+
+ /* Allow query cancel in case this takes a long time */
+ CHECK_FOR_INTERRUPTS();
+
+ rel = table_openrv(s->relation, ShareUpdateExclusiveLock);
+ myrelid = RelationGetRelid(rel);
+
+ /*
+ * Filter out duplicates if user specifies "foo, foo".
+ *
+ * Note that this algorithm is known to not be very efficient (O(N^2))
+ * but given that it only works on the list of sequences given by the user
+ * it's deemed acceptable.
+ */
+ if (list_member_oid(relids, myrelid))
+ {
+ table_close(rel, ShareUpdateExclusiveLock);
+ continue;
+ }
+
+ pub_rel = palloc(sizeof(PublicationRelInfo));
+ pub_rel->relation = rel;
+ rels = lappend(rels, pub_rel);
+ relids = lappend_oid(relids, myrelid);
+ }
+
+ list_free(relids);
+
+ return rels;
+}
+
+/*
+ * Close all relations in the list.
+ */
+static void
+CloseSequenceList(List *rels)
+{
+ ListCell *lc;
+
+ foreach(lc, rels)
+ {
+ PublicationRelInfo *pub_rel;
+
+ pub_rel = (PublicationRelInfo *) lfirst(lc);
+ table_close(pub_rel->relation, NoLock);
+ }
+}
+
+/*
+ * Add listed sequences to the publication.
+ */
+static void
+PublicationAddSequences(Oid pubid, List *rels, bool if_not_exists,
+ AlterPublicationStmt *stmt)
+{
+ ListCell *lc;
+
+ Assert(!stmt || !stmt->for_all_sequences);
+
+ foreach(lc, rels)
+ {
+ PublicationRelInfo *pub_rel = (PublicationRelInfo *) lfirst(lc);
+ Relation rel = pub_rel->relation;
+ ObjectAddress obj;
+
+ /* Must be owner of the sequence or superuser. */
+ if (!pg_class_ownercheck(RelationGetRelid(rel), GetUserId()))
+ aclcheck_error(ACLCHECK_NOT_OWNER, get_relkind_objtype(rel->rd_rel->relkind),
+ RelationGetRelationName(rel));
+
+ obj = publication_add_relation(pubid, pub_rel, if_not_exists);
+ if (stmt)
+ {
+ EventTriggerCollectSimpleCommand(obj, InvalidObjectAddress,
+ (Node *) stmt);
+
+ InvokeObjectPostCreateHook(PublicationRelRelationId,
+ obj.objectId, 0);
+ }
+ }
+}
+
+/*
+ * Remove listed sequences from the publication.
+ */
+static void
+PublicationDropSequences(Oid pubid, List *rels, bool missing_ok)
+{
+ ObjectAddress obj;
+ ListCell *lc;
+ Oid prid;
+
+ foreach(lc, rels)
+ {
+ Relation rel = (Relation) lfirst(lc);
+ Oid relid = RelationGetRelid(rel);
+
+ prid = GetSysCacheOid2(PUBLICATIONRELMAP, Anum_pg_publication_rel_oid,
+ ObjectIdGetDatum(relid),
+ ObjectIdGetDatum(pubid));
+ if (!OidIsValid(prid))
+ {
+ if (missing_ok)
+ continue;
+
+ ereport(ERROR,
+ (errcode(ERRCODE_UNDEFINED_OBJECT),
+ errmsg("relation \"%s\" is not part of the publication",
+ RelationGetRelationName(rel))));
+ }
+
+ ObjectAddressSet(obj, PublicationRelRelationId, prid);
+ performDeletion(&obj, DROP_CASCADE, 0);
+ }
+}
+
+
/*
* Internal workhorse for changing a publication owner
*/
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 92d1966f34..b22f83ea82 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -347,6 +347,85 @@ ResetSequence(Oid seq_relid)
relation_close(seq_rel, NoLock);
}
+/*
+ * Set a sequence to the specified state (last_value, log_cnt, is_called).
+ *
+ * The change is made transactionally, so that on failure of the current
+ * transaction, the sequence will be restored to its previous state.
+ * We do that by creating a whole new relfilenode for the sequence; so this
+ * works much like the rewriting forms of ALTER TABLE.
+ *
+ * Caller is assumed to have acquired AccessExclusiveLock on the sequence,
+ * which must not be released until end of transaction. Caller is also
+ * responsible for permissions checking.
+ */
+void
+ResetSequence2(Oid seq_relid, int64 last_value, int64 log_cnt, bool is_called)
+{
+ Relation seq_rel;
+ SeqTable elm;
+ Form_pg_sequence_data seq;
+ Buffer buf;
+ HeapTupleData seqdatatuple;
+ HeapTuple tuple;
+
+ /*
+ * Read the old sequence. This does a bit more work than really
+ * necessary, but it's simple, and we do want to double-check that it's
+ * indeed a sequence.
+ */
+ init_sequence(seq_relid, &elm, &seq_rel);
+ (void) read_seq_tuple(seq_rel, &buf, &seqdatatuple);
+
+ /*
+ * Copy the existing sequence tuple.
+ */
+ tuple = heap_copytuple(&seqdatatuple);
+
+ /* Now we're done with the old page */
+ UnlockReleaseBuffer(buf);
+
+ /*
+ * Modify the copied tuple to set the requested state (similar to the
+ * RESTART action in AlterSequence)
+ */
+ seq = (Form_pg_sequence_data) GETSTRUCT(tuple);
+ seq->last_value = last_value;
+ seq->is_called = is_called;
+ seq->log_cnt = log_cnt;
+
+ /*
+ * Create a new storage file for the sequence.
+ */
+ RelationSetNewRelfilenode(seq_rel, seq_rel->rd_rel->relpersistence);
+
+ /*
+ * Ensure sequence's relfrozenxid is at 0, since it won't contain any
+ * unfrozen XIDs. Same with relminmxid, since a sequence will never
+ * contain multixacts.
+ */
+ Assert(seq_rel->rd_rel->relfrozenxid == InvalidTransactionId);
+ Assert(seq_rel->rd_rel->relminmxid == InvalidMultiXactId);
+
+ /*
+ * Insert the modified tuple into the new storage file.
+ *
+ * XXX Maybe this should also use created=true, just like the other places
+ * calling fill_seq_with_data. That's probably needed for correct cascading
+ * replication.
+ *
+ * XXX That'd mean all fill_seq_with_data callers use created=true, making
+ * the parameter unnecessary.
+ */
+ fill_seq_with_data(seq_rel, tuple);
+
+ /* Clear local cache so that we don't think we have cached numbers */
+ /* Note that we do not change the currval() state */
+ elm->cached = elm->last;
+
+ relation_close(seq_rel, NoLock);
+}
+
/*
* Initialize a sequence's relation with the specified tuple as content
*/
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index c47ba26369..5086526904 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -84,6 +84,7 @@ typedef struct SubOpts
} SubOpts;
static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
static void check_duplicates_in_publist(List *publist, Datum *datums);
static List *merge_publications(List *oldpublist, List *newpublist, bool addpub, const char *subname);
static void ReportSlotConnectionError(List *rstates, Oid subid, char *slotname, char *err);
@@ -502,6 +503,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
char *err;
WalReceiverConn *wrconn;
List *tables;
+ List *sequences;
ListCell *lc;
char table_state;
@@ -540,6 +542,26 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
InvalidXLogRecPtr);
}
+ /*
+ * Get the sequence list from publisher and build local sequence
+ * status info.
+ */
+ sequences = fetch_sequence_list(wrconn, publications);
+ foreach(lc, sequences)
+ {
+ RangeVar *rv = (RangeVar *) lfirst(lc);
+ Oid relid;
+
+ relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+ /* Check for supported relkind. */
+ CheckSubscriptionRelkind(get_rel_relkind(relid),
+ rv->schemaname, rv->relname);
+
+ AddSubscriptionRelState(subid, relid, table_state,
+ InvalidXLogRecPtr);
+ }
+
/*
* If requested, create permanent slot for the subscription. We
* won't use the initial snapshot for anything, so no need to
@@ -712,6 +734,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data)
{
Oid relid = subrel_local_oids[off];
+ /* XXX ignore sequences - maybe do this in GetSubscriptionRelations? */
+ if (get_rel_relkind(relid) == RELKIND_SEQUENCE)
+ continue;
+
if (!bsearch(&relid, pubrel_local_oids,
list_length(pubrel_names), sizeof(Oid), oid_cmp))
{
@@ -803,6 +829,183 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data)
ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
}
}
+
+ /*
+ * XXX now do the same thing for sequences, maybe before the preceding
+ * block, or earlier?
+ */
+
+ /* Get the sequence list from the publisher. */
+ pubrel_names = fetch_sequence_list(wrconn, sub->publications);
+
+ /* Get the local relation list. */
+ subrel_states = GetSubscriptionRelations(sub->oid);
+
+ /*
+ * Build qsorted array of local table oids for faster lookup. This can
+ * potentially contain all tables in the database so speed of lookup
+ * is important.
+ */
+ subrel_local_oids = palloc(list_length(subrel_states) * sizeof(Oid));
+ off = 0;
+ foreach(lc, subrel_states)
+ {
+ SubscriptionRelState *relstate = (SubscriptionRelState *) lfirst(lc);
+
+ subrel_local_oids[off++] = relstate->relid;
+ }
+ qsort(subrel_local_oids, list_length(subrel_states),
+ sizeof(Oid), oid_cmp);
+
+ /*
+ * Rels that we want to remove from subscription and drop any slots
+ * and origins corresponding to them.
+ */
+ sub_remove_rels = palloc(list_length(subrel_states) * sizeof(SubRemoveRels));
+
+ /*
+ * Walk over the remote sequences and try to match them to locally
+ * known sequences. If a sequence is not known locally, create a new
+ * state for it.
+ *
+ * Also builds an array of local oids of remote sequences for the next step.
+ */
+ off = 0;
+ pubrel_local_oids = palloc(list_length(pubrel_names) * sizeof(Oid));
+
+ foreach(lc, pubrel_names)
+ {
+ RangeVar *rv = (RangeVar *) lfirst(lc);
+ Oid relid;
+
+ relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+ /* Check for supported relkind. */
+ CheckSubscriptionRelkind(get_rel_relkind(relid),
+ rv->schemaname, rv->relname);
+
+ pubrel_local_oids[off++] = relid;
+
+ if (!bsearch(&relid, subrel_local_oids,
+ list_length(subrel_states), sizeof(Oid), oid_cmp))
+ {
+ AddSubscriptionRelState(sub->oid, relid,
+ copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
+ InvalidXLogRecPtr);
+ ereport(DEBUG1,
+ (errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
+ rv->schemaname, rv->relname, sub->name)));
+ }
+ }
+
+ /*
+ * Next remove state for sequences we should not care about anymore using
+ * the data we collected above
+ */
+ qsort(pubrel_local_oids, list_length(pubrel_names),
+ sizeof(Oid), oid_cmp);
+
+ remove_rel_len = 0;
+ for (off = 0; off < list_length(subrel_states); off++)
+ {
+ Oid relid = subrel_local_oids[off];
+
+ /* XXX ignore non-sequences - maybe do this in GetSubscriptionRelations? */
+ if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+ continue;
+
+ if (!bsearch(&relid, pubrel_local_oids,
+ list_length(pubrel_names), sizeof(Oid), oid_cmp))
+ {
+ char state;
+ XLogRecPtr statelsn;
+
+ /*
+ * Lock pg_subscription_rel with AccessExclusiveLock to
+ * prevent any race conditions with the apply worker
+ * re-launching workers at the same time this code is trying
+ * to remove those tables.
+ *
+ * Even if new worker for this particular rel is restarted it
+ * won't be able to make any progress as we hold exclusive
+ * lock on subscription_rel till the transaction end. It will
+ * simply exit as there is no corresponding rel entry.
+ *
+ * This locking also ensures that the state of rels won't
+ * change till we are done with this refresh operation.
+ */
+ if (!rel)
+ rel = table_open(SubscriptionRelRelationId, AccessExclusiveLock);
+
+ /* Last known rel state. */
+ state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
+
+ sub_remove_rels[remove_rel_len].relid = relid;
+ sub_remove_rels[remove_rel_len++].state = state;
+
+ RemoveSubscriptionRel(sub->oid, relid);
+
+ logicalrep_worker_stop(sub->oid, relid);
+
+ /*
+ * For READY state, we would have already dropped the
+ * tablesync origin.
+ */
+ if (state != SUBREL_STATE_READY)
+ {
+ char originname[NAMEDATALEN];
+
+ /*
+ * Drop the tablesync's origin tracking if exists.
+ *
+ * It is possible that the origin is not yet created for
+ * tablesync worker, this can happen for the states before
+ * SUBREL_STATE_FINISHEDCOPY. The apply worker can also
+ * concurrently try to drop the origin and by this time
+ * the origin might be already removed. For these reasons,
+ * passing missing_ok = true.
+ */
+ ReplicationOriginNameForTablesync(sub->oid, relid, originname,
+ sizeof(originname));
+ replorigin_drop_by_name(originname, true, false);
+ }
+
+ ereport(DEBUG1,
+ (errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
+ get_namespace_name(get_rel_namespace(relid)),
+ get_rel_name(relid),
+ sub->name)));
+ }
+ }
+
+ /*
+ * Drop the tablesync slots associated with removed sequences. This has
+ * to be at the end because otherwise if there is an error while doing
+ * the database operations we won't be able to rollback dropped slots.
+ */
+ for (off = 0; off < remove_rel_len; off++)
+ {
+ if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
+ sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+ {
+ char syncslotname[NAMEDATALEN] = {0};
+
+ /*
+ * For READY/SYNCDONE states we know the tablesync slot has
+ * already been dropped by the tablesync worker.
+ *
+ * For other states, there is no certainty, maybe the slot
+ * does not exist yet. Also, if we fail after removing some of
+ * the slots, next time, it will again try to drop already
+ * dropped slots and fail. For these reasons, we allow
+ * missing_ok = true for the drop.
+ */
+ ReplicationSlotNameForTablesync(sub->oid, sub_remove_rels[off].relid,
+ syncslotname, sizeof(syncslotname));
+ ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
+ }
+ }
+
}
PG_FINALLY();
{
@@ -1607,6 +1810,75 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
return tablelist;
}
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+ WalRcvExecResult *res;
+ StringInfoData cmd;
+ TupleTableSlot *slot;
+ Oid tableRow[2] = {TEXTOID, TEXTOID};
+ ListCell *lc;
+ bool first;
+ List *tablelist = NIL;
+
+ Assert(list_length(publications) > 0);
+
+ initStringInfo(&cmd);
+ appendStringInfoString(&cmd, "SELECT DISTINCT s.schemaname, s.sequencename\n"
+ " FROM pg_catalog.pg_publication_sequences s\n"
+ " WHERE s.pubname IN (");
+ first = true;
+ foreach(lc, publications)
+ {
+ char *pubname = strVal(lfirst(lc));
+
+ if (first)
+ first = false;
+ else
+ appendStringInfoString(&cmd, ", ");
+
+ appendStringInfoString(&cmd, quote_literal_cstr(pubname));
+ }
+ appendStringInfoChar(&cmd, ')');
+
+ res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+ pfree(cmd.data);
+
+ if (res->status != WALRCV_OK_TUPLES)
+ ereport(ERROR,
+ (errmsg("could not receive list of replicated tables from the publisher: %s",
+ res->err)));
+
+ /* Process sequences. */
+ slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+ while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+ {
+ char *nspname;
+ char *relname;
+ bool isnull;
+ RangeVar *rv;
+
+ nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+ Assert(!isnull);
+ relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+ Assert(!isnull);
+
+ rv = makeRangeVar(nspname, relname, -1);
+ tablelist = lappend(tablelist, rv);
+
+ ExecClearTuple(slot);
+ }
+ ExecDropSingleTupleTableSlot(slot);
+
+ walrcv_clear_result(res);
+
+ return tablelist;
+}
+
/*
* This is to report the connection failure while dropping replication slots.
* Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index 574d7d27fd..6f0d42d551 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -608,7 +608,7 @@ void
CheckSubscriptionRelkind(char relkind, const char *nspname,
const char *relname)
{
- if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+ if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE && relkind != RELKIND_SEQUENCE)
ereport(ERROR,
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index ad1ea2ff2f..3dc5eeed91 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -4843,6 +4843,7 @@ _copyCreatePublicationStmt(const CreatePublicationStmt *from)
COPY_NODE_FIELD(options);
COPY_NODE_FIELD(pubobjects);
COPY_SCALAR_FIELD(for_all_tables);
+ COPY_SCALAR_FIELD(for_all_sequences);
return newnode;
}
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index f537d3eb96..2317185daf 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -2336,6 +2336,7 @@ _equalAlterPublicationStmt(const AlterPublicationStmt *a,
COMPARE_NODE_FIELD(options);
COMPARE_NODE_FIELD(pubobjects);
COMPARE_SCALAR_FIELD(for_all_tables);
+ COMPARE_SCALAR_FIELD(for_all_sequences);
COMPARE_SCALAR_FIELD(action);
return true;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index a6d0cefa6b..5c4e8bf3d7 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -9672,6 +9672,26 @@ PublicationObjSpec:
$$->pubobjtype = PUBLICATIONOBJ_TABLE_IN_CUR_SCHEMA;
$$->location = @5;
}
+ | SEQUENCE relation_expr
+ {
+ $$ = makeNode(PublicationObjSpec);
+ $$->pubobjtype = PUBLICATIONOBJ_SEQUENCE;
+ $$->pubtable = makeNode(PublicationTable);
+ $$->pubtable->relation = $2;
+ }
+ | ALL SEQUENCES IN_P SCHEMA ColId
+ {
+ $$ = makeNode(PublicationObjSpec);
+ $$->pubobjtype = PUBLICATIONOBJ_SEQUENCE_IN_SCHEMA;
+ $$->name = $5;
+ $$->location = @5;
+ }
+ | ALL SEQUENCES IN_P SCHEMA CURRENT_SCHEMA
+ {
+ $$ = makeNode(PublicationObjSpec);
+ $$->pubobjtype = PUBLICATIONOBJ_SEQUENCE_IN_CUR_SCHEMA;
+ $$->location = @5;
+ }
| ColId
{
$$ = makeNode(PublicationObjSpec);
@@ -10007,6 +10027,12 @@ UnlistenStmt:
}
;
+/*
+ * FIXME
+ *
+ * opt_publication_for_sequences and publication_for_sequences should be
+ * copies for sequences
+ */
/*****************************************************************************
*
@@ -10015,6 +10041,12 @@ UnlistenStmt:
* BEGIN / COMMIT / ROLLBACK
* (also older versions END / ABORT)
*
+ * ALTER PUBLICATION name ADD SEQUENCE sequence [, sequence2]
+ *
+ * ALTER PUBLICATION name DROP SEQUENCE sequence [, sequence2]
+ *
+ * ALTER PUBLICATION name SET SEQUENCE sequence [, sequence2]
+ *
*****************************************************************************/
TransactionStmt:
diff --git a/src/backend/replication/logical/proto.c b/src/backend/replication/logical/proto.c
index 9f5bf4b639..9f53b4c47b 100644
--- a/src/backend/replication/logical/proto.c
+++ b/src/backend/replication/logical/proto.c
@@ -647,6 +647,58 @@ logicalrep_write_message(StringInfo out, TransactionId xid, XLogRecPtr lsn,
pq_sendbytes(out, message, sz);
}
+/*
+ * Write SEQUENCE to stream
+ */
+void
+logicalrep_write_sequence(StringInfo out, Relation rel, TransactionId xid,
+ XLogRecPtr lsn, bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ uint8 flags = 0;
+ char *relname;
+
+ pq_sendbyte(out, LOGICAL_REP_MSG_SEQUENCE);
+
+ /* transaction ID (if not valid, we're not streaming) */
+ if (TransactionIdIsValid(xid))
+ pq_sendint32(out, xid);
+
+ pq_sendint8(out, flags);
+ pq_sendint64(out, lsn);
+
+ logicalrep_write_namespace(out, RelationGetNamespace(rel));
+ relname = RelationGetRelationName(rel);
+ pq_sendstring(out, relname);
+
+ pq_sendint8(out, transactional);
+ pq_sendint8(out, created);
+ pq_sendint64(out, last_value);
+ pq_sendint64(out, log_cnt);
+ pq_sendint8(out, is_called);
+}
+
+/*
+ * Read SEQUENCE from the stream.
+ */
+void
+logicalrep_read_sequence(StringInfo in, LogicalRepSequence *seqdata)
+{
+ /* XXX skipping flags and lsn */
+ pq_getmsgint(in, 1);
+ pq_getmsgint64(in);
+
+ /* Read relation name from stream */
+ seqdata->nspname = pstrdup(logicalrep_read_namespace(in));
+ seqdata->seqname = pstrdup(pq_getmsgstring(in));
+
+ seqdata->transactional = pq_getmsgint(in, 1);
+ seqdata->created = pq_getmsgint(in, 1);
+ seqdata->last_value = pq_getmsgint64(in);
+ seqdata->log_cnt = pq_getmsgint64(in);
+ seqdata->is_called = pq_getmsgint(in, 1);
+}
+
/*
* Write relation description to the output stream.
*/
@@ -1203,6 +1255,8 @@ logicalrep_message_type(LogicalRepMsgType action)
return "STREAM ABORT";
case LOGICAL_REP_MSG_STREAM_PREPARE:
return "STREAM PREPARE";
+ case LOGICAL_REP_MSG_SEQUENCE:
+ return "SEQUENCE";
}
elog(ERROR, "invalid logical replication message type \"%c\"", action);
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index f07983a43c..24369a1522 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -100,6 +100,7 @@
#include "catalog/pg_subscription_rel.h"
#include "catalog/pg_type.h"
#include "commands/copy.h"
+#include "commands/sequence.h"
#include "miscadmin.h"
#include "parser/parse_relation.h"
#include "pgstat.h"
@@ -357,6 +358,12 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
*
* If the synchronization position is reached (SYNCDONE), then the table can
* be marked as READY and is no longer tracked.
+ *
+ * XXX This needs to handle sequences too - after AlterSubscription_refresh
+ * starts caring about sequences, GetSubscriptionNotReadyRelations won't
+ * return just tables, and we'll have to sync them here. Not sure it's worth
+ * creating a new "sync" worker per sequence, maybe we should just sync them
+ * in the current process (it's pretty light-weight).
*/
static void
process_syncing_tables_for_apply(XLogRecPtr current_lsn)
@@ -871,6 +878,99 @@ copy_table(Relation rel)
logicalrep_rel_close(relmapentry, NoLock);
}
+
+
+/*
+ * Fetch the current state (last_value, log_cnt, is_called) of a remote sequence.
+ */
+static void
+fetch_sequence_data(char *nspname, char *relname,
+ int64 *last_value, int64 *log_cnt, bool *is_called)
+{
+ WalRcvExecResult *res;
+ StringInfoData cmd;
+ TupleTableSlot *slot;
+ Oid tableRow[3] = {INT8OID, INT8OID, BOOLOID};
+
+ initStringInfo(&cmd);
+ appendStringInfo(&cmd, "SELECT last_value, log_cnt, is_called\n"
+ " FROM %s", quote_qualified_identifier(nspname, relname));
+
+ res = walrcv_exec(LogRepWorkerWalRcvConn, cmd.data, 3, tableRow);
+ pfree(cmd.data);
+
+ if (res->status != WALRCV_OK_TUPLES)
+ ereport(ERROR,
+ (errmsg("could not receive list of replicated tables from the publisher: %s",
+ res->err)));
+
+ /* Process the sequence. */
+ slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+ while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+ {
+ bool isnull;
+
+ *last_value = DatumGetInt64(slot_getattr(slot, 1, &isnull));
+ Assert(!isnull);
+
+ *log_cnt = DatumGetInt64(slot_getattr(slot, 2, &isnull));
+ Assert(!isnull);
+
+ *is_called = DatumGetBool(slot_getattr(slot, 3, &isnull));
+ Assert(!isnull);
+
+ ExecClearTuple(slot);
+ }
+ ExecDropSingleTupleTableSlot(slot);
+
+ walrcv_clear_result(res);
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Caller is responsible for locking the local relation.
+ */
+static void
+copy_sequence(Relation rel)
+{
+ LogicalRepRelMapEntry *relmapentry;
+ LogicalRepRelation lrel;
+ StringInfoData cmd;
+ int64 last_value = 0,
+ log_cnt = 0;
+ bool is_called = false;
+
+ /* Get the publisher relation info. */
+ fetch_remote_table_info(get_namespace_name(RelationGetNamespace(rel)),
+ RelationGetRelationName(rel), &lrel);
+
+ /* Put the relation into relmap. */
+ logicalrep_relmap_update(&lrel);
+
+ /* Map the publisher relation to local one. */
+ relmapentry = logicalrep_rel_open(lrel.remoteid, NoLock);
+ Assert(rel == relmapentry->localrel);
+
+ /* Start copy on the publisher. */
+ initStringInfo(&cmd);
+
+ Assert(lrel.relkind == RELKIND_SEQUENCE);
+
+ fetch_sequence_data(lrel.nspname, lrel.relname, &last_value, &log_cnt, &is_called);
+
+ elog(WARNING, "sequence %s info last_value %ld log_cnt %ld is_called %d",
+ quote_qualified_identifier(lrel.nspname, lrel.relname),
+ last_value, log_cnt, is_called);
+
+ ResetSequence2(RelationGetRelid(rel), last_value, log_cnt, is_called);
+
+ logicalrep_rel_close(relmapentry, NoLock);
+}
+
+
+
+
/*
* Determine the tablesync slot name.
*
@@ -1106,10 +1206,20 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
originname)));
}
- /* Now do the initial data copy */
- PushActiveSnapshot(GetTransactionSnapshot());
- copy_table(rel);
- PopActiveSnapshot();
+ if (get_rel_relkind(RelationGetRelid(rel)) == RELKIND_SEQUENCE)
+ {
+ /* Now do the initial sequence copy */
+ PushActiveSnapshot(GetTransactionSnapshot());
+ copy_sequence(rel);
+ PopActiveSnapshot();
+ }
+ else
+ {
+ /* Now do the initial data copy */
+ PushActiveSnapshot(GetTransactionSnapshot());
+ copy_table(rel);
+ PopActiveSnapshot();
+ }
res = walrcv_exec(LogRepWorkerWalRcvConn, "COMMIT", 0, NULL);
if (res->status != WALRCV_OK_COMMAND)
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index ae1b391bda..55d48ce5be 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -144,6 +144,7 @@
#include "catalog/pg_tablespace.h"
#include "commands/tablecmds.h"
#include "commands/tablespace.h"
+#include "commands/sequence.h"
#include "commands/trigger.h"
#include "executor/executor.h"
#include "executor/execPartition.h"
@@ -1091,6 +1092,61 @@ apply_handle_origin(StringInfo s)
errmsg_internal("ORIGIN message sent out of order")));
}
+/*
+ * Handle SEQUENCE message.
+ */
+static void
+apply_handle_sequence(StringInfo s)
+{
+ LogicalRepSequence seq;
+ Oid relid;
+
+ if (handle_streamed_transaction(LOGICAL_REP_MSG_SEQUENCE, s))
+ return;
+
+ logicalrep_read_sequence(s, &seq);
+
+ /*
+ * Transactional sequence updates must arrive as part of a remote transaction,
+ * while non-transactional ones must not (and no transaction may be open).
+ */
+ Assert((!seq.transactional) || in_remote_transaction);
+ Assert(!(!seq.transactional && in_remote_transaction));
+ Assert(!(!seq.transactional && IsTransactionState()));
+
+ /*
+ * Make sure we're in a transaction (needed by ResetSequence2). For
+ * non-transactional updates there is no transaction open yet, so we
+ * start a new one here and commit it at the end.
+ */
+ if (!IsTransactionState())
+ {
+ StartTransactionCommand();
+ maybe_reread_subscription();
+ }
+
+ relid = RangeVarGetRelid(makeRangeVar(seq.nspname,
+ seq.seqname, -1),
+ RowExclusiveLock, false);
+
+ /* lock the sequence in AccessExclusiveLock, as expected by ResetSequence2 */
+ elog(WARNING, "locking sequence %d in exclusive mode", relid);
+ LockRelationOid(relid, AccessExclusiveLock);
+
+ elog(WARNING, "applying sequence %s.%s transactional %d created %d last_value %ld log_cnt %ld is_called %d",
+ seq.nspname, seq.seqname, seq.transactional, seq.created, seq.last_value, seq.log_cnt, seq.is_called);
+
+ /* apply the sequence change */
+ ResetSequence2(relid, seq.last_value, seq.log_cnt, seq.is_called);
+
+ /*
+ * Commit the per-stream transaction (we only do this when not in
+ * a remote transaction, i.e. for non-transactional sequence updates).
+ */
+ if (!in_remote_transaction)
+ CommitTransactionCommand();
+}
+
/*
* Handle STREAM START message.
*/
@@ -2374,6 +2430,10 @@ apply_dispatch(StringInfo s)
*/
break;
+ case LOGICAL_REP_MSG_SEQUENCE:
+ apply_handle_sequence(s);
+ return;
+
case LOGICAL_REP_MSG_STREAM_START:
apply_handle_stream_start(s);
break;
diff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c
index 6f6a203dea..fef568b5b2 100644
--- a/src/backend/replication/pgoutput/pgoutput.c
+++ b/src/backend/replication/pgoutput/pgoutput.c
@@ -49,6 +49,10 @@ static void pgoutput_message(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn, XLogRecPtr message_lsn,
bool transactional, const char *prefix,
Size sz, const char *message);
+static void pgoutput_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation rel, bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called);
static bool pgoutput_origin_filter(LogicalDecodingContext *ctx,
RepOriginId origin_id);
static void pgoutput_begin_prepare_txn(LogicalDecodingContext *ctx,
@@ -159,6 +163,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->change_cb = pgoutput_change;
cb->truncate_cb = pgoutput_truncate;
cb->message_cb = pgoutput_message;
+ cb->sequence_cb = pgoutput_sequence;
cb->commit_cb = pgoutput_commit_txn;
cb->begin_prepare_cb = pgoutput_begin_prepare_txn;
@@ -175,6 +180,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->stream_commit_cb = pgoutput_stream_commit;
cb->stream_change_cb = pgoutput_change;
cb->stream_message_cb = pgoutput_message;
+ cb->stream_sequence_cb = pgoutput_sequence;
cb->stream_truncate_cb = pgoutput_truncate;
/* transaction streaming - two-phase commit */
cb->stream_prepare_cb = pgoutput_stream_prepare_txn;
@@ -188,6 +194,7 @@ parse_output_parameters(List *options, PGOutputData *data)
bool publication_names_given = false;
bool binary_option_given = false;
bool messages_option_given = false;
+ bool sequences_option_given = false;
bool streaming_given = false;
bool two_phase_option_given = false;
@@ -195,6 +202,7 @@ parse_output_parameters(List *options, PGOutputData *data)
data->streaming = false;
data->messages = false;
data->two_phase = false;
+ data->sequences = true;
foreach(lc, options)
{
@@ -260,6 +268,16 @@ parse_output_parameters(List *options, PGOutputData *data)
data->messages = defGetBoolean(defel);
}
+ else if (strcmp(defel->defname, "sequences") == 0)
+ {
+ if (sequences_option_given)
+ ereport(ERROR,
+ (errcode(ERRCODE_SYNTAX_ERROR),
+ errmsg("conflicting or redundant options")));
+ sequences_option_given = true;
+
+ data->sequences = defGetBoolean(defel);
+ }
else if (strcmp(defel->defname, "streaming") == 0)
{
if (streaming_given)
@@ -856,6 +874,52 @@ pgoutput_message(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
OutputPluginWrite(ctx, true);
}
+static void
+pgoutput_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation rel, bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ PGOutputData *data = (PGOutputData *) ctx->output_plugin_private;
+ TransactionId xid = InvalidTransactionId;
+ RelationSyncEntry *relentry;
+
+ if (!data->sequences)
+ return;
+
+ if (!is_publishable_relation(rel))
+ return;
+
+ /*
+ * Remember the xid for the message in streaming mode. See
+ * pgoutput_change.
+ */
+ if (in_streaming)
+ xid = txn->xid;
+
+ relentry = get_rel_sync_entry(data, RelationGetRelid(rel));
+
+ /*
+ * First check the sequence filter.
+ *
+ * We handle just REORDER_BUFFER_CHANGE_SEQUENCE here.
+ */
+ if (!relentry->pubactions.pubsequence)
+ return;
+
+ OutputPluginPrepareWrite(ctx, true);
+ logicalrep_write_sequence(ctx->out,
+ rel,
+ xid,
+ sequence_lsn,
+ transactional,
+ created,
+ last_value,
+ log_cnt,
+ is_called);
+ OutputPluginWrite(ctx, true);
+}
+
/*
* Currently we always forward.
*/
@@ -1140,7 +1204,8 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
entry->streamed_txns = NIL;
entry->replicate_valid = false;
entry->pubactions.pubinsert = entry->pubactions.pubupdate =
- entry->pubactions.pubdelete = entry->pubactions.pubtruncate = false;
+ entry->pubactions.pubdelete = entry->pubactions.pubtruncate =
+ entry->pubactions.pubsequence = false;
entry->publish_as_relid = InvalidOid;
entry->map = NULL; /* will be set by maybe_send_schema() if
* needed */
@@ -1160,6 +1225,7 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
List *schemaPubids = GetSchemaPublications(schemaId);
ListCell *lc;
Oid publish_as_relid = relid;
+ bool is_sequence = (get_rel_relkind(relid) == RELKIND_SEQUENCE);
/* Reload publications if needed before use. */
if (!publications_valid)
@@ -1183,12 +1249,23 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
Publication *pub = lfirst(lc);
bool publish = false;
- if (pub->alltables)
+ if (pub->alltables && (!is_sequence))
{
publish = true;
if (pub->pubviaroot && am_partition)
publish_as_relid = llast_oid(get_partition_ancestors(relid));
}
+ else if (pub->allsequences && is_sequence)
+ {
+ publish = true;
+ }
+
+ /* if a sequence, just cross-check the list of publications */
+ if (!publish && is_sequence)
+ {
+ if (list_member_oid(pubids, pub->oid))
+ publish = true;
+ }
if (!publish)
{
@@ -1243,10 +1320,12 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
entry->pubactions.pubupdate |= pub->pubactions.pubupdate;
entry->pubactions.pubdelete |= pub->pubactions.pubdelete;
entry->pubactions.pubtruncate |= pub->pubactions.pubtruncate;
+ entry->pubactions.pubsequence |= pub->pubactions.pubsequence;
}
if (entry->pubactions.pubinsert && entry->pubactions.pubupdate &&
- entry->pubactions.pubdelete && entry->pubactions.pubtruncate)
+ entry->pubactions.pubdelete && entry->pubactions.pubtruncate &&
+ entry->pubactions.pubsequence)
break;
}
@@ -1391,6 +1470,7 @@ rel_sync_cache_publication_cb(Datum arg, int cacheid, uint32 hashvalue)
entry->pubactions.pubupdate = false;
entry->pubactions.pubdelete = false;
entry->pubactions.pubtruncate = false;
+ entry->pubactions.pubsequence = false;
}
}
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index 9fa9e671a1..1c3286a38c 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -5605,6 +5605,7 @@ GetRelationPublicationActions(Relation relation)
pubactions->pubupdate |= pubform->pubupdate;
pubactions->pubdelete |= pubform->pubdelete;
pubactions->pubtruncate |= pubform->pubtruncate;
+ pubactions->pubsequence |= pubform->pubsequence;
ReleaseSysCache(tup);
@@ -5613,7 +5614,8 @@ GetRelationPublicationActions(Relation relation)
* other publications.
*/
if (pubactions->pubinsert && pubactions->pubupdate &&
- pubactions->pubdelete && pubactions->pubtruncate)
+ pubactions->pubdelete && pubactions->pubtruncate &&
+ pubactions->pubsequence)
break;
}
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 4f724e4428..59a3502ae2 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1647,13 +1647,13 @@ psql_completion(const char *text, int start, int end)
COMPLETE_WITH("ADD", "DROP", "OWNER TO", "RENAME TO", "SET");
/* ALTER PUBLICATION <name> ADD */
else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD"))
- COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE");
+ COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE|SEQUENCE");
/* ALTER PUBLICATION <name> DROP */
else if (Matches("ALTER", "PUBLICATION", MatchAny, "DROP"))
- COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE");
+ COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE|SEQUENCE");
/* ALTER PUBLICATION <name> SET */
else if (Matches("ALTER", "PUBLICATION", MatchAny, "SET"))
- COMPLETE_WITH("(", "ALL TABLES IN SCHEMA", "TABLE");
+ COMPLETE_WITH("(", "ALL TABLES IN SCHEMA", "TABLE|SEQUENCE");
else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD|DROP|SET", "ALL", "TABLES", "IN", "SCHEMA"))
COMPLETE_WITH_QUERY(Query_for_list_of_schemas
" AND nspname != 'pg_catalog' "
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 6412f369f1..be5e3b4d40 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -11513,6 +11513,11 @@
provolatile => 's', prorettype => 'oid', proargtypes => 'text',
proallargtypes => '{text,oid}', proargmodes => '{i,o}',
proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_tables' },
+{ oid => '8000', descr => 'get OIDs of sequences in a publication',
+ proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+ provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+ proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+ proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
{ oid => '6121',
descr => 'returns whether a relation can be part of a publication',
proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 1ae439e6f3..0a05fb885c 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
*/
bool puballtables;
+ /*
+ * indicates that this is special publication which should encompass all
+ * sequences in the database (except for the unlogged and temp ones)
+ */
+ bool puballsequences;
+
/* true if inserts are published */
bool pubinsert;
@@ -52,6 +58,9 @@ CATALOG(pg_publication,6104,PublicationRelationId)
/* true if truncates are published */
bool pubtruncate;
+ /* true if sequences are published */
+ bool pubsequence;
+
/* true if partition changes are published using root schema */
bool pubviaroot;
} FormData_pg_publication;
@@ -72,6 +81,7 @@ typedef struct PublicationActions
bool pubupdate;
bool pubdelete;
bool pubtruncate;
+ bool pubsequence;
} PublicationActions;
typedef struct Publication
@@ -79,6 +89,7 @@ typedef struct Publication
Oid oid;
char *name;
bool alltables;
+ bool allsequences;
bool pubviaroot;
PublicationActions pubactions;
} Publication;
@@ -121,6 +132,9 @@ extern List *GetPubPartitionOptionRelations(List *result,
PublicationPartOpt pub_partopt,
Oid relid);
+extern List *GetAllSequencesPublicationRelations(void);
+extern List *GetPublicationSequenceRelations(Oid pubid);
+
extern bool is_publishable_relation(Relation rel);
extern ObjectAddress publication_add_relation(Oid pubid, PublicationRelInfo *targetrel,
bool if_not_exists);
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index ddb36b5b17..00038fcea3 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -60,6 +60,7 @@ extern ObjectAddress DefineSequence(ParseState *pstate, CreateSeqStmt *stmt);
extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
extern void DeleteSequenceTuple(Oid relid);
extern void ResetSequence(Oid seq_relid);
+extern void ResetSequence2(Oid seq_relid, int64 last_value, int64 log_cnt, bool is_called);
extern void ResetSequenceCaches(void);
extern void AtEOXact_Sequences(bool isCommit);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 067138e6b5..44e596fd12 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -3652,6 +3652,10 @@ typedef enum PublicationObjSpecType
PUBLICATIONOBJ_TABLE_IN_SCHEMA, /* Tables in schema type */
PUBLICATIONOBJ_TABLE_IN_CUR_SCHEMA, /* Get the first element from
* search_path */
+ PUBLICATIONOBJ_SEQUENCE, /* Sequence type */
+ PUBLICATIONOBJ_SEQUENCE_IN_SCHEMA, /* Sequences in schema type */
+ PUBLICATIONOBJ_SEQUENCE_IN_CUR_SCHEMA, /* Get the first element from
+ * search_path */
PUBLICATIONOBJ_CONTINUATION /* Continuation of previous type */
} PublicationObjSpecType;
@@ -3671,6 +3675,7 @@ typedef struct CreatePublicationStmt
List *options; /* List of DefElem nodes */
List *pubobjects; /* Optional list of publication objects */
bool for_all_tables; /* Special publication for all tables in db */
+ bool for_all_sequences; /* Special publication for all sequences in db */
} CreatePublicationStmt;
typedef struct AlterPublicationStmt
@@ -3687,6 +3692,7 @@ typedef struct AlterPublicationStmt
*/
List *pubobjects; /* Optional list of publication objects */
bool for_all_tables; /* Special publication for all tables in db */
+ bool for_all_sequences; /* Special publication for all sequences in db */
DefElemAction action; /* What action to perform with the
* tables/schemas */
} AlterPublicationStmt;
diff --git a/src/include/replication/logicalproto.h b/src/include/replication/logicalproto.h
index 83741dcf42..d468c3ccfb 100644
--- a/src/include/replication/logicalproto.h
+++ b/src/include/replication/logicalproto.h
@@ -60,6 +60,7 @@ typedef enum LogicalRepMsgType
LOGICAL_REP_MSG_RELATION = 'R',
LOGICAL_REP_MSG_TYPE = 'Y',
LOGICAL_REP_MSG_MESSAGE = 'M',
+ LOGICAL_REP_MSG_SEQUENCE = 'X', /* FIXME change */
LOGICAL_REP_MSG_BEGIN_PREPARE = 'b',
LOGICAL_REP_MSG_PREPARE = 'P',
LOGICAL_REP_MSG_COMMIT_PREPARED = 'K',
@@ -117,6 +118,19 @@ typedef struct LogicalRepTyp
char *typname; /* name of the remote type */
} LogicalRepTyp;
+/* Sequence info */
+typedef struct LogicalRepSequence
+{
+ Oid remoteid; /* unique id of the remote sequence */
+ char *nspname; /* schema name of remote sequence */
+ char *seqname; /* name of the remote sequence */
+ bool transactional;
+ bool created;
+ int64 last_value;
+ int64 log_cnt; /* XXX probably not needed? */
+ bool is_called; /* XXX probably not needed? */
+} LogicalRepSequence;
+
/* Transaction info */
typedef struct LogicalRepBeginData
{
@@ -227,6 +241,12 @@ extern List *logicalrep_read_truncate(StringInfo in,
bool *cascade, bool *restart_seqs);
extern void logicalrep_write_message(StringInfo out, TransactionId xid, XLogRecPtr lsn,
bool transactional, const char *prefix, Size sz, const char *message);
+extern void logicalrep_write_sequence(StringInfo out, Relation rel,
+ TransactionId xid, XLogRecPtr lsn,
+ bool transactional, bool created,
+ int64 last_value, int64 log_cnt,
+ bool is_called);
+extern void logicalrep_read_sequence(StringInfo in, LogicalRepSequence *seqdata);
extern void logicalrep_write_rel(StringInfo out, TransactionId xid,
Relation rel);
extern LogicalRepRelation *logicalrep_read_rel(StringInfo in);
diff --git a/src/include/replication/pgoutput.h b/src/include/replication/pgoutput.h
index 0dc460fb70..97eb2a7ef1 100644
--- a/src/include/replication/pgoutput.h
+++ b/src/include/replication/pgoutput.h
@@ -28,6 +28,7 @@ typedef struct PGOutputData
bool streaming;
bool messages;
bool two_phase;
+ bool sequences;
} PGOutputData;
#endif /* PGOUTPUT_H */
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 2fa00a3c29..b32a1e6729 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1451,6 +1451,14 @@ pg_prepared_xacts| SELECT p.transaction,
FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+ n.nspname AS schemaname,
+ c.relname AS sequencename
+ FROM pg_publication p,
+ LATERAL pg_get_publication_sequences((p.pubname)::text) gpt(relid),
+ (pg_class c
+ JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+ WHERE (c.oid = gpt.relid);
pg_publication_tables| SELECT p.pubname,
n.nspname AS schemaname,
c.relname AS tablename
diff --git a/src/test/subscription/t/023_sequences.pl b/src/test/subscription/t/023_sequences.pl
new file mode 100644
index 0000000000..c6c438cdbf
--- /dev/null
+++ b/src/test/subscription/t/023_sequences.pl
@@ -0,0 +1,196 @@
+
+# Copyright (c) 2021, PostgreSQL Global Development Group
+
+# This tests that sequences are replicated correctly by logical replication
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More tests => 6;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->start;
+
+# Create subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Create some preexisting content on publisher
+my $ddl = qq(
+ CREATE SEQUENCE s;
+);
+
+# Setup structure on the publisher
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Create the same structure on the subscriber, plus an extra sequence that
+# we'll create on the publisher later
+$ddl = qq(
+ CREATE SEQUENCE s;
+ CREATE SEQUENCE s2;
+);
+
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Setup logical replication
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+ "CREATE PUBLICATION seq_pub");
+
+$node_publisher->safe_psql('postgres',
+ "ALTER PUBLICATION seq_pub ADD SEQUENCE s");
+
+$node_subscriber->safe_psql('postgres',
+ "CREATE SUBSCRIPTION seq_sub CONNECTION '$publisher_connstr' PUBLICATION seq_pub WITH (slot_name = seq_sub_slot)"
+);
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Wait for initial sync to finish as well
+my $synced_query =
+ "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('s', 'r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+ or die "Timed out while waiting for subscriber to synchronize data";
+
+# Insert initial test data
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ -- generate a number of values using the sequence
+ SELECT nextval('s') FROM generate_series(1,100);
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+my $result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s;
+));
+
+is( $result, '132|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+# advance the sequence in a rolled-back transaction - should be replicated
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ SELECT nextval('s') FROM generate_series(1,100);
+ ROLLBACK;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s;
+));
+
+is( $result, '231|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+# create a new sequence and roll it back - should not be replicated, due to
+# the transactional behavior
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ CREATE SEQUENCE s2;
+ ALTER PUBLICATION seq_pub ADD SEQUENCE s2;
+ SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '1|0|f',
+ 'check replicated sequence values on subscriber');
+
+
+# create a new sequence, advance it in a rolled-back transaction, but commit
+# the create - the advance should be replicated nevertheless
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ CREATE SEQUENCE s2;
+ ALTER PUBLICATION seq_pub ADD SEQUENCE s2;
+ SAVEPOINT sp1;
+ SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK TO sp1;
+ COMMIT;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Wait for sync of the second sequence we just added to finish
+$synced_query =
+ "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('s', 'r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+ or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '132|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+# advance the new sequence in a transaction, and roll it back - in this case
+# it should be replicated as the behavior is non-transactional
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '231|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+# advance the sequence in a subtransaction - the subtransaction gets rolled
+# back, but commit the main one - the changes should still be replicated
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ SAVEPOINT s1;
+ SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK TO s1;
+ COMMIT;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '330|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+$node_subscriber->stop('fast');
+$node_publisher->stop('fast');
--
2.31.1
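
To make the intended usage a bit clearer, this is roughly what the TAP test
above does, stripped down to plain SQL. The connection string is just a
placeholder, and I'm only showing the ADD SEQUENCE syntax the test actually
exercises:

-- on the publisher
CREATE SEQUENCE s;
CREATE PUBLICATION seq_pub;
ALTER PUBLICATION seq_pub ADD SEQUENCE s;

-- on the subscriber (the sequence has to exist there already)
CREATE SEQUENCE s;
CREATE SUBSCRIPTION seq_sub
  CONNECTION 'host=... dbname=postgres' PUBLICATION seq_pub;

-- on the publisher, advance the sequence
SELECT nextval('s') FROM generate_series(1,100);

-- on the subscriber, once the change is applied the state should match
-- (last_value / log_cnt / is_called)
SELECT * FROM s;
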
Attachment: 0003-Add-support-for-decoding-sequences-to-test_-20211118.patch
From 0666c3a34955fc2edb811b36c081d271855d569e Mon Sep 17 00:00:00 2001
From: Tomas Vondra <tomas.vondra@postgresql.org>
Date: Fri, 24 Sep 2021 00:42:04 +0200
Subject: [PATCH 3/4] Add support for decoding sequences to test_decoding
---
contrib/test_decoding/Makefile | 3 +-
contrib/test_decoding/expected/sequence.out | 327 ++++++++++++++++++++
contrib/test_decoding/sql/sequence.sql | 119 +++++++
contrib/test_decoding/test_decoding.c | 69 +++++
4 files changed, 517 insertions(+), 1 deletion(-)
create mode 100644 contrib/test_decoding/expected/sequence.out
create mode 100644 contrib/test_decoding/sql/sequence.sql
diff --git a/contrib/test_decoding/Makefile b/contrib/test_decoding/Makefile
index 9a31e0b879..56ddc3abae 100644
--- a/contrib/test_decoding/Makefile
+++ b/contrib/test_decoding/Makefile
@@ -5,7 +5,8 @@ PGFILEDESC = "test_decoding - example of a logical decoding output plugin"
REGRESS = ddl xact rewrite toast permissions decoding_in_xact \
decoding_into_rel binary prepared replorigin time messages \
- spill slot truncate stream stats twophase twophase_stream
+ spill slot truncate stream stats twophase twophase_stream \
+ sequence
ISOLATION = mxact delayed_startup ondisk_startup concurrent_ddl_dml \
oldest_xmin snapshot_transfer subxact_without_top concurrent_stream \
twophase_snapshot
diff --git a/contrib/test_decoding/expected/sequence.out b/contrib/test_decoding/expected/sequence.out
new file mode 100644
index 0000000000..beecc3e0c8
--- /dev/null
+++ b/contrib/test_decoding/expected/sequence.out
@@ -0,0 +1,327 @@
+-- predictability
+SET synchronous_commit = on;
+SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'test_decoding');
+ ?column?
+----------
+ init
+(1 row)
+
+CREATE SEQUENCE test_sequence;
+-- test the sequence changes by several nextval() calls
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 1
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 2
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 4
+(1 row)
+
+-- test the sequence changes by several ALTER commands
+ALTER SEQUENCE test_sequence INCREMENT BY 10;
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 14
+(1 row)
+
+ALTER SEQUENCE test_sequence START WITH 3000;
+ALTER SEQUENCE test_sequence MAXVALUE 10000;
+ALTER SEQUENCE test_sequence RESTART WITH 4000;
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 4000
+(1 row)
+
+-- test the sequence changes by several setval() calls
+SELECT setval('test_sequence', 3500);
+ setval
+--------
+ 3500
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3510
+(1 row)
+
+SELECT setval('test_sequence', 3500, true);
+ setval
+--------
+ 3500
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3510
+(1 row)
+
+SELECT setval('test_sequence', 3500, false);
+ setval
+--------
+ 3500
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3500
+(1 row)
+
+-- show results and drop sequence
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+ data
+------------------------------------------------------------------------------------------------
+ BEGIN
+ sequence public.test_sequence: transactional:1 created:1 last_value:1 log_cnt:0 is_called:0
+ COMMIT
+ sequence public.test_sequence: transactional:0 created:0 last_value:33 log_cnt:0 is_called:1
+ BEGIN
+ sequence public.test_sequence: transactional:1 created:1 last_value:4 log_cnt:0 is_called:1
+ COMMIT
+ sequence public.test_sequence: transactional:0 created:0 last_value:334 log_cnt:0 is_called:1
+ BEGIN
+ sequence public.test_sequence: transactional:1 created:1 last_value:14 log_cnt:32 is_called:1
+ COMMIT
+ BEGIN
+ sequence public.test_sequence: transactional:1 created:1 last_value:14 log_cnt:0 is_called:1
+ COMMIT
+ BEGIN
+ sequence public.test_sequence: transactional:1 created:1 last_value:4000 log_cnt:0 is_called:0
+ COMMIT
+ sequence public.test_sequence: transactional:0 created:0 last_value:4320 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:0 created:0 last_value:3500 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:0 created:0 last_value:3830 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:0 created:0 last_value:3500 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:0 created:0 last_value:3830 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:0 created:0 last_value:3500 log_cnt:0 is_called:0
+ sequence public.test_sequence: transactional:0 created:0 last_value:3820 log_cnt:0 is_called:1
+(24 rows)
+
+DROP SEQUENCE test_sequence;
+-- rollback on sequence creation and update
+BEGIN;
+CREATE SEQUENCE test_sequence;
+CREATE TABLE test_table (a INT);
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 1
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 2
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3
+(1 row)
+
+SELECT setval('test_sequence', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3001
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3002
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3003
+(1 row)
+
+ALTER SEQUENCE test_sequence RESTART WITH 6000;
+INSERT INTO test_table VALUES( (SELECT nextval('test_sequence')) );
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 6001
+(1 row)
+
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '0');
+ data
+------
+(0 rows)
+
+-- rollback on table creation with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '0');
+ data
+------
+(0 rows)
+
+-- rollback on table with serial column
+CREATE TABLE test_table (a SERIAL, b INT);
+BEGIN;
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+-- check table and sequence values after rollback
+SELECT * from test_table_a_seq;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+ 3003 | 30 | t
+(1 row)
+
+SELECT nextval('test_table_a_seq');
+ nextval
+---------
+ 3004
+(1 row)
+
+DROP TABLE test_table;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+ data
+---------------------------------------------------------------------------------------------------
+ BEGIN
+ sequence public.test_table_a_seq: transactional:1 created:1 last_value:1 log_cnt:0 is_called:0
+ COMMIT
+ sequence public.test_table_a_seq: transactional:0 created:0 last_value:33 log_cnt:0 is_called:1
+ sequence public.test_table_a_seq: transactional:0 created:0 last_value:3000 log_cnt:0 is_called:1
+ sequence public.test_table_a_seq: transactional:0 created:0 last_value:3033 log_cnt:0 is_called:1
+ BEGIN
+ COMMIT
+(8 rows)
+
+-- savepoint test on table with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+SAVEPOINT a;
+INSERT INTO test_table (b) VALUES (300);
+ROLLBACK TO SAVEPOINT a;
+DROP TABLE test_table;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+ data
+-------------------------------------------------------------------------------------------------
+ BEGIN
+ sequence public.test_table_a_seq: transactional:1 created:1 last_value:1 log_cnt:0 is_called:0
+ sequence public.test_table_a_seq: transactional:1 created:0 last_value:33 log_cnt:0 is_called:1
+ table public.test_table: INSERT: a[integer]:1 b[integer]:100
+ table public.test_table: INSERT: a[integer]:2 b[integer]:200
+ COMMIT
+(6 rows)
+
+-- savepoint test on a sequence
+BEGIN;
+CREATE SEQUENCE test_sequence;
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 1
+(1 row)
+
+SELECT setval('test_sequence', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3001
+(1 row)
+
+SAVEPOINT a;
+ALTER SEQUENCE test_sequence START WITH 7000;
+SELECT setval('test_sequence', 5000);
+ setval
+--------
+ 5000
+(1 row)
+
+ROLLBACK TO SAVEPOINT a;
+SELECT * FROM test_sequence;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+ 3001 | 32 | t
+(1 row)
+
+DROP SEQUENCE test_sequence;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+ data
+------------------------------------------------------------------------------------------------
+ BEGIN
+ sequence public.test_sequence: transactional:1 created:1 last_value:1 log_cnt:0 is_called:0
+ sequence public.test_sequence: transactional:1 created:0 last_value:33 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:1 created:0 last_value:3000 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:1 created:0 last_value:3033 log_cnt:0 is_called:1
+ COMMIT
+(6 rows)
+
+SELECT pg_drop_replication_slot('regression_slot');
+ pg_drop_replication_slot
+--------------------------
+
+(1 row)
+
diff --git a/contrib/test_decoding/sql/sequence.sql b/contrib/test_decoding/sql/sequence.sql
new file mode 100644
index 0000000000..d8a34738f3
--- /dev/null
+++ b/contrib/test_decoding/sql/sequence.sql
@@ -0,0 +1,119 @@
+-- predictability
+SET synchronous_commit = on;
+SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'test_decoding');
+
+CREATE SEQUENCE test_sequence;
+
+-- test the sequence changes by several nextval() calls
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+
+-- test the sequence changes by several ALTER commands
+ALTER SEQUENCE test_sequence INCREMENT BY 10;
+SELECT nextval('test_sequence');
+
+ALTER SEQUENCE test_sequence START WITH 3000;
+ALTER SEQUENCE test_sequence MAXVALUE 10000;
+ALTER SEQUENCE test_sequence RESTART WITH 4000;
+SELECT nextval('test_sequence');
+
+-- test the sequence changes by several setval() calls
+SELECT setval('test_sequence', 3500);
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3500, true);
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3500, false);
+SELECT nextval('test_sequence');
+
+-- show results and drop sequence
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+DROP SEQUENCE test_sequence;
+
+-- rollback on sequence creation and update
+BEGIN;
+CREATE SEQUENCE test_sequence;
+CREATE TABLE test_table (a INT);
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3000);
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+ALTER SEQUENCE test_sequence RESTART WITH 6000;
+INSERT INTO test_table VALUES( (SELECT nextval('test_sequence')) );
+SELECT nextval('test_sequence');
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '0');
+
+-- rollback on table creation with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '0');
+
+-- rollback on table with serial column
+CREATE TABLE test_table (a SERIAL, b INT);
+
+BEGIN;
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+
+-- check table and sequence values after rollback
+SELECT * from test_table_a_seq;
+SELECT nextval('test_table_a_seq');
+
+DROP TABLE test_table;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+
+-- savepoint test on table with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+SAVEPOINT a;
+INSERT INTO test_table (b) VALUES (300);
+ROLLBACK TO SAVEPOINT a;
+DROP TABLE test_table;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+
+-- savepoint test on a sequence
+BEGIN;
+CREATE SEQUENCE test_sequence;
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3000);
+SELECT nextval('test_sequence');
+SAVEPOINT a;
+ALTER SEQUENCE test_sequence START WITH 7000;
+SELECT setval('test_sequence', 5000);
+ROLLBACK TO SAVEPOINT a;
+SELECT * FROM test_sequence;
+DROP SEQUENCE test_sequence;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+
+SELECT pg_drop_replication_slot('regression_slot');
diff --git a/contrib/test_decoding/test_decoding.c b/contrib/test_decoding/test_decoding.c
index e5cd84e85e..45765d299e 100644
--- a/contrib/test_decoding/test_decoding.c
+++ b/contrib/test_decoding/test_decoding.c
@@ -35,6 +35,7 @@ typedef struct
bool include_timestamp;
bool skip_empty_xacts;
bool only_local;
+ bool skip_sequences;
} TestDecodingData;
/*
@@ -76,6 +77,10 @@ static void pg_decode_message(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn, XLogRecPtr message_lsn,
bool transactional, const char *prefix,
Size sz, const char *message);
+static void pg_decode_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation rel, bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called);
static bool pg_decode_filter_prepare(LogicalDecodingContext *ctx,
TransactionId xid,
const char *gid);
@@ -116,6 +121,10 @@ static void pg_decode_stream_message(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn, XLogRecPtr message_lsn,
bool transactional, const char *prefix,
Size sz, const char *message);
+static void pg_decode_stream_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation rel, bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called);
static void pg_decode_stream_truncate(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn,
int nrelations, Relation relations[],
@@ -141,6 +150,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->filter_by_origin_cb = pg_decode_filter;
cb->shutdown_cb = pg_decode_shutdown;
cb->message_cb = pg_decode_message;
+ cb->sequence_cb = pg_decode_sequence;
cb->filter_prepare_cb = pg_decode_filter_prepare;
cb->begin_prepare_cb = pg_decode_begin_prepare_txn;
cb->prepare_cb = pg_decode_prepare_txn;
@@ -153,6 +163,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->stream_commit_cb = pg_decode_stream_commit;
cb->stream_change_cb = pg_decode_stream_change;
cb->stream_message_cb = pg_decode_stream_message;
+ cb->stream_sequence_cb = pg_decode_stream_sequence;
cb->stream_truncate_cb = pg_decode_stream_truncate;
}
@@ -175,6 +186,9 @@ pg_decode_startup(LogicalDecodingContext *ctx, OutputPluginOptions *opt,
data->skip_empty_xacts = false;
data->only_local = false;
+ /* skip sequences by default for backwards compatibility */
+ data->skip_sequences = true;
+
ctx->output_plugin_private = data;
opt->output_type = OUTPUT_PLUGIN_TEXTUAL_OUTPUT;
@@ -265,6 +279,17 @@ pg_decode_startup(LogicalDecodingContext *ctx, OutputPluginOptions *opt,
errmsg("could not parse value \"%s\" for parameter \"%s\"",
strVal(elem->arg), elem->defname)));
}
+ else if (strcmp(elem->defname, "skip-sequences") == 0)
+ {
+
+ if (elem->arg == NULL)
+ continue; /* true by default */
+ else if (!parse_bool(strVal(elem->arg), &data->skip_sequences))
+ ereport(ERROR,
+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ errmsg("could not parse value \"%s\" for parameter \"%s\"",
+ strVal(elem->arg), elem->defname)));
+ }
else
{
ereport(ERROR,
@@ -744,6 +769,28 @@ pg_decode_message(LogicalDecodingContext *ctx,
OutputPluginWrite(ctx, true);
}
+static void
+pg_decode_sequence(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ TestDecodingData *data = ctx->output_plugin_private;
+
+ /* return if asked to skip sequences */
+ if (data->skip_sequences)
+ return;
+
+ OutputPluginPrepareWrite(ctx, true);
+ appendStringInfoString(ctx->out, "sequence ");
+ appendStringInfoString(ctx->out,
+ quote_qualified_identifier(get_namespace_name(get_rel_namespace(RelationGetRelid(rel))),
+ RelationGetRelationName(rel)));
+ appendStringInfo(ctx->out, ": transactional:%d created:%d last_value:%zu log_cnt:%zu is_called:%d",
+ transactional, created, last_value, log_cnt, is_called);
+ OutputPluginWrite(ctx, true);
+}
+
static void
pg_decode_stream_start(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn)
@@ -943,6 +990,28 @@ pg_decode_stream_message(LogicalDecodingContext *ctx,
OutputPluginWrite(ctx, true);
}
+static void
+pg_decode_stream_sequence(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ TestDecodingData *data = ctx->output_plugin_private;
+
+ /* return if asked to skip sequences */
+ if (data->skip_sequences)
+ return;
+
+ OutputPluginPrepareWrite(ctx, true);
+ appendStringInfoString(ctx->out, "streaming sequence ");
+ appendStringInfoString(ctx->out,
+ quote_qualified_identifier(get_namespace_name(get_rel_namespace(RelationGetRelid(rel))),
+ RelationGetRelationName(rel)));
+ appendStringInfo(ctx->out, ": transactional:%d created:%d last_value:%zu log_cnt:%zu is_called:%d",
+ transactional, created, last_value, log_cnt, is_called);
+ OutputPluginWrite(ctx, true);
+}
+
/*
* In streaming mode, we don't display the detailed information of Truncate.
* See pg_decode_stream_change.
--
2.31.1
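
For the test_decoding part, the gist is that sequence decoding is disabled by
default (for backwards compatibility), so the new 'skip-sequences' option has
to be set to '0' explicitly. Something like this (the slot name is arbitrary):

SELECT 'init' FROM pg_create_logical_replication_slot('seq_slot', 'test_decoding');

CREATE SEQUENCE s;
SELECT nextval('s');

-- sequence increments only show up with skip-sequences set to 0
SELECT data FROM pg_logical_slot_get_changes('seq_slot', NULL, NULL,
  'skip-sequences', '0');

SELECT pg_drop_replication_slot('seq_slot');
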
Attachment: 0002-rework-20211118.patch
From 43f63100b0e792a3f6e96c9c4f2218b27a731f52 Mon Sep 17 00:00:00 2001
From: Tomas Vondra <tomas.vondra@postgresql.org>
Date: Thu, 18 Nov 2021 19:48:30 +0100
Subject: [PATCH 2/4] rework
---
src/backend/access/rmgrdesc/logicalmsgdesc.c | 13 ++
src/backend/access/transam/xact.c | 9 +
src/backend/commands/sequence.c | 180 +++++++++------
src/backend/replication/logical/decode.c | 104 +++------
src/backend/replication/logical/message.c | 3 +-
.../replication/logical/reorderbuffer.c | 213 ++++--------------
src/include/commands/sequence.h | 2 +
src/include/replication/message.h | 15 ++
src/include/replication/reorderbuffer.h | 10 +-
9 files changed, 232 insertions(+), 317 deletions(-)
diff --git a/src/backend/access/rmgrdesc/logicalmsgdesc.c b/src/backend/access/rmgrdesc/logicalmsgdesc.c
index d64ce2e7ef..c84628cd54 100644
--- a/src/backend/access/rmgrdesc/logicalmsgdesc.c
+++ b/src/backend/access/rmgrdesc/logicalmsgdesc.c
@@ -40,6 +40,16 @@ logicalmsg_desc(StringInfo buf, XLogReaderState *record)
sep = " ";
}
}
+ else if (info == XLOG_LOGICAL_SEQUENCE)
+ {
+ xl_logical_sequence *xlrec = (xl_logical_sequence *) rec;
+
+ appendStringInfo(buf, "rel %u/%u/%u last: %lu log_cnt: %lu is_called: %d",
+ xlrec->node.spcNode,
+ xlrec->node.dbNode,
+ xlrec->node.relNode,
+ xlrec->last, xlrec->log_cnt, xlrec->is_called);
+ }
}
const char *
@@ -48,5 +58,8 @@ logicalmsg_identify(uint8 info)
if ((info & ~XLR_INFO_MASK) == XLOG_LOGICAL_MESSAGE)
return "MESSAGE";
+ if ((info & ~XLR_INFO_MASK) == XLOG_LOGICAL_SEQUENCE)
+ return "SEQUENCE";
+
return NULL;
}
diff --git a/src/backend/access/transam/xact.c b/src/backend/access/transam/xact.c
index 8e35c432f5..eac8180250 100644
--- a/src/backend/access/transam/xact.c
+++ b/src/backend/access/transam/xact.c
@@ -36,6 +36,7 @@
#include "catalog/storage.h"
#include "commands/async.h"
#include "commands/tablecmds.h"
+#include "commands/sequence.h"
#include "commands/trigger.h"
#include "executor/spi.h"
#include "libpq/be-fsstubs.h"
@@ -2220,6 +2221,9 @@ CommitTransaction(void)
/* Prevent cancel/die interrupt while cleaning up */
HOLD_INTERRUPTS();
+ /* Write state of sequences to WAL */
+ AtEOXact_Sequences(true);
+
/* Commit updates to the relation map --- do this as late as possible */
AtEOXact_RelationMap(true, is_parallel_worker);
@@ -2378,6 +2382,8 @@ CommitTransaction(void)
* PrepareTransaction
*
* NB: if you change this routine, better look at CommitTransaction too!
+ *
+ * XXX Does this need to do something about logging of sequences?
*/
static void
PrepareTransaction(void)
@@ -2776,6 +2782,9 @@ AbortTransaction(void)
AtEOXact_RelationMap(false, is_parallel_worker);
AtAbort_Twophase();
+ /* XXX not sure we need to do anything about sequences at abort */
+ AtEOXact_Sequences(false);
+
/*
* Advertise the fact that we aborted in pg_xact (assuming that we got as
* far as assigning an XID to advertise). But if we're inside a parallel
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index a98fcc2e97..92d1966f34 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -37,6 +37,7 @@
#include "miscadmin.h"
#include "nodes/makefuncs.h"
#include "parser/parse_type.h"
+#include "replication/message.h"
#include "storage/lmgr.h"
#include "storage/proc.h"
#include "storage/smgr.h"
@@ -75,6 +76,7 @@ typedef struct SeqTableData
{
Oid relid; /* pg_class OID of this sequence (hash key) */
Oid filenode; /* last seen relfilenode of this sequence */
+ Oid tablespace; /* last seen tablespace of this sequence */
LocalTransactionId lxid; /* xact in which we last did a seq op */
bool last_valid; /* do we have a valid "last" value? */
int64 last; /* value last returned by nextval */
@@ -82,6 +84,10 @@ typedef struct SeqTableData
/* if last != cached, we have not used up all the cached values */
int64 increment; /* copy of sequence's increment field */
/* note that increment is zero until we first do nextval_internal() */
+ bool need_log; /* should be written to WAL at commit? */
+ bool touched;
+ int64 log_cnt;
+ bool is_called;
} SeqTableData;
typedef SeqTableData *SeqTable;
@@ -94,7 +100,7 @@ static HTAB *seqhashtab = NULL; /* hash table for SeqTable items */
*/
static SeqTableData *last_used_seq = NULL;
-static void fill_seq_with_data(Relation rel, HeapTuple tuple, bool create);
+static void fill_seq_with_data(Relation rel, HeapTuple tuple);
static Relation lock_and_open_sequence(SeqTable seq);
static void create_seq_hashtable(void);
static void init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel);
@@ -222,7 +228,7 @@ DefineSequence(ParseState *pstate, CreateSeqStmt *seq)
/* now initialize the sequence's data */
tuple = heap_form_tuple(tupDesc, value, null);
- fill_seq_with_data(rel, tuple, true);
+ fill_seq_with_data(rel, tuple);
/* process OWNED BY if given */
if (owned_by)
@@ -327,12 +333,17 @@ ResetSequence(Oid seq_relid)
/*
* Insert the modified tuple into the new storage file.
*/
- fill_seq_with_data(seq_rel, tuple, true);
+ fill_seq_with_data(seq_rel, tuple);
/* Clear local cache so that we don't think we have cached numbers */
/* Note that we do not change the currval() state */
elm->cached = elm->last;
+ /* Remember we need to write the sequence to WAL at commit. */
+ elm->need_log = true;
+ elm->is_called = false;
+ elm->log_cnt = 0;
+
relation_close(seq_rel, NoLock);
}
@@ -340,7 +351,7 @@ ResetSequence(Oid seq_relid)
* Initialize a sequence's relation with the specified tuple as content
*/
static void
-fill_seq_with_data(Relation rel, HeapTuple tuple, bool create)
+fill_seq_with_data(Relation rel, HeapTuple tuple)
{
Buffer buf;
Page page;
@@ -380,27 +391,7 @@ fill_seq_with_data(Relation rel, HeapTuple tuple, bool create)
if (RelationNeedsWAL(rel))
{
GetTopTransactionId();
-
- /*
- * Ensure we have a proper XID, which will be included in the XLOG
- * record by XLogRecordAssemble. Otherwise the first nextval() in
- * a subxact (without any preceding changes) would get XID 0,
- * and it'd be impossible to decide which top xact it belongs to.
- * It'd also trigger assert in DecodeSequence.
- *
- * XXX This might seem unnecessary, because if there's no XID the
- * transaction couldn't have done anything important yet, e.g. it
- * could not have created a sequence. But that's incorrect, as it
- * ignores subtransactions. The current subtransaction might not
- * have done anything yet (thus no XID), but an earlier one might
- * have created the sequence.
- *
- * XXX Not sure if this is the best solution. Maybe do this only
- * with wal_level=logical to minimize the overhead. OTOH advancing
- * the sequence is likely followed by using the value(s) for some
- * other activity, which assigns the XID.
- */
- GetCurrentTransactionId();
+ // GetCurrentTransactionId();
}
START_CRIT_SECTION();
@@ -422,7 +413,6 @@ fill_seq_with_data(Relation rel, HeapTuple tuple, bool create)
XLogRegisterBuffer(0, buf, REGBUF_WILL_INIT);
xlrec.node = rel->rd_node;
- xlrec.created = create;
XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
XLogRegisterData((char *) tuple->t_data, tuple->t_len);
@@ -526,7 +516,11 @@ AlterSequence(ParseState *pstate, AlterSeqStmt *stmt)
/*
* Insert the modified tuple into the new storage file.
*/
- fill_seq_with_data(seqrel, newdatatuple, true);
+ fill_seq_with_data(seqrel, newdatatuple);
+
+ elm->need_log = true;
+ elm->is_called = false;
+ elm->log_cnt = 0;
}
/* process OWNED BY if given */
@@ -792,27 +786,7 @@ nextval_internal(Oid relid, bool check_permissions)
if (logit && RelationNeedsWAL(seqrel))
{
GetTopTransactionId();
-
- /*
- * Ensure we have a proper XID, which will be included in the XLOG
- * record by XLogRecordAssemble. Otherwise the first nextval() in
- * a subxact (without any preceding changes) would get XID 0,
- * and it'd be impossible to decide which top xact it belongs to.
- * It'd also trigger assert in DecodeSequence.
- *
- * XXX This might seem unnecessary, because if there's no XID the
- * transaction couldn't have done anything important yet, e.g. it
- * could not have created a sequence. But that's incorrect, as it
- * ignores subtransactions. The current subtransaction might not
- * have done anything yet (thus no XID), but an earlier one might
- * have created the sequence.
- *
- * XXX Not sure if this is the best solution. Maybe do this only
- * with wal_level=logical to minimize the overhead. OTOH advancing
- * the sequence is likely followed by using the value(s) for some
- * other activity, which assigns the XID.
- */
- GetCurrentTransactionId();
+ // GetCurrentTransactionId();
}
/* ready to change the on-disk (or really, in-buffer) tuple */
@@ -850,7 +824,6 @@ nextval_internal(Oid relid, bool check_permissions)
seq->log_cnt = 0;
xlrec.node = seqrel->rd_node;
- xlrec.created = false;
XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
XLogRegisterData((char *) seqdatatuple.t_data, seqdatatuple.t_len);
@@ -858,6 +831,10 @@ nextval_internal(Oid relid, bool check_permissions)
recptr = XLogInsert(RM_SEQ_ID, XLOG_SEQ_LOG);
PageSetLSN(page, recptr);
+
+ elm->need_log = true;
+ elm->is_called = true;
+ elm->log_cnt = 0;
}
/* Now update sequence tuple to the intended final state */
@@ -1027,27 +1004,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
if (RelationNeedsWAL(seqrel))
{
GetTopTransactionId();
-
- /*
- * Ensure we have a proper XID, which will be included in the XLOG
- * record by XLogRecordAssemble. Otherwise the first nextval() in
- * a subxact (without any preceding changes) would get XID 0,
- * and it'd be impossible to decide which top xact it belongs to.
- * It'd also trigger assert in DecodeSequence.
- *
- * XXX This might seem unnecessary, because if there's no XID the
- * transaction couldn't have done anything important yet, e.g. it
- * could not have created a sequence. But that's incorrect, as it
- * ignores subtransactions. The current subtransaction might not
- * have done anything yet (thus no XID), but an earlier one might
- * have created the sequence.
- *
- * XXX Not sure if this is the best solution. Maybe do this only
- * with wal_level=logical to minimize the overhead. OTOH advancing
- * the sequence is likely followed by using the value(s) for some
- * other activity, which assigns the XID.
- */
- GetCurrentTransactionId();
+ // GetCurrentTransactionId();
}
/* ready to change the on-disk (or really, in-buffer) tuple */
@@ -1070,14 +1027,16 @@ do_setval(Oid relid, int64 next, bool iscalled)
XLogRegisterBuffer(0, buf, REGBUF_WILL_INIT);
xlrec.node = seqrel->rd_node;
- xlrec.created = false;
-
XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
XLogRegisterData((char *) seqdatatuple.t_data, seqdatatuple.t_len);
recptr = XLogInsert(RM_SEQ_ID, XLOG_SEQ_LOG);
PageSetLSN(page, recptr);
+
+ elm->need_log = true;
+ elm->is_called = false;
+ elm->log_cnt = 0;
}
END_CRIT_SECTION();
@@ -1196,11 +1155,14 @@ init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel)
{
/* relid already filled in */
elm->filenode = InvalidOid;
+ elm->tablespace = InvalidOid;
elm->lxid = InvalidLocalTransactionId;
elm->last_valid = false;
elm->last = elm->cached = 0;
}
+ elm->touched = true;
+
/*
* Open the sequence relation.
*/
@@ -1219,6 +1181,7 @@ init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel)
*/
if (seqrel->rd_rel->relfilenode != elm->filenode)
{
+ elm->tablespace = seqrel->rd_rel->reltablespace;
elm->filenode = seqrel->rd_rel->relfilenode;
elm->cached = elm->last;
}
@@ -1977,6 +1940,8 @@ seq_redo(XLogReaderState *record)
/*
* Flush cached sequence information.
+ *
+ * XXX Do we need to WAL-log the entries based on need_log?
*/
void
ResetSequenceCaches(void)
@@ -2000,3 +1965,74 @@ seq_mask(char *page, BlockNumber blkno)
mask_unused_space(page);
}
+
+/* XXX Do this only for wal_level = logical, probably? */
+void
+AtEOXact_Sequences(bool isCommit)
+{
+ SeqTable entry;
+ HASH_SEQ_STATUS scan;
+
+ if (!seqhashtab)
+ return;
+
+ /*
+ * XXX If we run just nextval() that returns cached value, the xact may
+ * not have XID, but we expect it in reorderbuffer. Maybe treat that as
+ * transactional?
+ */
+ // GetTopTransactionId();
+ // GetCurrentTransactionId();
+
+ hash_seq_init(&scan, seqhashtab);
+
+ while ((entry = (SeqTable) hash_seq_search(&scan)))
+ {
+ RelFileNode rnode;
+ xl_logical_sequence xlrec;
+
+ if (!isCommit)
+ entry->touched = false;
+
+ /*
+ * If not touched in the current transaction, don't log anything.
+ * We leave need_log set, so that if future transactions touch
+ * the sequence we'll log it properly.
+ */
+ if (!entry->touched)
+ continue;
+
+ /*
+ * Likewise, if the sequence does not need logging, we're done.
+ */
+ if (!entry->need_log)
+ continue;
+
+ /* this is a commit, so log the current state of the sequence */
+ entry->need_log = false;
+ entry->touched = false;
+
+ rnode.spcNode = entry->tablespace; /* tablespace */
+ rnode.dbNode = MyDatabaseId; /* database */
+ rnode.relNode = entry->filenode; /* relation */
+
+ xlrec.node = rnode;
+ xlrec.reloid = entry->relid;
+
+ /* XXX is it good enough to log values we have in cache? seems
+ * wrong and we may need to re-read that. */
+ // xlrec.dbId = MyDatabaseId;
+ // xlrec.relId = entry->relid;
+ xlrec.last = entry->last;
+ xlrec.log_cnt = entry->log_cnt;
+ xlrec.is_called = entry->is_called;
+
+ XLogBeginInsert();
+ XLogRegisterData((char *) &xlrec, SizeOfLogicalSequence);
+
+ /* allow origin filtering */
+ XLogSetRecordFlags(XLOG_INCLUDE_ORIGIN);
+
+ (void) XLogInsert(RM_LOGICALMSG_ID, XLOG_LOGICAL_SEQUENCE);
+ }
+}
diff --git a/src/backend/replication/logical/decode.c b/src/backend/replication/logical/decode.c
index b61f2e264a..d29844adf9 100644
--- a/src/backend/replication/logical/decode.c
+++ b/src/backend/replication/logical/decode.c
@@ -59,6 +59,9 @@ static void DecodeXactOp(LogicalDecodingContext *ctx, XLogRecordBuffer *buf);
static void DecodeStandbyOp(LogicalDecodingContext *ctx, XLogRecordBuffer *buf);
static void DecodeLogicalMsgOp(LogicalDecodingContext *ctx, XLogRecordBuffer *buf);
+static void DecodeLogicalMessage(LogicalDecodingContext *ctx, XLogRecordBuffer *buf);
+static void DecodeLogicalSequence(LogicalDecodingContext *ctx, XLogRecordBuffer *buf);
+
/* individual record(group)'s handlers */
static void DecodeInsert(LogicalDecodingContext *ctx, XLogRecordBuffer *buf);
static void DecodeUpdate(LogicalDecodingContext *ctx, XLogRecordBuffer *buf);
@@ -75,11 +78,9 @@ static void DecodeAbort(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
bool two_phase);
static void DecodePrepare(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
xl_xact_parsed_prepare *parsed);
-static void DecodeSequence(LogicalDecodingContext *ctx, XLogRecordBuffer *buf);
/* common function to decode tuples */
static void DecodeXLogTuple(char *data, Size len, ReorderBufferTupleBuf *tup);
-static void DecodeSeqTuple(char *data, Size len, ReorderBufferTupleBuf *tuple);
/* helper functions for decoding transactions */
static inline bool FilterPrepare(LogicalDecodingContext *ctx,
@@ -160,10 +161,6 @@ LogicalDecodingProcessRecord(LogicalDecodingContext *ctx, XLogReaderState *recor
DecodeLogicalMsgOp(ctx, &buf);
break;
- case RM_SEQ_ID:
- DecodeSequence(ctx, &buf);
- break;
-
/*
* Rmgrs irrelevant for logical decoding; they describe stuff not
* represented in logical decoding. Add new rmgrs in rmgrlist.h's
@@ -626,6 +623,27 @@ FilterByOrigin(LogicalDecodingContext *ctx, RepOriginId origin_id)
*/
static void
DecodeLogicalMsgOp(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
+{
+ XLogReaderState *r = buf->record;
+ uint8 info = XLogRecGetInfo(r) & ~XLR_INFO_MASK;
+
+ switch (info)
+ {
+ case XLOG_LOGICAL_MESSAGE:
+ DecodeLogicalMessage(ctx, buf);
+ break;
+
+ case XLOG_LOGICAL_SEQUENCE:
+ DecodeLogicalSequence(ctx, buf);
+ break;
+
+ default:
+ elog(ERROR, "unexpected RM_LOGICALMSG_ID record type: %u", info);
+ }
+}
+
+static void
+DecodeLogicalMessage(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
{
SnapBuild *builder = ctx->snapshot_builder;
XLogReaderState *r = buf->record;
@@ -1319,31 +1337,6 @@ DecodeTXNNeedSkip(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
ctx->fast_forward || FilterByOrigin(ctx, origin_id));
}
-/*
- * Decode Sequence Tuple
- */
-static void
-DecodeSeqTuple(char *data, Size len, ReorderBufferTupleBuf *tuple)
-{
- int datalen = len - sizeof(xl_seq_rec) - SizeofHeapTupleHeader;
-
- Assert(datalen >= 0);
-
- tuple->tuple.t_len = datalen + SizeofHeapTupleHeader;;
-
- ItemPointerSetInvalid(&tuple->tuple.t_self);
-
- tuple->tuple.t_tableOid = InvalidOid;
-
- memcpy(((char *) tuple->tuple.t_data),
- data + sizeof(xl_seq_rec),
- SizeofHeapTupleHeader);
-
- memcpy(((char *) tuple->tuple.t_data) + SizeofHeapTupleHeader,
- data + sizeof(xl_seq_rec) + SizeofHeapTupleHeader,
- datalen);
-}
-
/*
* Handle sequence decode
*
@@ -1362,24 +1355,20 @@ DecodeSeqTuple(char *data, Size len, ReorderBufferTupleBuf *tuple)
* plugin - it might get confused about which sequence it's related to etc.
*/
static void
-DecodeSequence(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
+DecodeLogicalSequence(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
{
SnapBuild *builder = ctx->snapshot_builder;
- ReorderBufferTupleBuf *tuplebuf;
RelFileNode target_node;
XLogReaderState *r = buf->record;
- char *tupledata = NULL;
- Size tuplelen;
- Size datalen = 0;
TransactionId xid = XLogRecGetXid(r);
uint8 info = XLogRecGetInfo(buf->record) & ~XLR_INFO_MASK;
- xl_seq_rec *xlrec;
+ xl_logical_sequence *xlrec;
Snapshot snapshot;
RepOriginId origin_id = XLogRecGetOrigin(r);
- bool transactional;
+ bool transactional = TransactionIdIsValid(xid);
/* only decode changes flagged with XLOG_SEQ_LOG */
- if (info != XLOG_SEQ_LOG)
+ if (info != XLOG_LOGICAL_SEQUENCE)
elog(ERROR, "unexpected RM_SEQ_ID record type: %u", info);
/*
@@ -1390,8 +1379,11 @@ DecodeSequence(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
ctx->fast_forward)
return;
+ /* extract the WAL record, with "created" flag */
+ xlrec = (xl_logical_sequence *) XLogRecGetData(r);
+
/* only interested in our database */
- XLogRecGetBlockTag(r, 0, &target_node, NULL, NULL);
+ target_node = xlrec->node;
if (target_node.dbNode != ctx->slot->data.database)
return;
@@ -1399,44 +1391,18 @@ DecodeSequence(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
if (FilterByOrigin(ctx, XLogRecGetOrigin(r)))
return;
- tupledata = XLogRecGetData(r);
- datalen = XLogRecGetDataLen(r);
- tuplelen = datalen - SizeOfHeapHeader - sizeof(xl_seq_rec);
-
- /* extract the WAL record, with "created" flag */
- xlrec = (xl_seq_rec *) XLogRecGetData(r);
-
- /* XXX how could we have sequence change without data? */
- if(!datalen || !tupledata)
- return;
-
- tuplebuf = ReorderBufferGetTupleBuf(ctx->reorder, tuplelen);
- DecodeSeqTuple(tupledata, datalen, tuplebuf);
-
- /*
- * Should we handle the sequence increment as transactional or not?
- *
- * If the sequence was created in a still-running transaction, treat
- * it as transactional and queue the increments. Otherwise it needs
- * to be treated as non-transactional, in which case we send it to
- * the plugin right away.
- */
- transactional = ReorderBufferSequenceIsTransactional(ctx->reorder,
- target_node,
- xlrec->created);
-
/* Skip the change if already processed (per the snapshot). */
if (transactional &&
!SnapBuildProcessChange(builder, xid, buf->origptr))
return;
else if (!transactional &&
(SnapBuildCurrentState(builder) != SNAPBUILD_CONSISTENT ||
- SnapBuildXactNeedsSkip(builder, buf->origptr)))
+ SnapBuildXactNeedsSkip(builder, buf->origptr)))
return;
/* Queue the increment (or send immediately if not transactional). */
snapshot = SnapBuildGetOrBuildSnapshot(builder, xid);
ReorderBufferQueueSequence(ctx->reorder, xid, snapshot, buf->endptr,
- origin_id, target_node, transactional,
- xlrec->created, tuplebuf);
+ origin_id, xlrec->reloid, target_node,
+ xlrec->last, xlrec->log_cnt, xlrec->is_called);
}
diff --git a/src/backend/replication/logical/message.c b/src/backend/replication/logical/message.c
index 93bd372421..3f0ac57fc6 100644
--- a/src/backend/replication/logical/message.c
+++ b/src/backend/replication/logical/message.c
@@ -81,7 +81,8 @@ logicalmsg_redo(XLogReaderState *record)
{
uint8 info = XLogRecGetInfo(record) & ~XLR_INFO_MASK;
- if (info != XLOG_LOGICAL_MESSAGE)
+ if (info != XLOG_LOGICAL_MESSAGE &&
+ info != XLOG_LOGICAL_SEQUENCE)
elog(PANIC, "logicalmsg_redo: unknown op code %u", info);
/* This is only interesting for logical decoding, see decode.c. */
diff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c
index 434926459f..1d9a39ac4b 100644
--- a/src/backend/replication/logical/reorderbuffer.c
+++ b/src/backend/replication/logical/reorderbuffer.c
@@ -569,17 +569,11 @@ ReorderBufferReturnChange(ReorderBuffer *rb, ReorderBufferChange *change,
change->data.truncate.relids = NULL;
}
break;
- case REORDER_BUFFER_CHANGE_SEQUENCE:
- if (change->data.sequence.tuple)
- {
- ReorderBufferReturnTupleBuf(rb, change->data.sequence.tuple);
- change->data.sequence.tuple = NULL;
- }
- break;
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT:
case REORDER_BUFFER_CHANGE_INTERNAL_COMMAND_ID:
case REORDER_BUFFER_CHANGE_INTERNAL_TUPLECID:
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
break;
}
@@ -973,9 +967,10 @@ ReorderBufferSequenceCleanup(ReorderBuffer *rb, TransactionId xid)
void
ReorderBufferQueueSequence(ReorderBuffer *rb, TransactionId xid,
Snapshot snapshot, XLogRecPtr lsn, RepOriginId origin_id,
- RelFileNode rnode, bool transactional, bool created,
- ReorderBufferTupleBuf *tuplebuf)
+ Oid reloid, RelFileNode rnode, int64 last, int64 log_cnt, bool is_called)
{
+ bool transactional = TransactionIdIsValid(xid);
+
/*
* Change needs to be handled as transactional, because the sequence was
* created in a transaction that is still running. In that case all the
@@ -992,38 +987,6 @@ ReorderBufferQueueSequence(ReorderBuffer *rb, TransactionId xid,
MemoryContext oldcontext;
ReorderBufferChange *change;
- /* lookup sequence by relfilenode */
- ReorderBufferSequenceEnt *ent;
- bool found;
-
- /* transactional changes require a transaction */
- Assert(xid != InvalidTransactionId);
-
- /* search the lookup table (we ignore the return value, found is enough) */
- ent = hash_search(rb->sequences,
- (void *) &rnode,
- created ? HASH_ENTER : HASH_FIND,
- &found);
-
- /*
- * If this is the "create" increment, we must not have found any
- * pre-existing entry in the hash table (i.e. there must not be
- * any conflicting sequence).
- */
- Assert(!(created && found));
-
- /* But we must have either created or found an existing entry. */
- Assert(created || found);
-
- /*
- * When creating the sequence, remember the XID of the transaction
- * that created id.
- */
- if (created)
- ent->xid = xid;
-
- /* XXX Maybe check that we're still in the same top-level xact? */
-
/* OK, allocate and queue the change */
oldcontext = MemoryContextSwitchTo(rb->context);
@@ -1034,38 +997,26 @@ ReorderBufferQueueSequence(ReorderBuffer *rb, TransactionId xid,
memcpy(&change->data.sequence.relnode, &rnode, sizeof(RelFileNode));
- change->data.sequence.created = created;
- change->data.sequence.tuple = tuplebuf;
+ change->data.sequence.reloid = reloid;
+ change->data.sequence.last = last;
+ change->data.sequence.log_cnt = log_cnt;
+ change->data.sequence.is_called = is_called;
/* add it to the same subxact that created the sequence */
- ReorderBufferQueueChange(rb, ent->xid, lsn, change, false);
+ ReorderBufferQueueChange(rb, xid, lsn, change, false);
MemoryContextSwitchTo(oldcontext);
}
else
{
/*
- * This increment is for a sequence that was not created in any
- * running transaction, so we treat it as non-transactional and
- * just send it to the output plugin directly.
- */
+ * This increment is for a sequence that was not created in any
+ * running transaction, so we treat it as non-transactional and
+ * just send it to the output plugin directly.
+ */
ReorderBufferTXN *txn = NULL;
volatile Snapshot snapshot_now = snapshot;
- bool using_subtxn;
-
-#ifdef USE_ASSERT_CHECKING
- /* All "creates" have to be handled as transactional. */
- Assert(!created);
-
- /* Make sure the sequence is not in the hash table. */
- {
- bool found;
- hash_search(rb->sequences,
- (void *) &rnode,
- HASH_FIND, &found);
- Assert(!found);
- }
-#endif
+ bool using_subtxn;
if (xid != InvalidTransactionId)
txn = ReorderBufferTXNByXid(rb, xid, true, NULL, lsn, true);
@@ -1074,40 +1025,35 @@ ReorderBufferQueueSequence(ReorderBuffer *rb, TransactionId xid,
SetupHistoricSnapshot(snapshot_now, NULL);
/*
- * Decoding needs access to syscaches et al., which in turn use
- * heavyweight locks and such. Thus we need to have enough state around to
- * keep track of those. The easiest way is to simply use a transaction
- * internally. That also allows us to easily enforce that nothing writes
- * to the database by checking for xid assignments.
- *
- * When we're called via the SQL SRF there's already a transaction
- * started, so start an explicit subtransaction there.
- */
+ * Decoding needs access to syscaches et al., which in turn use
+ * heavyweight locks and such. Thus we need to have enough state around to
+ * keep track of those. The easiest way is to simply use a transaction
+ * internally. That also allows us to easily enforce that nothing writes
+ * to the database by checking for xid assignments.
+ *
+ * When we're called via the SQL SRF there's already a transaction
+ * started, so start an explicit subtransaction there.
+ */
using_subtxn = IsTransactionOrTransactionBlock();
PG_TRY();
{
Relation relation;
- HeapTuple tuple;
- bool isnull;
- int64 last_value, log_cnt;
- bool is_called;
- Oid reloid;
+// Oid reloid;
if (using_subtxn)
BeginInternalSubTransaction("sequence");
else
StartTransactionCommand();
-
+/*
reloid = RelidByRelfilenode(rnode.spcNode, rnode.relNode);
if (reloid == InvalidOid)
elog(ERROR, "could not map filenode \"%s\" to relation OID",
- relpathperm(rnode,
- MAIN_FORKNUM));
-
+ relpathperm(rnode,
+ MAIN_FORKNUM));
+*/
relation = RelationIdGetRelation(reloid);
- tuple = &tuplebuf->tuple;
/*
* Extract the internal sequence values, describing the state.
@@ -1115,12 +1061,8 @@ ReorderBufferQueueSequence(ReorderBuffer *rb, TransactionId xid,
* XXX Seems a bit strange to access it directly. Maybe there's
* a better / more correct way?
*/
- last_value = heap_getattr(tuple, 1, RelationGetDescr(relation), &isnull);
- log_cnt = heap_getattr(tuple, 2, RelationGetDescr(relation), &isnull);
- is_called = heap_getattr(tuple, 3, RelationGetDescr(relation), &isnull);
-
- rb->sequence(rb, txn, lsn, relation, transactional, created,
- last_value, log_cnt, is_called);
+ rb->sequence(rb, txn, lsn, relation, transactional, false,
+ last, log_cnt, is_called);
RelationClose(relation);
@@ -2248,31 +2190,27 @@ ReorderBufferApplySequence(ReorderBuffer *rb, ReorderBufferTXN *txn,
Relation relation, ReorderBufferChange *change,
bool streaming)
{
- HeapTuple tuple;
- bool isnull;
int64 last_value, log_cnt;
bool is_called;
- tuple = &change->data.sequence.tuple->tuple;
-
/*
* Extract the internal sequence values, describing the state.
*
* XXX Seems a bit strange to access it directly. Maybe there's
* a better / more correct way?
*/
- last_value = heap_getattr(tuple, 1, RelationGetDescr(relation), &isnull);
- log_cnt = heap_getattr(tuple, 2, RelationGetDescr(relation), &isnull);
- is_called = heap_getattr(tuple, 3, RelationGetDescr(relation), &isnull);
+ last_value = change->data.sequence.last;
+ log_cnt = change->data.sequence.log_cnt;
+ is_called = change->data.sequence.is_called;
/* Only ever called from ReorderBufferApplySequence, so transational. */
if (streaming)
rb->stream_sequence(rb, txn, change->lsn, relation, true,
- change->data.sequence.created,
+ false, /* FIXME */
last_value, log_cnt, is_called);
else
rb->sequence(rb, txn, change->lsn, relation, true,
- change->data.sequence.created,
+ false, /* FIXME */
last_value, log_cnt, is_called);
}
@@ -2722,15 +2660,17 @@ ReorderBufferProcessTXN(ReorderBuffer *rb, ReorderBufferTXN *txn,
case REORDER_BUFFER_CHANGE_SEQUENCE:
Assert(snapshot_now);
+ /*
reloid = RelidByRelfilenode(change->data.sequence.relnode.spcNode,
change->data.sequence.relnode.relNode);
if (reloid == InvalidOid)
- elog(ERROR, "could not map filenode \"%s\" to relation OID",
+ elog(ERROR, "zz could not map filenode \"%s\" to relation OID",
relpathperm(change->data.sequence.relnode,
MAIN_FORKNUM));
- relation = RelationIdGetRelation(reloid);
+ */
+ relation = RelationIdGetRelation(change->data.sequence.reloid);
if (!RelationIsValid(relation))
elog(ERROR, "could not open relation with OID %u (for filenode \"%s\")",
@@ -4127,45 +4067,13 @@ ReorderBufferSerializeChange(ReorderBuffer *rb, ReorderBufferTXN *txn,
memcpy(data, change->data.truncate.relids, size);
data += size;
- break;
- }
- case REORDER_BUFFER_CHANGE_SEQUENCE:
- {
- char *data;
- ReorderBufferTupleBuf *tup;
- Size len = 0;
-
- tup = change->data.sequence.tuple;
-
- if (tup)
- {
- sz += sizeof(HeapTupleData);
- len = tup->tuple.t_len;
- sz += len;
- }
-
- /* make sure we have enough space */
- ReorderBufferSerializeReserve(rb, sz);
-
- data = ((char *) rb->outbuf) + sizeof(ReorderBufferDiskChange);
- /* might have been reallocated above */
- ondisk = (ReorderBufferDiskChange *) rb->outbuf;
-
- if (len)
- {
- memcpy(data, &tup->tuple, sizeof(HeapTupleData));
- data += sizeof(HeapTupleData);
-
- memcpy(data, tup->tuple.t_data, len);
- data += len;
- }
-
break;
}
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT:
case REORDER_BUFFER_CHANGE_INTERNAL_COMMAND_ID:
case REORDER_BUFFER_CHANGE_INTERNAL_TUPLECID:
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
/* ReorderBufferChange contains everything important */
break;
}
@@ -4424,28 +4332,13 @@ ReorderBufferChangeSize(ReorderBufferChange *change)
{
sz += sizeof(Oid) * change->data.truncate.nrelids;
- break;
- }
- case REORDER_BUFFER_CHANGE_SEQUENCE:
- {
- ReorderBufferTupleBuf *tup;
- Size len = 0;
-
- tup = change->data.sequence.tuple;
-
- if (tup)
- {
- sz += sizeof(HeapTupleData);
- len = tup->tuple.t_len;
- sz += len;
- }
-
break;
}
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT:
case REORDER_BUFFER_CHANGE_INTERNAL_COMMAND_ID:
case REORDER_BUFFER_CHANGE_INTERNAL_TUPLECID:
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
/* ReorderBufferChange contains everything important */
break;
}
@@ -4742,33 +4635,11 @@ ReorderBufferRestoreChange(ReorderBuffer *rb, ReorderBufferTXN *txn,
break;
}
- case REORDER_BUFFER_CHANGE_SEQUENCE:
- if (change->data.sequence.tuple)
- {
- uint32 tuplelen = ((HeapTuple) data)->t_len;
-
- change->data.sequence.tuple =
- ReorderBufferGetTupleBuf(rb, tuplelen - SizeofHeapTupleHeader);
-
- /* restore ->tuple */
- memcpy(&change->data.sequence.tuple->tuple, data,
- sizeof(HeapTupleData));
- data += sizeof(HeapTupleData);
-
- /* reset t_data pointer into the new tuplebuf */
- change->data.sequence.tuple->tuple.t_data =
- ReorderBufferTupleBufData(change->data.sequence.tuple);
-
- /* restore tuple data itself */
- memcpy(change->data.sequence.tuple->tuple.t_data, data, tuplelen);
- data += tuplelen;
- }
- break;
-
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT:
case REORDER_BUFFER_CHANGE_INTERNAL_COMMAND_ID:
case REORDER_BUFFER_CHANGE_INTERNAL_TUPLECID:
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
break;
}
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 5919fb90ee..ddb36b5b17 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -62,6 +62,8 @@ extern void DeleteSequenceTuple(Oid relid);
extern void ResetSequence(Oid seq_relid);
extern void ResetSequenceCaches(void);
+extern void AtEOXact_Sequences(bool isCommit);
+
extern void seq_redo(XLogReaderState *rptr);
extern void seq_desc(StringInfo buf, XLogReaderState *rptr);
extern const char *seq_identify(uint8 info);
diff --git a/src/include/replication/message.h b/src/include/replication/message.h
index d3fb324c81..a39e8e984d 100644
--- a/src/include/replication/message.h
+++ b/src/include/replication/message.h
@@ -27,13 +27,28 @@ typedef struct xl_logical_message
char message[FLEXIBLE_ARRAY_MEMBER];
} xl_logical_message;
+/*
+ * Generic logical decoding sequence wal record.
+ */
+typedef struct xl_logical_sequence
+{
+ RelFileNode node;
+ Oid reloid;
+ int64 last; /* last value emitted for sequence */
+ int64 log_cnt; /* last value cached for sequence */
+ bool is_called;
+} xl_logical_sequence;
+
#define SizeOfLogicalMessage (offsetof(xl_logical_message, message))
+#define SizeOfLogicalSequence (sizeof(xl_logical_sequence))
extern XLogRecPtr LogLogicalMessage(const char *prefix, const char *message,
size_t size, bool transactional);
/* RMGR API*/
#define XLOG_LOGICAL_MESSAGE 0x00
+#define XLOG_LOGICAL_SEQUENCE 0x10
+
void logicalmsg_redo(XLogReaderState *record);
void logicalmsg_desc(StringInfo buf, XLogReaderState *record);
const char *logicalmsg_identify(uint8 info);
diff --git a/src/include/replication/reorderbuffer.h b/src/include/replication/reorderbuffer.h
index 63e6ed037b..eb78d5dff4 100644
--- a/src/include/replication/reorderbuffer.h
+++ b/src/include/replication/reorderbuffer.h
@@ -164,8 +164,10 @@ typedef struct ReorderBufferChange
struct
{
RelFileNode relnode;
- bool created;
- ReorderBufferTupleBuf *tuple;
+ Oid reloid;
+ int64 last;
+ int64 log_cnt;
+ bool is_called;
} sequence;
} data;
@@ -672,8 +674,8 @@ void ReorderBufferQueueMessage(ReorderBuffer *, TransactionId, Snapshot snapsho
Size message_size, const char *message);
void ReorderBufferQueueSequence(ReorderBuffer *rb, TransactionId xid,
Snapshot snapshot, XLogRecPtr lsn, RepOriginId origin_id,
- RelFileNode rnode, bool transactional, bool created,
- ReorderBufferTupleBuf *tuplebuf);
+ Oid reloid, RelFileNode rnode,
+ int64 last, int64 log_cnt, bool is_called);
void ReorderBufferCommit(ReorderBuffer *, TransactionId,
XLogRecPtr commit_lsn, XLogRecPtr end_lsn,
TimestampTz commit_time, RepOriginId origin_id, XLogRecPtr origin_lsn);
--
2.31.1
Attachment: 0001-Logical-decoding-of-sequences-20211118.patch (text/x-patch)
From e61a7c9994133d1a0dc25c369c026205602b3314 Mon Sep 17 00:00:00 2001
From: Tomas Vondra <tomas.vondra@postgresql.org>
Date: Fri, 24 Sep 2021 00:41:33 +0200
Subject: [PATCH 1/4] Logical decoding of sequences
This adds the infrastructure for logical decoding of sequences. Support
for built-in logical replication and test_decoding is added separately.
When decoding sequences, we differentiate between sequences created in
a (running) transaction and sequences created in other (already
committed) transactions. Changes for sequences created in the same
top-level transaction are treated as "transactional", i.e. just like any
other change from that transaction (and discarded in case of a
rollback). Changes for sequences created earlier are treated as
non-transactional - they are processed immediately, as if performed
outside any transaction (and thus not rolled back).
This mixed behavior is necessary - sequences are non-transactional (e.g.
ROLLBACK does not undo the sequence increments). But for new sequences,
we need to handle them in a transactional way, because if we ever get
some DDL support, the sequence won't exist until the transaction gets
applied. So we need to ensure the increments don't happen until the
sequence gets created.
To differentiate which sequences are "old" and which were created in a
still-running transaction, we track sequences created in running
transactions in a hash table. Each sequence is identified by its
relfilenode, and at transaction commit we discard all sequences created
in that particular transaction. For each sequence we track the XID of
the (sub)transaction that created it, and we clean up the sequences for
each subtransaction when it completes (commit/rollback).
We don't use the XID to check if it's the same top-level transaction.
It's enough to know it was created in an in-progress transaction, and we
know it must be the current one because otherwise it wouldn't see the
sequence object.
The main changes in this patch are:
1) ensure WAL-logging of all necessary info for sequence advances
We need to be able to associate the advance with an XID, but until now a
sequence advance might have had XID 0 if it was the first thing the
transaction did. So we ensure the transaction has an XID assigned.
Note: This is needed because of subxacts. An advance with XID 0 might
still be for a sequence created in a different subxact of the same
top-level xact.
Then there's a "created" flag added to the xl_seq_rec, which is
necessary to differentiate WAL for created/reset sequences.
2) decoding / queueing of sequences into ReorderBuffer
This is mostly copy-paste of existing code for other decoded events.
3) tracking sequences created in in-progress transactions
We use a simple hash table, indexed by relfilenode.
---
src/backend/commands/sequence.c | 83 +++-
src/backend/replication/logical/decode.c | 131 +++++-
src/backend/replication/logical/logical.c | 89 ++++
.../replication/logical/reorderbuffer.c | 424 ++++++++++++++++++
src/include/commands/sequence.h | 1 +
src/include/replication/output_plugin.h | 29 ++
src/include/replication/reorderbuffer.h | 44 +-
7 files changed, 793 insertions(+), 8 deletions(-)
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 72bfdc07a4..a98fcc2e97 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -94,7 +94,7 @@ static HTAB *seqhashtab = NULL; /* hash table for SeqTable items */
*/
static SeqTableData *last_used_seq = NULL;
-static void fill_seq_with_data(Relation rel, HeapTuple tuple);
+static void fill_seq_with_data(Relation rel, HeapTuple tuple, bool create);
static Relation lock_and_open_sequence(SeqTable seq);
static void create_seq_hashtable(void);
static void init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel);
@@ -222,7 +222,7 @@ DefineSequence(ParseState *pstate, CreateSeqStmt *seq)
/* now initialize the sequence's data */
tuple = heap_form_tuple(tupDesc, value, null);
- fill_seq_with_data(rel, tuple);
+ fill_seq_with_data(rel, tuple, true);
/* process OWNED BY if given */
if (owned_by)
@@ -327,7 +327,7 @@ ResetSequence(Oid seq_relid)
/*
* Insert the modified tuple into the new storage file.
*/
- fill_seq_with_data(seq_rel, tuple);
+ fill_seq_with_data(seq_rel, tuple, true);
/* Clear local cache so that we don't think we have cached numbers */
/* Note that we do not change the currval() state */
@@ -340,7 +340,7 @@ ResetSequence(Oid seq_relid)
* Initialize a sequence's relation with the specified tuple as content
*/
static void
-fill_seq_with_data(Relation rel, HeapTuple tuple)
+fill_seq_with_data(Relation rel, HeapTuple tuple, bool create)
{
Buffer buf;
Page page;
@@ -378,8 +378,31 @@ fill_seq_with_data(Relation rel, HeapTuple tuple)
/* check the comment above nextval_internal()'s equivalent call. */
if (RelationNeedsWAL(rel))
+ {
GetTopTransactionId();
+ /*
+ * Ensure we have a proper XID, which will be included in the XLOG
+ * record by XLogRecordAssemble. Otherwise the first nextval() in
+ * a subxact (without any preceding changes) would get XID 0,
+ * and it'd be impossible to decide which top xact it belongs to.
+ * It'd also trigger assert in DecodeSequence.
+ *
+ * XXX This might seem unnecessary, because if there's no XID the
+ * transaction couldn't have done anything important yet, e.g. it
+ * could not have created a sequence. But that's incorrect, as it
+ * ignores subtransactions. The current subtransaction might not
+ * have done anything yet (thus no XID), but an earlier one might
+ * have created the sequence.
+ *
+ * XXX Not sure if this is the best solution. Maybe do this only
+ * with wal_level=logical to minimize the overhead. OTOH advancing
+ * the sequence is likely followed by using the value(s) for some
+ * other activity, which assigns the XID.
+ */
+ GetCurrentTransactionId();
+ }
+
START_CRIT_SECTION();
MarkBufferDirty(buf);
@@ -399,6 +422,7 @@ fill_seq_with_data(Relation rel, HeapTuple tuple)
XLogRegisterBuffer(0, buf, REGBUF_WILL_INIT);
xlrec.node = rel->rd_node;
+ xlrec.created = create;
XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
XLogRegisterData((char *) tuple->t_data, tuple->t_len);
@@ -502,7 +526,7 @@ AlterSequence(ParseState *pstate, AlterSeqStmt *stmt)
/*
* Insert the modified tuple into the new storage file.
*/
- fill_seq_with_data(seqrel, newdatatuple);
+ fill_seq_with_data(seqrel, newdatatuple, true);
}
/* process OWNED BY if given */
@@ -766,8 +790,31 @@ nextval_internal(Oid relid, bool check_permissions)
* (Have to do that here, so we're outside the critical section)
*/
if (logit && RelationNeedsWAL(seqrel))
+ {
GetTopTransactionId();
+ /*
+ * Ensure we have a proper XID, which will be included in the XLOG
+ * record by XLogRecordAssemble. Otherwise the first nextval() in
+ * a subxact (without any preceding changes) would get XID 0,
+ * and it'd be impossible to decide which top xact it belongs to.
+ * It'd also trigger assert in DecodeSequence.
+ *
+ * XXX This might seem unnecessary, because if there's no XID the
+ * transaction couldn't have done anything important yet, e.g. it
+ * could not have created a sequence. But that's incorrect, as it
+ * ignores subtransactions. The current subtransaction might not
+ * have done anything yet (thus no XID), but an earlier one might
+ * have created the sequence.
+ *
+ * XXX Not sure if this is the best solution. Maybe do this only
+ * with wal_level=logical to minimize the overhead. OTOH advancing
+ * the sequence is likely followed by using the value(s) for some
+ * other activity, which assigns the XID.
+ */
+ GetCurrentTransactionId();
+ }
+
/* ready to change the on-disk (or really, in-buffer) tuple */
START_CRIT_SECTION();
@@ -803,6 +850,7 @@ nextval_internal(Oid relid, bool check_permissions)
seq->log_cnt = 0;
xlrec.node = seqrel->rd_node;
+ xlrec.created = false;
XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
XLogRegisterData((char *) seqdatatuple.t_data, seqdatatuple.t_len);
@@ -977,8 +1025,31 @@ do_setval(Oid relid, int64 next, bool iscalled)
/* check the comment above nextval_internal()'s equivalent call. */
if (RelationNeedsWAL(seqrel))
+ {
GetTopTransactionId();
+ /*
+ * Ensure we have a proper XID, which will be included in the XLOG
+ * record by XLogRecordAssemble. Otherwise the first nextval() in
+ * a subxact (without any preceding changes) would get XID 0,
+ * and it'd be impossible to decide which top xact it belongs to.
+ * It'd also trigger assert in DecodeSequence.
+ *
+ * XXX This might seem unnecessary, because if there's no XID the
+ * transaction couldn't have done anything important yet, e.g. it
+ * could not have created a sequence. But that's incorrect, as it
+ * ignores subtransactions. The current subtransaction might not
+ * have done anything yet (thus no XID), but an earlier one might
+ * have created the sequence.
+ *
+ * XXX Not sure if this is the best solution. Maybe do this only
+ * with wal_level=logical to minimize the overhead. OTOH advancing
+ * the sequence is likely followed by using the value(s) for some
+ * other activity, which assigns the XID.
+ */
+ GetCurrentTransactionId();
+ }
+
/* ready to change the on-disk (or really, in-buffer) tuple */
START_CRIT_SECTION();
@@ -999,6 +1070,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
XLogRegisterBuffer(0, buf, REGBUF_WILL_INIT);
xlrec.node = seqrel->rd_node;
+ xlrec.created = false;
+
XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
XLogRegisterData((char *) seqdatatuple.t_data, seqdatatuple.t_len);
diff --git a/src/backend/replication/logical/decode.c b/src/backend/replication/logical/decode.c
index a2b69511b4..b61f2e264a 100644
--- a/src/backend/replication/logical/decode.c
+++ b/src/backend/replication/logical/decode.c
@@ -42,6 +42,7 @@
#include "replication/reorderbuffer.h"
#include "replication/snapbuild.h"
#include "storage/standby.h"
+#include "commands/sequence.h"
typedef struct XLogRecordBuffer
{
@@ -74,10 +75,11 @@ static void DecodeAbort(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
bool two_phase);
static void DecodePrepare(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
xl_xact_parsed_prepare *parsed);
-
+static void DecodeSequence(LogicalDecodingContext *ctx, XLogRecordBuffer *buf);
/* common function to decode tuples */
static void DecodeXLogTuple(char *data, Size len, ReorderBufferTupleBuf *tup);
+static void DecodeSeqTuple(char *data, Size len, ReorderBufferTupleBuf *tuple);
/* helper functions for decoding transactions */
static inline bool FilterPrepare(LogicalDecodingContext *ctx,
@@ -158,6 +160,10 @@ LogicalDecodingProcessRecord(LogicalDecodingContext *ctx, XLogReaderState *recor
DecodeLogicalMsgOp(ctx, &buf);
break;
+ case RM_SEQ_ID:
+ DecodeSequence(ctx, &buf);
+ break;
+
/*
* Rmgrs irrelevant for logical decoding; they describe stuff not
* represented in logical decoding. Add new rmgrs in rmgrlist.h's
@@ -173,7 +179,6 @@ LogicalDecodingProcessRecord(LogicalDecodingContext *ctx, XLogReaderState *recor
case RM_HASH_ID:
case RM_GIN_ID:
case RM_GIST_ID:
- case RM_SEQ_ID:
case RM_SPGIST_ID:
case RM_BRIN_ID:
case RM_COMMIT_TS_ID:
@@ -1313,3 +1318,125 @@ DecodeTXNNeedSkip(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
(txn_dbid != InvalidOid && txn_dbid != ctx->slot->data.database) ||
ctx->fast_forward || FilterByOrigin(ctx, origin_id));
}
+
+/*
+ * Decode Sequence Tuple
+ */
+static void
+DecodeSeqTuple(char *data, Size len, ReorderBufferTupleBuf *tuple)
+{
+ int datalen = len - sizeof(xl_seq_rec) - SizeofHeapTupleHeader;
+
+ Assert(datalen >= 0);
+
+ tuple->tuple.t_len = datalen + SizeofHeapTupleHeader;;
+
+ ItemPointerSetInvalid(&tuple->tuple.t_self);
+
+ tuple->tuple.t_tableOid = InvalidOid;
+
+ memcpy(((char *) tuple->tuple.t_data),
+ data + sizeof(xl_seq_rec),
+ SizeofHeapTupleHeader);
+
+ memcpy(((char *) tuple->tuple.t_data) + SizeofHeapTupleHeader,
+ data + sizeof(xl_seq_rec) + SizeofHeapTupleHeader,
+ datalen);
+}
+
+/*
+ * Handle sequence decode
+ *
+ * Decoding sequences is a bit tricky, because while most sequence actions
+ * are non-transactional (not subject to rollback), some need to be handled
+ * as transactional.
+ *
+ * By default, a sequence increment is non-transactional - we must not queue
+ * it in a transaction as other changes, because the transaction might get
+ * rolled back and we'd discard the increment. The downstream would not be
+ * notified about the increment, which is wrong.
+ *
+ * On the other hand, the sequence may be created in a transaction. In this
+ * case we *should* queue the change as other changes in the transaction,
+ * because we don't want to send the increments for unknown sequence to the
+ * plugin - it might get confused about which sequence it's related to etc.
+ */
+static void
+DecodeSequence(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
+{
+ SnapBuild *builder = ctx->snapshot_builder;
+ ReorderBufferTupleBuf *tuplebuf;
+ RelFileNode target_node;
+ XLogReaderState *r = buf->record;
+ char *tupledata = NULL;
+ Size tuplelen;
+ Size datalen = 0;
+ TransactionId xid = XLogRecGetXid(r);
+ uint8 info = XLogRecGetInfo(buf->record) & ~XLR_INFO_MASK;
+ xl_seq_rec *xlrec;
+ Snapshot snapshot;
+ RepOriginId origin_id = XLogRecGetOrigin(r);
+ bool transactional;
+
+ /* only decode changes flagged with XLOG_SEQ_LOG */
+ if (info != XLOG_SEQ_LOG)
+ elog(ERROR, "unexpected RM_SEQ_ID record type: %u", info);
+
+ /*
+ * If we don't have snapshot or we are just fast-forwarding, there is no
+ * point in decoding messages.
+ */
+ if (SnapBuildCurrentState(builder) < SNAPBUILD_FULL_SNAPSHOT ||
+ ctx->fast_forward)
+ return;
+
+ /* only interested in our database */
+ XLogRecGetBlockTag(r, 0, &target_node, NULL, NULL);
+ if (target_node.dbNode != ctx->slot->data.database)
+ return;
+
+ /* output plugin doesn't look for this origin, no need to queue */
+ if (FilterByOrigin(ctx, XLogRecGetOrigin(r)))
+ return;
+
+ tupledata = XLogRecGetData(r);
+ datalen = XLogRecGetDataLen(r);
+ tuplelen = datalen - SizeOfHeapHeader - sizeof(xl_seq_rec);
+
+ /* extract the WAL record, with "created" flag */
+ xlrec = (xl_seq_rec *) XLogRecGetData(r);
+
+ /* XXX how could we have sequence change without data? */
+ if(!datalen || !tupledata)
+ return;
+
+ tuplebuf = ReorderBufferGetTupleBuf(ctx->reorder, tuplelen);
+ DecodeSeqTuple(tupledata, datalen, tuplebuf);
+
+ /*
+ * Should we handle the sequence increment as transactional or not?
+ *
+ * If the sequence was created in a still-running transaction, treat
+ * it as transactional and queue the increments. Otherwise it needs
+ * to be treated as non-transactional, in which case we send it to
+ * the plugin right away.
+ */
+ transactional = ReorderBufferSequenceIsTransactional(ctx->reorder,
+ target_node,
+ xlrec->created);
+
+ /* Skip the change if already processed (per the snapshot). */
+ if (transactional &&
+ !SnapBuildProcessChange(builder, xid, buf->origptr))
+ return;
+ else if (!transactional &&
+ (SnapBuildCurrentState(builder) != SNAPBUILD_CONSISTENT ||
+ SnapBuildXactNeedsSkip(builder, buf->origptr)))
+ return;
+
+ /* Queue the increment (or send immediately if not transactional). */
+ snapshot = SnapBuildGetOrBuildSnapshot(builder, xid);
+ ReorderBufferQueueSequence(ctx->reorder, xid, snapshot, buf->endptr,
+ origin_id, target_node, transactional,
+ xlrec->created, tuplebuf);
+}
diff --git a/src/backend/replication/logical/logical.c b/src/backend/replication/logical/logical.c
index b7d9521576..811d9912a1 100644
--- a/src/backend/replication/logical/logical.c
+++ b/src/backend/replication/logical/logical.c
@@ -73,6 +73,10 @@ static void truncate_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
static void message_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
XLogRecPtr message_lsn, bool transactional,
const char *prefix, Size message_size, const char *message);
+static void sequence_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called);
/* streaming callbacks */
static void stream_start_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
@@ -90,6 +94,10 @@ static void stream_change_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn
static void stream_message_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
XLogRecPtr message_lsn, bool transactional,
const char *prefix, Size message_size, const char *message);
+static void stream_sequence_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called);
static void stream_truncate_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
int nrelations, Relation relations[], ReorderBufferChange *change);
@@ -217,6 +225,7 @@ StartupDecodingContext(List *output_plugin_options,
ctx->reorder->apply_truncate = truncate_cb_wrapper;
ctx->reorder->commit = commit_cb_wrapper;
ctx->reorder->message = message_cb_wrapper;
+ ctx->reorder->sequence = sequence_cb_wrapper;
/*
* To support streaming, we require start/stop/abort/commit/change
@@ -233,6 +242,7 @@ StartupDecodingContext(List *output_plugin_options,
(ctx->callbacks.stream_commit_cb != NULL) ||
(ctx->callbacks.stream_change_cb != NULL) ||
(ctx->callbacks.stream_message_cb != NULL) ||
+ (ctx->callbacks.stream_sequence_cb != NULL) ||
(ctx->callbacks.stream_truncate_cb != NULL);
/*
@@ -250,6 +260,7 @@ StartupDecodingContext(List *output_plugin_options,
ctx->reorder->stream_commit = stream_commit_cb_wrapper;
ctx->reorder->stream_change = stream_change_cb_wrapper;
ctx->reorder->stream_message = stream_message_cb_wrapper;
+ ctx->reorder->stream_sequence = stream_sequence_cb_wrapper;
ctx->reorder->stream_truncate = stream_truncate_cb_wrapper;
@@ -1202,6 +1213,43 @@ message_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
error_context_stack = errcallback.previous;
}
+static void
+sequence_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ LogicalDecodingContext *ctx = cache->private_data;
+ LogicalErrorCallbackState state;
+ ErrorContextCallback errcallback;
+
+ Assert(!ctx->fast_forward);
+
+ if (ctx->callbacks.sequence_cb == NULL)
+ return;
+
+ /* Push callback + info on the error context stack */
+ state.ctx = ctx;
+ state.callback_name = "sequence";
+ state.report_location = sequence_lsn;
+ errcallback.callback = output_plugin_error_callback;
+ errcallback.arg = (void *) &state;
+ errcallback.previous = error_context_stack;
+ error_context_stack = &errcallback;
+
+ /* set output state */
+ ctx->accept_writes = true;
+ ctx->write_xid = txn != NULL ? txn->xid : InvalidTransactionId;
+ ctx->write_location = sequence_lsn;
+
+ /* do the actual work: call callback */
+ ctx->callbacks.sequence_cb(ctx, txn, sequence_lsn, rel, transactional,
+ created, last_value, log_cnt, is_called);
+
+ /* Pop the error context stack */
+ error_context_stack = errcallback.previous;
+}
+
static void
stream_start_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
XLogRecPtr first_lsn)
@@ -1507,6 +1555,47 @@ stream_message_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
error_context_stack = errcallback.previous;
}
+static void
+stream_sequence_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ LogicalDecodingContext *ctx = cache->private_data;
+ LogicalErrorCallbackState state;
+ ErrorContextCallback errcallback;
+
+ Assert(!ctx->fast_forward);
+
+ /* We're only supposed to call this when streaming is supported. */
+ Assert(ctx->streaming);
+
+ /* this callback is optional */
+ if (ctx->callbacks.stream_sequence_cb == NULL)
+ return;
+
+ /* Push callback + info on the error context stack */
+ state.ctx = ctx;
+ state.callback_name = "stream_sequence";
+ state.report_location = sequence_lsn;
+ errcallback.callback = output_plugin_error_callback;
+ errcallback.arg = (void *) &state;
+ errcallback.previous = error_context_stack;
+ error_context_stack = &errcallback;
+
+ /* set output state */
+ ctx->accept_writes = true;
+ ctx->write_xid = txn != NULL ? txn->xid : InvalidTransactionId;
+ ctx->write_location = sequence_lsn;
+
+ /* do the actual work: call callback */
+ ctx->callbacks.sequence_cb(ctx, txn, sequence_lsn, rel, transactional,
+ created, last_value, log_cnt, is_called);
+
+ /* Pop the error context stack */
+ error_context_stack = errcallback.previous;
+}
+
static void
stream_truncate_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
int nrelations, Relation relations[],
diff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c
index 46e66608cf..434926459f 100644
--- a/src/backend/replication/logical/reorderbuffer.c
+++ b/src/backend/replication/logical/reorderbuffer.c
@@ -77,6 +77,35 @@
* a bit more memory to the oldest subtransactions, because it's likely
* they are the source for the next sequence of changes.
*
+ * When decoding sequences, we differentiate between a sequences created
+ * in a (running) transaction, and sequences created in other (already
+ * committed) transactions. Changes for sequences created in the same
+ * top-level transaction are treated as "transactional" i.e. just like
+ * any other change from that transaction (and discarded in case of a
+ * rollback). Changes for sequences created earlier are treated as not
+ * transactional - are processed immediately, as if performed outside
+ * any transaction (and thus not rolled back).
+ *
+ * This mixed behavior is necessary - sequences are non-transactional
+ * (e.g. ROLLBACK does not undo the sequence increments). But for new
+ * sequences, we need to handle them in a transactional way, because if
+ * we ever get some DDL support, the sequence won't exist until the
+ * transaction gets applied. So we need to ensure the increments don't
+ * happen until the sequence gets created.
+ *
+ * To differentiate which sequences are "old" and which were created
+ * in a still-running transaction, we track sequences created in running
+ * transactions in a hash table. Each sequence is identified by it's
+ * relfilenode, and at transaction commit we discard all sequences
+ * created in that particular transaction. For each sequence we track
+ * the XID of the (sub)transaction that created it, and we cleanup the
+ * sequences for each subtransaction when it completes (commit/rollback).
+ *
+ * We don't use the XID to check if it's the same top-level transaction.
+ * It's enough to know it was created in an in-progress transaction,
+ * and we know it must be the current one because otherwise it wouldn't
+ * see the sequence object.
+ *
* -------------------------------------------------------------------------
*/
#include "postgres.h"
@@ -116,6 +145,13 @@ typedef struct ReorderBufferTXNByIdEnt
ReorderBufferTXN *txn;
} ReorderBufferTXNByIdEnt;
+/* entry for hash table we use to track sequences created in running xacts */
+typedef struct ReorderBufferSequenceEnt
+{
+ RelFileNode rnode;
+ TransactionId xid;
+} ReorderBufferSequenceEnt;
+
/* data structures for (relfilenode, ctid) => (cmin, cmax) mapping */
typedef struct ReorderBufferTupleCidKey
{
@@ -339,6 +375,14 @@ ReorderBufferAllocate(void)
buffer->by_txn = hash_create("ReorderBufferByXid", 1000, &hash_ctl,
HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+ /* hash table of sequences, mapping relfilenode to XID of transaction */
+ hash_ctl.keysize = sizeof(RelFileNode);
+ hash_ctl.entrysize = sizeof(ReorderBufferSequenceEnt);
+ hash_ctl.hcxt = buffer->context;
+
+ buffer->sequences = hash_create("ReorderBufferSequenceHash", 1000, &hash_ctl,
+ HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+
buffer->by_txn_last_xid = InvalidTransactionId;
buffer->by_txn_last_txn = NULL;
@@ -525,6 +569,13 @@ ReorderBufferReturnChange(ReorderBuffer *rb, ReorderBufferChange *change,
change->data.truncate.relids = NULL;
}
break;
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ if (change->data.sequence.tuple)
+ {
+ ReorderBufferReturnTupleBuf(rb, change->data.sequence.tuple);
+ change->data.sequence.tuple = NULL;
+ }
+ break;
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT:
case REORDER_BUFFER_CHANGE_INTERNAL_COMMAND_ID:
@@ -859,6 +910,242 @@ ReorderBufferQueueMessage(ReorderBuffer *rb, TransactionId xid,
}
}
+/*
+ * Treat the sequence increment as transactional?
+ *
+ * The hash table tracks all sequences created in in-progress transactions,
+ * so we simply do a lookup (the sequence is identified by relfilende). If
+ * we find a match, the increment should be handled as transactional.
+ */
+bool
+ReorderBufferSequenceIsTransactional(ReorderBuffer *rb,
+ RelFileNode rnode, bool created)
+{
+ bool found = false;
+
+ if (created)
+ return true;
+
+ hash_search(rb->sequences,
+ (void *) &rnode,
+ HASH_FIND,
+ &found);
+
+ return found;
+}
+
+/*
+ * Cleanup sequences created in in-progress transactions.
+ *
+ * There's no way to search by XID, so we simply do a seqscan of all
+ * the entries in the hash table. Hopefully there are only a couple
+ * entries in most cases - people generally don't create many new
+ * sequences over and over.
+ */
+static void
+ReorderBufferSequenceCleanup(ReorderBuffer *rb, TransactionId xid)
+{
+ HASH_SEQ_STATUS scan_status;
+ ReorderBufferSequenceEnt *ent;
+
+ hash_seq_init(&scan_status, rb->sequences);
+ while ((ent = (ReorderBufferSequenceEnt *) hash_seq_search(&scan_status)) != NULL)
+ {
+ /* skip sequences not from this transaction */
+ if (ent->xid != xid)
+ continue;
+
+ (void) hash_search(rb->sequences,
+ (void *) &(ent->rnode),
+ HASH_REMOVE, NULL);
+ }
+}
+
+/*
+ * A transactional sequence increment is queued to be processed upon commit
+ * and a non-transactional increment gets processed immediately.
+ *
+ * A sequence update may be both transactional and non-transactional. When
+ * created in a running transaction, treat it as transactional and queue
+ * the change in it. Otherwise treat it as non-transactional, so that we
+ * don't forget the increment in case of a rollback.
+ */
+void
+ReorderBufferQueueSequence(ReorderBuffer *rb, TransactionId xid,
+ Snapshot snapshot, XLogRecPtr lsn, RepOriginId origin_id,
+ RelFileNode rnode, bool transactional, bool created,
+ ReorderBufferTupleBuf *tuplebuf)
+{
+ /*
+ * Change needs to be handled as transactional, because the sequence was
+ * created in a transaction that is still running. In that case all the
+ * changes need to be queued in that transaction, we must not send them
+ * to the downstream until the transaction commits.
+ *
+ * There's a bit of a trouble with subtransactions - we can't queue it
+ * into the subxact, because it might be rolled back and we'd lose the
+ * increment. We need to queue it into the same (sub)xact that created
+ * the sequence, which is why we track the XID in the hash table.
+ */
+ if (transactional)
+ {
+ MemoryContext oldcontext;
+ ReorderBufferChange *change;
+
+ /* lookup sequence by relfilenode */
+ ReorderBufferSequenceEnt *ent;
+ bool found;
+
+ /* transactional changes require a transaction */
+ Assert(xid != InvalidTransactionId);
+
+ /* search the lookup table (we ignore the return value, found is enough) */
+ ent = hash_search(rb->sequences,
+ (void *) &rnode,
+ created ? HASH_ENTER : HASH_FIND,
+ &found);
+
+ /*
+ * If this is the "create" increment, we must not have found any
+ * pre-existing entry in the hash table (i.e. there must not be
+ * any conflicting sequence).
+ */
+ Assert(!(created && found));
+
+ /* But we must have either created or found an existing entry. */
+ Assert(created || found);
+
+ /*
+ * When creating the sequence, remember the XID of the transaction
+ * that created id.
+ */
+ if (created)
+ ent->xid = xid;
+
+ /* XXX Maybe check that we're still in the same top-level xact? */
+
+ /* OK, allocate and queue the change */
+ oldcontext = MemoryContextSwitchTo(rb->context);
+
+ change = ReorderBufferGetChange(rb);
+
+ change->action = REORDER_BUFFER_CHANGE_SEQUENCE;
+ change->origin_id = origin_id;
+
+ memcpy(&change->data.sequence.relnode, &rnode, sizeof(RelFileNode));
+
+ change->data.sequence.created = created;
+ change->data.sequence.tuple = tuplebuf;
+
+ /* add it to the same subxact that created the sequence */
+ ReorderBufferQueueChange(rb, ent->xid, lsn, change, false);
+
+ MemoryContextSwitchTo(oldcontext);
+ }
+ else
+ {
+ /*
+ * This increment is for a sequence that was not created in any
+ * running transaction, so we treat it as non-transactional and
+ * just send it to the output plugin directly.
+ */
+ ReorderBufferTXN *txn = NULL;
+ volatile Snapshot snapshot_now = snapshot;
+ bool using_subtxn;
+
+#ifdef USE_ASSERT_CHECKING
+ /* All "creates" have to be handled as transactional. */
+ Assert(!created);
+
+ /* Make sure the sequence is not in the hash table. */
+ {
+ bool found;
+ hash_search(rb->sequences,
+ (void *) &rnode,
+ HASH_FIND, &found);
+ Assert(!found);
+ }
+#endif
+
+ if (xid != InvalidTransactionId)
+ txn = ReorderBufferTXNByXid(rb, xid, true, NULL, lsn, true);
+
+ /* setup snapshot to allow catalog access */
+ SetupHistoricSnapshot(snapshot_now, NULL);
+
+ /*
+ * Decoding needs access to syscaches et al., which in turn use
+ * heavyweight locks and such. Thus we need to have enough state around to
+ * keep track of those. The easiest way is to simply use a transaction
+ * internally. That also allows us to easily enforce that nothing writes
+ * to the database by checking for xid assignments.
+ *
+ * When we're called via the SQL SRF there's already a transaction
+ * started, so start an explicit subtransaction there.
+ */
+ using_subtxn = IsTransactionOrTransactionBlock();
+
+ PG_TRY();
+ {
+ Relation relation;
+ HeapTuple tuple;
+ bool isnull;
+ int64 last_value, log_cnt;
+ bool is_called;
+ Oid reloid;
+
+ if (using_subtxn)
+ BeginInternalSubTransaction("sequence");
+ else
+ StartTransactionCommand();
+
+ reloid = RelidByRelfilenode(rnode.spcNode, rnode.relNode);
+
+ if (reloid == InvalidOid)
+ elog(ERROR, "could not map filenode \"%s\" to relation OID",
+ relpathperm(rnode,
+ MAIN_FORKNUM));
+
+ relation = RelationIdGetRelation(reloid);
+ tuple = &tuplebuf->tuple;
+
+ /*
+ * Extract the internal sequence values, describing the state.
+ *
+ * XXX Seems a bit strange to access it directly. Maybe there's
+ * a better / more correct way?
+ */
+ last_value = heap_getattr(tuple, 1, RelationGetDescr(relation), &isnull);
+ log_cnt = heap_getattr(tuple, 2, RelationGetDescr(relation), &isnull);
+ is_called = heap_getattr(tuple, 3, RelationGetDescr(relation), &isnull);
+
+ rb->sequence(rb, txn, lsn, relation, transactional, created,
+ last_value, log_cnt, is_called);
+
+ RelationClose(relation);
+
+ TeardownHistoricSnapshot(false);
+
+ AbortCurrentTransaction();
+
+ if (using_subtxn)
+ RollbackAndReleaseCurrentSubTransaction();
+ }
+ PG_CATCH();
+ {
+ TeardownHistoricSnapshot(true);
+
+ AbortCurrentTransaction();
+
+ if (using_subtxn)
+ RollbackAndReleaseCurrentSubTransaction();
+
+ PG_RE_THROW();
+ }
+ PG_END_TRY();
+ }
+}
+
/*
* AssertTXNLsnOrder
* Verify LSN ordering of transaction lists in the reorderbuffer
@@ -1535,6 +1822,9 @@ ReorderBufferCleanupTXN(ReorderBuffer *rb, ReorderBufferTXN *txn)
&found);
Assert(found);
+ /* Remove sequences created in this transaction (if any). */
+ ReorderBufferSequenceCleanup(rb, txn->xid);
+
/* remove entries spilled to disk */
if (rbtxn_is_serialized(txn))
ReorderBufferRestoreCleanup(rb, txn);
@@ -1950,6 +2240,42 @@ ReorderBufferApplyMessage(ReorderBuffer *rb, ReorderBufferTXN *txn,
change->data.msg.message);
}
+/*
+ * Helper function for ReorderBufferProcessTXN for applying sequences.
+ */
+static inline void
+ReorderBufferApplySequence(ReorderBuffer *rb, ReorderBufferTXN *txn,
+ Relation relation, ReorderBufferChange *change,
+ bool streaming)
+{
+ HeapTuple tuple;
+ bool isnull;
+ int64 last_value, log_cnt;
+ bool is_called;
+
+ tuple = &change->data.sequence.tuple->tuple;
+
+ /*
+ * Extract the internal sequence values, describing the state.
+ *
+ * XXX Seems a bit strange to access it directly. Maybe there's
+ * a better / more correct way?
+ */
+ last_value = heap_getattr(tuple, 1, RelationGetDescr(relation), &isnull);
+ log_cnt = heap_getattr(tuple, 2, RelationGetDescr(relation), &isnull);
+ is_called = heap_getattr(tuple, 3, RelationGetDescr(relation), &isnull);
+
+ /* Only ever called from ReorderBufferApplySequence, so transational. */
+ if (streaming)
+ rb->stream_sequence(rb, txn, change->lsn, relation, true,
+ change->data.sequence.created,
+ last_value, log_cnt, is_called);
+ else
+ rb->sequence(rb, txn, change->lsn, relation, true,
+ change->data.sequence.created,
+ last_value, log_cnt, is_called);
+}
+
/*
* Function to store the command id and snapshot at the end of the current
* stream so that we can reuse the same while sending the next stream.
@@ -2392,6 +2718,31 @@ ReorderBufferProcessTXN(ReorderBuffer *rb, ReorderBufferTXN *txn,
case REORDER_BUFFER_CHANGE_INTERNAL_TUPLECID:
elog(ERROR, "tuplecid value in changequeue");
break;
+
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ Assert(snapshot_now);
+
+ reloid = RelidByRelfilenode(change->data.sequence.relnode.spcNode,
+ change->data.sequence.relnode.relNode);
+
+ if (reloid == InvalidOid)
+ elog(ERROR, "could not map filenode \"%s\" to relation OID",
+ relpathperm(change->data.sequence.relnode,
+ MAIN_FORKNUM));
+
+ relation = RelationIdGetRelation(reloid);
+
+ if (!RelationIsValid(relation))
+ elog(ERROR, "could not open relation with OID %u (for filenode \"%s\")",
+ reloid,
+ relpathperm(change->data.sequence.relnode,
+ MAIN_FORKNUM));
+
+ if (RelationIsLogicallyLogged(relation))
+ ReorderBufferApplySequence(rb, txn, relation, change, streaming);
+
+ RelationClose(relation);
+ break;
}
}
@@ -3776,6 +4127,39 @@ ReorderBufferSerializeChange(ReorderBuffer *rb, ReorderBufferTXN *txn,
memcpy(data, change->data.truncate.relids, size);
data += size;
+ break;
+ }
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ {
+ char *data;
+ ReorderBufferTupleBuf *tup;
+ Size len = 0;
+
+ tup = change->data.sequence.tuple;
+
+ if (tup)
+ {
+ sz += sizeof(HeapTupleData);
+ len = tup->tuple.t_len;
+ sz += len;
+ }
+
+ /* make sure we have enough space */
+ ReorderBufferSerializeReserve(rb, sz);
+
+ data = ((char *) rb->outbuf) + sizeof(ReorderBufferDiskChange);
+ /* might have been reallocated above */
+ ondisk = (ReorderBufferDiskChange *) rb->outbuf;
+
+ if (len)
+ {
+ memcpy(data, &tup->tuple, sizeof(HeapTupleData));
+ data += sizeof(HeapTupleData);
+
+ memcpy(data, tup->tuple.t_data, len);
+ data += len;
+ }
+
break;
}
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
@@ -4040,6 +4424,22 @@ ReorderBufferChangeSize(ReorderBufferChange *change)
{
sz += sizeof(Oid) * change->data.truncate.nrelids;
+ break;
+ }
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ {
+ ReorderBufferTupleBuf *tup;
+ Size len = 0;
+
+ tup = change->data.sequence.tuple;
+
+ if (tup)
+ {
+ sz += sizeof(HeapTupleData);
+ len = tup->tuple.t_len;
+ sz += len;
+ }
+
break;
}
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
@@ -4341,6 +4741,30 @@ ReorderBufferRestoreChange(ReorderBuffer *rb, ReorderBufferTXN *txn,
break;
}
+
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ if (change->data.sequence.tuple)
+ {
+ uint32 tuplelen = ((HeapTuple) data)->t_len;
+
+ change->data.sequence.tuple =
+ ReorderBufferGetTupleBuf(rb, tuplelen - SizeofHeapTupleHeader);
+
+ /* restore ->tuple */
+ memcpy(&change->data.sequence.tuple->tuple, data,
+ sizeof(HeapTupleData));
+ data += sizeof(HeapTupleData);
+
+ /* reset t_data pointer into the new tuplebuf */
+ change->data.sequence.tuple->tuple.t_data =
+ ReorderBufferTupleBufData(change->data.sequence.tuple);
+
+ /* restore tuple data itself */
+ memcpy(change->data.sequence.tuple->tuple.t_data, data, tuplelen);
+ data += tuplelen;
+ }
+ break;
+
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT:
case REORDER_BUFFER_CHANGE_INTERNAL_COMMAND_ID:
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 40544dd4c7..5919fb90ee 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -48,6 +48,7 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
typedef struct xl_seq_rec
{
RelFileNode node;
+ bool created; /* is this a CREATE SEQUENCE */
/* SEQUENCE TUPLE DATA FOLLOWS AT THE END */
} xl_seq_rec;
diff --git a/src/include/replication/output_plugin.h b/src/include/replication/output_plugin.h
index 810495ed0e..57bc13f11c 100644
--- a/src/include/replication/output_plugin.h
+++ b/src/include/replication/output_plugin.h
@@ -88,6 +88,19 @@ typedef void (*LogicalDecodeMessageCB) (struct LogicalDecodingContext *ctx,
Size message_size,
const char *message);
+/*
+ * Called for the generic logical decoding sequences.
+ */
+typedef void (*LogicalDecodeSequenceCB) (struct LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional,
+ bool created,
+ int64 last_value,
+ int64 log_cnt,
+ bool is_called);
+
/*
* Filter changes by origin.
*/
@@ -199,6 +212,20 @@ typedef void (*LogicalDecodeStreamMessageCB) (struct LogicalDecodingContext *ctx
Size message_size,
const char *message);
+/*
+ * Called for the streaming generic logical decoding sequences from in-progress
+ * transactions.
+ */
+typedef void (*LogicalDecodeStreamSequenceCB) (struct LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional,
+ bool created,
+ int64 last_value,
+ int64 log_cnt,
+ bool is_called);
+
/*
* Callback for streaming truncates from in-progress transactions.
*/
@@ -219,6 +246,7 @@ typedef struct OutputPluginCallbacks
LogicalDecodeTruncateCB truncate_cb;
LogicalDecodeCommitCB commit_cb;
LogicalDecodeMessageCB message_cb;
+ LogicalDecodeSequenceCB sequence_cb;
LogicalDecodeFilterByOriginCB filter_by_origin_cb;
LogicalDecodeShutdownCB shutdown_cb;
@@ -237,6 +265,7 @@ typedef struct OutputPluginCallbacks
LogicalDecodeStreamCommitCB stream_commit_cb;
LogicalDecodeStreamChangeCB stream_change_cb;
LogicalDecodeStreamMessageCB stream_message_cb;
+ LogicalDecodeStreamSequenceCB stream_sequence_cb;
LogicalDecodeStreamTruncateCB stream_truncate_cb;
} OutputPluginCallbacks;
diff --git a/src/include/replication/reorderbuffer.h b/src/include/replication/reorderbuffer.h
index 5b40ff75f7..63e6ed037b 100644
--- a/src/include/replication/reorderbuffer.h
+++ b/src/include/replication/reorderbuffer.h
@@ -64,7 +64,8 @@ enum ReorderBufferChangeType
REORDER_BUFFER_CHANGE_INTERNAL_SPEC_INSERT,
REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM,
REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT,
- REORDER_BUFFER_CHANGE_TRUNCATE
+ REORDER_BUFFER_CHANGE_TRUNCATE,
+ REORDER_BUFFER_CHANGE_SEQUENCE
};
/* forward declaration */
@@ -158,6 +159,14 @@ typedef struct ReorderBufferChange
uint32 ninvalidations; /* Number of messages */
SharedInvalidationMessage *invalidations; /* invalidation message */
} inval;
+
+ /* Context data for Sequence changes */
+ struct
+ {
+ RelFileNode relnode;
+ bool created;
+ ReorderBufferTupleBuf *tuple;
+ } sequence;
} data;
/*
@@ -430,6 +439,15 @@ typedef void (*ReorderBufferMessageCB) (ReorderBuffer *rb,
const char *prefix, Size sz,
const char *message);
+/* sequence callback signature */
+typedef void (*ReorderBufferSequenceCB) (ReorderBuffer *rb,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional, bool created,
+ int64 last_value, int64 log_cnt,
+ bool is_called);
+
/* begin prepare callback signature */
typedef void (*ReorderBufferBeginPrepareCB) (ReorderBuffer *rb,
ReorderBufferTXN *txn);
@@ -496,6 +514,15 @@ typedef void (*ReorderBufferStreamMessageCB) (
const char *prefix, Size sz,
const char *message);
+/* stream sequence callback signature */
+typedef void (*ReorderBufferStreamSequenceCB) (ReorderBuffer *rb,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional, bool created,
+ int64 last_value, int64 log_cnt,
+ bool is_called);
+
/* stream truncate callback signature */
typedef void (*ReorderBufferStreamTruncateCB) (
ReorderBuffer *rb,
@@ -511,6 +538,12 @@ struct ReorderBuffer
*/
HTAB *by_txn;
+ /*
+ * relfilenode => XID lookup table for sequences created in a transaction
+ * (also includes altered sequences, which assigns new relfilenode)
+ */
+ HTAB *sequences;
+
/*
* Transactions that could be a toplevel xact, ordered by LSN of the first
* record bearing that xid.
@@ -541,6 +574,7 @@ struct ReorderBuffer
ReorderBufferApplyTruncateCB apply_truncate;
ReorderBufferCommitCB commit;
ReorderBufferMessageCB message;
+ ReorderBufferSequenceCB sequence;
/*
* Callbacks to be called when streaming a transaction at prepare time.
@@ -560,6 +594,7 @@ struct ReorderBuffer
ReorderBufferStreamCommitCB stream_commit;
ReorderBufferStreamChangeCB stream_change;
ReorderBufferStreamMessageCB stream_message;
+ ReorderBufferStreamSequenceCB stream_sequence;
ReorderBufferStreamTruncateCB stream_truncate;
/*
@@ -635,6 +670,10 @@ void ReorderBufferQueueChange(ReorderBuffer *, TransactionId,
void ReorderBufferQueueMessage(ReorderBuffer *, TransactionId, Snapshot snapshot, XLogRecPtr lsn,
bool transactional, const char *prefix,
Size message_size, const char *message);
+void ReorderBufferQueueSequence(ReorderBuffer *rb, TransactionId xid,
+ Snapshot snapshot, XLogRecPtr lsn, RepOriginId origin_id,
+ RelFileNode rnode, bool transactional, bool created,
+ ReorderBufferTupleBuf *tuplebuf);
void ReorderBufferCommit(ReorderBuffer *, TransactionId,
XLogRecPtr commit_lsn, XLogRecPtr end_lsn,
TimestampTz commit_time, RepOriginId origin_id, XLogRecPtr origin_lsn);
@@ -682,4 +721,7 @@ void ReorderBufferSetRestartPoint(ReorderBuffer *, XLogRecPtr ptr);
void StartupReorderBuffer(void);
+bool ReorderBufferSequenceIsTransactional(ReorderBuffer *rb,
+ RelFileNode rnode, bool created);
+
#endif
--
2.31.1
Hi,
Here's a slightly improved version of the patch, fixing a couple issues
I failed to notice. It also addresses a couple of the issues described
in the last message, although mostly to show what would need to be done.
1) Handle sequences dropped in the xact by calling try_relation_open,
and just doing nothing if the sequence is not found. Otherwise we'd get
a failure in reorderbuffer when decoding the change. (A simplified
sketch of this follows the list below.)
2) Fixed nextval_internal() to log the correct last_value (the one we
write into WAL).
3) Reread the sequence state in AtEOXact_Sequences, to deal with the
ordering issue described before. This makes (2) somewhat pointless,
because we just read whatever is on disk at that point. But having both
makes it easier to experiment / see what'd happen.
4) Log the sequence state in DefineSequence() - without this we'd not
have the initial sequence state in the WAL, because only nextval/setval
etc. do the logging. The old approach (decoding the sequence tuple) does
not have this issue.
Change (3) alters the behavior in a somewhat strange way. Consider this
case with two concurrent transactions:
T1: BEGIN;
T2: BEGIN;
T1: SELECT nextval('s') FROM generate_series(1,100) s(i);
T2: SELECT nextval('s') FROM generate_series(1,100) s(i);
T1: COMMIT;
T2: COMMIT;
The result is that both transactions have used the same sequence, and so
will re-read the state from disk. But at that point the state is exactly
the same, so we'll log the same thing twice.
There's a much deeper issue, though. The current patch only logs the
sequence if the session generated WAL when incrementing the sequence
(which happens every 32 values). But other sessions may already use
values from this range, so consider for example this:
T1: BEGIN;
T1: SELECT nextval('s') FROM generate_series(1,100) s(i);
T2: BEGIN;
T2: SELECT nextval('s');
T2: COMMIT;
T1: ROLLBACK;
Unfortunately this means T2 has already used a value, but the increment
may not be logged at that time (or ever). This seems like a fatal issue,
because it means we need to log *all* sequences the transaction touches,
not just those that wrote the increment to WAL. That might still work
for large transactions consuming many sequence values, but it's pretty
inefficient for small OLTP transactions that only need one or two values
from the sequence.
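
To make the "every 32 values" behavior more concrete, here's a toy model
(plain C, not PostgreSQL code - SEQ_LOG_VALS matches the constant in
sequence.c, but the log_cnt bookkeeping is simplified): nextval() only
writes WAL once the previously pre-logged range is exhausted, and the
record it writes points SEQ_LOG_VALS values ahead, so most calls leave
no WAL trace at all.

    /*
     * Toy model of the WAL-ahead behavior (not PostgreSQL code; the
     * bookkeeping is simplified). WAL is written only when the
     * pre-logged range is used up, and covers SEQ_LOG_VALS more values.
     */
    #include <stdio.h>

    #define SEQ_LOG_VALS 32

    int
    main(void)
    {
        long    last_value = 0;
        long    log_cnt = 0;    /* values still covered by the last WAL record */

        for (int i = 0; i < 40; i++)
        {
            last_value++;

            if (log_cnt == 0)
            {
                /* range exhausted: WAL-log a value SEQ_LOG_VALS ahead */
                printf("nextval = %ld -> WAL record for %ld\n",
                       last_value, last_value + SEQ_LOG_VALS);
                log_cnt = SEQ_LOG_VALS;
            }
            else
                printf("nextval = %ld -> no WAL\n", last_value);

            log_cnt--;
        }

        return 0;
    }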
So I think just decoding the sequence tuples is a better solution - for
large transactions (consuming many values from the sequence) it may be
more expensive (i.e. send more records to the replica). But I doubt that
matters too much - it's likely negligible compared to other data for
large transactions.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Attachments:
0004-Add-support-for-decoding-sequences-to-built-20211121.patch (text/x-patch)
From 90d8dde242b9b88573fd09bb7ebc6da66876b674 Mon Sep 17 00:00:00 2001
From: Tomas Vondra <tomas.vondra@postgresql.org>
Date: Fri, 19 Nov 2021 01:12:23 +0100
Subject: [PATCH 4/4] Add support for decoding sequences to built-in
replication
---
doc/src/sgml/catalogs.sgml | 71 ++++
doc/src/sgml/ref/alter_publication.sgml | 24 +-
doc/src/sgml/ref/alter_subscription.sgml | 4 +-
src/backend/catalog/pg_publication.c | 149 ++++++++-
src/backend/catalog/system_views.sql | 10 +
src/backend/commands/publicationcmds.c | 350 +++++++++++++++++++-
src/backend/commands/sequence.c | 79 +++++
src/backend/commands/subscriptioncmds.c | 272 +++++++++++++++
src/backend/executor/execReplication.c | 2 +-
src/backend/nodes/copyfuncs.c | 1 +
src/backend/nodes/equalfuncs.c | 1 +
src/backend/parser/gram.y | 32 ++
src/backend/replication/logical/proto.c | 54 +++
src/backend/replication/logical/tablesync.c | 118 ++++++-
src/backend/replication/logical/worker.c | 60 ++++
src/backend/replication/pgoutput/pgoutput.c | 86 ++++-
src/backend/utils/cache/relcache.c | 4 +-
src/bin/psql/tab-complete.c | 6 +-
src/include/catalog/pg_proc.dat | 5 +
src/include/catalog/pg_publication.h | 14 +
src/include/commands/sequence.h | 1 +
src/include/nodes/parsenodes.h | 6 +
src/include/replication/logicalproto.h | 20 ++
src/include/replication/pgoutput.h | 1 +
src/test/regress/expected/rules.out | 8 +
src/test/subscription/t/023_sequences.pl | 196 +++++++++++
26 files changed, 1542 insertions(+), 32 deletions(-)
create mode 100644 src/test/subscription/t/023_sequences.pl
diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index c1d11be73f..37b9b52b29 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -9498,6 +9498,11 @@ SCRAM-SHA-256$<replaceable><iteration count></replaceable>:<replaceable>&l
<entry>prepared transactions</entry>
</row>
+ <row>
+ <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+ <entry>publications and their associated sequences</entry>
+ </row>
+
<row>
<entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
<entry>publications and their associated tables</entry>
@@ -11333,6 +11338,72 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
</sect1>
+ <sect1 id="view-pg-publication-sequences">
+ <title><structname>pg_publication_sequences</structname></title>
+
+ <indexterm zone="view-pg-publication-sequences">
+ <primary>pg_publication_sequences</primary>
+ </indexterm>
+
+ <para>
+ The view <structname>pg_publication_sequences</structname> provides
+ information about the mapping between publications and the sequences they
+ contain. Unlike the underlying catalog
+ <link linkend="catalog-pg-publication-rel"><structname>pg_publication_rel</structname></link>,
+ this view expands
+ publications defined as <literal>FOR ALL SEQUENCES</literal>, so for such
+ publications there will be a row for each eligible sequence.
+ </para>
+
+ <table>
+ <title><structname>pg_publication_sequences</structname> Columns</title>
+ <tgroup cols="1">
+ <thead>
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ Column Type
+ </para>
+ <para>
+ Description
+ </para></entry>
+ </row>
+ </thead>
+
+ <tbody>
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>pubname</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+ </para>
+ <para>
+ Name of publication
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>schemaname</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+ </para>
+ <para>
+ Name of schema containing sequence
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>sequencename</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+ </para>
+ <para>
+ Name of sequence
+ </para></entry>
+ </row>
+ </tbody>
+ </tgroup>
+ </table>
+ </sect1>
+
<sect1 id="view-pg-publication-tables">
<title><structname>pg_publication_tables</structname></title>
diff --git a/doc/src/sgml/ref/alter_publication.sgml b/doc/src/sgml/ref/alter_publication.sgml
index bb4ef5e5e2..4d166ad3f9 100644
--- a/doc/src/sgml/ref/alter_publication.sgml
+++ b/doc/src/sgml/ref/alter_publication.sgml
@@ -31,7 +31,9 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
<phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [, ... ]
+ SEQUENCE <replaceable class="parameter">sequence_name</replaceable> [ * ] [, ... ]
ALL TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+ ALL SEQUENCES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
</synopsis>
</refsynopsisdiv>
@@ -56,7 +58,18 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
</para>
<para>
- The fourth variant of this command listed in the synopsis can change
+ The next three variants change which sequences are part of the publication.
+ The <literal>SET SEQUENCE</literal> clause will replace the list of sequences
+ in the publication with the specified one. The <literal>ADD SEQUENCE</literal>
+ and <literal>DROP SEQUENCE</literal> clauses will add and remove one or more
+ sequences from the publication. Note that adding sequences to a publication
+ that is already subscribed to will require an <literal>ALTER SUBSCRIPTION
+ ... REFRESH PUBLICATION</literal> action on the subscribing side in order
+ to become effective.
+ </para>
+
+ <para>
+ The seventh variant of this command listed in the synopsis can change
all of the publication properties specified in
<xref linkend="sql-createpublication"/>. Properties not mentioned in the
command retain their previous settings.
@@ -123,6 +136,15 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><replaceable class="parameter">sequence_name</replaceable></term>
+ <listitem>
+ <para>
+ Name of an existing sequence.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><literal>SET ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
<listitem>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 0b027cc346..8f28cf03f4 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -147,7 +147,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
<listitem>
<para>
Fetch missing table information from publisher. This will start
- replication of tables that were added to the subscribed-to publications
+ replication of tables and sequences that were added to the subscribed-to publications
since <command>CREATE SUBSCRIPTION</command> or
the last invocation of <command>REFRESH PUBLICATION</command>.
</para>
@@ -164,7 +164,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
Specifies whether to copy pre-existing data in the publications
that are being subscribed to when the replication starts.
The default is <literal>true</literal>. (Previously-subscribed
- tables are not copied.)
+ tables and sequences are not copied.)
</para>
</listitem>
</varlistentry>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 63579b2f82..2d5bc73a96 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -54,7 +54,8 @@ check_publication_add_relation(Relation targetrel)
{
/* Must be a regular or partitioned table */
if (RelationGetForm(targetrel)->relkind != RELKIND_RELATION &&
- RelationGetForm(targetrel)->relkind != RELKIND_PARTITIONED_TABLE)
+ RelationGetForm(targetrel)->relkind != RELKIND_PARTITIONED_TABLE &&
+ RelationGetForm(targetrel)->relkind != RELKIND_SEQUENCE)
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
errmsg("cannot add relation \"%s\" to publication",
@@ -131,7 +132,8 @@ static bool
is_publishable_class(Oid relid, Form_pg_class reltuple)
{
return (reltuple->relkind == RELKIND_RELATION ||
- reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+ reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+ reltuple->relkind == RELKIND_SEQUENCE) &&
!IsCatalogRelationOid(relid) &&
reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
relid >= FirstNormalObjectId;
@@ -480,6 +482,11 @@ GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
Form_pg_publication_rel pubrel;
pubrel = (Form_pg_publication_rel) GETSTRUCT(tup);
+
+ /* skip sequences here */
+ if (get_rel_relkind(pubrel->prrelid) == RELKIND_SEQUENCE)
+ continue;
+
result = GetPubPartitionOptionRelations(result, pub_partopt,
pubrel->prrelid);
}
@@ -490,6 +497,49 @@ GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
return result;
}
+/*
+ * Gets list of relation oids for a publication (sequences only).
+ *
+ * This should only be used for normal publications; the FOR ALL SEQUENCES
+ * case should use GetAllSequencesPublicationRelations().
+ */
+List *
+GetPublicationSequenceRelations(Oid pubid)
+{
+ List *result;
+ Relation pubrelsrel;
+ ScanKeyData scankey;
+ SysScanDesc scan;
+ HeapTuple tup;
+
+ /* Find all sequences associated with the publication. */
+ pubrelsrel = table_open(PublicationRelRelationId, AccessShareLock);
+
+ ScanKeyInit(&scankey,
+ Anum_pg_publication_rel_prpubid,
+ BTEqualStrategyNumber, F_OIDEQ,
+ ObjectIdGetDatum(pubid));
+
+ scan = systable_beginscan(pubrelsrel, PublicationRelPrrelidPrpubidIndexId,
+ true, NULL, 1, &scankey);
+
+ result = NIL;
+ while (HeapTupleIsValid(tup = systable_getnext(scan)))
+ {
+ Form_pg_publication_rel pubrel;
+
+ pubrel = (Form_pg_publication_rel) GETSTRUCT(tup);
+
+ if (get_rel_relkind(pubrel->prrelid) == RELKIND_SEQUENCE)
+ result = lappend_oid(result, pubrel->prrelid);
+ }
+
+ systable_endscan(scan);
+ table_close(pubrelsrel, AccessShareLock);
+
+ return result;
+}
+
/*
* Gets list of publication oids for publications marked as FOR ALL TABLES.
*/
@@ -735,6 +785,46 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
return result;
}
+/*
+ * Gets list of all sequences published by FOR ALL SEQUENCES publication(s).
+ *
+ * Unlike the table variant, there are no partitions to worry about here,
+ * so we simply scan pg_class and return all publishable sequences in
+ * the database.
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+ Relation classRel;
+ ScanKeyData key[1];
+ TableScanDesc scan;
+ HeapTuple tuple;
+ List *result = NIL;
+
+ classRel = table_open(RelationRelationId, AccessShareLock);
+
+ ScanKeyInit(&key[0],
+ Anum_pg_class_relkind,
+ BTEqualStrategyNumber, F_CHAREQ,
+ CharGetDatum(RELKIND_SEQUENCE));
+
+ scan = table_beginscan_catalog(classRel, 1, key);
+
+ while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+ {
+ Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+ Oid relid = relForm->oid;
+
+ if (is_publishable_class(relid, relForm))
+ result = lappend_oid(result, relid);
+ }
+
+ table_endscan(scan);
+
+ table_close(classRel, AccessShareLock);
+ return result;
+}
+
/*
* Get publication using oid
*
@@ -757,10 +847,12 @@ GetPublication(Oid pubid)
pub->oid = pubid;
pub->name = pstrdup(NameStr(pubform->pubname));
pub->alltables = pubform->puballtables;
+ pub->allsequences = pubform->puballsequences;
pub->pubactions.pubinsert = pubform->pubinsert;
pub->pubactions.pubupdate = pubform->pubupdate;
pub->pubactions.pubdelete = pubform->pubdelete;
pub->pubactions.pubtruncate = pubform->pubtruncate;
+ pub->pubactions.pubsequence = pubform->pubsequence;
pub->pubviaroot = pubform->pubviaroot;
ReleaseSysCache(tup);
@@ -915,3 +1007,56 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
SRF_RETURN_DONE(funcctx);
}
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+ FuncCallContext *funcctx;
+ char *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+ Publication *publication;
+ List *sequences;
+
+ /* stuff done only on the first call of the function */
+ if (SRF_IS_FIRSTCALL())
+ {
+ MemoryContext oldcontext;
+
+ /* create a function context for cross-call persistence */
+ funcctx = SRF_FIRSTCALL_INIT();
+
+ /* switch to memory context appropriate for multiple function calls */
+ oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+ publication = GetPublicationByName(pubname, false);
+
+ /*
+ * For a FOR ALL SEQUENCES publication return all publishable sequences
+ * in the database, otherwise return just the sequences explicitly added
+ * to the publication.
+ */
+ if (publication->allsequences)
+ sequences = GetAllSequencesPublicationRelations();
+ else
+ sequences = GetPublicationSequenceRelations(publication->oid);
+
+ funcctx->user_fctx = (void *) sequences;
+
+ MemoryContextSwitchTo(oldcontext);
+ }
+
+ /* stuff done on every call of the function */
+ funcctx = SRF_PERCALL_SETUP();
+ sequences = (List *) funcctx->user_fctx;
+
+ if (funcctx->call_cntr < list_length(sequences))
+ {
+ Oid relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+ SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+ }
+
+ SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index eb560955cd..2c12d373c1 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -372,6 +372,16 @@ CREATE VIEW pg_publication_tables AS
pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
WHERE C.oid = GPT.relid;
+CREATE VIEW pg_publication_sequences AS
+ SELECT
+ P.pubname AS pubname,
+ N.nspname AS schemaname,
+ C.relname AS sequencename
+ FROM pg_publication P,
+ LATERAL pg_get_publication_sequences(P.pubname) GPT,
+ pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+ WHERE C.oid = GPT.relid;
+
CREATE VIEW pg_locks AS
SELECT * FROM pg_lock_status() AS L;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 7d4a0e95f6..e8dfb1a068 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -16,6 +16,7 @@
#include "access/genam.h"
#include "access/htup_details.h"
+#include "access/relation.h"
#include "access/table.h"
#include "access/xact.h"
#include "catalog/catalog.h"
@@ -59,6 +60,12 @@ static void PublicationAddSchemas(Oid pubid, List *schemas, bool if_not_exists,
AlterPublicationStmt *stmt);
static void PublicationDropSchemas(Oid pubid, List *schemas, bool missing_ok);
+static List *OpenSequenceList(List *sequences);
+static void CloseSequenceList(List *rels);
+static void PublicationAddSequences(Oid pubid, List *rels, bool if_not_exists,
+ AlterPublicationStmt *stmt);
+static void PublicationDropSequences(Oid pubid, List *rels, bool missing_ok);
+
static void
parse_publication_options(ParseState *pstate,
List *options,
@@ -77,6 +84,7 @@ parse_publication_options(ParseState *pstate,
pubactions->pubupdate = true;
pubactions->pubdelete = true;
pubactions->pubtruncate = true;
+ pubactions->pubsequence = true;
*publish_via_partition_root = false;
/* Parse options */
@@ -101,6 +109,7 @@ parse_publication_options(ParseState *pstate,
pubactions->pubupdate = false;
pubactions->pubdelete = false;
pubactions->pubtruncate = false;
+ pubactions->pubsequence = false;
*publish_given = true;
publish = defGetString(defel);
@@ -123,6 +132,8 @@ parse_publication_options(ParseState *pstate,
pubactions->pubdelete = true;
else if (strcmp(publish_opt, "truncate") == 0)
pubactions->pubtruncate = true;
+ else if (strcmp(publish_opt, "sequence") == 0)
+ pubactions->pubsequence = true;
else
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
@@ -149,7 +160,9 @@ parse_publication_options(ParseState *pstate,
*/
static void
ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
- List **rels, List **schemas)
+ List **tables, List **sequences,
+ List **tables_schemas, List **sequences_schemas,
+ List **schemas)
{
ListCell *cell;
PublicationObjSpec *pubobj;
@@ -167,12 +180,23 @@ ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
switch (pubobj->pubobjtype)
{
case PUBLICATIONOBJ_TABLE:
- *rels = lappend(*rels, pubobj->pubtable);
+ *tables = lappend(*tables, pubobj->pubtable);
+ break;
+ case PUBLICATIONOBJ_SEQUENCE:
+ *sequences = lappend(*sequences, pubobj->pubtable);
break;
case PUBLICATIONOBJ_TABLE_IN_SCHEMA:
schemaid = get_namespace_oid(pubobj->name, false);
/* Filter out duplicates if user specifies "sch1, sch1" */
+ *tables_schemas = list_append_unique_oid(*tables_schemas, schemaid);
+ *schemas = list_append_unique_oid(*schemas, schemaid);
+ break;
+ case PUBLICATIONOBJ_SEQUENCE_IN_SCHEMA:
+ schemaid = get_namespace_oid(pubobj->name, false);
+
+ /* Filter out duplicates if user specifies "sch1, sch1" */
+ *sequences_schemas = list_append_unique_oid(*sequences_schemas, schemaid);
*schemas = list_append_unique_oid(*schemas, schemaid);
break;
case PUBLICATIONOBJ_TABLE_IN_CUR_SCHEMA:
@@ -186,6 +210,21 @@ ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
list_free(search_path);
/* Filter out duplicates if user specifies "sch1, sch1" */
+ *tables_schemas = list_append_unique_oid(*tables_schemas, schemaid);
+ *schemas = list_append_unique_oid(*schemas, schemaid);
+ break;
+ case PUBLICATIONOBJ_SEQUENCE_IN_CUR_SCHEMA:
+ search_path = fetch_search_path(false);
+ if (search_path == NIL) /* nothing valid in search_path? */
+ ereport(ERROR,
+ errcode(ERRCODE_UNDEFINED_SCHEMA),
+ errmsg("no schema has been selected for CURRENT_SCHEMA"));
+
+ schemaid = linitial_oid(search_path);
+ list_free(search_path);
+
+ /* Filter out duplicates if user specifies "sch1, sch1" */
+ *sequences_schemas = list_append_unique_oid(*sequences_schemas, schemaid);
*schemas = list_append_unique_oid(*schemas, schemaid);
break;
default:
@@ -251,7 +290,10 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
bool publish_via_partition_root_given;
bool publish_via_partition_root;
AclResult aclresult;
- List *relations = NIL;
+ List *tables = NIL;
+ List *sequences = NIL;
+ List *tables_schemaidlist = NIL;
+ List *sequences_schemaidlist = NIL;
List *schemaidlist = NIL;
/* must have CREATE privilege on database */
@@ -306,6 +348,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
BoolGetDatum(pubactions.pubdelete);
values[Anum_pg_publication_pubtruncate - 1] =
BoolGetDatum(pubactions.pubtruncate);
+ values[Anum_pg_publication_pubsequence - 1] =
+ BoolGetDatum(pubactions.pubsequence);
values[Anum_pg_publication_pubviaroot - 1] =
BoolGetDatum(publish_via_partition_root);
@@ -330,26 +374,40 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
}
else
{
- ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
+ ObjectsInPublicationToOids(stmt->pubobjects, pstate,
+ &tables, &sequences,
+ &tables_schemaidlist,
+ &sequences_schemaidlist,
&schemaidlist);
/* FOR ALL TABLES IN SCHEMA requires superuser */
- if (list_length(schemaidlist) > 0 && !superuser())
+ if (list_length(tables_schemaidlist) > 0 && !superuser())
ereport(ERROR,
errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
errmsg("must be superuser to create FOR ALL TABLES IN SCHEMA publication"));
- if (list_length(relations) > 0)
+ if (list_length(tables) > 0)
{
List *rels;
- rels = OpenTableList(relations);
- CheckObjSchemaNotAlreadyInPublication(rels, schemaidlist,
+ rels = OpenTableList(tables);
+ CheckObjSchemaNotAlreadyInPublication(rels, tables_schemaidlist,
PUBLICATIONOBJ_TABLE);
PublicationAddTables(puboid, rels, true, NULL);
CloseTableList(rels);
}
+ if (list_length(sequences) > 0)
+ {
+ List *rels;
+
+ rels = OpenSequenceList(sequences);
+ CheckObjSchemaNotAlreadyInPublication(rels, sequences_schemaidlist,
+ PUBLICATIONOBJ_SEQUENCE);
+ PublicationAddSequences(puboid, rels, true, NULL);
+ CloseSequenceList(rels);
+ }
+
if (list_length(schemaidlist) > 0)
{
/*
@@ -651,12 +709,13 @@ AlterPublicationSchemas(AlterPublicationStmt *stmt,
*/
static void
CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
- List *tables, List *schemaidlist)
+ List *tables, List *tables_schemaidlist,
+ List *sequences, List *sequences_schemaidlist)
{
Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
if ((stmt->action == DEFELEM_ADD || stmt->action == DEFELEM_SET) &&
- schemaidlist && !superuser())
+ (tables_schemaidlist || sequences_schemaidlist) && !superuser())
ereport(ERROR,
(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
errmsg("must be superuser to add or set schemas")));
@@ -665,13 +724,24 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
* Check that user is allowed to manipulate the publication tables in
* schema
*/
- if (schemaidlist && pubform->puballtables)
+ if (tables_schemaidlist && pubform->puballtables)
ereport(ERROR,
(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
errmsg("publication \"%s\" is defined as FOR ALL TABLES",
NameStr(pubform->pubname)),
errdetail("Tables from schema cannot be added to, dropped from, or set on FOR ALL TABLES publications.")));
+ /*
+ * Check that user is allowed to manipulate the publication sequences in
+ * schema
+ */
+ if (sequences_schemaidlist && pubform->puballsequences)
+ ereport(ERROR,
+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("publication \"%s\" is defined as FOR ALL SEQUENCES",
+ NameStr(pubform->pubname)),
+ errdetail("Sequences from schema cannot be added to, dropped from, or set on FOR ALL SEQUENCES publications.")));
+
/* Check that user is allowed to manipulate the publication tables. */
if (tables && pubform->puballtables)
ereport(ERROR,
@@ -679,6 +749,108 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
errmsg("publication \"%s\" is defined as FOR ALL TABLES",
NameStr(pubform->pubname)),
errdetail("Tables cannot be added to or dropped from FOR ALL TABLES publications.")));
+
+ /* Check that user is allowed to manipulate the publication sequences. */
+ if (sequences && pubform->puballsequences)
+ ereport(ERROR,
+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("publication \"%s\" is defined as FOR ALL SEQUENCES",
+ NameStr(pubform->pubname)),
+ errdetail("Sequences cannot be added to or dropped from FOR ALL SEQUENCES publications.")));
+}
+
+/*
+ * Add or remove sequences to/from a publication.
+ */
+static void
+AlterPublicationSequences(AlterPublicationStmt *stmt, HeapTuple tup,
+ List *sequences, List *schemaidlist)
+{
+ List *rels = NIL;
+ Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
+ Oid pubid = pubform->oid;
+
+ /*
+ * It is quite possible that for the SET case the user has not specified
+ * any sequences, in which case we need to remove all existing sequences.
+ */
+ if (!sequences && stmt->action != DEFELEM_SET)
+ return;
+
+ rels = OpenSequenceList(sequences);
+
+ if (stmt->action == DEFELEM_ADD)
+ {
+ List *schemas = NIL;
+
+ /*
+ * Check if the sequence's schema is already part of the publication
+ * or included in the specified schema list.
+ */
+ schemas = list_concat_copy(schemaidlist, GetPublicationSchemas(pubid));
+ CheckObjSchemaNotAlreadyInPublication(rels, schemas,
+ PUBLICATIONOBJ_SEQUENCE);
+ PublicationAddSequences(pubid, rels, false, stmt);
+ }
+ else if (stmt->action == DEFELEM_DROP)
+ PublicationDropSequences(pubid, rels, false);
+ else /* DEFELEM_SET */
+ {
+ /* sequences currently in the publication */
+ List *oldrelids = GetPublicationSequenceRelations(pubid);
+ List *delrels = NIL;
+ ListCell *oldlc;
+
+ CheckObjSchemaNotAlreadyInPublication(rels, schemaidlist,
+ PUBLICATIONOBJ_SEQUENCE);
+
+ /* Calculate which relations to drop. */
+ foreach(oldlc, oldrelids)
+ {
+ Oid oldrelid = lfirst_oid(oldlc);
+ ListCell *newlc;
+ bool found = false;
+
+ foreach(newlc, rels)
+ {
+ PublicationRelInfo *newpubrel;
+
+ newpubrel = (PublicationRelInfo *) lfirst(newlc);
+ if (RelationGetRelid(newpubrel->relation) == oldrelid)
+ {
+ found = true;
+ break;
+ }
+ }
+ /* Not yet in the list, open it and add to the list */
+ if (!found)
+ {
+ Relation oldrel;
+ PublicationRelInfo *pubrel;
+
+ /* Wrap relation into PublicationRelInfo */
+ oldrel = table_open(oldrelid, ShareUpdateExclusiveLock);
+
+ pubrel = palloc(sizeof(PublicationRelInfo));
+ pubrel->relation = oldrel;
+
+ delrels = lappend(delrels, pubrel);
+ }
+ }
+
+ /* And drop them. */
+ PublicationDropSequences(pubid, delrels, true);
+
+ /*
+ * Don't bother calculating the difference for adding, we'll catch and
+ * skip existing ones when doing catalog update.
+ */
+ PublicationAddSequences(pubid, rels, true, stmt);
+
+ CloseSequenceList(delrels);
+ }
+
+ CloseSequenceList(rels);
}
/*
@@ -716,13 +888,21 @@ AlterPublication(ParseState *pstate, AlterPublicationStmt *stmt)
AlterPublicationOptions(pstate, stmt, rel, tup);
else
{
- List *relations = NIL;
+ List *tables = NIL;
+ List *sequences = NIL;
+ List *tables_schemaidlist = NIL;
+ List *sequences_schemaidlist = NIL;
List *schemaidlist = NIL;
- ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
+ ObjectsInPublicationToOids(stmt->pubobjects, pstate,
+ &tables, &sequences,
+ &tables_schemaidlist,
+ &sequences_schemaidlist,
&schemaidlist);
- CheckAlterPublication(stmt, tup, relations, schemaidlist);
+ CheckAlterPublication(stmt, tup,
+ tables, tables_schemaidlist,
+ sequences, sequences_schemaidlist);
/*
* Lock the publication so nobody else can do anything with it. This
@@ -747,7 +927,9 @@ AlterPublication(ParseState *pstate, AlterPublicationStmt *stmt)
errmsg("publication \"%s\" does not exist",
stmt->pubname));
- AlterPublicationTables(stmt, tup, relations, schemaidlist);
+ AlterPublicationTables(stmt, tup, tables, tables_schemaidlist);
+ AlterPublicationSequences(stmt, tup, sequences, sequences_schemaidlist);
+
AlterPublicationSchemas(stmt, tup, schemaidlist);
}
@@ -1155,6 +1337,144 @@ PublicationDropSchemas(Oid pubid, List *schemas, bool missing_ok)
}
}
+/*
+ * Open sequences specified by a PublicationTable list.
+ * In the returned list of PublicationRelInfo, sequences are locked
+ * in ShareUpdateExclusiveLock mode in order to add them to a publication.
+ */
+static List *
+OpenSequenceList(List *sequences)
+{
+ List *relids = NIL;
+ List *rels = NIL;
+ ListCell *lc;
+
+ /*
+ * Open, share-lock, and check all the explicitly-specified relations
+ */
+ foreach(lc, sequences)
+ {
+ PublicationTable *s = lfirst_node(PublicationTable, lc);
+ Relation rel;
+ Oid myrelid;
+ PublicationRelInfo *pub_rel;
+
+ /* Allow query cancel in case this takes a long time */
+ CHECK_FOR_INTERRUPTS();
+
+ rel = table_openrv(s->relation, ShareUpdateExclusiveLock);
+ myrelid = RelationGetRelid(rel);
+
+ /*
+ * Filter out duplicates if user specifies "foo, foo".
+ *
+ * Note that this algorithm is known to not be very efficient (O(N^2))
+ * but given that it only works on the list of sequences given by the user
+ * it's deemed acceptable.
+ */
+ if (list_member_oid(relids, myrelid))
+ {
+ table_close(rel, ShareUpdateExclusiveLock);
+ continue;
+ }
+
+ pub_rel = palloc(sizeof(PublicationRelInfo));
+ pub_rel->relation = rel;
+ rels = lappend(rels, pub_rel);
+ relids = lappend_oid(relids, myrelid);
+ }
+
+ list_free(relids);
+
+ return rels;
+}
+
+/*
+ * Close all relations in the list.
+ */
+static void
+CloseSequenceList(List *rels)
+{
+ ListCell *lc;
+
+ foreach(lc, rels)
+ {
+ PublicationRelInfo *pub_rel;
+
+ pub_rel = (PublicationRelInfo *) lfirst(lc);
+ table_close(pub_rel->relation, NoLock);
+ }
+}
+
+/*
+ * Add listed sequences to the publication.
+ */
+static void
+PublicationAddSequences(Oid pubid, List *rels, bool if_not_exists,
+ AlterPublicationStmt *stmt)
+{
+ ListCell *lc;
+
+ Assert(!stmt || !stmt->for_all_sequences);
+
+ foreach(lc, rels)
+ {
+ PublicationRelInfo *pub_rel = (PublicationRelInfo *) lfirst(lc);
+ Relation rel = pub_rel->relation;
+ ObjectAddress obj;
+
+ /* Must be owner of the sequence or superuser. */
+ if (!pg_class_ownercheck(RelationGetRelid(rel), GetUserId()))
+ aclcheck_error(ACLCHECK_NOT_OWNER, get_relkind_objtype(rel->rd_rel->relkind),
+ RelationGetRelationName(rel));
+
+ obj = publication_add_relation(pubid, pub_rel, if_not_exists);
+ if (stmt)
+ {
+ EventTriggerCollectSimpleCommand(obj, InvalidObjectAddress,
+ (Node *) stmt);
+
+ InvokeObjectPostCreateHook(PublicationRelRelationId,
+ obj.objectId, 0);
+ }
+ }
+}
+
+/*
+ * Remove listed sequences from the publication.
+ */
+static void
+PublicationDropSequences(Oid pubid, List *rels, bool missing_ok)
+{
+ ObjectAddress obj;
+ ListCell *lc;
+ Oid prid;
+
+ foreach(lc, rels)
+ {
+ PublicationRelInfo *pubrel = (PublicationRelInfo *) lfirst(lc);
+ Oid relid = RelationGetRelid(pubrel->relation);
+
+ prid = GetSysCacheOid2(PUBLICATIONRELMAP, Anum_pg_publication_rel_oid,
+ ObjectIdGetDatum(relid),
+ ObjectIdGetDatum(pubid));
+ if (!OidIsValid(prid))
+ {
+ if (missing_ok)
+ continue;
+
+ ereport(ERROR,
+ (errcode(ERRCODE_UNDEFINED_OBJECT),
+ errmsg("relation \"%s\" is not part of the publication",
+ RelationGetRelationName(pubrel->relation))));
+ }
+
+ ObjectAddressSet(obj, PublicationRelRelationId, prid);
+ performDeletion(&obj, DROP_CASCADE, 0);
+ }
+}
+
+
/*
* Internal workhorse for changing a publication owner
*/
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index e45e4042a7..c3e1825056 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -374,6 +374,85 @@ ResetSequence(Oid seq_relid)
relation_close(seq_rel, NoLock);
}
+/*
+ * Set a sequence to the specified state (last_value, log_cnt, is_called).
+ *
+ * The change is made transactionally, so that on failure of the current
+ * transaction, the sequence will be restored to its previous state.
+ * We do that by creating a whole new relfilenode for the sequence; so this
+ * works much like the rewriting forms of ALTER TABLE.
+ *
+ * Caller is assumed to have acquired AccessExclusiveLock on the sequence,
+ * which must not be released until end of transaction. Caller is also
+ * responsible for permissions checking.
+ */
+void
+ResetSequence2(Oid seq_relid, int64 last_value, int64 log_cnt, bool is_called)
+{
+ Relation seq_rel;
+ SeqTable elm;
+ Form_pg_sequence_data seq;
+ Buffer buf;
+ HeapTupleData seqdatatuple;
+ HeapTuple tuple;
+
+ /*
+ * Read the old sequence. This does a bit more work than really
+ * necessary, but it's simple, and we do want to double-check that it's
+ * indeed a sequence.
+ */
+ init_sequence(seq_relid, &elm, &seq_rel);
+ (void) read_seq_tuple(seq_rel, &buf, &seqdatatuple);
+
+ /*
+ * Copy the existing sequence tuple.
+ */
+ tuple = heap_copytuple(&seqdatatuple);
+
+ /* Now we're done with the old page */
+ UnlockReleaseBuffer(buf);
+
+ /*
+ * Modify the copied tuple to execute the restart (compare the RESTART
+ * action in AlterSequence)
+ */
+ seq = (Form_pg_sequence_data) GETSTRUCT(tuple);
+ seq->last_value = last_value;
+ seq->is_called = is_called;
+ seq->log_cnt = log_cnt;
+
+ /*
+ * Create a new storage file for the sequence.
+ */
+ RelationSetNewRelfilenode(seq_rel, seq_rel->rd_rel->relpersistence);
+
+ /*
+ * Ensure sequence's relfrozenxid is at 0, since it won't contain any
+ * unfrozen XIDs. Same with relminmxid, since a sequence will never
+ * contain multixacts.
+ */
+ Assert(seq_rel->rd_rel->relfrozenxid == InvalidTransactionId);
+ Assert(seq_rel->rd_rel->relminmxid == InvalidMultiXactId);
+
+ /*
+ * Insert the modified tuple into the new storage file.
+ *
+ * XXX Maybe this should also use created=true, just like the other places
+ * calling fill_seq_with_data. That's probably needed for correct cascading
+ * replication.
+ *
+ * XXX That'd mean all fill_seq_with_data callers use created=true, making
+ * the parameter unnecessary.
+ */
+ fill_seq_with_data(seq_rel, tuple);
+
+ /* Clear local cache so that we don't think we have cached numbers */
+ /* Note that we do not change the currval() state */
+ elm->cached = elm->last;
+
+ relation_close(seq_rel, NoLock);
+}
+
/*
* Initialize a sequence's relation with the specified tuple as content
*/
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index c47ba26369..5086526904 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -84,6 +84,7 @@ typedef struct SubOpts
} SubOpts;
static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
static void check_duplicates_in_publist(List *publist, Datum *datums);
static List *merge_publications(List *oldpublist, List *newpublist, bool addpub, const char *subname);
static void ReportSlotConnectionError(List *rstates, Oid subid, char *slotname, char *err);
@@ -502,6 +503,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
char *err;
WalReceiverConn *wrconn;
List *tables;
+ List *sequences;
ListCell *lc;
char table_state;
@@ -540,6 +542,26 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
InvalidXLogRecPtr);
}
+ /*
+ * Get the sequence list from publisher and build local sequence
+ * status info.
+ */
+ sequences = fetch_sequence_list(wrconn, publications);
+ foreach(lc, sequences)
+ {
+ RangeVar *rv = (RangeVar *) lfirst(lc);
+ Oid relid;
+
+ relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+ /* Check for supported relkind. */
+ CheckSubscriptionRelkind(get_rel_relkind(relid),
+ rv->schemaname, rv->relname);
+
+ AddSubscriptionRelState(subid, relid, table_state,
+ InvalidXLogRecPtr);
+ }
+
/*
* If requested, create permanent slot for the subscription. We
* won't use the initial snapshot for anything, so no need to
@@ -712,6 +734,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data)
{
Oid relid = subrel_local_oids[off];
+ /* XXX ignore sequences - maybe do this in GetSubscriptionRelations? */
+ if (get_rel_relkind(relid) == RELKIND_SEQUENCE)
+ continue;
+
if (!bsearch(&relid, pubrel_local_oids,
list_length(pubrel_names), sizeof(Oid), oid_cmp))
{
@@ -803,6 +829,183 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data)
ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
}
}
+
+ /*
+ * XXX now do the same thing for sequences, maybe before the preceding
+ * block, or earlier?
+ */
+
+ /* Get the sequence list from the publisher. */
+ pubrel_names = fetch_sequence_list(wrconn, sub->publications);
+
+ /* Get local table list. */
+ subrel_states = GetSubscriptionRelations(sub->oid);
+
+ /*
+ * Build qsorted array of local table oids for faster lookup. This can
+ * potentially contain all tables in the database so speed of lookup
+ * is important.
+ */
+ subrel_local_oids = palloc(list_length(subrel_states) * sizeof(Oid));
+ off = 0;
+ foreach(lc, subrel_states)
+ {
+ SubscriptionRelState *relstate = (SubscriptionRelState *) lfirst(lc);
+
+ subrel_local_oids[off++] = relstate->relid;
+ }
+ qsort(subrel_local_oids, list_length(subrel_states),
+ sizeof(Oid), oid_cmp);
+
+ /*
+ * Rels that we want to remove from subscription and drop any slots
+ * and origins corresponding to them.
+ */
+ sub_remove_rels = palloc(list_length(subrel_states) * sizeof(SubRemoveRels));
+
+ /*
+ * Walk over the remote tables and try to match them to locally known
+ * tables. If the table is not known locally create a new state for
+ * it.
+ *
+ * Also builds array of local oids of remote tables for the next step.
+ */
+ off = 0;
+ pubrel_local_oids = palloc(list_length(pubrel_names) * sizeof(Oid));
+
+ foreach(lc, pubrel_names)
+ {
+ RangeVar *rv = (RangeVar *) lfirst(lc);
+ Oid relid;
+
+ relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+ /* Check for supported relkind. */
+ CheckSubscriptionRelkind(get_rel_relkind(relid),
+ rv->schemaname, rv->relname);
+
+ pubrel_local_oids[off++] = relid;
+
+ if (!bsearch(&relid, subrel_local_oids,
+ list_length(subrel_states), sizeof(Oid), oid_cmp))
+ {
+ AddSubscriptionRelState(sub->oid, relid,
+ copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
+ InvalidXLogRecPtr);
+ ereport(DEBUG1,
+ (errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
+ rv->schemaname, rv->relname, sub->name)));
+ }
+ }
+
+ /*
+ * Next remove state for tables we should not care about anymore using
+ * the data we collected above
+ */
+ qsort(pubrel_local_oids, list_length(pubrel_names),
+ sizeof(Oid), oid_cmp);
+
+ remove_rel_len = 0;
+ for (off = 0; off < list_length(subrel_states); off++)
+ {
+ Oid relid = subrel_local_oids[off];
+
+ /* XXX ignore non-sequences - maybe do this in GetSubscriptionRelations? */
+ if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+ continue;
+
+ if (!bsearch(&relid, pubrel_local_oids,
+ list_length(pubrel_names), sizeof(Oid), oid_cmp))
+ {
+ char state;
+ XLogRecPtr statelsn;
+
+ /*
+ * Lock pg_subscription_rel with AccessExclusiveLock to
+ * prevent any race conditions with the apply worker
+ * re-launching workers at the same time this code is trying
+ * to remove those tables.
+ *
+ * Even if new worker for this particular rel is restarted it
+ * won't be able to make any progress as we hold exclusive
+ * lock on subscription_rel till the transaction end. It will
+ * simply exit as there is no corresponding rel entry.
+ *
+ * This locking also ensures that the state of rels won't
+ * change till we are done with this refresh operation.
+ */
+ if (!rel)
+ rel = table_open(SubscriptionRelRelationId, AccessExclusiveLock);
+
+ /* Last known rel state. */
+ state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
+
+ sub_remove_rels[remove_rel_len].relid = relid;
+ sub_remove_rels[remove_rel_len++].state = state;
+
+ RemoveSubscriptionRel(sub->oid, relid);
+
+ logicalrep_worker_stop(sub->oid, relid);
+
+ /*
+ * For READY state, we would have already dropped the
+ * tablesync origin.
+ */
+ if (state != SUBREL_STATE_READY)
+ {
+ char originname[NAMEDATALEN];
+
+ /*
+ * Drop the tablesync's origin tracking if exists.
+ *
+ * It is possible that the origin is not yet created for
+ * tablesync worker, this can happen for the states before
+ * SUBREL_STATE_FINISHEDCOPY. The apply worker can also
+ * concurrently try to drop the origin and by this time
+ * the origin might be already removed. For these reasons,
+ * passing missing_ok = true.
+ */
+ ReplicationOriginNameForTablesync(sub->oid, relid, originname,
+ sizeof(originname));
+ replorigin_drop_by_name(originname, true, false);
+ }
+
+ ereport(DEBUG1,
+ (errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
+ get_namespace_name(get_rel_namespace(relid)),
+ get_rel_name(relid),
+ sub->name)));
+ }
+ }
+
+ /*
+ * Drop the tablesync slots associated with removed tables. This has
+ * to be at the end because otherwise if there is an error while doing
+ * the database operations we won't be able to rollback dropped slots.
+ */
+ for (off = 0; off < remove_rel_len; off++)
+ {
+ if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
+ sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+ {
+ char syncslotname[NAMEDATALEN] = {0};
+
+ /*
+ * For READY/SYNCDONE states we know the tablesync slot has
+ * already been dropped by the tablesync worker.
+ *
+ * For other states, there is no certainty, maybe the slot
+ * does not exist yet. Also, if we fail after removing some of
+ * the slots, next time, it will again try to drop already
+ * dropped slots and fail. For these reasons, we allow
+ * missing_ok = true for the drop.
+ */
+ ReplicationSlotNameForTablesync(sub->oid, sub_remove_rels[off].relid,
+ syncslotname, sizeof(syncslotname));
+ ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
+ }
+ }
+
}
PG_FINALLY();
{
@@ -1607,6 +1810,75 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
return tablelist;
}
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+ WalRcvExecResult *res;
+ StringInfoData cmd;
+ TupleTableSlot *slot;
+ Oid tableRow[2] = {TEXTOID, TEXTOID};
+ ListCell *lc;
+ bool first;
+ List *tablelist = NIL;
+
+ Assert(list_length(publications) > 0);
+
+ initStringInfo(&cmd);
+ appendStringInfoString(&cmd, "SELECT DISTINCT s.schemaname, s.sequencename\n"
+ " FROM pg_catalog.pg_publication_sequences s\n"
+ " WHERE s.pubname IN (");
+ first = true;
+ foreach(lc, publications)
+ {
+ char *pubname = strVal(lfirst(lc));
+
+ if (first)
+ first = false;
+ else
+ appendStringInfoString(&cmd, ", ");
+
+ appendStringInfoString(&cmd, quote_literal_cstr(pubname));
+ }
+ appendStringInfoChar(&cmd, ')');
+
+ res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+ pfree(cmd.data);
+
+ if (res->status != WALRCV_OK_TUPLES)
+ ereport(ERROR,
+ (errmsg("could not receive list of replicated tables from the publisher: %s",
+ res->err)));
+
+ /* Process sequences. */
+ slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+ while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+ {
+ char *nspname;
+ char *relname;
+ bool isnull;
+ RangeVar *rv;
+
+ nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+ Assert(!isnull);
+ relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+ Assert(!isnull);
+
+ rv = makeRangeVar(nspname, relname, -1);
+ tablelist = lappend(tablelist, rv);
+
+ ExecClearTuple(slot);
+ }
+ ExecDropSingleTupleTableSlot(slot);
+
+ walrcv_clear_result(res);
+
+ return tablelist;
+}
+
/*
* This is to report the connection failure while dropping replication slots.
* Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index 574d7d27fd..6f0d42d551 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -608,7 +608,7 @@ void
CheckSubscriptionRelkind(char relkind, const char *nspname,
const char *relname)
{
- if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+ if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE && relkind != RELKIND_SEQUENCE)
ereport(ERROR,
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index ad1ea2ff2f..3dc5eeed91 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -4843,6 +4843,7 @@ _copyCreatePublicationStmt(const CreatePublicationStmt *from)
COPY_NODE_FIELD(options);
COPY_NODE_FIELD(pubobjects);
COPY_SCALAR_FIELD(for_all_tables);
+ COPY_SCALAR_FIELD(for_all_sequences);
return newnode;
}
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index f537d3eb96..2317185daf 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -2336,6 +2336,7 @@ _equalAlterPublicationStmt(const AlterPublicationStmt *a,
COMPARE_NODE_FIELD(options);
COMPARE_NODE_FIELD(pubobjects);
COMPARE_SCALAR_FIELD(for_all_tables);
+ COMPARE_SCALAR_FIELD(for_all_sequences);
COMPARE_SCALAR_FIELD(action);
return true;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index a6d0cefa6b..5c4e8bf3d7 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -9672,6 +9672,26 @@ PublicationObjSpec:
$$->pubobjtype = PUBLICATIONOBJ_TABLE_IN_CUR_SCHEMA;
$$->location = @5;
}
+ | SEQUENCE relation_expr
+ {
+ $$ = makeNode(PublicationObjSpec);
+ $$->pubobjtype = PUBLICATIONOBJ_SEQUENCE;
+ $$->pubtable = makeNode(PublicationTable);
+ $$->pubtable->relation = $2;
+ }
+ | ALL SEQUENCES IN_P SCHEMA ColId
+ {
+ $$ = makeNode(PublicationObjSpec);
+ $$->pubobjtype = PUBLICATIONOBJ_SEQUENCE_IN_SCHEMA;
+ $$->name = $5;
+ $$->location = @5;
+ }
+ | ALL SEQUENCES IN_P SCHEMA CURRENT_SCHEMA
+ {
+ $$ = makeNode(PublicationObjSpec);
+ $$->pubobjtype = PUBLICATIONOBJ_SEQUENCE_IN_CUR_SCHEMA;
+ $$->location = @5;
+ }
| ColId
{
$$ = makeNode(PublicationObjSpec);
@@ -10007,6 +10027,12 @@ UnlistenStmt:
}
;
+/*
+ * FIXME
+ *
+ * opt_publication_for_sequences and publication_for_sequences should be
+ * copies for sequences
+ */
/*****************************************************************************
*
@@ -10015,6 +10041,12 @@ UnlistenStmt:
* BEGIN / COMMIT / ROLLBACK
* (also older versions END / ABORT)
*
+ * ALTER PUBLICATION name ADD SEQUENCE sequence [, sequence2]
+ *
+ * ALTER PUBLICATION name DROP SEQUENCE sequence [, sequence2]
+ *
+ * ALTER PUBLICATION name SET SEQUENCE sequence [, sequence2]
+ *
*****************************************************************************/
TransactionStmt:
diff --git a/src/backend/replication/logical/proto.c b/src/backend/replication/logical/proto.c
index 9f5bf4b639..9f53b4c47b 100644
--- a/src/backend/replication/logical/proto.c
+++ b/src/backend/replication/logical/proto.c
@@ -647,6 +647,58 @@ logicalrep_write_message(StringInfo out, TransactionId xid, XLogRecPtr lsn,
pq_sendbytes(out, message, sz);
}
+/*
+ * Write SEQUENCE to stream
+ */
+void
+logicalrep_write_sequence(StringInfo out, Relation rel, TransactionId xid,
+ XLogRecPtr lsn, bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ uint8 flags = 0;
+ char *relname;
+
+ pq_sendbyte(out, LOGICAL_REP_MSG_SEQUENCE);
+
+ /* transaction ID (if not valid, we're not streaming) */
+ if (TransactionIdIsValid(xid))
+ pq_sendint32(out, xid);
+
+ pq_sendint8(out, flags);
+ pq_sendint64(out, lsn);
+
+ logicalrep_write_namespace(out, RelationGetNamespace(rel));
+ relname = RelationGetRelationName(rel);
+ pq_sendstring(out, relname);
+
+ pq_sendint8(out, transactional);
+ pq_sendint8(out, created);
+ pq_sendint64(out, last_value);
+ pq_sendint64(out, log_cnt);
+ pq_sendint8(out, is_called);
+}
+
+/*
+ * Read SEQUENCE from the stream.
+ */
+void
+logicalrep_read_sequence(StringInfo in, LogicalRepSequence *seqdata)
+{
+ /* XXX skipping flags and lsn */
+ pq_getmsgint(in, 1);
+ pq_getmsgint64(in);
+
+ /* Read relation name from stream */
+ seqdata->nspname = pstrdup(logicalrep_read_namespace(in));
+ seqdata->seqname = pstrdup(pq_getmsgstring(in));
+
+ seqdata->transactional = pq_getmsgint(in, 1);
+ seqdata->created = pq_getmsgint(in, 1);
+ seqdata->last_value = pq_getmsgint64(in);
+ seqdata->log_cnt = pq_getmsgint64(in);
+ seqdata->is_called = pq_getmsgint(in, 1);
+}
+
/*
* Write relation description to the output stream.
*/
@@ -1203,6 +1255,8 @@ logicalrep_message_type(LogicalRepMsgType action)
return "STREAM ABORT";
case LOGICAL_REP_MSG_STREAM_PREPARE:
return "STREAM PREPARE";
+ case LOGICAL_REP_MSG_SEQUENCE:
+ return "SEQUENCE";
}
elog(ERROR, "invalid logical replication message type \"%c\"", action);
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index f07983a43c..24369a1522 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -100,6 +100,7 @@
#include "catalog/pg_subscription_rel.h"
#include "catalog/pg_type.h"
#include "commands/copy.h"
+#include "commands/sequence.h"
#include "miscadmin.h"
#include "parser/parse_relation.h"
#include "pgstat.h"
@@ -357,6 +358,12 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
*
* If the synchronization position is reached (SYNCDONE), then the table can
* be marked as READY and is no longer tracked.
+ *
+ * XXX This needs to handle sequences too - after AlterSubscription_refresh
+ * starts caring about sequences, GetSubscriptionNotReadyRelations won't
+ * return just tables, and we'll have to sync them here. Not sure it's worth
+ * creating a new "sync" worker per sequence, maybe we should just sync them
+ * in the current process (it's pretty light-weight).
*/
static void
process_syncing_tables_for_apply(XLogRecPtr current_lsn)
@@ -871,6 +878,99 @@ copy_table(Relation rel)
logicalrep_rel_close(relmapentry, NoLock);
}
+
+
+/*
+ * Fetch the state (last_value, log_cnt, is_called) of a sequence from the publisher.
+ */
+static void
+fetch_sequence_data(char *nspname, char *relname,
+ int64 *last_value, int64 *log_cnt, bool *is_called)
+{
+ WalRcvExecResult *res;
+ StringInfoData cmd;
+ TupleTableSlot *slot;
+ Oid tableRow[3] = {INT8OID, INT8OID, BOOLOID};
+
+ initStringInfo(&cmd);
+ appendStringInfo(&cmd, "SELECT last_value, log_cnt, is_called\n"
+ " FROM %s", quote_qualified_identifier(nspname, relname));
+
+ res = walrcv_exec(LogRepWorkerWalRcvConn, cmd.data, 3, tableRow);
+ pfree(cmd.data);
+
+ if (res->status != WALRCV_OK_TUPLES)
+ ereport(ERROR,
+ (errmsg("could not receive list of replicated tables from the publisher: %s",
+ res->err)));
+
+ /* Process the sequence. */
+ slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+ while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+ {
+ bool isnull;
+
+ *last_value = DatumGetInt64(slot_getattr(slot, 1, &isnull));
+ Assert(!isnull);
+
+ *log_cnt = DatumGetInt64(slot_getattr(slot, 2, &isnull));
+ Assert(!isnull);
+
+ *is_called = DatumGetBool(slot_getattr(slot, 3, &isnull));
+ Assert(!isnull);
+
+ ExecClearTuple(slot);
+ }
+ ExecDropSingleTupleTableSlot(slot);
+
+ walrcv_clear_result(res);
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Caller is responsible for locking the local relation.
+ */
+static void
+copy_sequence(Relation rel)
+{
+ LogicalRepRelMapEntry *relmapentry;
+ LogicalRepRelation lrel;
+ StringInfoData cmd;
+ int64 last_value = 0,
+ log_cnt = 0;
+ bool is_called = 0;
+
+ /* Get the publisher relation info. */
+ fetch_remote_table_info(get_namespace_name(RelationGetNamespace(rel)),
+ RelationGetRelationName(rel), &lrel);
+
+ /* Put the relation into relmap. */
+ logicalrep_relmap_update(&lrel);
+
+ /* Map the publisher relation to local one. */
+ relmapentry = logicalrep_rel_open(lrel.remoteid, NoLock);
+ Assert(rel == relmapentry->localrel);
+
+ /* Start copy on the publisher. */
+ initStringInfo(&cmd);
+
+ Assert(lrel.relkind == RELKIND_SEQUENCE);
+
+ fetch_sequence_data(lrel.nspname, lrel.relname, &last_value, &log_cnt, &is_called);
+
+ elog(WARNING, "sequence %s info last_value %ld log_cnt %ld is_called %d",
+ quote_qualified_identifier(lrel.nspname, lrel.relname),
+ last_value, log_cnt, is_called);
+
+ ResetSequence2(RelationGetRelid(rel), last_value, log_cnt, is_called);
+
+ logicalrep_rel_close(relmapentry, NoLock);
+}
+
+
+
+
/*
* Determine the tablesync slot name.
*
@@ -1106,10 +1206,20 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
originname)));
}
- /* Now do the initial data copy */
- PushActiveSnapshot(GetTransactionSnapshot());
- copy_table(rel);
- PopActiveSnapshot();
+ if (get_rel_relkind(RelationGetRelid(rel)) == RELKIND_SEQUENCE)
+ {
+ /* Now do the initial sequence copy */
+ PushActiveSnapshot(GetTransactionSnapshot());
+ copy_sequence(rel);
+ PopActiveSnapshot();
+ }
+ else
+ {
+ /* Now do the initial data copy */
+ PushActiveSnapshot(GetTransactionSnapshot());
+ copy_table(rel);
+ PopActiveSnapshot();
+ }
res = walrcv_exec(LogRepWorkerWalRcvConn, "COMMIT", 0, NULL);
if (res->status != WALRCV_OK_COMMAND)
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index ae1b391bda..55d48ce5be 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -144,6 +144,7 @@
#include "catalog/pg_tablespace.h"
#include "commands/tablecmds.h"
#include "commands/tablespace.h"
+#include "commands/sequence.h"
#include "commands/trigger.h"
#include "executor/executor.h"
#include "executor/execPartition.h"
@@ -1091,6 +1092,61 @@ apply_handle_origin(StringInfo s)
errmsg_internal("ORIGIN message sent out of order")));
}
+/*
+ * Handle SEQUENCE message.
+ */
+static void
+apply_handle_sequence(StringInfo s)
+{
+ LogicalRepSequence seq;
+ Oid relid;
+
+ if (handle_streamed_transaction(LOGICAL_REP_MSG_SEQUENCE, s))
+ return;
+
+ logicalrep_read_sequence(s, &seq);
+
+ /*
+ * Non-transactional sequence updates should not be part of a remote
+ * transaction. There should not be any running transaction.
+ */
+ Assert((!seq.transactional) || in_remote_transaction);
+ Assert(!(!seq.transactional && in_remote_transaction));
+ Assert(!(!seq.transactional && IsTransactionState()));
+
+ /*
+ * Make sure we're in a transaction (needed by ResetSequence2). For
+ * non-transactional updates we're guaranteed to start a new one,
+ * and we'll commit it at the end.
+ */
+ if (!IsTransactionState())
+ {
+ StartTransactionCommand();
+ maybe_reread_subscription();
+ }
+
+ relid = RangeVarGetRelid(makeRangeVar(seq.nspname,
+ seq.seqname, -1),
+ RowExclusiveLock, false);
+
+ /* lock the sequence in AccessExclusiveLock, as expected by ResetSequence2 */
+ elog(WARNING, "locking sequence %d in exclusive mode", relid);
+ LockRelationOid(relid, AccessExclusiveLock);
+
+ elog(WARNING, "applying sequence %s.%s transactional %d created %d last_value %ld log_cnt %ld is_called %d",
+ seq.nspname, seq.seqname, seq.transactional, seq.created, seq.last_value, seq.log_cnt, seq.is_called);
+
+ /* apply the sequence change */
+ ResetSequence2(relid, seq.last_value, seq.log_cnt, seq.is_called);
+
+ /*
+ * Commit the transaction we started above (we only do this when not in a
+ * remote transaction, i.e. for non-transactional sequence updates).
+ */
+ if (!in_remote_transaction)
+ CommitTransactionCommand();
+}
+
/*
* Handle STREAM START message.
*/
@@ -2374,6 +2430,10 @@ apply_dispatch(StringInfo s)
*/
break;
+ case LOGICAL_REP_MSG_SEQUENCE:
+ apply_handle_sequence(s);
+ return;
+
case LOGICAL_REP_MSG_STREAM_START:
apply_handle_stream_start(s);
break;
diff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c
index 6f6a203dea..fef568b5b2 100644
--- a/src/backend/replication/pgoutput/pgoutput.c
+++ b/src/backend/replication/pgoutput/pgoutput.c
@@ -49,6 +49,10 @@ static void pgoutput_message(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn, XLogRecPtr message_lsn,
bool transactional, const char *prefix,
Size sz, const char *message);
+static void pgoutput_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation rel, bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called);
static bool pgoutput_origin_filter(LogicalDecodingContext *ctx,
RepOriginId origin_id);
static void pgoutput_begin_prepare_txn(LogicalDecodingContext *ctx,
@@ -159,6 +163,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->change_cb = pgoutput_change;
cb->truncate_cb = pgoutput_truncate;
cb->message_cb = pgoutput_message;
+ cb->sequence_cb = pgoutput_sequence;
cb->commit_cb = pgoutput_commit_txn;
cb->begin_prepare_cb = pgoutput_begin_prepare_txn;
@@ -175,6 +180,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->stream_commit_cb = pgoutput_stream_commit;
cb->stream_change_cb = pgoutput_change;
cb->stream_message_cb = pgoutput_message;
+ cb->stream_sequence_cb = pgoutput_sequence;
cb->stream_truncate_cb = pgoutput_truncate;
/* transaction streaming - two-phase commit */
cb->stream_prepare_cb = pgoutput_stream_prepare_txn;
@@ -188,6 +194,7 @@ parse_output_parameters(List *options, PGOutputData *data)
bool publication_names_given = false;
bool binary_option_given = false;
bool messages_option_given = false;
+ bool sequences_option_given = false;
bool streaming_given = false;
bool two_phase_option_given = false;
@@ -195,6 +202,7 @@ parse_output_parameters(List *options, PGOutputData *data)
data->streaming = false;
data->messages = false;
data->two_phase = false;
+ data->sequences = true;
foreach(lc, options)
{
@@ -260,6 +268,16 @@ parse_output_parameters(List *options, PGOutputData *data)
data->messages = defGetBoolean(defel);
}
+ else if (strcmp(defel->defname, "sequences") == 0)
+ {
+ if (sequences_option_given)
+ ereport(ERROR,
+ (errcode(ERRCODE_SYNTAX_ERROR),
+ errmsg("conflicting or redundant options")));
+ sequences_option_given = true;
+
+ data->sequences = defGetBoolean(defel);
+ }
else if (strcmp(defel->defname, "streaming") == 0)
{
if (streaming_given)
@@ -856,6 +874,52 @@ pgoutput_message(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
OutputPluginWrite(ctx, true);
}
+static void
+pgoutput_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation rel, bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ PGOutputData *data = (PGOutputData *) ctx->output_plugin_private;
+ TransactionId xid = InvalidTransactionId;
+ RelationSyncEntry *relentry;
+
+ if (!data->sequences)
+ return;
+
+ if (!is_publishable_relation(rel))
+ return;
+
+ /*
+ * Remember the xid for the message in streaming mode. See
+ * pgoutput_change.
+ */
+ if (in_streaming)
+ xid = txn->xid;
+
+ relentry = get_rel_sync_entry(data, RelationGetRelid(rel));
+
+ /*
+ * First check the sequence filter.
+ *
+ * We handle just REORDER_BUFFER_CHANGE_SEQUENCE here.
+ */
+ if (!relentry->pubactions.pubsequence)
+ return;
+
+ OutputPluginPrepareWrite(ctx, true);
+ logicalrep_write_sequence(ctx->out,
+ rel,
+ xid,
+ sequence_lsn,
+ transactional,
+ created,
+ last_value,
+ log_cnt,
+ is_called);
+ OutputPluginWrite(ctx, true);
+}
+
/*
* Currently we always forward.
*/
@@ -1140,7 +1204,8 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
entry->streamed_txns = NIL;
entry->replicate_valid = false;
entry->pubactions.pubinsert = entry->pubactions.pubupdate =
- entry->pubactions.pubdelete = entry->pubactions.pubtruncate = false;
+ entry->pubactions.pubdelete = entry->pubactions.pubtruncate =
+ entry->pubactions.pubsequence = false;
entry->publish_as_relid = InvalidOid;
entry->map = NULL; /* will be set by maybe_send_schema() if
* needed */
@@ -1160,6 +1225,7 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
List *schemaPubids = GetSchemaPublications(schemaId);
ListCell *lc;
Oid publish_as_relid = relid;
+ bool is_sequence = (get_rel_relkind(relid) == RELKIND_SEQUENCE);
/* Reload publications if needed before use. */
if (!publications_valid)
@@ -1183,12 +1249,23 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
Publication *pub = lfirst(lc);
bool publish = false;
- if (pub->alltables)
+ if (pub->alltables && (!is_sequence))
{
publish = true;
if (pub->pubviaroot && am_partition)
publish_as_relid = llast_oid(get_partition_ancestors(relid));
}
+ else if (pub->allsequences && is_sequence)
+ {
+ publish = true;
+ }
+
+ /* if a sequence, just cross-check the list of publications */
+ if (!publish && is_sequence)
+ {
+ if (list_member_oid(pubids, pub->oid))
+ publish = true;
+ }
if (!publish)
{
@@ -1243,10 +1320,12 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
entry->pubactions.pubupdate |= pub->pubactions.pubupdate;
entry->pubactions.pubdelete |= pub->pubactions.pubdelete;
entry->pubactions.pubtruncate |= pub->pubactions.pubtruncate;
+ entry->pubactions.pubsequence |= pub->pubactions.pubsequence;
}
if (entry->pubactions.pubinsert && entry->pubactions.pubupdate &&
- entry->pubactions.pubdelete && entry->pubactions.pubtruncate)
+ entry->pubactions.pubdelete && entry->pubactions.pubtruncate &&
+ entry->pubactions.pubsequence)
break;
}
@@ -1391,6 +1470,7 @@ rel_sync_cache_publication_cb(Datum arg, int cacheid, uint32 hashvalue)
entry->pubactions.pubupdate = false;
entry->pubactions.pubdelete = false;
entry->pubactions.pubtruncate = false;
+ entry->pubactions.pubsequence = false;
}
}
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index 9fa9e671a1..1c3286a38c 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -5605,6 +5605,7 @@ GetRelationPublicationActions(Relation relation)
pubactions->pubupdate |= pubform->pubupdate;
pubactions->pubdelete |= pubform->pubdelete;
pubactions->pubtruncate |= pubform->pubtruncate;
+ pubactions->pubsequence |= pubform->pubsequence;
ReleaseSysCache(tup);
@@ -5613,7 +5614,8 @@ GetRelationPublicationActions(Relation relation)
* other publications.
*/
if (pubactions->pubinsert && pubactions->pubupdate &&
- pubactions->pubdelete && pubactions->pubtruncate)
+ pubactions->pubdelete && pubactions->pubtruncate &&
+ pubactions->pubsequence)
break;
}
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index fa2e19593c..a63ed8eea4 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1647,13 +1647,13 @@ psql_completion(const char *text, int start, int end)
COMPLETE_WITH("ADD", "DROP", "OWNER TO", "RENAME TO", "SET");
/* ALTER PUBLICATION <name> ADD */
else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD"))
- COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE");
+ COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE|SEQUENCE");
/* ALTER PUBLICATION <name> DROP */
else if (Matches("ALTER", "PUBLICATION", MatchAny, "DROP"))
- COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE");
+ COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE|SEQUENCE");
/* ALTER PUBLICATION <name> SET */
else if (Matches("ALTER", "PUBLICATION", MatchAny, "SET"))
- COMPLETE_WITH("(", "ALL TABLES IN SCHEMA", "TABLE");
+ COMPLETE_WITH("(", "ALL TABLES IN SCHEMA", "TABLE|SEQUENCE");
else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD|DROP|SET", "ALL", "TABLES", "IN", "SCHEMA"))
COMPLETE_WITH_QUERY(Query_for_list_of_schemas
" AND nspname != 'pg_catalog' "
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 6412f369f1..be5e3b4d40 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -11513,6 +11513,11 @@
provolatile => 's', prorettype => 'oid', proargtypes => 'text',
proallargtypes => '{text,oid}', proargmodes => '{i,o}',
proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_tables' },
+{ oid => '8000', descr => 'get OIDs of sequences in a publication',
+ proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+ provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+ proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+ proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
{ oid => '6121',
descr => 'returns whether a relation can be part of a publication',
proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 1ae439e6f3..0a05fb885c 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
*/
bool puballtables;
+ /*
+ * indicates that this is a special publication which should encompass all
+ * sequences in the database (except for the unlogged and temp ones)
+ */
+ bool puballsequences;
+
/* true if inserts are published */
bool pubinsert;
@@ -52,6 +58,9 @@ CATALOG(pg_publication,6104,PublicationRelationId)
/* true if truncates are published */
bool pubtruncate;
+ /* true if sequences are published */
+ bool pubsequence;
+
/* true if partition changes are published using root schema */
bool pubviaroot;
} FormData_pg_publication;
@@ -72,6 +81,7 @@ typedef struct PublicationActions
bool pubupdate;
bool pubdelete;
bool pubtruncate;
+ bool pubsequence;
} PublicationActions;
typedef struct Publication
@@ -79,6 +89,7 @@ typedef struct Publication
Oid oid;
char *name;
bool alltables;
+ bool allsequences;
bool pubviaroot;
PublicationActions pubactions;
} Publication;
@@ -121,6 +132,9 @@ extern List *GetPubPartitionOptionRelations(List *result,
PublicationPartOpt pub_partopt,
Oid relid);
+extern List *GetAllSequencesPublicationRelations(void);
+extern List *GetPublicationSequenceRelations(Oid pubid);
+
extern bool is_publishable_relation(Relation rel);
extern ObjectAddress publication_add_relation(Oid pubid, PublicationRelInfo *targetrel,
bool if_not_exists);
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index ddb36b5b17..00038fcea3 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -60,6 +60,7 @@ extern ObjectAddress DefineSequence(ParseState *pstate, CreateSeqStmt *stmt);
extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
extern void DeleteSequenceTuple(Oid relid);
extern void ResetSequence(Oid seq_relid);
+extern void ResetSequence2(Oid seq_relid, int64 last_value, int64 log_cnt, bool is_called);
extern void ResetSequenceCaches(void);
extern void AtEOXact_Sequences(bool isCommit);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 067138e6b5..44e596fd12 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -3652,6 +3652,10 @@ typedef enum PublicationObjSpecType
PUBLICATIONOBJ_TABLE_IN_SCHEMA, /* Tables in schema type */
PUBLICATIONOBJ_TABLE_IN_CUR_SCHEMA, /* Get the first element from
* search_path */
+ PUBLICATIONOBJ_SEQUENCE, /* Sequence type */
+ PUBLICATIONOBJ_SEQUENCE_IN_SCHEMA, /* Sequences in schema type */
+ PUBLICATIONOBJ_SEQUENCE_IN_CUR_SCHEMA, /* Get the first element from
+ * search_path */
PUBLICATIONOBJ_CONTINUATION /* Continuation of previous type */
} PublicationObjSpecType;
@@ -3671,6 +3675,7 @@ typedef struct CreatePublicationStmt
List *options; /* List of DefElem nodes */
List *pubobjects; /* Optional list of publication objects */
bool for_all_tables; /* Special publication for all tables in db */
+ bool for_all_sequences; /* Special publication for all sequences in db */
} CreatePublicationStmt;
typedef struct AlterPublicationStmt
@@ -3687,6 +3692,7 @@ typedef struct AlterPublicationStmt
*/
List *pubobjects; /* Optional list of publication objects */
bool for_all_tables; /* Special publication for all tables in db */
+ bool for_all_sequences; /* Special publication for all sequences in db */
DefElemAction action; /* What action to perform with the
* tables/schemas */
} AlterPublicationStmt;
diff --git a/src/include/replication/logicalproto.h b/src/include/replication/logicalproto.h
index 83741dcf42..d468c3ccfb 100644
--- a/src/include/replication/logicalproto.h
+++ b/src/include/replication/logicalproto.h
@@ -60,6 +60,7 @@ typedef enum LogicalRepMsgType
LOGICAL_REP_MSG_RELATION = 'R',
LOGICAL_REP_MSG_TYPE = 'Y',
LOGICAL_REP_MSG_MESSAGE = 'M',
+ LOGICAL_REP_MSG_SEQUENCE = 'X', /* FIXME change */
LOGICAL_REP_MSG_BEGIN_PREPARE = 'b',
LOGICAL_REP_MSG_PREPARE = 'P',
LOGICAL_REP_MSG_COMMIT_PREPARED = 'K',
@@ -117,6 +118,19 @@ typedef struct LogicalRepTyp
char *typname; /* name of the remote type */
} LogicalRepTyp;
+/* Sequence info */
+typedef struct LogicalRepSequence
+{
+ Oid remoteid; /* unique id of the remote sequence */
+ char *nspname; /* schema name of remote sequence */
+ char *seqname; /* name of the remote sequence */
+ bool transactional;
+ bool created;
+ int64 last_value;
+ int64 log_cnt; /* XXX probably not needed? */
+ bool is_called; /* XXX probably not needed? */
+} LogicalRepSequence;
+
/* Transaction info */
typedef struct LogicalRepBeginData
{
@@ -227,6 +241,12 @@ extern List *logicalrep_read_truncate(StringInfo in,
bool *cascade, bool *restart_seqs);
extern void logicalrep_write_message(StringInfo out, TransactionId xid, XLogRecPtr lsn,
bool transactional, const char *prefix, Size sz, const char *message);
+extern void logicalrep_write_sequence(StringInfo out, Relation rel,
+ TransactionId xid, XLogRecPtr lsn,
+ bool transactional, bool created,
+ int64 last_value, int64 log_cnt,
+ bool is_called);
+extern void logicalrep_read_sequence(StringInfo in, LogicalRepSequence *seqdata);
extern void logicalrep_write_rel(StringInfo out, TransactionId xid,
Relation rel);
extern LogicalRepRelation *logicalrep_read_rel(StringInfo in);
diff --git a/src/include/replication/pgoutput.h b/src/include/replication/pgoutput.h
index 0dc460fb70..97eb2a7ef1 100644
--- a/src/include/replication/pgoutput.h
+++ b/src/include/replication/pgoutput.h
@@ -28,6 +28,7 @@ typedef struct PGOutputData
bool streaming;
bool messages;
bool two_phase;
+ bool sequences;
} PGOutputData;
#endif /* PGOUTPUT_H */
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 2fa00a3c29..b32a1e6729 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1451,6 +1451,14 @@ pg_prepared_xacts| SELECT p.transaction,
FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+ n.nspname AS schemaname,
+ c.relname AS sequencename
+ FROM pg_publication p,
+ LATERAL pg_get_publication_sequences((p.pubname)::text) gpt(relid),
+ (pg_class c
+ JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+ WHERE (c.oid = gpt.relid);
pg_publication_tables| SELECT p.pubname,
n.nspname AS schemaname,
c.relname AS tablename
diff --git a/src/test/subscription/t/023_sequences.pl b/src/test/subscription/t/023_sequences.pl
new file mode 100644
index 0000000000..734f428644
--- /dev/null
+++ b/src/test/subscription/t/023_sequences.pl
@@ -0,0 +1,196 @@
+
+# Copyright (c) 2021, PostgreSQL Global Development Group
+
+# This tests that sequences are replicated correctly by logical replication
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More tests => 6;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->start;
+
+# Create subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Create some preexisting content on publisher
+my $ddl = qq(
+ CREATE SEQUENCE s;
+);
+
+# Setup structure on the publisher
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Create the same structure on the subscriber, and an extra sequence that
+# we'll create on the publisher later
+$ddl = qq(
+ CREATE SEQUENCE s;
+ CREATE SEQUENCE s2;
+);
+
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Setup logical replication
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+ "CREATE PUBLICATION seq_pub");
+
+$node_publisher->safe_psql('postgres',
+ "ALTER PUBLICATION seq_pub ADD SEQUENCE s");
+
+$node_subscriber->safe_psql('postgres',
+ "CREATE SUBSCRIPTION seq_sub CONNECTION '$publisher_connstr' PUBLICATION seq_pub WITH (slot_name = seq_sub_slot)"
+);
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Wait for initial sync to finish as well
+my $synced_query =
+ "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('s', 'r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+ or die "Timed out while waiting for subscriber to synchronize data";
+
+# Insert initial test data
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ -- generate a number of values using the sequence
+ SELECT nextval('s') FROM generate_series(1,100);
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+my $result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s;
+));
+
+is( $result, '132|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+# advance the sequence in a rolled-back transaction - should not be replicated
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ SELECT nextval('s') FROM generate_series(1,100);
+ ROLLBACK;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s;
+));
+
+is( $result, '132|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+# create a new sequence and roll it back - should not be replicated, due to
+# the transactional behavior
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ CREATE SEQUENCE s2;
+ ALTER PUBLICATION seq_pub ADD SEQUENCE s2;
+ SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '1|0|f',
+ 'check replicated sequence values on subscriber');
+
+
+# create a new sequence, advance it in a rolled-back transaction, but commit
+# the create - the advance should be replicated nevertheless
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ CREATE SEQUENCE s2;
+ ALTER PUBLICATION seq_pub ADD SEQUENCE s2;
+ SAVEPOINT sp1;
+ SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK TO sp1;
+ COMMIT;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Wait for sync of the second sequence we just added to finish
+$synced_query =
+ "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('s', 'r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+ or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '132|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+# advance the new sequence in a transaction, and roll it back - in this case
+# it should not be replicated at commit
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '132|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+# advance the sequence in a subtransaction - the subtransaction gets rolled
+# back, but commit the main one - the changes should still be replicated
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ SAVEPOINT s1;
+ SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK TO s1;
+ COMMIT;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '330|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+$node_subscriber->stop('fast');
+$node_publisher->stop('fast');
--
2.31.1
Attachment: 0003-Add-support-for-decoding-sequences-to-test_-20211121.patch (text/x-patch)
From b4ac63f9f8d1739366021cee293b14231f753386 Mon Sep 17 00:00:00 2001
From: Tomas Vondra <tomas.vondra@postgresql.org>
Date: Fri, 24 Sep 2021 00:42:04 +0200
Subject: [PATCH 3/4] Add support for decoding sequences to test_decoding
---
contrib/test_decoding/Makefile | 3 +-
contrib/test_decoding/expected/sequence.out | 321 ++++++++++++++++++++
contrib/test_decoding/sql/sequence.sql | 119 ++++++++
contrib/test_decoding/test_decoding.c | 69 +++++
4 files changed, 511 insertions(+), 1 deletion(-)
create mode 100644 contrib/test_decoding/expected/sequence.out
create mode 100644 contrib/test_decoding/sql/sequence.sql
diff --git a/contrib/test_decoding/Makefile b/contrib/test_decoding/Makefile
index 9a31e0b879..56ddc3abae 100644
--- a/contrib/test_decoding/Makefile
+++ b/contrib/test_decoding/Makefile
@@ -5,7 +5,8 @@ PGFILEDESC = "test_decoding - example of a logical decoding output plugin"
REGRESS = ddl xact rewrite toast permissions decoding_in_xact \
decoding_into_rel binary prepared replorigin time messages \
- spill slot truncate stream stats twophase twophase_stream
+ spill slot truncate stream stats twophase twophase_stream \
+ sequence
ISOLATION = mxact delayed_startup ondisk_startup concurrent_ddl_dml \
oldest_xmin snapshot_transfer subxact_without_top concurrent_stream \
twophase_snapshot
diff --git a/contrib/test_decoding/expected/sequence.out b/contrib/test_decoding/expected/sequence.out
new file mode 100644
index 0000000000..17c88990b1
--- /dev/null
+++ b/contrib/test_decoding/expected/sequence.out
@@ -0,0 +1,321 @@
+-- predictability
+SET synchronous_commit = on;
+SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'test_decoding');
+ ?column?
+----------
+ init
+(1 row)
+
+CREATE SEQUENCE test_sequence;
+-- test the sequence changes by several nextval() calls
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 1
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 2
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 4
+(1 row)
+
+-- test the sequence changes by several ALTER commands
+ALTER SEQUENCE test_sequence INCREMENT BY 10;
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 14
+(1 row)
+
+ALTER SEQUENCE test_sequence START WITH 3000;
+ALTER SEQUENCE test_sequence MAXVALUE 10000;
+ALTER SEQUENCE test_sequence RESTART WITH 4000;
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 4000
+(1 row)
+
+-- test the sequence changes by several setval() calls
+SELECT setval('test_sequence', 3500);
+ setval
+--------
+ 3500
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3510
+(1 row)
+
+SELECT setval('test_sequence', 3500, true);
+ setval
+--------
+ 3500
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3510
+(1 row)
+
+SELECT setval('test_sequence', 3500, false);
+ setval
+--------
+ 3500
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3500
+(1 row)
+
+-- show results and drop sequence
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+ data
+------------------------------------------------------------------------------------------------
+ BEGIN
+ sequence public.test_sequence: transactional:1 created:1 last_value:1 log_cnt:0 is_called:0
+ COMMIT
+ sequence public.test_sequence: transactional:0 created:0 last_value:33 log_cnt:0 is_called:1
+ BEGIN
+ sequence public.test_sequence: transactional:1 created:1 last_value:4 log_cnt:0 is_called:1
+ COMMIT
+ sequence public.test_sequence: transactional:0 created:0 last_value:334 log_cnt:0 is_called:1
+ BEGIN
+ sequence public.test_sequence: transactional:1 created:1 last_value:14 log_cnt:32 is_called:1
+ COMMIT
+ BEGIN
+ sequence public.test_sequence: transactional:1 created:1 last_value:14 log_cnt:0 is_called:1
+ COMMIT
+ BEGIN
+ sequence public.test_sequence: transactional:1 created:1 last_value:4000 log_cnt:0 is_called:0
+ COMMIT
+ sequence public.test_sequence: transactional:0 created:0 last_value:4320 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:0 created:0 last_value:3500 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:0 created:0 last_value:3830 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:0 created:0 last_value:3500 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:0 created:0 last_value:3830 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:0 created:0 last_value:3500 log_cnt:0 is_called:0
+ sequence public.test_sequence: transactional:0 created:0 last_value:3820 log_cnt:0 is_called:1
+(24 rows)
+
+DROP SEQUENCE test_sequence;
+-- rollback on sequence creation and update
+BEGIN;
+CREATE SEQUENCE test_sequence;
+CREATE TABLE test_table (a INT);
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 1
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 2
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3
+(1 row)
+
+SELECT setval('test_sequence', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3001
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3002
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3003
+(1 row)
+
+ALTER SEQUENCE test_sequence RESTART WITH 6000;
+INSERT INTO test_table VALUES( (SELECT nextval('test_sequence')) );
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 6001
+(1 row)
+
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '0');
+ data
+------
+(0 rows)
+
+-- rollback on table creation with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '0');
+ data
+------
+(0 rows)
+
+-- rollback on table with serial column
+CREATE TABLE test_table (a SERIAL, b INT);
+BEGIN;
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+-- check table and sequence values after rollback
+SELECT * from test_table_a_seq;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+ 3003 | 30 | t
+(1 row)
+
+SELECT nextval('test_table_a_seq');
+ nextval
+---------
+ 3004
+(1 row)
+
+DROP TABLE test_table;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+ data
+---------------------------------------------------------------------------------------------------
+ BEGIN
+ sequence public.test_table_a_seq: transactional:1 created:1 last_value:1 log_cnt:0 is_called:0
+ COMMIT
+ sequence public.test_table_a_seq: transactional:0 created:0 last_value:33 log_cnt:0 is_called:1
+ sequence public.test_table_a_seq: transactional:0 created:0 last_value:3000 log_cnt:0 is_called:1
+ sequence public.test_table_a_seq: transactional:0 created:0 last_value:3033 log_cnt:0 is_called:1
+ BEGIN
+ COMMIT
+(8 rows)
+
+-- savepoint test on table with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+SAVEPOINT a;
+INSERT INTO test_table (b) VALUES (300);
+ROLLBACK TO SAVEPOINT a;
+DROP TABLE test_table;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+ data
+--------------------------------------------------------------
+ BEGIN
+ table public.test_table: INSERT: a[integer]:1 b[integer]:100
+ table public.test_table: INSERT: a[integer]:2 b[integer]:200
+ COMMIT
+(4 rows)
+
+-- savepoint test on a sequence
+BEGIN;
+CREATE SEQUENCE test_sequence;
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 1
+(1 row)
+
+SELECT setval('test_sequence', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3001
+(1 row)
+
+SAVEPOINT a;
+ALTER SEQUENCE test_sequence START WITH 7000;
+SELECT setval('test_sequence', 5000);
+ setval
+--------
+ 5000
+(1 row)
+
+ROLLBACK TO SAVEPOINT a;
+SELECT * FROM test_sequence;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+ 3001 | 32 | t
+(1 row)
+
+DROP SEQUENCE test_sequence;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+ data
+--------
+ BEGIN
+ COMMIT
+(2 rows)
+
+SELECT pg_drop_replication_slot('regression_slot');
+ pg_drop_replication_slot
+--------------------------
+
+(1 row)
+
diff --git a/contrib/test_decoding/sql/sequence.sql b/contrib/test_decoding/sql/sequence.sql
new file mode 100644
index 0000000000..d8a34738f3
--- /dev/null
+++ b/contrib/test_decoding/sql/sequence.sql
@@ -0,0 +1,119 @@
+-- predictability
+SET synchronous_commit = on;
+SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'test_decoding');
+
+CREATE SEQUENCE test_sequence;
+
+-- test the sequence changes by several nextval() calls
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+
+-- test the sequence changes by several ALTER commands
+ALTER SEQUENCE test_sequence INCREMENT BY 10;
+SELECT nextval('test_sequence');
+
+ALTER SEQUENCE test_sequence START WITH 3000;
+ALTER SEQUENCE test_sequence MAXVALUE 10000;
+ALTER SEQUENCE test_sequence RESTART WITH 4000;
+SELECT nextval('test_sequence');
+
+-- test the sequence changes by several setval() calls
+SELECT setval('test_sequence', 3500);
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3500, true);
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3500, false);
+SELECT nextval('test_sequence');
+
+-- show results and drop sequence
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+DROP SEQUENCE test_sequence;
+
+-- rollback on sequence creation and update
+BEGIN;
+CREATE SEQUENCE test_sequence;
+CREATE TABLE test_table (a INT);
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3000);
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+ALTER SEQUENCE test_sequence RESTART WITH 6000;
+INSERT INTO test_table VALUES( (SELECT nextval('test_sequence')) );
+SELECT nextval('test_sequence');
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '0');
+
+-- rollback on table creation with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '0');
+
+-- rollback on table with serial column
+CREATE TABLE test_table (a SERIAL, b INT);
+
+BEGIN;
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+
+-- check table and sequence values after rollback
+SELECT * from test_table_a_seq;
+SELECT nextval('test_table_a_seq');
+
+DROP TABLE test_table;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+
+-- savepoint test on table with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+SAVEPOINT a;
+INSERT INTO test_table (b) VALUES (300);
+ROLLBACK TO SAVEPOINT a;
+DROP TABLE test_table;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+
+-- savepoint test on a sequence
+BEGIN;
+CREATE SEQUENCE test_sequence;
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3000);
+SELECT nextval('test_sequence');
+SAVEPOINT a;
+ALTER SEQUENCE test_sequence START WITH 7000;
+SELECT setval('test_sequence', 5000);
+ROLLBACK TO SAVEPOINT a;
+SELECT * FROM test_sequence;
+DROP SEQUENCE test_sequence;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+
+SELECT pg_drop_replication_slot('regression_slot');
diff --git a/contrib/test_decoding/test_decoding.c b/contrib/test_decoding/test_decoding.c
index e5cd84e85e..45765d299e 100644
--- a/contrib/test_decoding/test_decoding.c
+++ b/contrib/test_decoding/test_decoding.c
@@ -35,6 +35,7 @@ typedef struct
bool include_timestamp;
bool skip_empty_xacts;
bool only_local;
+ bool skip_sequences;
} TestDecodingData;
/*
@@ -76,6 +77,10 @@ static void pg_decode_message(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn, XLogRecPtr message_lsn,
bool transactional, const char *prefix,
Size sz, const char *message);
+static void pg_decode_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation rel, bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called);
static bool pg_decode_filter_prepare(LogicalDecodingContext *ctx,
TransactionId xid,
const char *gid);
@@ -116,6 +121,10 @@ static void pg_decode_stream_message(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn, XLogRecPtr message_lsn,
bool transactional, const char *prefix,
Size sz, const char *message);
+static void pg_decode_stream_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation rel, bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called);
static void pg_decode_stream_truncate(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn,
int nrelations, Relation relations[],
@@ -141,6 +150,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->filter_by_origin_cb = pg_decode_filter;
cb->shutdown_cb = pg_decode_shutdown;
cb->message_cb = pg_decode_message;
+ cb->sequence_cb = pg_decode_sequence;
cb->filter_prepare_cb = pg_decode_filter_prepare;
cb->begin_prepare_cb = pg_decode_begin_prepare_txn;
cb->prepare_cb = pg_decode_prepare_txn;
@@ -153,6 +163,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->stream_commit_cb = pg_decode_stream_commit;
cb->stream_change_cb = pg_decode_stream_change;
cb->stream_message_cb = pg_decode_stream_message;
+ cb->stream_sequence_cb = pg_decode_stream_sequence;
cb->stream_truncate_cb = pg_decode_stream_truncate;
}
@@ -175,6 +186,9 @@ pg_decode_startup(LogicalDecodingContext *ctx, OutputPluginOptions *opt,
data->skip_empty_xacts = false;
data->only_local = false;
+ /* skip sequences by default for backwards compatibility */
+ data->skip_sequences = true;
+
ctx->output_plugin_private = data;
opt->output_type = OUTPUT_PLUGIN_TEXTUAL_OUTPUT;
@@ -265,6 +279,17 @@ pg_decode_startup(LogicalDecodingContext *ctx, OutputPluginOptions *opt,
errmsg("could not parse value \"%s\" for parameter \"%s\"",
strVal(elem->arg), elem->defname)));
}
+ else if (strcmp(elem->defname, "skip-sequences") == 0)
+ {
+
+ if (elem->arg == NULL)
+ continue; /* true by default */
+ else if (!parse_bool(strVal(elem->arg), &data->skip_sequences))
+ ereport(ERROR,
+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ errmsg("could not parse value \"%s\" for parameter \"%s\"",
+ strVal(elem->arg), elem->defname)));
+ }
else
{
ereport(ERROR,
@@ -744,6 +769,28 @@ pg_decode_message(LogicalDecodingContext *ctx,
OutputPluginWrite(ctx, true);
}
+static void
+pg_decode_sequence(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ TestDecodingData *data = ctx->output_plugin_private;
+
+ /* return if requested to skip_sequences */
+ if (data->skip_sequences)
+ return;
+
+ OutputPluginPrepareWrite(ctx, true);
+ appendStringInfoString(ctx->out, "sequence ");
+ appendStringInfoString(ctx->out,
+ quote_qualified_identifier(get_namespace_name(get_rel_namespace(RelationGetRelid(rel))),
+ RelationGetRelationName(rel)));
+ appendStringInfo(ctx->out, ": transactional:%d created:%d last_value:%zu log_cnt:%zu is_called:%d",
+ transactional, created, last_value, log_cnt, is_called);
+ OutputPluginWrite(ctx, true);
+}
+
static void
pg_decode_stream_start(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn)
@@ -943,6 +990,28 @@ pg_decode_stream_message(LogicalDecodingContext *ctx,
OutputPluginWrite(ctx, true);
}
+static void
+pg_decode_stream_sequence(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ TestDecodingData *data = ctx->output_plugin_private;
+
+ /* return if requested to skip_sequences */
+ if (data->skip_sequences)
+ return;
+
+ OutputPluginPrepareWrite(ctx, true);
+ appendStringInfoString(ctx->out, "streaming sequence ");
+ appendStringInfoString(ctx->out,
+ quote_qualified_identifier(get_namespace_name(get_rel_namespace(RelationGetRelid(rel))),
+ RelationGetRelationName(rel)));
+ appendStringInfo(ctx->out, ": transactional:%d created:%d last_value:%zu log_cnt:%zu is_called:%d",
+ transactional, created, last_value, log_cnt, is_called);
+ OutputPluginWrite(ctx, true);
+}
+
/*
* In streaming mode, we don't display the detailed information of Truncate.
* See pg_decode_stream_change.
--
2.31.1
Attachment: 0002-rework-20211121.patch (text/x-patch)
From ce26bbc1a0a568e0e2946afa962966099daca997 Mon Sep 17 00:00:00 2001
From: Tomas Vondra <tomas.vondra@postgresql.org>
Date: Thu, 18 Nov 2021 19:48:30 +0100
Subject: [PATCH 2/4] rework
---
src/backend/access/rmgrdesc/logicalmsgdesc.c | 13 +
src/backend/access/transam/xact.c | 9 +
src/backend/commands/sequence.c | 236 ++++++++++++------
src/backend/replication/logical/decode.c | 104 +++-----
src/backend/replication/logical/message.c | 3 +-
.../replication/logical/reorderbuffer.c | 213 ++++------------
src/include/commands/sequence.h | 2 +
src/include/replication/message.h | 15 ++
src/include/replication/reorderbuffer.h | 10 +-
9 files changed, 288 insertions(+), 317 deletions(-)
diff --git a/src/backend/access/rmgrdesc/logicalmsgdesc.c b/src/backend/access/rmgrdesc/logicalmsgdesc.c
index d64ce2e7ef..c84628cd54 100644
--- a/src/backend/access/rmgrdesc/logicalmsgdesc.c
+++ b/src/backend/access/rmgrdesc/logicalmsgdesc.c
@@ -40,6 +40,16 @@ logicalmsg_desc(StringInfo buf, XLogReaderState *record)
sep = " ";
}
}
+ else if (info == XLOG_LOGICAL_SEQUENCE)
+ {
+ xl_logical_sequence *xlrec = (xl_logical_sequence *) rec;
+
+ appendStringInfo(buf, "rel %u/%u/%u last: %lu log_cnt: %lu is_called: %d",
+ xlrec->node.spcNode,
+ xlrec->node.dbNode,
+ xlrec->node.relNode,
+ xlrec->last, xlrec->log_cnt, xlrec->is_called);
+ }
}
const char *
@@ -48,5 +58,8 @@ logicalmsg_identify(uint8 info)
if ((info & ~XLR_INFO_MASK) == XLOG_LOGICAL_MESSAGE)
return "MESSAGE";
+ if ((info & ~XLR_INFO_MASK) == XLOG_LOGICAL_SEQUENCE)
+ return "SEQUENCE";
+
return NULL;
}
diff --git a/src/backend/access/transam/xact.c b/src/backend/access/transam/xact.c
index 8e35c432f5..eac8180250 100644
--- a/src/backend/access/transam/xact.c
+++ b/src/backend/access/transam/xact.c
@@ -36,6 +36,7 @@
#include "catalog/storage.h"
#include "commands/async.h"
#include "commands/tablecmds.h"
+#include "commands/sequence.h"
#include "commands/trigger.h"
#include "executor/spi.h"
#include "libpq/be-fsstubs.h"
@@ -2220,6 +2221,9 @@ CommitTransaction(void)
/* Prevent cancel/die interrupt while cleaning up */
HOLD_INTERRUPTS();
+ /* Write state of sequences to WAL */
+ AtEOXact_Sequences(true);
+
/* Commit updates to the relation map --- do this as late as possible */
AtEOXact_RelationMap(true, is_parallel_worker);
@@ -2378,6 +2382,8 @@ CommitTransaction(void)
* PrepareTransaction
*
* NB: if you change this routine, better look at CommitTransaction too!
+ *
+ * XXX Does this need to do something about logging of sequences?
*/
static void
PrepareTransaction(void)
@@ -2776,6 +2782,9 @@ AbortTransaction(void)
AtEOXact_RelationMap(false, is_parallel_worker);
AtAbort_Twophase();
+ /* XXX not sure we need to do anything about sequences at abort */
+ AtEOXact_Sequences(false);
+
/*
* Advertise the fact that we aborted in pg_xact (assuming that we got as
* far as assigning an XID to advertise). But if we're inside a parallel
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index a98fcc2e97..e45e4042a7 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -37,6 +37,7 @@
#include "miscadmin.h"
#include "nodes/makefuncs.h"
#include "parser/parse_type.h"
+#include "replication/message.h"
#include "storage/lmgr.h"
#include "storage/proc.h"
#include "storage/smgr.h"
@@ -75,6 +76,7 @@ typedef struct SeqTableData
{
Oid relid; /* pg_class OID of this sequence (hash key) */
Oid filenode; /* last seen relfilenode of this sequence */
+ Oid tablespace; /* last seen tablespace of this sequence */
LocalTransactionId lxid; /* xact in which we last did a seq op */
bool last_valid; /* do we have a valid "last" value? */
int64 last; /* value last returned by nextval */
@@ -82,6 +84,11 @@ typedef struct SeqTableData
/* if last != cached, we have not used up all the cached values */
int64 increment; /* copy of sequence's increment field */
/* note that increment is zero until we first do nextval_internal() */
+ bool need_log; /* should be written to WAL at commit? */
+ bool touched;
+ int64 last_value;
+ int64 log_cnt;
+ bool is_called;
} SeqTableData;
typedef SeqTableData *SeqTable;
@@ -94,7 +101,7 @@ static HTAB *seqhashtab = NULL; /* hash table for SeqTable items */
*/
static SeqTableData *last_used_seq = NULL;
-static void fill_seq_with_data(Relation rel, HeapTuple tuple, bool create);
+static void fill_seq_with_data(Relation rel, HeapTuple tuple);
static Relation lock_and_open_sequence(SeqTable seq);
static void create_seq_hashtable(void);
static void init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel);
@@ -222,7 +229,7 @@ DefineSequence(ParseState *pstate, CreateSeqStmt *seq)
/* now initialize the sequence's data */
tuple = heap_form_tuple(tupDesc, value, null);
- fill_seq_with_data(rel, tuple, true);
+ fill_seq_with_data(rel, tuple);
/* process OWNED BY if given */
if (owned_by)
@@ -230,6 +237,31 @@ DefineSequence(ParseState *pstate, CreateSeqStmt *seq)
table_close(rel, NoLock);
+ /*
+ * Add the new sequence to local cache, so that we log it at commit.
+ */
+ {
+ Relation seqrel;
+ SeqTable elm;
+ Buffer buf;
+ Form_pg_sequence_data seq;
+ HeapTupleData seqdatatuple;
+
+ init_sequence(seqoid, &elm, &seqrel);
+
+ seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+
+ elm->last_value = seq->last_value;
+ elm->is_called = seq->is_called;
+ elm->log_cnt = 0;
+ elm->touched = true;
+ elm->need_log = true;
+
+ relation_close(seqrel, NoLock);
+
+ UnlockReleaseBuffer(buf);
+ }
+
/* fill in pg_sequence */
rel = table_open(SequenceRelationId, RowExclusiveLock);
tupDesc = RelationGetDescr(rel);
@@ -327,12 +359,18 @@ ResetSequence(Oid seq_relid)
/*
* Insert the modified tuple into the new storage file.
*/
- fill_seq_with_data(seq_rel, tuple, true);
+ fill_seq_with_data(seq_rel, tuple);
/* Clear local cache so that we don't think we have cached numbers */
/* Note that we do not change the currval() state */
elm->cached = elm->last;
+ /* Remember we need to write the sequence to WAL at commit. */
+ elm->need_log = true;
+ elm->is_called = false;
+ elm->log_cnt = 0;
+ elm->touched = true;
+
relation_close(seq_rel, NoLock);
}
@@ -340,7 +378,7 @@ ResetSequence(Oid seq_relid)
* Initialize a sequence's relation with the specified tuple as content
*/
static void
-fill_seq_with_data(Relation rel, HeapTuple tuple, bool create)
+fill_seq_with_data(Relation rel, HeapTuple tuple)
{
Buffer buf;
Page page;
@@ -380,27 +418,7 @@ fill_seq_with_data(Relation rel, HeapTuple tuple, bool create)
if (RelationNeedsWAL(rel))
{
GetTopTransactionId();
-
- /*
- * Ensure we have a proper XID, which will be included in the XLOG
- * record by XLogRecordAssemble. Otherwise the first nextval() in
- * a subxact (without any preceding changes) would get XID 0,
- * and it'd be impossible to decide which top xact it belongs to.
- * It'd also trigger assert in DecodeSequence.
- *
- * XXX This might seem unnecessary, because if there's no XID the
- * transaction couldn't have done anything important yet, e.g. it
- * could not have created a sequence. But that's incorrect, as it
- * ignores subtransactions. The current subtransaction might not
- * have done anything yet (thus no XID), but an earlier one might
- * have created the sequence.
- *
- * XXX Not sure if this is the best solution. Maybe do this only
- * with wal_level=logical to minimize the overhead. OTOH advancing
- * the sequence is likely followed by using the value(s) for some
- * other activity, which assigns the XID.
- */
- GetCurrentTransactionId();
+ // GetCurrentTransactionId();
}
START_CRIT_SECTION();
@@ -422,7 +440,6 @@ fill_seq_with_data(Relation rel, HeapTuple tuple, bool create)
XLogRegisterBuffer(0, buf, REGBUF_WILL_INIT);
xlrec.node = rel->rd_node;
- xlrec.created = create;
XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
XLogRegisterData((char *) tuple->t_data, tuple->t_len);
@@ -526,7 +543,11 @@ AlterSequence(ParseState *pstate, AlterSeqStmt *stmt)
/*
* Insert the modified tuple into the new storage file.
*/
- fill_seq_with_data(seqrel, newdatatuple, true);
+ fill_seq_with_data(seqrel, newdatatuple);
+
+ elm->need_log = true;
+ elm->is_called = false;
+ elm->log_cnt = 0;
}
/* process OWNED BY if given */
@@ -626,6 +647,8 @@ nextval_internal(Oid relid, bool check_permissions)
/* open and lock sequence */
init_sequence(relid, &elm, &seqrel);
+ elm->touched = true;
+
if (check_permissions &&
pg_class_aclcheck(elm->relid, GetUserId(),
ACL_USAGE | ACL_UPDATE) != ACLCHECK_OK)
@@ -792,27 +815,7 @@ nextval_internal(Oid relid, bool check_permissions)
if (logit && RelationNeedsWAL(seqrel))
{
GetTopTransactionId();
-
- /*
- * Ensure we have a proper XID, which will be included in the XLOG
- * record by XLogRecordAssemble. Otherwise the first nextval() in
- * a subxact (without any preceding changes) would get XID 0,
- * and it'd be impossible to decide which top xact it belongs to.
- * It'd also trigger assert in DecodeSequence.
- *
- * XXX This might seem unnecessary, because if there's no XID the
- * transaction couldn't have done anything important yet, e.g. it
- * could not have created a sequence. But that's incorrect, as it
- * ignores subtransactions. The current subtransaction might not
- * have done anything yet (thus no XID), but an earlier one might
- * have created the sequence.
- *
- * XXX Not sure if this is the best solution. Maybe do this only
- * with wal_level=logical to minimize the overhead. OTOH advancing
- * the sequence is likely followed by using the value(s) for some
- * other activity, which assigns the XID.
- */
- GetCurrentTransactionId();
+ // GetCurrentTransactionId();
}
/* ready to change the on-disk (or really, in-buffer) tuple */
@@ -850,7 +853,6 @@ nextval_internal(Oid relid, bool check_permissions)
seq->log_cnt = 0;
xlrec.node = seqrel->rd_node;
- xlrec.created = false;
XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
XLogRegisterData((char *) seqdatatuple.t_data, seqdatatuple.t_len);
@@ -858,6 +860,11 @@ nextval_internal(Oid relid, bool check_permissions)
recptr = XLogInsert(RM_SEQ_ID, XLOG_SEQ_LOG);
PageSetLSN(page, recptr);
+
+ elm->need_log = true;
+ elm->is_called = true;
+ elm->last_value = next;
+ elm->log_cnt = 0;
}
/* Now update sequence tuple to the intended final state */
@@ -969,6 +976,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
/* open and lock sequence */
init_sequence(relid, &elm, &seqrel);
+ elm->touched = true;
+
if (pg_class_aclcheck(elm->relid, GetUserId(), ACL_UPDATE) != ACLCHECK_OK)
ereport(ERROR,
(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
@@ -1027,27 +1036,7 @@ do_setval(Oid relid, int64 next, bool iscalled)
if (RelationNeedsWAL(seqrel))
{
GetTopTransactionId();
-
- /*
- * Ensure we have a proper XID, which will be included in the XLOG
- * record by XLogRecordAssemble. Otherwise the first nextval() in
- * a subxact (without any preceding changes) would get XID 0,
- * and it'd be impossible to decide which top xact it belongs to.
- * It'd also trigger assert in DecodeSequence.
- *
- * XXX This might seem unnecessary, because if there's no XID the
- * transaction couldn't have done anything important yet, e.g. it
- * could not have created a sequence. But that's incorrect, as it
- * ignores subtransactions. The current subtransaction might not
- * have done anything yet (thus no XID), but an earlier one might
- * have created the sequence.
- *
- * XXX Not sure if this is the best solution. Maybe do this only
- * with wal_level=logical to minimize the overhead. OTOH advancing
- * the sequence is likely followed by using the value(s) for some
- * other activity, which assigns the XID.
- */
- GetCurrentTransactionId();
+ // GetCurrentTransactionId();
}
/* ready to change the on-disk (or really, in-buffer) tuple */
@@ -1070,14 +1059,17 @@ do_setval(Oid relid, int64 next, bool iscalled)
XLogRegisterBuffer(0, buf, REGBUF_WILL_INIT);
xlrec.node = seqrel->rd_node;
- xlrec.created = false;
-
XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
XLogRegisterData((char *) seqdatatuple.t_data, seqdatatuple.t_len);
recptr = XLogInsert(RM_SEQ_ID, XLOG_SEQ_LOG);
PageSetLSN(page, recptr);
+
+ elm->need_log = true;
+ elm->is_called = false;
+ elm->last_value = next;
+ elm->log_cnt = 0;
}
END_CRIT_SECTION();
@@ -1196,6 +1188,7 @@ init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel)
{
/* relid already filled in */
elm->filenode = InvalidOid;
+ elm->tablespace = InvalidOid;
elm->lxid = InvalidLocalTransactionId;
elm->last_valid = false;
elm->last = elm->cached = 0;
@@ -1219,6 +1212,7 @@ init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel)
*/
if (seqrel->rd_rel->relfilenode != elm->filenode)
{
+ elm->tablespace = seqrel->rd_rel->reltablespace;
elm->filenode = seqrel->rd_rel->relfilenode;
elm->cached = elm->last;
}
@@ -1977,6 +1971,8 @@ seq_redo(XLogReaderState *record)
/*
* Flush cached sequence information.
+ *
+ * XXX Do we need to WAL-log the entries based on need_log?
*/
void
ResetSequenceCaches(void)
@@ -2000,3 +1996,99 @@ seq_mask(char *page, BlockNumber blkno)
mask_unused_space(page);
}
+
+
+static void
+read_sequence_info(Relation seqrel, int64 *last_value, int64 *log_cnt, bool *is_called)
+{
+ Buffer buf;
+ Form_pg_sequence_data seq;
+ HeapTupleData seqdatatuple;
+
+ seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+
+ *last_value = seq->last_value;
+ *is_called = seq->is_called;
+ *log_cnt = seq->log_cnt;
+
+ UnlockReleaseBuffer(buf);
+}
+
+/* XXX Do this only for wal_level = logical, probably? */
+void
+AtEOXact_Sequences(bool isCommit)
+{
+ SeqTable entry;
+ HASH_SEQ_STATUS scan;
+
+ if (!seqhashtab)
+ return;
+
+ /*
+ * XXX If we run just nextval() that returns cached value, the xact may
+ * not have XID, but we expect it in reorderbuffer. Maybe treat that as
+ * transactional?
+ */
+ // GetTopTransactionId();
+ // GetCurrentTransactionId();
+
+ hash_seq_init(&scan, seqhashtab);
+
+ while ((entry = (SeqTable) hash_seq_search(&scan)))
+ {
+ Relation rel;
+ RelFileNode rnode;
+ xl_logical_sequence xlrec;
+
+ if (!isCommit)
+ entry->touched = false;
+
+ /*
+ * If not touched in the current transaction, don't log anything.
+ * We leave needs_log set, so that if future transactions touch
+ * the sequence we'll log it properly.
+ */
+ if (!entry->touched)
+ continue;
+
+ /*
+ * Likewise, if the sequence does not need logging, we're done.
+ */
+ if (!entry->need_log)
+ continue;
+
+ /* does the relation still exist? */
+ rel = try_relation_open(entry->relid, NoLock);
+
+ /* relation might have been dropped */
+ if (!rel)
+ continue;
+
+ /* this is a commit, so we'll log the sequence state; reset the flags */
+ entry->need_log = false;
+ entry->touched = false;
+
+ rnode.spcNode = entry->tablespace; /* tablespace */
+ rnode.dbNode = MyDatabaseId; /* database */
+ rnode.relNode = entry->filenode; /* relation */
+
+ xlrec.node = rnode;
+ xlrec.reloid = entry->relid;
+
+ /* XXX is it good enough to log values we have in cache? seems
+ * wrong and we may need to re-read that. */
+ // xlrec.dbId = MyDatabaseId;
+ // xlrec.relId = entry->relid;
+ read_sequence_info(rel, &xlrec.last, &xlrec.log_cnt, &xlrec.is_called);
+
+ XLogBeginInsert();
+ XLogRegisterData((char *) &xlrec, SizeOfLogicalSequence);
+
+ /* allow origin filtering */
+ XLogSetRecordFlags(XLOG_INCLUDE_ORIGIN);
+
+ (void) XLogInsert(RM_LOGICALMSG_ID, XLOG_LOGICAL_SEQUENCE);
+
+ relation_close(rel, NoLock);
+ }
+}
diff --git a/src/backend/replication/logical/decode.c b/src/backend/replication/logical/decode.c
index b61f2e264a..d29844adf9 100644
--- a/src/backend/replication/logical/decode.c
+++ b/src/backend/replication/logical/decode.c
@@ -59,6 +59,9 @@ static void DecodeXactOp(LogicalDecodingContext *ctx, XLogRecordBuffer *buf);
static void DecodeStandbyOp(LogicalDecodingContext *ctx, XLogRecordBuffer *buf);
static void DecodeLogicalMsgOp(LogicalDecodingContext *ctx, XLogRecordBuffer *buf);
+static void DecodeLogicalMessage(LogicalDecodingContext *ctx, XLogRecordBuffer *buf);
+static void DecodeLogicalSequence(LogicalDecodingContext *ctx, XLogRecordBuffer *buf);
+
/* individual record(group)'s handlers */
static void DecodeInsert(LogicalDecodingContext *ctx, XLogRecordBuffer *buf);
static void DecodeUpdate(LogicalDecodingContext *ctx, XLogRecordBuffer *buf);
@@ -75,11 +78,9 @@ static void DecodeAbort(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
bool two_phase);
static void DecodePrepare(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
xl_xact_parsed_prepare *parsed);
-static void DecodeSequence(LogicalDecodingContext *ctx, XLogRecordBuffer *buf);
/* common function to decode tuples */
static void DecodeXLogTuple(char *data, Size len, ReorderBufferTupleBuf *tup);
-static void DecodeSeqTuple(char *data, Size len, ReorderBufferTupleBuf *tuple);
/* helper functions for decoding transactions */
static inline bool FilterPrepare(LogicalDecodingContext *ctx,
@@ -160,10 +161,6 @@ LogicalDecodingProcessRecord(LogicalDecodingContext *ctx, XLogReaderState *recor
DecodeLogicalMsgOp(ctx, &buf);
break;
- case RM_SEQ_ID:
- DecodeSequence(ctx, &buf);
- break;
-
/*
* Rmgrs irrelevant for logical decoding; they describe stuff not
* represented in logical decoding. Add new rmgrs in rmgrlist.h's
@@ -626,6 +623,27 @@ FilterByOrigin(LogicalDecodingContext *ctx, RepOriginId origin_id)
*/
static void
DecodeLogicalMsgOp(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
+{
+ XLogReaderState *r = buf->record;
+ uint8 info = XLogRecGetInfo(r) & ~XLR_INFO_MASK;
+
+ switch (info)
+ {
+ case XLOG_LOGICAL_MESSAGE:
+ DecodeLogicalMessage(ctx, buf);
+ break;
+
+ case XLOG_LOGICAL_SEQUENCE:
+ DecodeLogicalSequence(ctx, buf);
+ break;
+
+ default:
+ elog(ERROR, "unexpected RM_LOGICALMSG_ID record type: %u", info);
+ }
+}
+
+static void
+DecodeLogicalMessage(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
{
SnapBuild *builder = ctx->snapshot_builder;
XLogReaderState *r = buf->record;
@@ -1319,31 +1337,6 @@ DecodeTXNNeedSkip(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
ctx->fast_forward || FilterByOrigin(ctx, origin_id));
}
-/*
- * Decode Sequence Tuple
- */
-static void
-DecodeSeqTuple(char *data, Size len, ReorderBufferTupleBuf *tuple)
-{
- int datalen = len - sizeof(xl_seq_rec) - SizeofHeapTupleHeader;
-
- Assert(datalen >= 0);
-
- tuple->tuple.t_len = datalen + SizeofHeapTupleHeader;;
-
- ItemPointerSetInvalid(&tuple->tuple.t_self);
-
- tuple->tuple.t_tableOid = InvalidOid;
-
- memcpy(((char *) tuple->tuple.t_data),
- data + sizeof(xl_seq_rec),
- SizeofHeapTupleHeader);
-
- memcpy(((char *) tuple->tuple.t_data) + SizeofHeapTupleHeader,
- data + sizeof(xl_seq_rec) + SizeofHeapTupleHeader,
- datalen);
-}
-
/*
* Handle sequence decode
*
@@ -1362,24 +1355,20 @@ DecodeSeqTuple(char *data, Size len, ReorderBufferTupleBuf *tuple)
* plugin - it might get confused about which sequence it's related to etc.
*/
static void
-DecodeSequence(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
+DecodeLogicalSequence(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
{
SnapBuild *builder = ctx->snapshot_builder;
- ReorderBufferTupleBuf *tuplebuf;
RelFileNode target_node;
XLogReaderState *r = buf->record;
- char *tupledata = NULL;
- Size tuplelen;
- Size datalen = 0;
TransactionId xid = XLogRecGetXid(r);
uint8 info = XLogRecGetInfo(buf->record) & ~XLR_INFO_MASK;
- xl_seq_rec *xlrec;
+ xl_logical_sequence *xlrec;
Snapshot snapshot;
RepOriginId origin_id = XLogRecGetOrigin(r);
- bool transactional;
+ bool transactional = TransactionIdIsValid(xid);
/* only decode changes flagged with XLOG_SEQ_LOG */
- if (info != XLOG_SEQ_LOG)
+ if (info != XLOG_LOGICAL_SEQUENCE)
elog(ERROR, "unexpected RM_SEQ_ID record type: %u", info);
/*
@@ -1390,8 +1379,11 @@ DecodeSequence(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
ctx->fast_forward)
return;
+ /* extract the WAL record, with "created" flag */
+ xlrec = (xl_logical_sequence *) XLogRecGetData(r);
+
/* only interested in our database */
- XLogRecGetBlockTag(r, 0, &target_node, NULL, NULL);
+ target_node = xlrec->node;
if (target_node.dbNode != ctx->slot->data.database)
return;
@@ -1399,44 +1391,18 @@ DecodeSequence(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
if (FilterByOrigin(ctx, XLogRecGetOrigin(r)))
return;
- tupledata = XLogRecGetData(r);
- datalen = XLogRecGetDataLen(r);
- tuplelen = datalen - SizeOfHeapHeader - sizeof(xl_seq_rec);
-
- /* extract the WAL record, with "created" flag */
- xlrec = (xl_seq_rec *) XLogRecGetData(r);
-
- /* XXX how could we have sequence change without data? */
- if(!datalen || !tupledata)
- return;
-
- tuplebuf = ReorderBufferGetTupleBuf(ctx->reorder, tuplelen);
- DecodeSeqTuple(tupledata, datalen, tuplebuf);
-
- /*
- * Should we handle the sequence increment as transactional or not?
- *
- * If the sequence was created in a still-running transaction, treat
- * it as transactional and queue the increments. Otherwise it needs
- * to be treated as non-transactional, in which case we send it to
- * the plugin right away.
- */
- transactional = ReorderBufferSequenceIsTransactional(ctx->reorder,
- target_node,
- xlrec->created);
-
/* Skip the change if already processed (per the snapshot). */
if (transactional &&
!SnapBuildProcessChange(builder, xid, buf->origptr))
return;
else if (!transactional &&
(SnapBuildCurrentState(builder) != SNAPBUILD_CONSISTENT ||
- SnapBuildXactNeedsSkip(builder, buf->origptr)))
+ SnapBuildXactNeedsSkip(builder, buf->origptr)))
return;
/* Queue the increment (or send immediately if not transactional). */
snapshot = SnapBuildGetOrBuildSnapshot(builder, xid);
ReorderBufferQueueSequence(ctx->reorder, xid, snapshot, buf->endptr,
- origin_id, target_node, transactional,
- xlrec->created, tuplebuf);
+ origin_id, xlrec->reloid, target_node,
+ xlrec->last, xlrec->log_cnt, xlrec->is_called);
}
diff --git a/src/backend/replication/logical/message.c b/src/backend/replication/logical/message.c
index 93bd372421..3f0ac57fc6 100644
--- a/src/backend/replication/logical/message.c
+++ b/src/backend/replication/logical/message.c
@@ -81,7 +81,8 @@ logicalmsg_redo(XLogReaderState *record)
{
uint8 info = XLogRecGetInfo(record) & ~XLR_INFO_MASK;
- if (info != XLOG_LOGICAL_MESSAGE)
+ if (info != XLOG_LOGICAL_MESSAGE &&
+ info != XLOG_LOGICAL_SEQUENCE)
elog(PANIC, "logicalmsg_redo: unknown op code %u", info);
/* This is only interesting for logical decoding, see decode.c. */
diff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c
index 434926459f..1d9a39ac4b 100644
--- a/src/backend/replication/logical/reorderbuffer.c
+++ b/src/backend/replication/logical/reorderbuffer.c
@@ -569,17 +569,11 @@ ReorderBufferReturnChange(ReorderBuffer *rb, ReorderBufferChange *change,
change->data.truncate.relids = NULL;
}
break;
- case REORDER_BUFFER_CHANGE_SEQUENCE:
- if (change->data.sequence.tuple)
- {
- ReorderBufferReturnTupleBuf(rb, change->data.sequence.tuple);
- change->data.sequence.tuple = NULL;
- }
- break;
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT:
case REORDER_BUFFER_CHANGE_INTERNAL_COMMAND_ID:
case REORDER_BUFFER_CHANGE_INTERNAL_TUPLECID:
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
break;
}
@@ -973,9 +967,10 @@ ReorderBufferSequenceCleanup(ReorderBuffer *rb, TransactionId xid)
void
ReorderBufferQueueSequence(ReorderBuffer *rb, TransactionId xid,
Snapshot snapshot, XLogRecPtr lsn, RepOriginId origin_id,
- RelFileNode rnode, bool transactional, bool created,
- ReorderBufferTupleBuf *tuplebuf)
+ Oid reloid, RelFileNode rnode, int64 last, int64 log_cnt, bool is_called)
{
+ bool transactional = TransactionIdIsValid(xid);
+
/*
* Change needs to be handled as transactional, because the sequence was
* created in a transaction that is still running. In that case all the
@@ -992,38 +987,6 @@ ReorderBufferQueueSequence(ReorderBuffer *rb, TransactionId xid,
MemoryContext oldcontext;
ReorderBufferChange *change;
- /* lookup sequence by relfilenode */
- ReorderBufferSequenceEnt *ent;
- bool found;
-
- /* transactional changes require a transaction */
- Assert(xid != InvalidTransactionId);
-
- /* search the lookup table (we ignore the return value, found is enough) */
- ent = hash_search(rb->sequences,
- (void *) &rnode,
- created ? HASH_ENTER : HASH_FIND,
- &found);
-
- /*
- * If this is the "create" increment, we must not have found any
- * pre-existing entry in the hash table (i.e. there must not be
- * any conflicting sequence).
- */
- Assert(!(created && found));
-
- /* But we must have either created or found an existing entry. */
- Assert(created || found);
-
- /*
- * When creating the sequence, remember the XID of the transaction
- * that created id.
- */
- if (created)
- ent->xid = xid;
-
- /* XXX Maybe check that we're still in the same top-level xact? */
-
/* OK, allocate and queue the change */
oldcontext = MemoryContextSwitchTo(rb->context);
@@ -1034,38 +997,26 @@ ReorderBufferQueueSequence(ReorderBuffer *rb, TransactionId xid,
memcpy(&change->data.sequence.relnode, &rnode, sizeof(RelFileNode));
- change->data.sequence.created = created;
- change->data.sequence.tuple = tuplebuf;
+ change->data.sequence.reloid = reloid;
+ change->data.sequence.last = last;
+ change->data.sequence.log_cnt = log_cnt;
+ change->data.sequence.is_called = is_called;
/* add it to the same subxact that created the sequence */
- ReorderBufferQueueChange(rb, ent->xid, lsn, change, false);
+ ReorderBufferQueueChange(rb, xid, lsn, change, false);
MemoryContextSwitchTo(oldcontext);
}
else
{
/*
- * This increment is for a sequence that was not created in any
- * running transaction, so we treat it as non-transactional and
- * just send it to the output plugin directly.
- */
+ * This increment is for a sequence that was not created in any
+ * running transaction, so we treat it as non-transactional and
+ * just send it to the output plugin directly.
+ */
ReorderBufferTXN *txn = NULL;
volatile Snapshot snapshot_now = snapshot;
- bool using_subtxn;
-
-#ifdef USE_ASSERT_CHECKING
- /* All "creates" have to be handled as transactional. */
- Assert(!created);
-
- /* Make sure the sequence is not in the hash table. */
- {
- bool found;
- hash_search(rb->sequences,
- (void *) &rnode,
- HASH_FIND, &found);
- Assert(!found);
- }
-#endif
+ bool using_subtxn;
if (xid != InvalidTransactionId)
txn = ReorderBufferTXNByXid(rb, xid, true, NULL, lsn, true);
@@ -1074,40 +1025,35 @@ ReorderBufferQueueSequence(ReorderBuffer *rb, TransactionId xid,
SetupHistoricSnapshot(snapshot_now, NULL);
/*
- * Decoding needs access to syscaches et al., which in turn use
- * heavyweight locks and such. Thus we need to have enough state around to
- * keep track of those. The easiest way is to simply use a transaction
- * internally. That also allows us to easily enforce that nothing writes
- * to the database by checking for xid assignments.
- *
- * When we're called via the SQL SRF there's already a transaction
- * started, so start an explicit subtransaction there.
- */
+ * Decoding needs access to syscaches et al., which in turn use
+ * heavyweight locks and such. Thus we need to have enough state around to
+ * keep track of those. The easiest way is to simply use a transaction
+ * internally. That also allows us to easily enforce that nothing writes
+ * to the database by checking for xid assignments.
+ *
+ * When we're called via the SQL SRF there's already a transaction
+ * started, so start an explicit subtransaction there.
+ */
using_subtxn = IsTransactionOrTransactionBlock();
PG_TRY();
{
Relation relation;
- HeapTuple tuple;
- bool isnull;
- int64 last_value, log_cnt;
- bool is_called;
- Oid reloid;
+// Oid reloid;
if (using_subtxn)
BeginInternalSubTransaction("sequence");
else
StartTransactionCommand();
-
+/*
reloid = RelidByRelfilenode(rnode.spcNode, rnode.relNode);
if (reloid == InvalidOid)
elog(ERROR, "could not map filenode \"%s\" to relation OID",
- relpathperm(rnode,
- MAIN_FORKNUM));
-
+ relpathperm(rnode,
+ MAIN_FORKNUM));
+*/
relation = RelationIdGetRelation(reloid);
- tuple = &tuplebuf->tuple;
/*
* Extract the internal sequence values, describing the state.
@@ -1115,12 +1061,8 @@ ReorderBufferQueueSequence(ReorderBuffer *rb, TransactionId xid,
* XXX Seems a bit strange to access it directly. Maybe there's
* a better / more correct way?
*/
- last_value = heap_getattr(tuple, 1, RelationGetDescr(relation), &isnull);
- log_cnt = heap_getattr(tuple, 2, RelationGetDescr(relation), &isnull);
- is_called = heap_getattr(tuple, 3, RelationGetDescr(relation), &isnull);
-
- rb->sequence(rb, txn, lsn, relation, transactional, created,
- last_value, log_cnt, is_called);
+ rb->sequence(rb, txn, lsn, relation, transactional, false,
+ last, log_cnt, is_called);
RelationClose(relation);
@@ -2248,31 +2190,27 @@ ReorderBufferApplySequence(ReorderBuffer *rb, ReorderBufferTXN *txn,
Relation relation, ReorderBufferChange *change,
bool streaming)
{
- HeapTuple tuple;
- bool isnull;
int64 last_value, log_cnt;
bool is_called;
- tuple = &change->data.sequence.tuple->tuple;
-
/*
* Extract the internal sequence values, describing the state.
*
* XXX Seems a bit strange to access it directly. Maybe there's
* a better / more correct way?
*/
- last_value = heap_getattr(tuple, 1, RelationGetDescr(relation), &isnull);
- log_cnt = heap_getattr(tuple, 2, RelationGetDescr(relation), &isnull);
- is_called = heap_getattr(tuple, 3, RelationGetDescr(relation), &isnull);
+ last_value = change->data.sequence.last;
+ log_cnt = change->data.sequence.log_cnt;
+ is_called = change->data.sequence.is_called;
/* Only ever called from ReorderBufferProcessTXN, so transactional. */
if (streaming)
rb->stream_sequence(rb, txn, change->lsn, relation, true,
- change->data.sequence.created,
+ false, /* FIXME */
last_value, log_cnt, is_called);
else
rb->sequence(rb, txn, change->lsn, relation, true,
- change->data.sequence.created,
+ false, /* FIXME */
last_value, log_cnt, is_called);
}
@@ -2722,15 +2660,17 @@ ReorderBufferProcessTXN(ReorderBuffer *rb, ReorderBufferTXN *txn,
case REORDER_BUFFER_CHANGE_SEQUENCE:
Assert(snapshot_now);
+ /*
reloid = RelidByRelfilenode(change->data.sequence.relnode.spcNode,
change->data.sequence.relnode.relNode);
if (reloid == InvalidOid)
- elog(ERROR, "could not map filenode \"%s\" to relation OID",
+ elog(ERROR, "zz could not map filenode \"%s\" to relation OID",
relpathperm(change->data.sequence.relnode,
MAIN_FORKNUM));
- relation = RelationIdGetRelation(reloid);
+ */
+ relation = RelationIdGetRelation(change->data.sequence.reloid);
if (!RelationIsValid(relation))
elog(ERROR, "could not open relation with OID %u (for filenode \"%s\")",
@@ -4127,45 +4067,13 @@ ReorderBufferSerializeChange(ReorderBuffer *rb, ReorderBufferTXN *txn,
memcpy(data, change->data.truncate.relids, size);
data += size;
- break;
- }
- case REORDER_BUFFER_CHANGE_SEQUENCE:
- {
- char *data;
- ReorderBufferTupleBuf *tup;
- Size len = 0;
-
- tup = change->data.sequence.tuple;
-
- if (tup)
- {
- sz += sizeof(HeapTupleData);
- len = tup->tuple.t_len;
- sz += len;
- }
-
- /* make sure we have enough space */
- ReorderBufferSerializeReserve(rb, sz);
-
- data = ((char *) rb->outbuf) + sizeof(ReorderBufferDiskChange);
- /* might have been reallocated above */
- ondisk = (ReorderBufferDiskChange *) rb->outbuf;
-
- if (len)
- {
- memcpy(data, &tup->tuple, sizeof(HeapTupleData));
- data += sizeof(HeapTupleData);
-
- memcpy(data, tup->tuple.t_data, len);
- data += len;
- }
-
break;
}
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT:
case REORDER_BUFFER_CHANGE_INTERNAL_COMMAND_ID:
case REORDER_BUFFER_CHANGE_INTERNAL_TUPLECID:
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
/* ReorderBufferChange contains everything important */
break;
}
@@ -4424,28 +4332,13 @@ ReorderBufferChangeSize(ReorderBufferChange *change)
{
sz += sizeof(Oid) * change->data.truncate.nrelids;
- break;
- }
- case REORDER_BUFFER_CHANGE_SEQUENCE:
- {
- ReorderBufferTupleBuf *tup;
- Size len = 0;
-
- tup = change->data.sequence.tuple;
-
- if (tup)
- {
- sz += sizeof(HeapTupleData);
- len = tup->tuple.t_len;
- sz += len;
- }
-
break;
}
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT:
case REORDER_BUFFER_CHANGE_INTERNAL_COMMAND_ID:
case REORDER_BUFFER_CHANGE_INTERNAL_TUPLECID:
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
/* ReorderBufferChange contains everything important */
break;
}
@@ -4742,33 +4635,11 @@ ReorderBufferRestoreChange(ReorderBuffer *rb, ReorderBufferTXN *txn,
break;
}
- case REORDER_BUFFER_CHANGE_SEQUENCE:
- if (change->data.sequence.tuple)
- {
- uint32 tuplelen = ((HeapTuple) data)->t_len;
-
- change->data.sequence.tuple =
- ReorderBufferGetTupleBuf(rb, tuplelen - SizeofHeapTupleHeader);
-
- /* restore ->tuple */
- memcpy(&change->data.sequence.tuple->tuple, data,
- sizeof(HeapTupleData));
- data += sizeof(HeapTupleData);
-
- /* reset t_data pointer into the new tuplebuf */
- change->data.sequence.tuple->tuple.t_data =
- ReorderBufferTupleBufData(change->data.sequence.tuple);
-
- /* restore tuple data itself */
- memcpy(change->data.sequence.tuple->tuple.t_data, data, tuplelen);
- data += tuplelen;
- }
- break;
-
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT:
case REORDER_BUFFER_CHANGE_INTERNAL_COMMAND_ID:
case REORDER_BUFFER_CHANGE_INTERNAL_TUPLECID:
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
break;
}
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 5919fb90ee..ddb36b5b17 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -62,6 +62,8 @@ extern void DeleteSequenceTuple(Oid relid);
extern void ResetSequence(Oid seq_relid);
extern void ResetSequenceCaches(void);
+extern void AtEOXact_Sequences(bool isCommit);
+
extern void seq_redo(XLogReaderState *rptr);
extern void seq_desc(StringInfo buf, XLogReaderState *rptr);
extern const char *seq_identify(uint8 info);
diff --git a/src/include/replication/message.h b/src/include/replication/message.h
index d3fb324c81..a39e8e984d 100644
--- a/src/include/replication/message.h
+++ b/src/include/replication/message.h
@@ -27,13 +27,28 @@ typedef struct xl_logical_message
char message[FLEXIBLE_ARRAY_MEMBER];
} xl_logical_message;
+/*
+ * Generic logical decoding sequence wal record.
+ */
+typedef struct xl_logical_sequence
+{
+ RelFileNode node;
+ Oid reloid;
+ int64 last; /* last value emitted for sequence */
+ int64 log_cnt; /* last value cached for sequence */
+ bool is_called;
+} xl_logical_sequence;
+
#define SizeOfLogicalMessage (offsetof(xl_logical_message, message))
+#define SizeOfLogicalSequence (sizeof(xl_logical_sequence))
extern XLogRecPtr LogLogicalMessage(const char *prefix, const char *message,
size_t size, bool transactional);
/* RMGR API*/
#define XLOG_LOGICAL_MESSAGE 0x00
+#define XLOG_LOGICAL_SEQUENCE 0x10
+
void logicalmsg_redo(XLogReaderState *record);
void logicalmsg_desc(StringInfo buf, XLogReaderState *record);
const char *logicalmsg_identify(uint8 info);
diff --git a/src/include/replication/reorderbuffer.h b/src/include/replication/reorderbuffer.h
index 63e6ed037b..eb78d5dff4 100644
--- a/src/include/replication/reorderbuffer.h
+++ b/src/include/replication/reorderbuffer.h
@@ -164,8 +164,10 @@ typedef struct ReorderBufferChange
struct
{
RelFileNode relnode;
- bool created;
- ReorderBufferTupleBuf *tuple;
+ Oid reloid;
+ int64 last;
+ int64 log_cnt;
+ bool is_called;
} sequence;
} data;
@@ -672,8 +674,8 @@ void ReorderBufferQueueMessage(ReorderBuffer *, TransactionId, Snapshot snapsho
Size message_size, const char *message);
void ReorderBufferQueueSequence(ReorderBuffer *rb, TransactionId xid,
Snapshot snapshot, XLogRecPtr lsn, RepOriginId origin_id,
- RelFileNode rnode, bool transactional, bool created,
- ReorderBufferTupleBuf *tuplebuf);
+ Oid reloid, RelFileNode rnode,
+ int64 last, int64 log_cnt, bool is_called);
void ReorderBufferCommit(ReorderBuffer *, TransactionId,
XLogRecPtr commit_lsn, XLogRecPtr end_lsn,
TimestampTz commit_time, RepOriginId origin_id, XLogRecPtr origin_lsn);
--
2.31.1
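
For reference, with the reworked WAL format above the XLOG_LOGICAL_SEQUENCE
record is self-contained (relfilenode, relation OID and the three state
fields), so decoding no longer depends on the sequence heap tuple; the
regular xl_seq_rec is presumably still written for redo. The emitting side
is not part of the excerpt above, but a minimal sketch of how such a record
might be assembled - the LogLogicalSequence name and its exact call site
(e.g. nextval_internal) are just assumptions - could look like this:

#include "postgres.h"

#include "access/xloginsert.h"
#include "replication/message.h"
#include "utils/rel.h"

/*
 * Hypothetical helper, mirroring LogLogicalMessage: WAL-log the current
 * sequence state as an XLOG_LOGICAL_SEQUENCE record, so that logical
 * decoding can read the values directly from the record.
 */
XLogRecPtr
LogLogicalSequence(Relation seqrel, int64 last, int64 log_cnt, bool is_called)
{
	xl_logical_sequence xlrec;

	xlrec.node = seqrel->rd_node;				/* physical identity */
	xlrec.reloid = RelationGetRelid(seqrel);	/* logical identity */
	xlrec.last = last;
	xlrec.log_cnt = log_cnt;
	xlrec.is_called = is_called;

	XLogBeginInsert();
	XLogRegisterData((char *) &xlrec, SizeOfLogicalSequence);

	/* goes through the logical message rmgr, like LogLogicalMessage */
	return XLogInsert(RM_LOGICALMSG_ID, XLOG_LOGICAL_SEQUENCE);
}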
Attachment: 0001-Logical-decoding-of-sequences-20211121.patch (text/x-patch)
From c05f9777d843e7176b02c519444ed23650d39eb6 Mon Sep 17 00:00:00 2001
From: Tomas Vondra <tomas.vondra@postgresql.org>
Date: Fri, 24 Sep 2021 00:41:33 +0200
Subject: [PATCH 1/4] Logical decoding of sequences
This adds the infrastructure for logical decoding of sequences. Support
for built-in logical replication and test_decoding is added separately.
When decoding sequences, we differentiate between sequences created in
a (still-running) transaction and sequences created in other (already
committed) transactions. Changes for sequences created in the same
top-level transaction are treated as "transactional", i.e. just like any
other change from that transaction (and discarded in case of a
rollback). Changes for sequences created earlier are treated as
non-transactional - they are processed immediately, as if performed
outside any transaction (and thus not rolled back).
This mixed behavior is necessary - sequences are non-transactional (e.g.
ROLLBACK does not undo the sequence increments). But for new sequences,
we need to handle them in a transactional way, because if we ever get
some DDL support, the sequence won't exist until the transaction gets
applied. So we need to ensure the increments don't happen until the
sequence gets created.
To differentiate which sequences are "old" and which were created in a
still-running transaction, we track sequences created in running
transactions in a hash table. Each sequence is identified by its
relfilenode, and at transaction commit we discard all sequences created
in that particular transaction. For each sequence we track the XID of
the (sub)transaction that created it, and we clean up the sequences for
each subtransaction when it completes (commit/rollback).
We don't use the XID to check whether it's the same top-level transaction.
It's enough to know the sequence was created in an in-progress transaction,
and we know that must be the current one, because otherwise the transaction
would not see the sequence object at all.
The main changes in this patch are:
1) ensure WAL-logging of all necessary info for sequence advances
We need to be able to associate the advance with an XID, but until now
a sequence advance might have carried XID 0 if it was the first thing the
transaction did. So we ensure the transaction has an XID assigned.
Note: This is needed because of subxacts. A record with XID 0 might still
belong to a top-level xact whose earlier subxact created the sequence.
Then there's a "created" flag added to the xl_seq_rec, which is necessary
to differentiate WAL records for created/reset sequences from plain
increments.
2) decoding / queueing of sequences into ReorderBuffer
This is mostly copy-paste of existing code for other decoded events.
3) tracking sequences created in in-progress transactions
We use a simple hash table, indexed by relfilenode.
---
src/backend/commands/sequence.c | 83 +++-
src/backend/replication/logical/decode.c | 131 +++++-
src/backend/replication/logical/logical.c | 89 ++++
.../replication/logical/reorderbuffer.c | 424 ++++++++++++++++++
src/include/commands/sequence.h | 1 +
src/include/replication/output_plugin.h | 29 ++
src/include/replication/reorderbuffer.h | 44 +-
7 files changed, 793 insertions(+), 8 deletions(-)
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 72bfdc07a4..a98fcc2e97 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -94,7 +94,7 @@ static HTAB *seqhashtab = NULL; /* hash table for SeqTable items */
*/
static SeqTableData *last_used_seq = NULL;
-static void fill_seq_with_data(Relation rel, HeapTuple tuple);
+static void fill_seq_with_data(Relation rel, HeapTuple tuple, bool create);
static Relation lock_and_open_sequence(SeqTable seq);
static void create_seq_hashtable(void);
static void init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel);
@@ -222,7 +222,7 @@ DefineSequence(ParseState *pstate, CreateSeqStmt *seq)
/* now initialize the sequence's data */
tuple = heap_form_tuple(tupDesc, value, null);
- fill_seq_with_data(rel, tuple);
+ fill_seq_with_data(rel, tuple, true);
/* process OWNED BY if given */
if (owned_by)
@@ -327,7 +327,7 @@ ResetSequence(Oid seq_relid)
/*
* Insert the modified tuple into the new storage file.
*/
- fill_seq_with_data(seq_rel, tuple);
+ fill_seq_with_data(seq_rel, tuple, true);
/* Clear local cache so that we don't think we have cached numbers */
/* Note that we do not change the currval() state */
@@ -340,7 +340,7 @@ ResetSequence(Oid seq_relid)
* Initialize a sequence's relation with the specified tuple as content
*/
static void
-fill_seq_with_data(Relation rel, HeapTuple tuple)
+fill_seq_with_data(Relation rel, HeapTuple tuple, bool create)
{
Buffer buf;
Page page;
@@ -378,8 +378,31 @@ fill_seq_with_data(Relation rel, HeapTuple tuple)
/* check the comment above nextval_internal()'s equivalent call. */
if (RelationNeedsWAL(rel))
+ {
GetTopTransactionId();
+ /*
+ * Ensure we have a proper XID, which will be included in the XLOG
+ * record by XLogRecordAssemble. Otherwise the first nextval() in
+ * a subxact (without any preceding changes) would get XID 0,
+ * and it'd be impossible to decide which top xact it belongs to.
+ * It'd also trigger assert in DecodeSequence.
+ *
+ * XXX This might seem unnecessary, because if there's no XID the
+ * transaction couldn't have done anything important yet, e.g. it
+ * could not have created a sequence. But that's incorrect, as it
+ * ignores subtransactions. The current subtransaction might not
+ * have done anything yet (thus no XID), but an earlier one might
+ * have created the sequence.
+ *
+ * XXX Not sure if this is the best solution. Maybe do this only
+ * with wal_level=logical to minimize the overhead. OTOH advancing
+ * the sequence is likely followed by using the value(s) for some
+ * other activity, which assigns the XID.
+ */
+ GetCurrentTransactionId();
+ }
+
START_CRIT_SECTION();
MarkBufferDirty(buf);
@@ -399,6 +422,7 @@ fill_seq_with_data(Relation rel, HeapTuple tuple)
XLogRegisterBuffer(0, buf, REGBUF_WILL_INIT);
xlrec.node = rel->rd_node;
+ xlrec.created = create;
XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
XLogRegisterData((char *) tuple->t_data, tuple->t_len);
@@ -502,7 +526,7 @@ AlterSequence(ParseState *pstate, AlterSeqStmt *stmt)
/*
* Insert the modified tuple into the new storage file.
*/
- fill_seq_with_data(seqrel, newdatatuple);
+ fill_seq_with_data(seqrel, newdatatuple, true);
}
/* process OWNED BY if given */
@@ -766,8 +790,31 @@ nextval_internal(Oid relid, bool check_permissions)
* (Have to do that here, so we're outside the critical section)
*/
if (logit && RelationNeedsWAL(seqrel))
+ {
GetTopTransactionId();
+ /*
+ * Ensure we have a proper XID, which will be included in the XLOG
+ * record by XLogRecordAssemble. Otherwise the first nextval() in
+ * a subxact (without any preceding changes) would get XID 0,
+ * and it'd be impossible to decide which top xact it belongs to.
+ * It'd also trigger assert in DecodeSequence.
+ *
+ * XXX This might seem unnecessary, because if there's no XID the
+ * transaction couldn't have done anything important yet, e.g. it
+ * could not have created a sequence. But that's incorrect, as it
+ * ignores subtransactions. The current subtransaction might not
+ * have done anything yet (thus no XID), but an earlier one might
+ * have created the sequence.
+ *
+ * XXX Not sure if this is the best solution. Maybe do this only
+ * with wal_level=logical to minimize the overhead. OTOH advancing
+ * the sequence is likely followed by using the value(s) for some
+ * other activity, which assigns the XID.
+ */
+ GetCurrentTransactionId();
+ }
+
/* ready to change the on-disk (or really, in-buffer) tuple */
START_CRIT_SECTION();
@@ -803,6 +850,7 @@ nextval_internal(Oid relid, bool check_permissions)
seq->log_cnt = 0;
xlrec.node = seqrel->rd_node;
+ xlrec.created = false;
XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
XLogRegisterData((char *) seqdatatuple.t_data, seqdatatuple.t_len);
@@ -977,8 +1025,31 @@ do_setval(Oid relid, int64 next, bool iscalled)
/* check the comment above nextval_internal()'s equivalent call. */
if (RelationNeedsWAL(seqrel))
+ {
GetTopTransactionId();
+ /*
+ * Ensure we have a proper XID, which will be included in the XLOG
+ * record by XLogRecordAssemble. Otherwise the first nextval() in
+ * a subxact (without any preceding changes) would get XID 0,
+ * and it'd be impossible to decide which top xact it belongs to.
+ * It'd also trigger assert in DecodeSequence.
+ *
+ * XXX This might seem unnecessary, because if there's no XID the
+ * transaction couldn't have done anything important yet, e.g. it
+ * could not have created a sequence. But that's incorrect, as it
+ * ignores subtransactions. The current subtransaction might not
+ * have done anything yet (thus no XID), but an earlier one might
+ * have created the sequence.
+ *
+ * XXX Not sure if this is the best solution. Maybe do this only
+ * with wal_level=logical to minimize the overhead. OTOH advancing
+ * the sequence is likely followed by using the value(s) for some
+ * other activity, which assigns the XID.
+ */
+ GetCurrentTransactionId();
+ }
+
/* ready to change the on-disk (or really, in-buffer) tuple */
START_CRIT_SECTION();
@@ -999,6 +1070,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
XLogRegisterBuffer(0, buf, REGBUF_WILL_INIT);
xlrec.node = seqrel->rd_node;
+ xlrec.created = false;
+
XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
XLogRegisterData((char *) seqdatatuple.t_data, seqdatatuple.t_len);
diff --git a/src/backend/replication/logical/decode.c b/src/backend/replication/logical/decode.c
index a2b69511b4..b61f2e264a 100644
--- a/src/backend/replication/logical/decode.c
+++ b/src/backend/replication/logical/decode.c
@@ -42,6 +42,7 @@
#include "replication/reorderbuffer.h"
#include "replication/snapbuild.h"
#include "storage/standby.h"
+#include "commands/sequence.h"
typedef struct XLogRecordBuffer
{
@@ -74,10 +75,11 @@ static void DecodeAbort(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
bool two_phase);
static void DecodePrepare(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
xl_xact_parsed_prepare *parsed);
-
+static void DecodeSequence(LogicalDecodingContext *ctx, XLogRecordBuffer *buf);
/* common function to decode tuples */
static void DecodeXLogTuple(char *data, Size len, ReorderBufferTupleBuf *tup);
+static void DecodeSeqTuple(char *data, Size len, ReorderBufferTupleBuf *tuple);
/* helper functions for decoding transactions */
static inline bool FilterPrepare(LogicalDecodingContext *ctx,
@@ -158,6 +160,10 @@ LogicalDecodingProcessRecord(LogicalDecodingContext *ctx, XLogReaderState *recor
DecodeLogicalMsgOp(ctx, &buf);
break;
+ case RM_SEQ_ID:
+ DecodeSequence(ctx, &buf);
+ break;
+
/*
* Rmgrs irrelevant for logical decoding; they describe stuff not
* represented in logical decoding. Add new rmgrs in rmgrlist.h's
@@ -173,7 +179,6 @@ LogicalDecodingProcessRecord(LogicalDecodingContext *ctx, XLogReaderState *recor
case RM_HASH_ID:
case RM_GIN_ID:
case RM_GIST_ID:
- case RM_SEQ_ID:
case RM_SPGIST_ID:
case RM_BRIN_ID:
case RM_COMMIT_TS_ID:
@@ -1313,3 +1318,125 @@ DecodeTXNNeedSkip(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
(txn_dbid != InvalidOid && txn_dbid != ctx->slot->data.database) ||
ctx->fast_forward || FilterByOrigin(ctx, origin_id));
}
+
+/*
+ * Decode Sequence Tuple
+ */
+static void
+DecodeSeqTuple(char *data, Size len, ReorderBufferTupleBuf *tuple)
+{
+ int datalen = len - sizeof(xl_seq_rec) - SizeofHeapTupleHeader;
+
+ Assert(datalen >= 0);
+
+ tuple->tuple.t_len = datalen + SizeofHeapTupleHeader;;
+
+ ItemPointerSetInvalid(&tuple->tuple.t_self);
+
+ tuple->tuple.t_tableOid = InvalidOid;
+
+ memcpy(((char *) tuple->tuple.t_data),
+ data + sizeof(xl_seq_rec),
+ SizeofHeapTupleHeader);
+
+ memcpy(((char *) tuple->tuple.t_data) + SizeofHeapTupleHeader,
+ data + sizeof(xl_seq_rec) + SizeofHeapTupleHeader,
+ datalen);
+}
+
+/*
+ * Handle sequence decode
+ *
+ * Decoding sequences is a bit tricky, because while most sequence actions
+ * are non-transactional (not subject to rollback), some need to be handled
+ * as transactional.
+ *
+ * By default, a sequence increment is non-transactional - we must not queue
+ * it in a transaction as other changes, because the transaction might get
+ * rolled back and we'd discard the increment. The downstream would not be
+ * notified about the increment, which is wrong.
+ *
+ * On the other hand, the sequence may be created in a transaction. In this
+ * case we *should* queue the change as other changes in the transaction,
+ * because we don't want to send the increments for unknown sequence to the
+ * plugin - it might get confused about which sequence it's related to etc.
+ */
+static void
+DecodeSequence(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
+{
+ SnapBuild *builder = ctx->snapshot_builder;
+ ReorderBufferTupleBuf *tuplebuf;
+ RelFileNode target_node;
+ XLogReaderState *r = buf->record;
+ char *tupledata = NULL;
+ Size tuplelen;
+ Size datalen = 0;
+ TransactionId xid = XLogRecGetXid(r);
+ uint8 info = XLogRecGetInfo(buf->record) & ~XLR_INFO_MASK;
+ xl_seq_rec *xlrec;
+ Snapshot snapshot;
+ RepOriginId origin_id = XLogRecGetOrigin(r);
+ bool transactional;
+
+ /* only decode changes flagged with XLOG_SEQ_LOG */
+ if (info != XLOG_SEQ_LOG)
+ elog(ERROR, "unexpected RM_SEQ_ID record type: %u", info);
+
+ /*
+ * If we don't have snapshot or we are just fast-forwarding, there is no
+ * point in decoding messages.
+ */
+ if (SnapBuildCurrentState(builder) < SNAPBUILD_FULL_SNAPSHOT ||
+ ctx->fast_forward)
+ return;
+
+ /* only interested in our database */
+ XLogRecGetBlockTag(r, 0, &target_node, NULL, NULL);
+ if (target_node.dbNode != ctx->slot->data.database)
+ return;
+
+ /* output plugin doesn't look for this origin, no need to queue */
+ if (FilterByOrigin(ctx, XLogRecGetOrigin(r)))
+ return;
+
+ tupledata = XLogRecGetData(r);
+ datalen = XLogRecGetDataLen(r);
+ tuplelen = datalen - SizeOfHeapHeader - sizeof(xl_seq_rec);
+
+ /* extract the WAL record, with "created" flag */
+ xlrec = (xl_seq_rec *) XLogRecGetData(r);
+
+ /* XXX how could we have sequence change without data? */
+ if(!datalen || !tupledata)
+ return;
+
+ tuplebuf = ReorderBufferGetTupleBuf(ctx->reorder, tuplelen);
+ DecodeSeqTuple(tupledata, datalen, tuplebuf);
+
+ /*
+ * Should we handle the sequence increment as transactional or not?
+ *
+ * If the sequence was created in a still-running transaction, treat
+ * it as transactional and queue the increments. Otherwise it needs
+ * to be treated as non-transactional, in which case we send it to
+ * the plugin right away.
+ */
+ transactional = ReorderBufferSequenceIsTransactional(ctx->reorder,
+ target_node,
+ xlrec->created);
+
+ /* Skip the change if already processed (per the snapshot). */
+ if (transactional &&
+ !SnapBuildProcessChange(builder, xid, buf->origptr))
+ return;
+ else if (!transactional &&
+ (SnapBuildCurrentState(builder) != SNAPBUILD_CONSISTENT ||
+ SnapBuildXactNeedsSkip(builder, buf->origptr)))
+ return;
+
+ /* Queue the increment (or send immediately if not transactional). */
+ snapshot = SnapBuildGetOrBuildSnapshot(builder, xid);
+ ReorderBufferQueueSequence(ctx->reorder, xid, snapshot, buf->endptr,
+ origin_id, target_node, transactional,
+ xlrec->created, tuplebuf);
+}
diff --git a/src/backend/replication/logical/logical.c b/src/backend/replication/logical/logical.c
index b7d9521576..811d9912a1 100644
--- a/src/backend/replication/logical/logical.c
+++ b/src/backend/replication/logical/logical.c
@@ -73,6 +73,10 @@ static void truncate_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
static void message_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
XLogRecPtr message_lsn, bool transactional,
const char *prefix, Size message_size, const char *message);
+static void sequence_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called);
/* streaming callbacks */
static void stream_start_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
@@ -90,6 +94,10 @@ static void stream_change_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn
static void stream_message_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
XLogRecPtr message_lsn, bool transactional,
const char *prefix, Size message_size, const char *message);
+static void stream_sequence_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called);
static void stream_truncate_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
int nrelations, Relation relations[], ReorderBufferChange *change);
@@ -217,6 +225,7 @@ StartupDecodingContext(List *output_plugin_options,
ctx->reorder->apply_truncate = truncate_cb_wrapper;
ctx->reorder->commit = commit_cb_wrapper;
ctx->reorder->message = message_cb_wrapper;
+ ctx->reorder->sequence = sequence_cb_wrapper;
/*
* To support streaming, we require start/stop/abort/commit/change
@@ -233,6 +242,7 @@ StartupDecodingContext(List *output_plugin_options,
(ctx->callbacks.stream_commit_cb != NULL) ||
(ctx->callbacks.stream_change_cb != NULL) ||
(ctx->callbacks.stream_message_cb != NULL) ||
+ (ctx->callbacks.stream_sequence_cb != NULL) ||
(ctx->callbacks.stream_truncate_cb != NULL);
/*
@@ -250,6 +260,7 @@ StartupDecodingContext(List *output_plugin_options,
ctx->reorder->stream_commit = stream_commit_cb_wrapper;
ctx->reorder->stream_change = stream_change_cb_wrapper;
ctx->reorder->stream_message = stream_message_cb_wrapper;
+ ctx->reorder->stream_sequence = stream_sequence_cb_wrapper;
ctx->reorder->stream_truncate = stream_truncate_cb_wrapper;
@@ -1202,6 +1213,43 @@ message_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
error_context_stack = errcallback.previous;
}
+static void
+sequence_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ LogicalDecodingContext *ctx = cache->private_data;
+ LogicalErrorCallbackState state;
+ ErrorContextCallback errcallback;
+
+ Assert(!ctx->fast_forward);
+
+ if (ctx->callbacks.sequence_cb == NULL)
+ return;
+
+ /* Push callback + info on the error context stack */
+ state.ctx = ctx;
+ state.callback_name = "sequence";
+ state.report_location = sequence_lsn;
+ errcallback.callback = output_plugin_error_callback;
+ errcallback.arg = (void *) &state;
+ errcallback.previous = error_context_stack;
+ error_context_stack = &errcallback;
+
+ /* set output state */
+ ctx->accept_writes = true;
+ ctx->write_xid = txn != NULL ? txn->xid : InvalidTransactionId;
+ ctx->write_location = sequence_lsn;
+
+ /* do the actual work: call callback */
+ ctx->callbacks.sequence_cb(ctx, txn, sequence_lsn, rel, transactional,
+ created, last_value, log_cnt, is_called);
+
+ /* Pop the error context stack */
+ error_context_stack = errcallback.previous;
+}
+
static void
stream_start_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
XLogRecPtr first_lsn)
@@ -1507,6 +1555,47 @@ stream_message_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
error_context_stack = errcallback.previous;
}
+static void
+stream_sequence_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ LogicalDecodingContext *ctx = cache->private_data;
+ LogicalErrorCallbackState state;
+ ErrorContextCallback errcallback;
+
+ Assert(!ctx->fast_forward);
+
+ /* We're only supposed to call this when streaming is supported. */
+ Assert(ctx->streaming);
+
+ /* this callback is optional */
+ if (ctx->callbacks.stream_sequence_cb == NULL)
+ return;
+
+ /* Push callback + info on the error context stack */
+ state.ctx = ctx;
+ state.callback_name = "stream_sequence";
+ state.report_location = sequence_lsn;
+ errcallback.callback = output_plugin_error_callback;
+ errcallback.arg = (void *) &state;
+ errcallback.previous = error_context_stack;
+ error_context_stack = &errcallback;
+
+ /* set output state */
+ ctx->accept_writes = true;
+ ctx->write_xid = txn != NULL ? txn->xid : InvalidTransactionId;
+ ctx->write_location = sequence_lsn;
+
+ /* do the actual work: call callback */
+ ctx->callbacks.stream_sequence_cb(ctx, txn, sequence_lsn, rel, transactional,
+ created, last_value, log_cnt, is_called);
+
+ /* Pop the error context stack */
+ error_context_stack = errcallback.previous;
+}
+
static void
stream_truncate_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
int nrelations, Relation relations[],
diff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c
index 46e66608cf..434926459f 100644
--- a/src/backend/replication/logical/reorderbuffer.c
+++ b/src/backend/replication/logical/reorderbuffer.c
@@ -77,6 +77,35 @@
* a bit more memory to the oldest subtransactions, because it's likely
* they are the source for the next sequence of changes.
*
+ * When decoding sequences, we differentiate between a sequences created
+ * in a (running) transaction, and sequences created in other (already
+ * committed) transactions. Changes for sequences created in the same
+ * top-level transaction are treated as "transactional" i.e. just like
+ * any other change from that transaction (and discarded in case of a
+ * rollback). Changes for sequences created earlier are treated as not
+ * transactional - are processed immediately, as if performed outside
+ * any transaction (and thus not rolled back).
+ *
+ * This mixed behavior is necessary - sequences are non-transactional
+ * (e.g. ROLLBACK does not undo the sequence increments). But for new
+ * sequences, we need to handle them in a transactional way, because if
+ * we ever get some DDL support, the sequence won't exist until the
+ * transaction gets applied. So we need to ensure the increments don't
+ * happen until the sequence gets created.
+ *
+ * To differentiate which sequences are "old" and which were created
+ * in a still-running transaction, we track sequences created in running
+ * transactions in a hash table. Each sequence is identified by it's
+ * relfilenode, and at transaction commit we discard all sequences
+ * created in that particular transaction. For each sequence we track
+ * the XID of the (sub)transaction that created it, and we cleanup the
+ * sequences for each subtransaction when it completes (commit/rollback).
+ *
+ * We don't use the XID to check if it's the same top-level transaction.
+ * It's enough to know it was created in an in-progress transaction,
+ * and we know it must be the current one because otherwise it wouldn't
+ * see the sequence object.
+ *
* -------------------------------------------------------------------------
*/
#include "postgres.h"
@@ -116,6 +145,13 @@ typedef struct ReorderBufferTXNByIdEnt
ReorderBufferTXN *txn;
} ReorderBufferTXNByIdEnt;
+/* entry for hash table we use to track sequences created in running xacts */
+typedef struct ReorderBufferSequenceEnt
+{
+ RelFileNode rnode;
+ TransactionId xid;
+} ReorderBufferSequenceEnt;
+
/* data structures for (relfilenode, ctid) => (cmin, cmax) mapping */
typedef struct ReorderBufferTupleCidKey
{
@@ -339,6 +375,14 @@ ReorderBufferAllocate(void)
buffer->by_txn = hash_create("ReorderBufferByXid", 1000, &hash_ctl,
HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+ /* hash table of sequences, mapping relfilenode to XID of transaction */
+ hash_ctl.keysize = sizeof(RelFileNode);
+ hash_ctl.entrysize = sizeof(ReorderBufferSequenceEnt);
+ hash_ctl.hcxt = buffer->context;
+
+ buffer->sequences = hash_create("ReorderBufferSequenceHash", 1000, &hash_ctl,
+ HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+
buffer->by_txn_last_xid = InvalidTransactionId;
buffer->by_txn_last_txn = NULL;
@@ -525,6 +569,13 @@ ReorderBufferReturnChange(ReorderBuffer *rb, ReorderBufferChange *change,
change->data.truncate.relids = NULL;
}
break;
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ if (change->data.sequence.tuple)
+ {
+ ReorderBufferReturnTupleBuf(rb, change->data.sequence.tuple);
+ change->data.sequence.tuple = NULL;
+ }
+ break;
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT:
case REORDER_BUFFER_CHANGE_INTERNAL_COMMAND_ID:
@@ -859,6 +910,242 @@ ReorderBufferQueueMessage(ReorderBuffer *rb, TransactionId xid,
}
}
+/*
+ * Treat the sequence increment as transactional?
+ *
+ * The hash table tracks all sequences created in in-progress transactions,
+ * so we simply do a lookup (the sequence is identified by relfilenode). If
+ * we find a match, the increment should be handled as transactional.
+ */
+bool
+ReorderBufferSequenceIsTransactional(ReorderBuffer *rb,
+ RelFileNode rnode, bool created)
+{
+ bool found = false;
+
+ if (created)
+ return true;
+
+ hash_search(rb->sequences,
+ (void *) &rnode,
+ HASH_FIND,
+ &found);
+
+ return found;
+}
+
+/*
+ * Cleanup sequences created in in-progress transactions.
+ *
+ * There's no way to search by XID, so we simply do a seqscan of all
+ * the entries in the hash table. Hopefully there are only a couple
+ * entries in most cases - people generally don't create many new
+ * sequences over and over.
+ */
+static void
+ReorderBufferSequenceCleanup(ReorderBuffer *rb, TransactionId xid)
+{
+ HASH_SEQ_STATUS scan_status;
+ ReorderBufferSequenceEnt *ent;
+
+ hash_seq_init(&scan_status, rb->sequences);
+ while ((ent = (ReorderBufferSequenceEnt *) hash_seq_search(&scan_status)) != NULL)
+ {
+ /* skip sequences not from this transaction */
+ if (ent->xid != xid)
+ continue;
+
+ (void) hash_search(rb->sequences,
+ (void *) &(ent->rnode),
+ HASH_REMOVE, NULL);
+ }
+}
+
+/*
+ * A transactional sequence increment is queued to be processed upon commit
+ * and a non-transactional increment gets processed immediately.
+ *
+ * A sequence update may be both transactional and non-transactional. When
+ * created in a running transaction, treat it as transactional and queue
+ * the change in it. Otherwise treat it as non-transactional, so that we
+ * don't forget the increment in case of a rollback.
+ */
+void
+ReorderBufferQueueSequence(ReorderBuffer *rb, TransactionId xid,
+ Snapshot snapshot, XLogRecPtr lsn, RepOriginId origin_id,
+ RelFileNode rnode, bool transactional, bool created,
+ ReorderBufferTupleBuf *tuplebuf)
+{
+ /*
+ * Change needs to be handled as transactional, because the sequence was
+ * created in a transaction that is still running. In that case all the
+ * changes need to be queued in that transaction, we must not send them
+ * to the downstream until the transaction commits.
+ *
+ * There's a bit of a trouble with subtransactions - we can't queue it
+ * into the subxact, because it might be rolled back and we'd lose the
+ * increment. We need to queue it into the same (sub)xact that created
+ * the sequence, which is why we track the XID in the hash table.
+ */
+ if (transactional)
+ {
+ MemoryContext oldcontext;
+ ReorderBufferChange *change;
+
+ /* lookup sequence by relfilenode */
+ ReorderBufferSequenceEnt *ent;
+ bool found;
+
+ /* transactional changes require a transaction */
+ Assert(xid != InvalidTransactionId);
+
+ /* search the lookup table (we ignore the return value, found is enough) */
+ ent = hash_search(rb->sequences,
+ (void *) &rnode,
+ created ? HASH_ENTER : HASH_FIND,
+ &found);
+
+ /*
+ * If this is the "create" increment, we must not have found any
+ * pre-existing entry in the hash table (i.e. there must not be
+ * any conflicting sequence).
+ */
+ Assert(!(created && found));
+
+ /* But we must have either created or found an existing entry. */
+ Assert(created || found);
+
+ /*
+ * When creating the sequence, remember the XID of the transaction
+ * that created it.
+ */
+ if (created)
+ ent->xid = xid;
+
+ /* XXX Maybe check that we're still in the same top-level xact? */
+
+ /* OK, allocate and queue the change */
+ oldcontext = MemoryContextSwitchTo(rb->context);
+
+ change = ReorderBufferGetChange(rb);
+
+ change->action = REORDER_BUFFER_CHANGE_SEQUENCE;
+ change->origin_id = origin_id;
+
+ memcpy(&change->data.sequence.relnode, &rnode, sizeof(RelFileNode));
+
+ change->data.sequence.created = created;
+ change->data.sequence.tuple = tuplebuf;
+
+ /* add it to the same subxact that created the sequence */
+ ReorderBufferQueueChange(rb, ent->xid, lsn, change, false);
+
+ MemoryContextSwitchTo(oldcontext);
+ }
+ else
+ {
+ /*
+ * This increment is for a sequence that was not created in any
+ * running transaction, so we treat it as non-transactional and
+ * just send it to the output plugin directly.
+ */
+ ReorderBufferTXN *txn = NULL;
+ volatile Snapshot snapshot_now = snapshot;
+ bool using_subtxn;
+
+#ifdef USE_ASSERT_CHECKING
+ /* All "creates" have to be handled as transactional. */
+ Assert(!created);
+
+ /* Make sure the sequence is not in the hash table. */
+ {
+ bool found;
+ hash_search(rb->sequences,
+ (void *) &rnode,
+ HASH_FIND, &found);
+ Assert(!found);
+ }
+#endif
+
+ if (xid != InvalidTransactionId)
+ txn = ReorderBufferTXNByXid(rb, xid, true, NULL, lsn, true);
+
+ /* setup snapshot to allow catalog access */
+ SetupHistoricSnapshot(snapshot_now, NULL);
+
+ /*
+ * Decoding needs access to syscaches et al., which in turn use
+ * heavyweight locks and such. Thus we need to have enough state around to
+ * keep track of those. The easiest way is to simply use a transaction
+ * internally. That also allows us to easily enforce that nothing writes
+ * to the database by checking for xid assignments.
+ *
+ * When we're called via the SQL SRF there's already a transaction
+ * started, so start an explicit subtransaction there.
+ */
+ using_subtxn = IsTransactionOrTransactionBlock();
+
+ PG_TRY();
+ {
+ Relation relation;
+ HeapTuple tuple;
+ bool isnull;
+ int64 last_value, log_cnt;
+ bool is_called;
+ Oid reloid;
+
+ if (using_subtxn)
+ BeginInternalSubTransaction("sequence");
+ else
+ StartTransactionCommand();
+
+ reloid = RelidByRelfilenode(rnode.spcNode, rnode.relNode);
+
+ if (reloid == InvalidOid)
+ elog(ERROR, "could not map filenode \"%s\" to relation OID",
+ relpathperm(rnode,
+ MAIN_FORKNUM));
+
+ relation = RelationIdGetRelation(reloid);
+ tuple = &tuplebuf->tuple;
+
+ /*
+ * Extract the internal sequence values, describing the state.
+ *
+ * XXX Seems a bit strange to access it directly. Maybe there's
+ * a better / more correct way?
+ */
+ last_value = heap_getattr(tuple, 1, RelationGetDescr(relation), &isnull);
+ log_cnt = heap_getattr(tuple, 2, RelationGetDescr(relation), &isnull);
+ is_called = heap_getattr(tuple, 3, RelationGetDescr(relation), &isnull);
+
+ rb->sequence(rb, txn, lsn, relation, transactional, created,
+ last_value, log_cnt, is_called);
+
+ RelationClose(relation);
+
+ TeardownHistoricSnapshot(false);
+
+ AbortCurrentTransaction();
+
+ if (using_subtxn)
+ RollbackAndReleaseCurrentSubTransaction();
+ }
+ PG_CATCH();
+ {
+ TeardownHistoricSnapshot(true);
+
+ AbortCurrentTransaction();
+
+ if (using_subtxn)
+ RollbackAndReleaseCurrentSubTransaction();
+
+ PG_RE_THROW();
+ }
+ PG_END_TRY();
+ }
+}
+
/*
* AssertTXNLsnOrder
* Verify LSN ordering of transaction lists in the reorderbuffer
@@ -1535,6 +1822,9 @@ ReorderBufferCleanupTXN(ReorderBuffer *rb, ReorderBufferTXN *txn)
&found);
Assert(found);
+ /* Remove sequences created in this transaction (if any). */
+ ReorderBufferSequenceCleanup(rb, txn->xid);
+
/* remove entries spilled to disk */
if (rbtxn_is_serialized(txn))
ReorderBufferRestoreCleanup(rb, txn);
@@ -1950,6 +2240,42 @@ ReorderBufferApplyMessage(ReorderBuffer *rb, ReorderBufferTXN *txn,
change->data.msg.message);
}
+/*
+ * Helper function for ReorderBufferProcessTXN for applying sequences.
+ */
+static inline void
+ReorderBufferApplySequence(ReorderBuffer *rb, ReorderBufferTXN *txn,
+ Relation relation, ReorderBufferChange *change,
+ bool streaming)
+{
+ HeapTuple tuple;
+ bool isnull;
+ int64 last_value, log_cnt;
+ bool is_called;
+
+ tuple = &change->data.sequence.tuple->tuple;
+
+ /*
+ * Extract the internal sequence values, describing the state.
+ *
+ * XXX Seems a bit strange to access it directly. Maybe there's
+ * a better / more correct way?
+ */
+ last_value = heap_getattr(tuple, 1, RelationGetDescr(relation), &isnull);
+ log_cnt = heap_getattr(tuple, 2, RelationGetDescr(relation), &isnull);
+ is_called = heap_getattr(tuple, 3, RelationGetDescr(relation), &isnull);
+
+ /* Only ever called from ReorderBufferProcessTXN, so transactional. */
+ if (streaming)
+ rb->stream_sequence(rb, txn, change->lsn, relation, true,
+ change->data.sequence.created,
+ last_value, log_cnt, is_called);
+ else
+ rb->sequence(rb, txn, change->lsn, relation, true,
+ change->data.sequence.created,
+ last_value, log_cnt, is_called);
+}
+
/*
* Function to store the command id and snapshot at the end of the current
* stream so that we can reuse the same while sending the next stream.
@@ -2392,6 +2718,31 @@ ReorderBufferProcessTXN(ReorderBuffer *rb, ReorderBufferTXN *txn,
case REORDER_BUFFER_CHANGE_INTERNAL_TUPLECID:
elog(ERROR, "tuplecid value in changequeue");
break;
+
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ Assert(snapshot_now);
+
+ reloid = RelidByRelfilenode(change->data.sequence.relnode.spcNode,
+ change->data.sequence.relnode.relNode);
+
+ if (reloid == InvalidOid)
+ elog(ERROR, "could not map filenode \"%s\" to relation OID",
+ relpathperm(change->data.sequence.relnode,
+ MAIN_FORKNUM));
+
+ relation = RelationIdGetRelation(reloid);
+
+ if (!RelationIsValid(relation))
+ elog(ERROR, "could not open relation with OID %u (for filenode \"%s\")",
+ reloid,
+ relpathperm(change->data.sequence.relnode,
+ MAIN_FORKNUM));
+
+ if (RelationIsLogicallyLogged(relation))
+ ReorderBufferApplySequence(rb, txn, relation, change, streaming);
+
+ RelationClose(relation);
+ break;
}
}
@@ -3776,6 +4127,39 @@ ReorderBufferSerializeChange(ReorderBuffer *rb, ReorderBufferTXN *txn,
memcpy(data, change->data.truncate.relids, size);
data += size;
+ break;
+ }
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ {
+ char *data;
+ ReorderBufferTupleBuf *tup;
+ Size len = 0;
+
+ tup = change->data.sequence.tuple;
+
+ if (tup)
+ {
+ sz += sizeof(HeapTupleData);
+ len = tup->tuple.t_len;
+ sz += len;
+ }
+
+ /* make sure we have enough space */
+ ReorderBufferSerializeReserve(rb, sz);
+
+ data = ((char *) rb->outbuf) + sizeof(ReorderBufferDiskChange);
+ /* might have been reallocated above */
+ ondisk = (ReorderBufferDiskChange *) rb->outbuf;
+
+ if (len)
+ {
+ memcpy(data, &tup->tuple, sizeof(HeapTupleData));
+ data += sizeof(HeapTupleData);
+
+ memcpy(data, tup->tuple.t_data, len);
+ data += len;
+ }
+
break;
}
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
@@ -4040,6 +4424,22 @@ ReorderBufferChangeSize(ReorderBufferChange *change)
{
sz += sizeof(Oid) * change->data.truncate.nrelids;
+ break;
+ }
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ {
+ ReorderBufferTupleBuf *tup;
+ Size len = 0;
+
+ tup = change->data.sequence.tuple;
+
+ if (tup)
+ {
+ sz += sizeof(HeapTupleData);
+ len = tup->tuple.t_len;
+ sz += len;
+ }
+
break;
}
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
@@ -4341,6 +4741,30 @@ ReorderBufferRestoreChange(ReorderBuffer *rb, ReorderBufferTXN *txn,
break;
}
+
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ if (change->data.sequence.tuple)
+ {
+ uint32 tuplelen = ((HeapTuple) data)->t_len;
+
+ change->data.sequence.tuple =
+ ReorderBufferGetTupleBuf(rb, tuplelen - SizeofHeapTupleHeader);
+
+ /* restore ->tuple */
+ memcpy(&change->data.sequence.tuple->tuple, data,
+ sizeof(HeapTupleData));
+ data += sizeof(HeapTupleData);
+
+ /* reset t_data pointer into the new tuplebuf */
+ change->data.sequence.tuple->tuple.t_data =
+ ReorderBufferTupleBufData(change->data.sequence.tuple);
+
+ /* restore tuple data itself */
+ memcpy(change->data.sequence.tuple->tuple.t_data, data, tuplelen);
+ data += tuplelen;
+ }
+ break;
+
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT:
case REORDER_BUFFER_CHANGE_INTERNAL_COMMAND_ID:
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 40544dd4c7..5919fb90ee 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -48,6 +48,7 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
typedef struct xl_seq_rec
{
RelFileNode node;
+ bool created; /* is this a CREATE SEQUENCE */
/* SEQUENCE TUPLE DATA FOLLOWS AT THE END */
} xl_seq_rec;
diff --git a/src/include/replication/output_plugin.h b/src/include/replication/output_plugin.h
index 810495ed0e..57bc13f11c 100644
--- a/src/include/replication/output_plugin.h
+++ b/src/include/replication/output_plugin.h
@@ -88,6 +88,19 @@ typedef void (*LogicalDecodeMessageCB) (struct LogicalDecodingContext *ctx,
Size message_size,
const char *message);
+/*
+ * Called for the generic logical decoding sequences.
+ */
+typedef void (*LogicalDecodeSequenceCB) (struct LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional,
+ bool created,
+ int64 last_value,
+ int64 log_cnt,
+ bool is_called);
+
/*
* Filter changes by origin.
*/
@@ -199,6 +212,20 @@ typedef void (*LogicalDecodeStreamMessageCB) (struct LogicalDecodingContext *ctx
Size message_size,
const char *message);
+/*
+ * Called for the streaming generic logical decoding sequences from in-progress
+ * transactions.
+ */
+typedef void (*LogicalDecodeStreamSequenceCB) (struct LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional,
+ bool created,
+ int64 last_value,
+ int64 log_cnt,
+ bool is_called);
+
/*
* Callback for streaming truncates from in-progress transactions.
*/
@@ -219,6 +246,7 @@ typedef struct OutputPluginCallbacks
LogicalDecodeTruncateCB truncate_cb;
LogicalDecodeCommitCB commit_cb;
LogicalDecodeMessageCB message_cb;
+ LogicalDecodeSequenceCB sequence_cb;
LogicalDecodeFilterByOriginCB filter_by_origin_cb;
LogicalDecodeShutdownCB shutdown_cb;
@@ -237,6 +265,7 @@ typedef struct OutputPluginCallbacks
LogicalDecodeStreamCommitCB stream_commit_cb;
LogicalDecodeStreamChangeCB stream_change_cb;
LogicalDecodeStreamMessageCB stream_message_cb;
+ LogicalDecodeStreamSequenceCB stream_sequence_cb;
LogicalDecodeStreamTruncateCB stream_truncate_cb;
} OutputPluginCallbacks;
diff --git a/src/include/replication/reorderbuffer.h b/src/include/replication/reorderbuffer.h
index 5b40ff75f7..63e6ed037b 100644
--- a/src/include/replication/reorderbuffer.h
+++ b/src/include/replication/reorderbuffer.h
@@ -64,7 +64,8 @@ enum ReorderBufferChangeType
REORDER_BUFFER_CHANGE_INTERNAL_SPEC_INSERT,
REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM,
REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT,
- REORDER_BUFFER_CHANGE_TRUNCATE
+ REORDER_BUFFER_CHANGE_TRUNCATE,
+ REORDER_BUFFER_CHANGE_SEQUENCE
};
/* forward declaration */
@@ -158,6 +159,14 @@ typedef struct ReorderBufferChange
uint32 ninvalidations; /* Number of messages */
SharedInvalidationMessage *invalidations; /* invalidation message */
} inval;
+
+ /* Context data for Sequence changes */
+ struct
+ {
+ RelFileNode relnode;
+ bool created;
+ ReorderBufferTupleBuf *tuple;
+ } sequence;
} data;
/*
@@ -430,6 +439,15 @@ typedef void (*ReorderBufferMessageCB) (ReorderBuffer *rb,
const char *prefix, Size sz,
const char *message);
+/* sequence callback signature */
+typedef void (*ReorderBufferSequenceCB) (ReorderBuffer *rb,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional, bool created,
+ int64 last_value, int64 log_cnt,
+ bool is_called);
+
/* begin prepare callback signature */
typedef void (*ReorderBufferBeginPrepareCB) (ReorderBuffer *rb,
ReorderBufferTXN *txn);
@@ -496,6 +514,15 @@ typedef void (*ReorderBufferStreamMessageCB) (
const char *prefix, Size sz,
const char *message);
+/* stream sequence callback signature */
+typedef void (*ReorderBufferStreamSequenceCB) (ReorderBuffer *rb,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional, bool created,
+ int64 last_value, int64 log_cnt,
+ bool is_called);
+
/* stream truncate callback signature */
typedef void (*ReorderBufferStreamTruncateCB) (
ReorderBuffer *rb,
@@ -511,6 +538,12 @@ struct ReorderBuffer
*/
HTAB *by_txn;
+ /*
+ * relfilenode => XID lookup table for sequences created in a transaction
+ * (also includes altered sequences, which are assigned a new relfilenode)
+ */
+ HTAB *sequences;
+
/*
* Transactions that could be a toplevel xact, ordered by LSN of the first
* record bearing that xid.
@@ -541,6 +574,7 @@ struct ReorderBuffer
ReorderBufferApplyTruncateCB apply_truncate;
ReorderBufferCommitCB commit;
ReorderBufferMessageCB message;
+ ReorderBufferSequenceCB sequence;
/*
* Callbacks to be called when streaming a transaction at prepare time.
@@ -560,6 +594,7 @@ struct ReorderBuffer
ReorderBufferStreamCommitCB stream_commit;
ReorderBufferStreamChangeCB stream_change;
ReorderBufferStreamMessageCB stream_message;
+ ReorderBufferStreamSequenceCB stream_sequence;
ReorderBufferStreamTruncateCB stream_truncate;
/*
@@ -635,6 +670,10 @@ void ReorderBufferQueueChange(ReorderBuffer *, TransactionId,
void ReorderBufferQueueMessage(ReorderBuffer *, TransactionId, Snapshot snapshot, XLogRecPtr lsn,
bool transactional, const char *prefix,
Size message_size, const char *message);
+void ReorderBufferQueueSequence(ReorderBuffer *rb, TransactionId xid,
+ Snapshot snapshot, XLogRecPtr lsn, RepOriginId origin_id,
+ RelFileNode rnode, bool transactional, bool created,
+ ReorderBufferTupleBuf *tuplebuf);
void ReorderBufferCommit(ReorderBuffer *, TransactionId,
XLogRecPtr commit_lsn, XLogRecPtr end_lsn,
TimestampTz commit_time, RepOriginId origin_id, XLogRecPtr origin_lsn);
@@ -682,4 +721,7 @@ void ReorderBufferSetRestartPoint(ReorderBuffer *, XLogRecPtr ptr);
void StartupReorderBuffer(void);
+bool ReorderBufferSequenceIsTransactional(ReorderBuffer *rb,
+ RelFileNode rnode, bool created);
+
#endif
--
2.31.1
On 22.11.21 01:47, Tomas Vondra wrote:
So I think just decoding the sequence tuples is a better solution - for
large transactions (consuming many values from the sequence) it may be
more expensive (i.e. send more records to replica). But I doubt that
matters too much - it's likely negligible compared to other data for
large transactions.
I agree that the original approach is better. It was worth trying out
this alternative, but it seems quite complicated. I note that a lot of
additional code had to be added around several areas of the code,
whereas the original patch really just touched the logical decoding
code, as it should. This doesn't prevent anyone from attempting to
optimize things somehow in the future, but for now let's move forward
with the simple approach.
On 22. 11. 2021, at 16:44, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:
+1
--
Petr Jelinek
Hi,
On 2021-09-25 22:05:43 +0200, Hannu Krosing wrote:
If our aim is just to make sure that all user-visible data in
*transactional* tables is consistent with sequence state then one
very much simplified approach to this could be to track the results of
nextval() calls in a transaction and, at COMMIT, put the latest sequence
value in WAL (or just track the sequences affected and put the latest
sequence state in WAL at commit which needs extra read of sequence but
protects against race conditions with parallel transactions which get
rolled back later)
I think this is a bad idea. It's architecturally more complicated and prevents
use cases because sequence values aren't guaranteed to be as new as on the
original system. You'd need to track all sequence use somehow *even if there
is no relevant WAL generated* in a transaction. There's simply no evidence of
sequence use in a transaction if that transaction uses a previously logged
sequence value.
Greetings,
Andres Freund
On 11/23/21 02:01, Andres Freund wrote:
Not quite. We already have a cache of all sequences used by a session
(see seqhashtab in sequence.c), and it's not that hard to extend it to
per-transaction tracking. That's what the last two versions do, mostly.
But there are various issues with that approach, described in my last
message(s).
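To make that idea concrete, here is a very rough sketch of what such
per-transaction tracking could look like - hypothetical names only, not code
from any of the posted patches, and it assumes the usual backend headers
(utils/hsearch.h, utils/memutils.h):

typedef struct XactSeqEntry
{
	Oid			seqoid;			/* sequence OID, hash key */
	int64		last_value;		/* last value handed out in this xact */
} XactSeqEntry;

static HTAB *xact_sequences = NULL;	/* has to be reset at (sub)xact end */

static void
xact_track_sequence(Oid seqoid, int64 last_value)
{
	bool		found;
	XactSeqEntry *ent;

	if (xact_sequences == NULL)
	{
		HASHCTL		ctl;

		ctl.keysize = sizeof(Oid);
		ctl.entrysize = sizeof(XactSeqEntry);
		ctl.hcxt = TopTransactionContext;

		xact_sequences = hash_create("xact sequences", 16, &ctl,
									 HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
	}

	/* remember only the latest value handed out for this sequence */
	ent = (XactSeqEntry *) hash_search(xact_sequences, &seqoid,
									   HASH_ENTER, &found);
	ent->last_value = last_value;
}

At COMMIT, a hash_seq_init()/hash_seq_search() loop over xact_sequences would
then WAL-log the final state of each tracked sequence once.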
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
I have checked the 0001 and 0003 patches. (I think we have dismissed
the approach in 0002 for now. And let's talk about 0004 later.)
I have attached a few fixup patches, described further below.
# 0001
The argument "create" for fill_seq_with_data() is always true (and
patch 0002 removes it). So I'm not sure if it's needed. If it is, it
should be documented somewhere.
About the comment you added in nextval_internal(): It's a good
explanation, so I would leave it in. I also agree with the
conclusion of the comment that the current solution is reasonable. We
probably don't need the same comment again in fill_seq_with_data() and
again in do_setval(). Note that both of the latter functions already
point to nextval_internal() for other comments, so the same can be
relied upon here as well.
The order of the new fields sequence_cb and stream_sequence_cb is a
bit inconsistent compared to truncate_cb and stream_truncate_cb.
Maybe take another look to make the order more uniform.
Some documentation needs to be added to the Logical Decoding chapter.
I have attached a patch that I think catches all the places that need
to be updated, with some details left for you to fill in. ;-) The
ordering of some of the documentation chunks reflects the order in
which the callbacks appear in the header files, which might not be
optimal; see above.
In reorderbuffer.c, you left a comment about how to access a sequence
tuple. There is an easier way, using Form_pg_sequence_data, which is
how sequence.c also does it. See attached patch.
# 0003
The tests added in 0003 don't pass for me. It appears that you might
have forgotten to update the expected files after you added some tests
or something. The cfbot shows the same. See attached patch for a
correction, but do check what your intent was.
As mentioned before, we probably don't need the skip-sequences option
in test_decoding.
Attachments:
0001-Whitespace-fixes.patch
From 2dd9a0e76b9496016e7abebebd0006ddee72d3c1 Mon Sep 17 00:00:00 2001
From: Peter Eisentraut <peter@eisentraut.org>
Date: Tue, 7 Dec 2021 14:19:42 +0100
Subject: [PATCH 1/4] Whitespace fixes
---
contrib/test_decoding/expected/sequence.out | 6 +++---
contrib/test_decoding/sql/sequence.sql | 6 +++---
2 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/contrib/test_decoding/expected/sequence.out b/contrib/test_decoding/expected/sequence.out
index 17c88990b1..ca55bcebc3 100644
--- a/contrib/test_decoding/expected/sequence.out
+++ b/contrib/test_decoding/expected/sequence.out
@@ -177,7 +177,7 @@ SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'inc
------
(0 rows)
--- rollback on table creation with serial column
+-- rollback on table creation with serial column
BEGIN;
CREATE TABLE test_table (a SERIAL, b INT);
INSERT INTO test_table (b) VALUES (100);
@@ -202,7 +202,7 @@ SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'inc
------
(0 rows)
--- rollback on table with serial column
+-- rollback on table with serial column
CREATE TABLE test_table (a SERIAL, b INT);
BEGIN;
INSERT INTO test_table (b) VALUES (100);
@@ -222,7 +222,7 @@ INSERT INTO test_table (b) VALUES (700);
INSERT INTO test_table (b) VALUES (800);
INSERT INTO test_table (b) VALUES (900);
ROLLBACK;
--- check table and sequence values after rollback
+-- check table and sequence values after rollback
SELECT * from test_table_a_seq;
last_value | log_cnt | is_called
------------+---------+-----------
diff --git a/contrib/test_decoding/sql/sequence.sql b/contrib/test_decoding/sql/sequence.sql
index d8a34738f3..21c4b79222 100644
--- a/contrib/test_decoding/sql/sequence.sql
+++ b/contrib/test_decoding/sql/sequence.sql
@@ -48,7 +48,7 @@ CREATE TABLE test_table (a INT);
ROLLBACK;
SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '0');
--- rollback on table creation with serial column
+-- rollback on table creation with serial column
BEGIN;
CREATE TABLE test_table (a SERIAL, b INT);
INSERT INTO test_table (b) VALUES (100);
@@ -65,7 +65,7 @@ CREATE TABLE test_table (a SERIAL, b INT);
ROLLBACK;
SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '0');
--- rollback on table with serial column
+-- rollback on table with serial column
CREATE TABLE test_table (a SERIAL, b INT);
BEGIN;
@@ -82,7 +82,7 @@ CREATE TABLE test_table (a SERIAL, b INT);
INSERT INTO test_table (b) VALUES (900);
ROLLBACK;
--- check table and sequence values after rollback
+-- check table and sequence values after rollback
SELECT * from test_table_a_seq;
SELECT nextval('test_table_a_seq');
--
2.34.1
0002-Make-tests-pass.patch
From c5e9eb2792bd276e493aed582627e93b4c681374 Mon Sep 17 00:00:00 2001
From: Peter Eisentraut <peter@eisentraut.org>
Date: Tue, 7 Dec 2021 15:29:34 +0100
Subject: [PATCH 2/4] Make tests pass
---
contrib/test_decoding/expected/sequence.out | 18 ++++++++++++------
1 file changed, 12 insertions(+), 6 deletions(-)
diff --git a/contrib/test_decoding/expected/sequence.out b/contrib/test_decoding/expected/sequence.out
index ca55bcebc3..d94e185fb9 100644
--- a/contrib/test_decoding/expected/sequence.out
+++ b/contrib/test_decoding/expected/sequence.out
@@ -260,13 +260,15 @@ ROLLBACK TO SAVEPOINT a;
DROP TABLE test_table;
COMMIT;
SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
- data
---------------------------------------------------------------
+ data
+-------------------------------------------------------------------------------------------------
BEGIN
+ sequence public.test_table_a_seq: transactional:1 created:1 last_value:1 log_cnt:0 is_called:0
+ sequence public.test_table_a_seq: transactional:1 created:0 last_value:33 log_cnt:0 is_called:1
table public.test_table: INSERT: a[integer]:1 b[integer]:100
table public.test_table: INSERT: a[integer]:2 b[integer]:200
COMMIT
-(4 rows)
+(6 rows)
-- savepoint test on table with serial column
BEGIN;
@@ -307,11 +309,15 @@ SELECT * FROM test_sequence;
DROP SEQUENCE test_sequence;
COMMIT;
SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
- data
---------
+ data
+------------------------------------------------------------------------------------------------
BEGIN
+ sequence public.test_sequence: transactional:1 created:1 last_value:1 log_cnt:0 is_called:0
+ sequence public.test_sequence: transactional:1 created:0 last_value:33 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:1 created:0 last_value:3000 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:1 created:0 last_value:3033 log_cnt:0 is_called:1
COMMIT
-(2 rows)
+(6 rows)
SELECT pg_drop_replication_slot('regression_slot');
pg_drop_replication_slot
--
2.34.1
0003-Some-documentation-updates.patch
From 0338de62cd955b98244b5b624a2bcb7af76465b0 Mon Sep 17 00:00:00 2001
From: Peter Eisentraut <peter@eisentraut.org>
Date: Tue, 7 Dec 2021 15:29:51 +0100
Subject: [PATCH 3/4] Some documentation updates
---
doc/src/sgml/logicaldecoding.sgml | 54 +++++++++++++++++++++++++++++--
1 file changed, 51 insertions(+), 3 deletions(-)
diff --git a/doc/src/sgml/logicaldecoding.sgml b/doc/src/sgml/logicaldecoding.sgml
index b6353c7a12..eea4f3cd50 100644
--- a/doc/src/sgml/logicaldecoding.sgml
+++ b/doc/src/sgml/logicaldecoding.sgml
@@ -458,6 +458,7 @@ <title>Initialization Function</title>
LogicalDecodeTruncateCB truncate_cb;
LogicalDecodeCommitCB commit_cb;
LogicalDecodeMessageCB message_cb;
+ LogicalDecodeSequenceCB sequence_cb;
LogicalDecodeFilterByOriginCB filter_by_origin_cb;
LogicalDecodeShutdownCB shutdown_cb;
LogicalDecodeFilterPrepareCB filter_prepare_cb;
@@ -472,6 +473,7 @@ <title>Initialization Function</title>
LogicalDecodeStreamCommitCB stream_commit_cb;
LogicalDecodeStreamChangeCB stream_change_cb;
LogicalDecodeStreamMessageCB stream_message_cb;
+ LogicalDecodeStreamSequenceCB stream_sequence_cb;
LogicalDecodeStreamTruncateCB stream_truncate_cb;
} OutputPluginCallbacks;
@@ -481,9 +483,12 @@ <title>Initialization Function</title>
and <function>commit_cb</function> callbacks are required,
while <function>startup_cb</function>,
<function>filter_by_origin_cb</function>, <function>truncate_cb</function>,
+ <function>sequence_cb</function>,
and <function>shutdown_cb</function> are optional.
If <function>truncate_cb</function> is not set but a
<command>TRUNCATE</command> is to be decoded, the action will be ignored.
+ Similarly, if <function>sequence_cb</function> is not set and a sequence
+ change is to be decoded, the action will be ignored.
</para>
<para>
@@ -492,7 +497,7 @@ <title>Initialization Function</title>
<function>stream_stop_cb</function>, <function>stream_abort_cb</function>,
<function>stream_commit_cb</function>, <function>stream_change_cb</function>,
and <function>stream_prepare_cb</function>
- are required, while <function>stream_message_cb</function> and
+ are required, while <function>stream_message_cb</function>, <function>stream_sequence_cb</function>, and
<function>stream_truncate_cb</function> are optional.
</para>
@@ -808,6 +813,27 @@ <title>Generic Message Callback</title>
</para>
</sect3>
+ <sect3 id="logicaldecoding-output-plugin-sequence">
+ <title>Sequence Callback</title>
+
+ <para>
+ The <function>sequence_cb</function> callback is called for actions that
+ change a sequence.
+<programlisting>
+typedef void (*LogicalDecodeSequenceCB) (struct LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional,
+ bool created,
+ int64 last_value,
+ int64 log_cnt,
+ bool is_called);
+</programlisting>
+ TODO
+ </para>
+ </sect3>
+
<sect3 id="logicaldecoding-output-plugin-filter-prepare">
<title>Prepare Filter Callback</title>
@@ -1017,6 +1043,28 @@ <title>Stream Message Callback</title>
</para>
</sect3>
+ <sect3 id="logicaldecoding-output-plugin-stream-sequence">
+ <title>Stream Sequence Callback</title>
+ <para>
+ The <function>stream_sequence_cb</function> callback is called for
+ actions that change a sequence in a block of streamed changes
+ (demarcated by <function>stream_start_cb</function> and
+ <function>stream_stop_cb</function> calls).
+<programlisting>
+typedef void (*LogicalDecodeStreamSequenceCB) (struct LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional,
+ bool created,
+ int64 last_value,
+ int64 log_cnt,
+ bool is_called);
+</programlisting>
+ TODO
+ </para>
+ </sect3>
+
<sect3 id="logicaldecoding-output-plugin-stream-truncate">
<title>Stream Truncate Callback</title>
<para>
@@ -1197,8 +1245,8 @@ <title>Streaming of Large Transactions for Logical Decoding</title>
in-progress transactions. There are multiple required streaming callbacks
(<function>stream_start_cb</function>, <function>stream_stop_cb</function>,
<function>stream_abort_cb</function>, <function>stream_commit_cb</function>
- and <function>stream_change_cb</function>) and two optional callbacks
- (<function>stream_message_cb</function> and <function>stream_truncate_cb</function>).
+ and <function>stream_change_cb</function>) and multiple optional callbacks
+ (<function>stream_message_cb</function>, <function>stream_sequence_cb</function>, and <function>stream_truncate_cb</function>).
Also, if streaming of two-phase commands is to be supported, then additional
callbacks must be provided. (See <xref linkend="logicaldecoding-two-phase-commits"/>
for details).
--
2.34.1
0004-Simplify-accessing-sequence-tuple-data.patch
From 9f29fac7e479c255fb247d201fe8691655c11fbf Mon Sep 17 00:00:00 2001
From: Peter Eisentraut <peter@eisentraut.org>
Date: Tue, 7 Dec 2021 15:31:36 +0100
Subject: [PATCH 4/4] Simplify accessing sequence tuple data
---
.../replication/logical/reorderbuffer.c | 37 ++++---------------
1 file changed, 8 insertions(+), 29 deletions(-)
diff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c
index 434926459f..b4fa16bebe 100644
--- a/src/backend/replication/logical/reorderbuffer.c
+++ b/src/backend/replication/logical/reorderbuffer.c
@@ -120,6 +120,7 @@
#include "access/xact.h"
#include "access/xlog_internal.h"
#include "catalog/catalog.h"
+#include "commands/sequence.h"
#include "lib/binaryheap.h"
#include "miscadmin.h"
#include "pgstat.h"
@@ -1089,9 +1090,7 @@ ReorderBufferQueueSequence(ReorderBuffer *rb, TransactionId xid,
{
Relation relation;
HeapTuple tuple;
- bool isnull;
- int64 last_value, log_cnt;
- bool is_called;
+ Form_pg_sequence_data seq;
Oid reloid;
if (using_subtxn)
@@ -1108,19 +1107,10 @@ ReorderBufferQueueSequence(ReorderBuffer *rb, TransactionId xid,
relation = RelationIdGetRelation(reloid);
tuple = &tuplebuf->tuple;
-
- /*
- * Extract the internal sequence values, describing the state.
- *
- * XXX Seems a bit strange to access it directly. Maybe there's
- * a better / more correct way?
- */
- last_value = heap_getattr(tuple, 1, RelationGetDescr(relation), &isnull);
- log_cnt = heap_getattr(tuple, 2, RelationGetDescr(relation), &isnull);
- is_called = heap_getattr(tuple, 3, RelationGetDescr(relation), &isnull);
+ seq = (Form_pg_sequence_data) GETSTRUCT(tuple);
rb->sequence(rb, txn, lsn, relation, transactional, created,
- last_value, log_cnt, is_called);
+ seq->last_value, seq->log_cnt, seq->is_called);
RelationClose(relation);
@@ -2249,31 +2239,20 @@ ReorderBufferApplySequence(ReorderBuffer *rb, ReorderBufferTXN *txn,
bool streaming)
{
HeapTuple tuple;
- bool isnull;
- int64 last_value, log_cnt;
- bool is_called;
+ Form_pg_sequence_data seq;
tuple = &change->data.sequence.tuple->tuple;
-
- /*
- * Extract the internal sequence values, describing the state.
- *
- * XXX Seems a bit strange to access it directly. Maybe there's
- * a better / more correct way?
- */
- last_value = heap_getattr(tuple, 1, RelationGetDescr(relation), &isnull);
- log_cnt = heap_getattr(tuple, 2, RelationGetDescr(relation), &isnull);
- is_called = heap_getattr(tuple, 3, RelationGetDescr(relation), &isnull);
+ seq = (Form_pg_sequence_data) GETSTRUCT(tuple);
/* Only ever called from ReorderBufferProcessTXN, so transactional. */
if (streaming)
rb->stream_sequence(rb, txn, change->lsn, relation, true,
change->data.sequence.created,
- last_value, log_cnt, is_called);
+ seq->last_value, seq->log_cnt, seq->is_called);
else
rb->sequence(rb, txn, change->lsn, relation, true,
change->data.sequence.created,
- last_value, log_cnt, is_called);
+ seq->last_value, seq->log_cnt, seq->is_called);
}
/*
--
2.34.1
On 12/7/21 15:38, Peter Eisentraut wrote:
I have checked the 0001 and 0003 patches. (I think we have dismissed
the approach in 0002 for now. And let's talk about 0004 later.)
Right, I think that's correct.
I have attached a few fixup patches, described further below.
# 0001
The argument "create" for fill_seq_with_data() is always true (and
patch 0002 removes it). So I'm not sure if it's needed. If it is, it
should be documented somewhere.
Good point. I think it could be removed, but IIRC it'll be needed when
adding sequence decoding to the built-in replication, and that patch is
meant to be just an implementation of the API, without changing WAL.
OTOH I don't see it in the last version of that patch (in ResetSequence2
call) so maybe it's not really needed. I've kept it for now, but I'll
investigate.
About the comment you added in nextval_internal(): It's a good
explanation, so I would leave it in. I also agree with the
conclusion of the comment that the current solution is reasonable. We
probably don't need the same comment again in fill_seq_with_data() and
again in do_setval(). Note that both of the latter functions already
point to nextval_internal() for other comments, so the same can be
relied upon here as well.
True. I moved it a bit in nextval_internal() and removed the other
copies. The existing references should be enough.
The order of the new fields sequence_cb and stream_sequence_cb is a
bit inconsistent compared to truncate_cb and stream_truncate_cb.
Maybe take another look to make the order more uniform.
You mean in OutputPluginCallbacks? I'd actually argue it's the truncate
callbacks that are inconsistent - in the regular section truncate_cb is
right before commit_cb, while in the streaming section it's at the end.
Or what order do you think would be better? I'm fine with changing it.
Some documentation needs to be added to the Logical Decoding chapter.
I have attached a patch that I think catches all the places that need
to be updated, with some details left for you to fill in. ;-) The
ordering of some of the documentation chunks reflects the order in
which the callbacks appear in the header files, which might not be
optimal; see above.
Thanks. I added a bit about the callbacks being optional and what the
parameters mean (only for sequence_cb, as the stream_ callbacks
generally don't copy that bit).
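For illustration, a plugin implementation of the callback can be quite
simple - this is just a sketch modeled on the test_decoding output shown
earlier (the function name and formatting are illustrative, not the actual
test_decoding code), registered by setting cb->sequence_cb in the plugin's
init function:

static void
pg_decode_sequence(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
				   XLogRecPtr sequence_lsn, Relation rel,
				   bool transactional, bool created,
				   int64 last_value, int64 log_cnt, bool is_called)
{
	OutputPluginPrepareWrite(ctx, true);

	/* print the decoded sequence state, one line per decoded record */
	appendStringInfo(ctx->out,
					 "sequence %s.%s: transactional:%d created:%d "
					 "last_value:" INT64_FORMAT " log_cnt:" INT64_FORMAT
					 " is_called:%d",
					 get_namespace_name(RelationGetNamespace(rel)),
					 RelationGetRelationName(rel),
					 transactional, created,
					 last_value, log_cnt, is_called);

	OutputPluginWrite(ctx, true);
}

That produces lines like the "sequence public.test_sequence: transactional:1
created:1 ..." rows in the expected output above.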
In reorderbuffer.c, you left a comment about how to access a sequence
tuple. There is an easier way, using Form_pg_sequence_data, which is
how sequence.c also does it. See attached patch.
Yeah, that looks much nicer.
# 0003
The tests added in 0003 don't pass for me. It appears that you might
have forgotten to update the expected files after you added some tests
or something. The cfbot shows the same. See attached patch for a
correction, but do check what your intent was.
Yeah. I suspect I removed the expected results due to the experimental
rework. I hadn't noticed that because some of the tests for the
built-in replication are expected to fail.
Attached is an updated version of the first two parts (infrastructure
and test_decoding), with all your fixes merged.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Attachments:
0001-Logical-decoding-of-sequences-20211208.patchtext/x-patch; charset=UTF-8; name=0001-Logical-decoding-of-sequences-20211208.patchDownload
From 999514b9f802db9b888543532355e9c36646da30 Mon Sep 17 00:00:00 2001
From: Tomas Vondra <tomas.vondra@postgresql.org>
Date: Fri, 24 Sep 2021 00:41:33 +0200
Subject: [PATCH 1/2] Logical decoding of sequences
This adds the infrastructure for logical decoding of sequences. Support
for built-in logical replication and test_decoding is added separately.
When decoding sequences, we differentiate between sequences created in
a (running) transaction, and sequences created in other (already
committed) transactions. Changes for sequences created in the same
top-level transaction are treated as "transactional" i.e. just like any
other change from that transaction (and discarded in case of a
rollback). Changes for sequences created earlier are treated as not
transactional - they are processed immediately, as if performed outside any
transaction (and thus not rolled back).
This mixed behavior is necessary - sequences are non-transactional (e.g.
ROLLBACK does not undo the sequence increments). But for new sequences,
we need to handle them in a transactional way, because if we ever get
some DDL support, the sequence won't exist until the transaction gets
applied. So we need to ensure the increments don't happen until the
sequence gets created.
To differentiate which sequences are "old" and which were created in a
still-running transaction, we track sequences created in running
transactions in a hash table. Each sequence is identified by its
relfilenode, and at transaction commit we discard all sequences created
in that particular transaction. For each sequence we track the XID of
the (sub)transaction that created it, and we cleanup the sequences for
each subtransaction when it completes (commit/rollback).
We don't use the XID to check if it's the same top-level transaction.
It's enough to know it was created in an in-progress transaction, and we
know it must be the current one because otherwise it wouldn't see the
sequence object.
The main changes in this patch are:
1) ensure WAL-logging of all necessary info for sequence advances
We need to be able to associate the advance with an XID, but until now
a sequence advance might have had XID 0 if it was the first thing the
transaction did. So we ensure the transaction has an XID.
Note: This is needed because of subxacts. A record with XID 0 might still
refer to a sequence created in a different subxact of the same top-level xact.
Then there's a "created" flag added to the xl_seq_rec, which is
necessary to differentiate WAL for created/reset sequences.
2) decoding / queueing of sequences into ReorderBuffer
This is mostly copy-paste of existing code for other decoded events.
3) tracking sequences created in in-progress transactions
We use a simple hash table, indexed by relfilenode.
---
doc/src/sgml/logicaldecoding.sgml | 65 ++-
src/backend/commands/sequence.c | 40 +-
src/backend/replication/logical/decode.c | 131 +++++-
src/backend/replication/logical/logical.c | 89 ++++
.../replication/logical/reorderbuffer.c | 403 ++++++++++++++++++
src/include/commands/sequence.h | 1 +
src/include/replication/output_plugin.h | 29 ++
src/include/replication/reorderbuffer.h | 44 +-
8 files changed, 791 insertions(+), 11 deletions(-)
diff --git a/doc/src/sgml/logicaldecoding.sgml b/doc/src/sgml/logicaldecoding.sgml
index b6353c7a12..4ede56678c 100644
--- a/doc/src/sgml/logicaldecoding.sgml
+++ b/doc/src/sgml/logicaldecoding.sgml
@@ -458,6 +458,7 @@ typedef struct OutputPluginCallbacks
LogicalDecodeTruncateCB truncate_cb;
LogicalDecodeCommitCB commit_cb;
LogicalDecodeMessageCB message_cb;
+ LogicalDecodeSequenceCB sequence_cb;
LogicalDecodeFilterByOriginCB filter_by_origin_cb;
LogicalDecodeShutdownCB shutdown_cb;
LogicalDecodeFilterPrepareCB filter_prepare_cb;
@@ -472,6 +473,7 @@ typedef struct OutputPluginCallbacks
LogicalDecodeStreamCommitCB stream_commit_cb;
LogicalDecodeStreamChangeCB stream_change_cb;
LogicalDecodeStreamMessageCB stream_message_cb;
+ LogicalDecodeStreamSequenceCB stream_sequence_cb;
LogicalDecodeStreamTruncateCB stream_truncate_cb;
} OutputPluginCallbacks;
@@ -481,9 +483,12 @@ typedef void (*LogicalOutputPluginInit) (struct OutputPluginCallbacks *cb);
and <function>commit_cb</function> callbacks are required,
while <function>startup_cb</function>,
<function>filter_by_origin_cb</function>, <function>truncate_cb</function>,
+ <function>sequence_cb</function>,
and <function>shutdown_cb</function> are optional.
If <function>truncate_cb</function> is not set but a
<command>TRUNCATE</command> is to be decoded, the action will be ignored.
+ Similarly, if <function>sequence_cb</function> is not set and a sequence
+ change is to be decoded, the action will be ignored.
</para>
<para>
@@ -492,7 +497,8 @@ typedef void (*LogicalOutputPluginInit) (struct OutputPluginCallbacks *cb);
<function>stream_stop_cb</function>, <function>stream_abort_cb</function>,
<function>stream_commit_cb</function>, <function>stream_change_cb</function>,
and <function>stream_prepare_cb</function>
- are required, while <function>stream_message_cb</function> and
+ are required, while <function>stream_message_cb</function>,
+ <function>stream_sequence_cb</function>, and
<function>stream_truncate_cb</function> are optional.
</para>
@@ -808,6 +814,37 @@ typedef void (*LogicalDecodeMessageCB) (struct LogicalDecodingContext *ctx,
</para>
</sect3>
+ <sect3 id="logicaldecoding-output-plugin-sequence">
+ <title>Sequence Callback</title>
+
+ <para>
+ The optional <function>sequence_cb</function> callback is called for
+ actions that update a sequence value.
+<programlisting>
+typedef void (*LogicalDecodeSequenceCB) (struct LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional,
+ bool created,
+ int64 last_value,
+ int64 log_cnt,
+ bool is_called);
+</programlisting>
+ The <parameter>txn</parameter> parameter contains meta information about
+ the transaction the sequence change is part of. Note however that it can
+ be NULL when the sequence change is non-transactional and the XID was not
+ assigned yet in the transaction which updated the sequence value.
+ The <parameter>sequence_lsn</parameter> parameter has the WAL location of the
+ sequence update. The <parameter>transactional</parameter> parameter says whether
+ the sequence change should be replayed as part of the transaction or immediately.
+
+ The <parameter>created</parameter>, <parameter>last_value</parameter>,
+ <parameter>log_cnt</parameter> and <parameter>is_called</parameter> parameters
+ describe the sequence change.
+ </para>
+ </sect3>
+
<sect3 id="logicaldecoding-output-plugin-filter-prepare">
<title>Prepare Filter Callback</title>
@@ -1017,6 +1054,27 @@ typedef void (*LogicalDecodeStreamMessageCB) (struct LogicalDecodingContext *ctx
</para>
</sect3>
+ <sect3 id="logicaldecoding-output-plugin-stream-sequence">
+ <title>Stream Sequence Callback</title>
+ <para>
+ The optional <function>stream_sequence_cb</function> callback is called
+ for actions that change a sequence in a block of streamed changes
+ (demarcated by <function>stream_start_cb</function> and
+ <function>stream_stop_cb</function> calls).
+<programlisting>
+typedef void (*LogicalDecodeStreamSequenceCB) (struct LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional,
+ bool created,
+ int64 last_value,
+ int64 log_cnt,
+ bool is_called);
+</programlisting>
+ </para>
+ </sect3>
+
<sect3 id="logicaldecoding-output-plugin-stream-truncate">
<title>Stream Truncate Callback</title>
<para>
@@ -1197,8 +1255,9 @@ OutputPluginWrite(ctx, true);
in-progress transactions. There are multiple required streaming callbacks
(<function>stream_start_cb</function>, <function>stream_stop_cb</function>,
<function>stream_abort_cb</function>, <function>stream_commit_cb</function>
- and <function>stream_change_cb</function>) and two optional callbacks
- (<function>stream_message_cb</function> and <function>stream_truncate_cb</function>).
+ and <function>stream_change_cb</function>) and multiple optional callbacks
+ (<function>stream_message_cb</function>, <function>stream_sequence_cb</function>,
+ and <function>stream_truncate_cb</function>).
Also, if streaming of two-phase commands is to be supported, then additional
callbacks must be provided. (See <xref linkend="logicaldecoding-two-phase-commits"/>
for details).
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 72bfdc07a4..c47c09a94e 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -94,7 +94,7 @@ static HTAB *seqhashtab = NULL; /* hash table for SeqTable items */
*/
static SeqTableData *last_used_seq = NULL;
-static void fill_seq_with_data(Relation rel, HeapTuple tuple);
+static void fill_seq_with_data(Relation rel, HeapTuple tuple, bool create);
static Relation lock_and_open_sequence(SeqTable seq);
static void create_seq_hashtable(void);
static void init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel);
@@ -222,7 +222,7 @@ DefineSequence(ParseState *pstate, CreateSeqStmt *seq)
/* now initialize the sequence's data */
tuple = heap_form_tuple(tupDesc, value, null);
- fill_seq_with_data(rel, tuple);
+ fill_seq_with_data(rel, tuple, true);
/* process OWNED BY if given */
if (owned_by)
@@ -327,7 +327,7 @@ ResetSequence(Oid seq_relid)
/*
* Insert the modified tuple into the new storage file.
*/
- fill_seq_with_data(seq_rel, tuple);
+ fill_seq_with_data(seq_rel, tuple, true);
/* Clear local cache so that we don't think we have cached numbers */
/* Note that we do not change the currval() state */
@@ -340,7 +340,7 @@ ResetSequence(Oid seq_relid)
* Initialize a sequence's relation with the specified tuple as content
*/
static void
-fill_seq_with_data(Relation rel, HeapTuple tuple)
+fill_seq_with_data(Relation rel, HeapTuple tuple, bool create)
{
Buffer buf;
Page page;
@@ -378,7 +378,10 @@ fill_seq_with_data(Relation rel, HeapTuple tuple)
/* check the comment above nextval_internal()'s equivalent call. */
if (RelationNeedsWAL(rel))
+ {
GetTopTransactionId();
+ GetCurrentTransactionId();
+ }
START_CRIT_SECTION();
@@ -399,6 +402,7 @@ fill_seq_with_data(Relation rel, HeapTuple tuple)
XLogRegisterBuffer(0, buf, REGBUF_WILL_INIT);
xlrec.node = rel->rd_node;
+ xlrec.created = create;
XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
XLogRegisterData((char *) tuple->t_data, tuple->t_len);
@@ -502,7 +506,7 @@ AlterSequence(ParseState *pstate, AlterSeqStmt *stmt)
/*
* Insert the modified tuple into the new storage file.
*/
- fill_seq_with_data(seqrel, newdatatuple);
+ fill_seq_with_data(seqrel, newdatatuple, true);
}
/* process OWNED BY if given */
@@ -764,9 +768,29 @@ nextval_internal(Oid relid, bool check_permissions)
* It's sufficient to ensure the toplevel transaction has an xid, no need
* to assign xids subxacts, that'll already trigger an appropriate wait.
* (Have to do that here, so we're outside the critical section)
+ *
+ * We have to ensure we have a proper XID, which will be included in
+ * the XLOG record by XLogRecordAssemble. Otherwise the first nextval()
+ * in a subxact (without any preceding changes) would get XID 0, and it
+ * would then be impossible to decide which top xact it belongs to.
+ * It'd also trigger an assert in DecodeSequence.
+ *
+ * XXX This might seem unnecessary, because if there's no XID the xact
+ * couldn't have done anything important yet, e.g. it could not have
+ * created a sequence. But that's incorrect, as it ignores subxacts. The
+ * current subtransaction might not have done anything yet (thus no XID),
+ * but an earlier one might have created the sequence.
+ *
+ * XXX Not sure if this is the best solution. Maybe do this only with
+ * wal_level=logical to minimize the overhead. OTOH advancing the
+ * sequence is likely followed by using the value(s) for some other
+ * activity, which assigns the XID.
*/
if (logit && RelationNeedsWAL(seqrel))
+ {
GetTopTransactionId();
+ GetCurrentTransactionId();
+ }
/* ready to change the on-disk (or really, in-buffer) tuple */
START_CRIT_SECTION();
@@ -803,6 +827,7 @@ nextval_internal(Oid relid, bool check_permissions)
seq->log_cnt = 0;
xlrec.node = seqrel->rd_node;
+ xlrec.created = false;
XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
XLogRegisterData((char *) seqdatatuple.t_data, seqdatatuple.t_len);
@@ -977,7 +1002,10 @@ do_setval(Oid relid, int64 next, bool iscalled)
/* check the comment above nextval_internal()'s equivalent call. */
if (RelationNeedsWAL(seqrel))
+ {
GetTopTransactionId();
+ GetCurrentTransactionId();
+ }
/* ready to change the on-disk (or really, in-buffer) tuple */
START_CRIT_SECTION();
@@ -999,6 +1027,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
XLogRegisterBuffer(0, buf, REGBUF_WILL_INIT);
xlrec.node = seqrel->rd_node;
+ xlrec.created = false;
+
XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
XLogRegisterData((char *) seqdatatuple.t_data, seqdatatuple.t_len);
diff --git a/src/backend/replication/logical/decode.c b/src/backend/replication/logical/decode.c
index a2b69511b4..b61f2e264a 100644
--- a/src/backend/replication/logical/decode.c
+++ b/src/backend/replication/logical/decode.c
@@ -42,6 +42,7 @@
#include "replication/reorderbuffer.h"
#include "replication/snapbuild.h"
#include "storage/standby.h"
+#include "commands/sequence.h"
typedef struct XLogRecordBuffer
{
@@ -74,10 +75,11 @@ static void DecodeAbort(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
bool two_phase);
static void DecodePrepare(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
xl_xact_parsed_prepare *parsed);
-
+static void DecodeSequence(LogicalDecodingContext *ctx, XLogRecordBuffer *buf);
/* common function to decode tuples */
static void DecodeXLogTuple(char *data, Size len, ReorderBufferTupleBuf *tup);
+static void DecodeSeqTuple(char *data, Size len, ReorderBufferTupleBuf *tuple);
/* helper functions for decoding transactions */
static inline bool FilterPrepare(LogicalDecodingContext *ctx,
@@ -158,6 +160,10 @@ LogicalDecodingProcessRecord(LogicalDecodingContext *ctx, XLogReaderState *recor
DecodeLogicalMsgOp(ctx, &buf);
break;
+ case RM_SEQ_ID:
+ DecodeSequence(ctx, &buf);
+ break;
+
/*
* Rmgrs irrelevant for logical decoding; they describe stuff not
* represented in logical decoding. Add new rmgrs in rmgrlist.h's
@@ -173,7 +179,6 @@ LogicalDecodingProcessRecord(LogicalDecodingContext *ctx, XLogReaderState *recor
case RM_HASH_ID:
case RM_GIN_ID:
case RM_GIST_ID:
- case RM_SEQ_ID:
case RM_SPGIST_ID:
case RM_BRIN_ID:
case RM_COMMIT_TS_ID:
@@ -1313,3 +1318,125 @@ DecodeTXNNeedSkip(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
(txn_dbid != InvalidOid && txn_dbid != ctx->slot->data.database) ||
ctx->fast_forward || FilterByOrigin(ctx, origin_id));
}
+
+/*
+ * Decode Sequence Tuple
+ */
+static void
+DecodeSeqTuple(char *data, Size len, ReorderBufferTupleBuf *tuple)
+{
+ int datalen = len - sizeof(xl_seq_rec) - SizeofHeapTupleHeader;
+
+ Assert(datalen >= 0);
+
+ tuple->tuple.t_len = datalen + SizeofHeapTupleHeader;
+
+ ItemPointerSetInvalid(&tuple->tuple.t_self);
+
+ tuple->tuple.t_tableOid = InvalidOid;
+
+ memcpy(((char *) tuple->tuple.t_data),
+ data + sizeof(xl_seq_rec),
+ SizeofHeapTupleHeader);
+
+ memcpy(((char *) tuple->tuple.t_data) + SizeofHeapTupleHeader,
+ data + sizeof(xl_seq_rec) + SizeofHeapTupleHeader,
+ datalen);
+}
+
+/*
+ * Handle sequence decode
+ *
+ * Decoding sequences is a bit tricky, because while most sequence actions
+ * are non-transactional (not subject to rollback), some need to be handled
+ * as transactional.
+ *
+ * By default, a sequence increment is non-transactional - we must not queue
+ * it in a transaction as other changes, because the transaction might get
+ * rolled back and we'd discard the increment. The downstream would not be
+ * notified about the increment, which is wrong.
+ *
+ * On the other hand, the sequence may be created in a transaction. In this
+ * case we *should* queue the change as other changes in the transaction,
+ * because we don't want to send increments for an unknown sequence to the
+ * plugin - it might get confused about which sequence it's related to etc.
+ */
+static void
+DecodeSequence(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
+{
+ SnapBuild *builder = ctx->snapshot_builder;
+ ReorderBufferTupleBuf *tuplebuf;
+ RelFileNode target_node;
+ XLogReaderState *r = buf->record;
+ char *tupledata = NULL;
+ Size tuplelen;
+ Size datalen = 0;
+ TransactionId xid = XLogRecGetXid(r);
+ uint8 info = XLogRecGetInfo(buf->record) & ~XLR_INFO_MASK;
+ xl_seq_rec *xlrec;
+ Snapshot snapshot;
+ RepOriginId origin_id = XLogRecGetOrigin(r);
+ bool transactional;
+
+ /* only decode changes flagged with XLOG_SEQ_LOG */
+ if (info != XLOG_SEQ_LOG)
+ elog(ERROR, "unexpected RM_SEQ_ID record type: %u", info);
+
+ /*
+ * If we don't have a snapshot or we are just fast-forwarding, there is no
+ * point in decoding sequences.
+ */
+ if (SnapBuildCurrentState(builder) < SNAPBUILD_FULL_SNAPSHOT ||
+ ctx->fast_forward)
+ return;
+
+ /* only interested in our database */
+ XLogRecGetBlockTag(r, 0, &target_node, NULL, NULL);
+ if (target_node.dbNode != ctx->slot->data.database)
+ return;
+
+ /* output plugin doesn't look for this origin, no need to queue */
+ if (FilterByOrigin(ctx, XLogRecGetOrigin(r)))
+ return;
+
+ tupledata = XLogRecGetData(r);
+ datalen = XLogRecGetDataLen(r);
+ tuplelen = datalen - SizeOfHeapHeader - sizeof(xl_seq_rec);
+
+ /* extract the WAL record, with "created" flag */
+ xlrec = (xl_seq_rec *) XLogRecGetData(r);
+
+ /* XXX how could we have sequence change without data? */
+ if (!datalen || !tupledata)
+ return;
+
+ tuplebuf = ReorderBufferGetTupleBuf(ctx->reorder, tuplelen);
+ DecodeSeqTuple(tupledata, datalen, tuplebuf);
+
+ /*
+ * Should we handle the sequence increment as transactional or not?
+ *
+ * If the sequence was created in a still-running transaction, treat
+ * it as transactional and queue the increments. Otherwise it needs
+ * to be treated as non-transactional, in which case we send it to
+ * the plugin right away.
+ */
+ transactional = ReorderBufferSequenceIsTransactional(ctx->reorder,
+ target_node,
+ xlrec->created);
+
+ /* Skip the change if already processed (per the snapshot). */
+ if (transactional &&
+ !SnapBuildProcessChange(builder, xid, buf->origptr))
+ return;
+ else if (!transactional &&
+ (SnapBuildCurrentState(builder) != SNAPBUILD_CONSISTENT ||
+ SnapBuildXactNeedsSkip(builder, buf->origptr)))
+ return;
+
+ /* Queue the increment (or send immediately if not transactional). */
+ snapshot = SnapBuildGetOrBuildSnapshot(builder, xid);
+ ReorderBufferQueueSequence(ctx->reorder, xid, snapshot, buf->endptr,
+ origin_id, target_node, transactional,
+ xlrec->created, tuplebuf);
+}
diff --git a/src/backend/replication/logical/logical.c b/src/backend/replication/logical/logical.c
index 10cbdea124..582e8c2f7d 100644
--- a/src/backend/replication/logical/logical.c
+++ b/src/backend/replication/logical/logical.c
@@ -73,6 +73,10 @@ static void truncate_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
static void message_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
XLogRecPtr message_lsn, bool transactional,
const char *prefix, Size message_size, const char *message);
+static void sequence_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called);
/* streaming callbacks */
static void stream_start_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
@@ -90,6 +94,10 @@ static void stream_change_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn
static void stream_message_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
XLogRecPtr message_lsn, bool transactional,
const char *prefix, Size message_size, const char *message);
+static void stream_sequence_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called);
static void stream_truncate_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
int nrelations, Relation relations[], ReorderBufferChange *change);
@@ -218,6 +226,7 @@ StartupDecodingContext(List *output_plugin_options,
ctx->reorder->apply_truncate = truncate_cb_wrapper;
ctx->reorder->commit = commit_cb_wrapper;
ctx->reorder->message = message_cb_wrapper;
+ ctx->reorder->sequence = sequence_cb_wrapper;
/*
* To support streaming, we require start/stop/abort/commit/change
@@ -234,6 +243,7 @@ StartupDecodingContext(List *output_plugin_options,
(ctx->callbacks.stream_commit_cb != NULL) ||
(ctx->callbacks.stream_change_cb != NULL) ||
(ctx->callbacks.stream_message_cb != NULL) ||
+ (ctx->callbacks.stream_sequence_cb != NULL) ||
(ctx->callbacks.stream_truncate_cb != NULL);
/*
@@ -251,6 +261,7 @@ StartupDecodingContext(List *output_plugin_options,
ctx->reorder->stream_commit = stream_commit_cb_wrapper;
ctx->reorder->stream_change = stream_change_cb_wrapper;
ctx->reorder->stream_message = stream_message_cb_wrapper;
+ ctx->reorder->stream_sequence = stream_sequence_cb_wrapper;
ctx->reorder->stream_truncate = stream_truncate_cb_wrapper;
@@ -1203,6 +1214,43 @@ message_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
error_context_stack = errcallback.previous;
}
+static void
+sequence_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ LogicalDecodingContext *ctx = cache->private_data;
+ LogicalErrorCallbackState state;
+ ErrorContextCallback errcallback;
+
+ Assert(!ctx->fast_forward);
+
+ if (ctx->callbacks.sequence_cb == NULL)
+ return;
+
+ /* Push callback + info on the error context stack */
+ state.ctx = ctx;
+ state.callback_name = "sequence";
+ state.report_location = sequence_lsn;
+ errcallback.callback = output_plugin_error_callback;
+ errcallback.arg = (void *) &state;
+ errcallback.previous = error_context_stack;
+ error_context_stack = &errcallback;
+
+ /* set output state */
+ ctx->accept_writes = true;
+ ctx->write_xid = txn != NULL ? txn->xid : InvalidTransactionId;
+ ctx->write_location = sequence_lsn;
+
+ /* do the actual work: call callback */
+ ctx->callbacks.sequence_cb(ctx, txn, sequence_lsn, rel, transactional,
+ created, last_value, log_cnt, is_called);
+
+ /* Pop the error context stack */
+ error_context_stack = errcallback.previous;
+}
+
static void
stream_start_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
XLogRecPtr first_lsn)
@@ -1508,6 +1556,47 @@ stream_message_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
error_context_stack = errcallback.previous;
}
+static void
+stream_sequence_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ LogicalDecodingContext *ctx = cache->private_data;
+ LogicalErrorCallbackState state;
+ ErrorContextCallback errcallback;
+
+ Assert(!ctx->fast_forward);
+
+ /* We're only supposed to call this when streaming is supported. */
+ Assert(ctx->streaming);
+
+ /* this callback is optional */
+ if (ctx->callbacks.stream_sequence_cb == NULL)
+ return;
+
+ /* Push callback + info on the error context stack */
+ state.ctx = ctx;
+ state.callback_name = "stream_sequence";
+ state.report_location = sequence_lsn;
+ errcallback.callback = output_plugin_error_callback;
+ errcallback.arg = (void *) &state;
+ errcallback.previous = error_context_stack;
+ error_context_stack = &errcallback;
+
+ /* set output state */
+ ctx->accept_writes = true;
+ ctx->write_xid = txn != NULL ? txn->xid : InvalidTransactionId;
+ ctx->write_location = sequence_lsn;
+
+ /* do the actual work: call callback */
+ ctx->callbacks.stream_sequence_cb(ctx, txn, sequence_lsn, rel, transactional,
+ created, last_value, log_cnt, is_called);
+
+ /* Pop the error context stack */
+ error_context_stack = errcallback.previous;
+}
+
static void
stream_truncate_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
int nrelations, Relation relations[],
diff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c
index 46e66608cf..b4fa16bebe 100644
--- a/src/backend/replication/logical/reorderbuffer.c
+++ b/src/backend/replication/logical/reorderbuffer.c
@@ -77,6 +77,35 @@
* a bit more memory to the oldest subtransactions, because it's likely
* they are the source for the next sequence of changes.
*
+ * When decoding sequences, we differentiate between sequences created
+ * in a (running) transaction, and sequences created in other (already
+ * committed) transactions. Changes for sequences created in the same
+ * top-level transaction are treated as "transactional" i.e. just like
+ * any other change from that transaction (and discarded in case of a
+ * rollback). Changes for sequences created earlier are treated as not
+ * transactional - they are processed immediately, as if performed outside
+ * any transaction (and thus not rolled back).
+ *
+ * This mixed behavior is necessary - sequences are non-transactional
+ * (e.g. ROLLBACK does not undo the sequence increments). But for new
+ * sequences, we need to handle them in a transactional way, because if
+ * we ever get some DDL support, the sequence won't exist until the
+ * transaction gets applied. So we need to ensure the increments don't
+ * happen until the sequence gets created.
+ *
+ * To differentiate which sequences are "old" and which were created
+ * in a still-running transaction, we track sequences created in running
+ * transactions in a hash table. Each sequence is identified by its
+ * relfilenode, and at transaction commit we discard all sequences
+ * created in that particular transaction. For each sequence we track
+ * the XID of the (sub)transaction that created it, and we cleanup the
+ * sequences for each subtransaction when it completes (commit/rollback).
+ *
+ * We don't use the XID to check if it's the same top-level transaction.
+ * It's enough to know it was created in an in-progress transaction,
+ * and we know it must be the current one because otherwise it wouldn't
+ * see the sequence object.
+ *
* -------------------------------------------------------------------------
*/
#include "postgres.h"
@@ -91,6 +120,7 @@
#include "access/xact.h"
#include "access/xlog_internal.h"
#include "catalog/catalog.h"
+#include "commands/sequence.h"
#include "lib/binaryheap.h"
#include "miscadmin.h"
#include "pgstat.h"
@@ -116,6 +146,13 @@ typedef struct ReorderBufferTXNByIdEnt
ReorderBufferTXN *txn;
} ReorderBufferTXNByIdEnt;
+/* entry for hash table we use to track sequences created in running xacts */
+typedef struct ReorderBufferSequenceEnt
+{
+ RelFileNode rnode;
+ TransactionId xid;
+} ReorderBufferSequenceEnt;
+
/* data structures for (relfilenode, ctid) => (cmin, cmax) mapping */
typedef struct ReorderBufferTupleCidKey
{
@@ -339,6 +376,14 @@ ReorderBufferAllocate(void)
buffer->by_txn = hash_create("ReorderBufferByXid", 1000, &hash_ctl,
HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+ /* hash table of sequences, mapping relfilenode to XID of transaction */
+ hash_ctl.keysize = sizeof(RelFileNode);
+ hash_ctl.entrysize = sizeof(ReorderBufferSequenceEnt);
+ hash_ctl.hcxt = buffer->context;
+
+ buffer->sequences = hash_create("ReorderBufferSequenceHash", 1000, &hash_ctl,
+ HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+
buffer->by_txn_last_xid = InvalidTransactionId;
buffer->by_txn_last_txn = NULL;
@@ -525,6 +570,13 @@ ReorderBufferReturnChange(ReorderBuffer *rb, ReorderBufferChange *change,
change->data.truncate.relids = NULL;
}
break;
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ if (change->data.sequence.tuple)
+ {
+ ReorderBufferReturnTupleBuf(rb, change->data.sequence.tuple);
+ change->data.sequence.tuple = NULL;
+ }
+ break;
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT:
case REORDER_BUFFER_CHANGE_INTERNAL_COMMAND_ID:
@@ -859,6 +911,231 @@ ReorderBufferQueueMessage(ReorderBuffer *rb, TransactionId xid,
}
}
+/*
+ * Treat the sequence increment as transactional?
+ *
+ * The hash table tracks all sequences created in in-progress transactions,
+ * so we simply do a lookup (the sequence is identified by relfilenode). If
+ * we find a match, the increment should be handled as transactional.
+ */
+bool
+ReorderBufferSequenceIsTransactional(ReorderBuffer *rb,
+ RelFileNode rnode, bool created)
+{
+ bool found = false;
+
+ if (created)
+ return true;
+
+ hash_search(rb->sequences,
+ (void *) &rnode,
+ HASH_FIND,
+ &found);
+
+ return found;
+}
+
+/*
+ * Cleanup sequences created in in-progress transactions.
+ *
+ * There's no way to search by XID, so we simply do a seqscan of all
+ * the entries in the hash table. Hopefully there are only a couple
+ * entries in most cases - people generally don't create many new
+ * sequences over and over.
+ */
+static void
+ReorderBufferSequenceCleanup(ReorderBuffer *rb, TransactionId xid)
+{
+ HASH_SEQ_STATUS scan_status;
+ ReorderBufferSequenceEnt *ent;
+
+ hash_seq_init(&scan_status, rb->sequences);
+ while ((ent = (ReorderBufferSequenceEnt *) hash_seq_search(&scan_status)) != NULL)
+ {
+ /* skip sequences not from this transaction */
+ if (ent->xid != xid)
+ continue;
+
+ (void) hash_search(rb->sequences,
+ (void *) &(ent->rnode),
+ HASH_REMOVE, NULL);
+ }
+}
+
+/*
+ * A transactional sequence increment is queued to be processed upon commit
+ * and a non-transactional increment gets processed immediately.
+ *
+ * A sequence update may be either transactional or non-transactional. If the
+ * sequence was created in a still-running transaction, treat the update as
+ * transactional and queue it in that transaction. Otherwise treat it as
+ * non-transactional, so that we don't lose the increment in case of a rollback.
+ */
+void
+ReorderBufferQueueSequence(ReorderBuffer *rb, TransactionId xid,
+ Snapshot snapshot, XLogRecPtr lsn, RepOriginId origin_id,
+ RelFileNode rnode, bool transactional, bool created,
+ ReorderBufferTupleBuf *tuplebuf)
+{
+ /*
+ * Change needs to be handled as transactional, because the sequence was
+ * created in a transaction that is still running. In that case all the
+ * changes need to be queued in that transaction, we must not send them
+ * to the downstream until the transaction commits.
+ *
+ * There's a bit of trouble with subtransactions - we can't queue it
+ * into the subxact, because it might be rolled back and we'd lose the
+ * increment. We need to queue it into the same (sub)xact that created
+ * the sequence, which is why we track the XID in the hash table.
+ */
+ if (transactional)
+ {
+ MemoryContext oldcontext;
+ ReorderBufferChange *change;
+
+ /* lookup sequence by relfilenode */
+ ReorderBufferSequenceEnt *ent;
+ bool found;
+
+ /* transactional changes require a transaction */
+ Assert(xid != InvalidTransactionId);
+
+ /* Search the lookup table (a new entry is created for the "create" case). */
+ ent = hash_search(rb->sequences,
+ (void *) &rnode,
+ created ? HASH_ENTER : HASH_FIND,
+ &found);
+
+ /*
+ * If this is the "create" increment, we must not have found any
+ * pre-existing entry in the hash table (i.e. there must not be
+ * any conflicting sequence).
+ */
+ Assert(!(created && found));
+
+ /* But we must have either created or found an existing entry. */
+ Assert(created || found);
+
+ /*
+ * When creating the sequence, remember the XID of the transaction
+ * that created it.
+ */
+ if (created)
+ ent->xid = xid;
+
+ /* XXX Maybe check that we're still in the same top-level xact? */
+
+ /* OK, allocate and queue the change */
+ oldcontext = MemoryContextSwitchTo(rb->context);
+
+ change = ReorderBufferGetChange(rb);
+
+ change->action = REORDER_BUFFER_CHANGE_SEQUENCE;
+ change->origin_id = origin_id;
+
+ memcpy(&change->data.sequence.relnode, &rnode, sizeof(RelFileNode));
+
+ change->data.sequence.created = created;
+ change->data.sequence.tuple = tuplebuf;
+
+ /* add it to the same subxact that created the sequence */
+ ReorderBufferQueueChange(rb, ent->xid, lsn, change, false);
+
+ MemoryContextSwitchTo(oldcontext);
+ }
+ else
+ {
+ /*
+ * This increment is for a sequence that was not created in any
+ * running transaction, so we treat it as non-transactional and
+ * just send it to the output plugin directly.
+ */
+ ReorderBufferTXN *txn = NULL;
+ volatile Snapshot snapshot_now = snapshot;
+ bool using_subtxn;
+
+#ifdef USE_ASSERT_CHECKING
+ /* All "creates" have to be handled as transactional. */
+ Assert(!created);
+
+ /* Make sure the sequence is not in the hash table. */
+ {
+ bool found;
+ hash_search(rb->sequences,
+ (void *) &rnode,
+ HASH_FIND, &found);
+ Assert(!found);
+ }
+#endif
+
+ if (xid != InvalidTransactionId)
+ txn = ReorderBufferTXNByXid(rb, xid, true, NULL, lsn, true);
+
+ /* setup snapshot to allow catalog access */
+ SetupHistoricSnapshot(snapshot_now, NULL);
+
+ /*
+ * Decoding needs access to syscaches et al., which in turn use
+ * heavyweight locks and such. Thus we need to have enough state around to
+ * keep track of those. The easiest way is to simply use a transaction
+ * internally. That also allows us to easily enforce that nothing writes
+ * to the database by checking for xid assignments.
+ *
+ * When we're called via the SQL SRF there's already a transaction
+ * started, so start an explicit subtransaction there.
+ */
+ using_subtxn = IsTransactionOrTransactionBlock();
+
+ PG_TRY();
+ {
+ Relation relation;
+ HeapTuple tuple;
+ Form_pg_sequence_data seq;
+ Oid reloid;
+
+ if (using_subtxn)
+ BeginInternalSubTransaction("sequence");
+ else
+ StartTransactionCommand();
+
+ reloid = RelidByRelfilenode(rnode.spcNode, rnode.relNode);
+
+ if (reloid == InvalidOid)
+ elog(ERROR, "could not map filenode \"%s\" to relation OID",
+ relpathperm(rnode,
+ MAIN_FORKNUM));
+
+ relation = RelationIdGetRelation(reloid);
+ tuple = &tuplebuf->tuple;
+ seq = (Form_pg_sequence_data) GETSTRUCT(tuple);
+
+ rb->sequence(rb, txn, lsn, relation, transactional, created,
+ seq->last_value, seq->log_cnt, seq->is_called);
+
+ RelationClose(relation);
+
+ TeardownHistoricSnapshot(false);
+
+ AbortCurrentTransaction();
+
+ if (using_subtxn)
+ RollbackAndReleaseCurrentSubTransaction();
+ }
+ PG_CATCH();
+ {
+ TeardownHistoricSnapshot(true);
+
+ AbortCurrentTransaction();
+
+ if (using_subtxn)
+ RollbackAndReleaseCurrentSubTransaction();
+
+ PG_RE_THROW();
+ }
+ PG_END_TRY();
+ }
+}
+
/*
* AssertTXNLsnOrder
* Verify LSN ordering of transaction lists in the reorderbuffer
@@ -1535,6 +1812,9 @@ ReorderBufferCleanupTXN(ReorderBuffer *rb, ReorderBufferTXN *txn)
&found);
Assert(found);
+ /* Remove sequences created in this transaction (if any). */
+ ReorderBufferSequenceCleanup(rb, txn->xid);
+
/* remove entries spilled to disk */
if (rbtxn_is_serialized(txn))
ReorderBufferRestoreCleanup(rb, txn);
@@ -1950,6 +2230,31 @@ ReorderBufferApplyMessage(ReorderBuffer *rb, ReorderBufferTXN *txn,
change->data.msg.message);
}
+/*
+ * Helper function for ReorderBufferProcessTXN for applying sequences.
+ */
+static inline void
+ReorderBufferApplySequence(ReorderBuffer *rb, ReorderBufferTXN *txn,
+ Relation relation, ReorderBufferChange *change,
+ bool streaming)
+{
+ HeapTuple tuple;
+ Form_pg_sequence_data seq;
+
+ tuple = &change->data.sequence.tuple->tuple;
+ seq = (Form_pg_sequence_data) GETSTRUCT(tuple);
+
+ /* Only ever called from ReorderBufferProcessTXN, so the change is transactional. */
+ if (streaming)
+ rb->stream_sequence(rb, txn, change->lsn, relation, true,
+ change->data.sequence.created,
+ seq->last_value, seq->log_cnt, seq->is_called);
+ else
+ rb->sequence(rb, txn, change->lsn, relation, true,
+ change->data.sequence.created,
+ seq->last_value, seq->log_cnt, seq->is_called);
+}
+
/*
* Function to store the command id and snapshot at the end of the current
* stream so that we can reuse the same while sending the next stream.
@@ -2392,6 +2697,31 @@ ReorderBufferProcessTXN(ReorderBuffer *rb, ReorderBufferTXN *txn,
case REORDER_BUFFER_CHANGE_INTERNAL_TUPLECID:
elog(ERROR, "tuplecid value in changequeue");
break;
+
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ Assert(snapshot_now);
+
+ reloid = RelidByRelfilenode(change->data.sequence.relnode.spcNode,
+ change->data.sequence.relnode.relNode);
+
+ if (reloid == InvalidOid)
+ elog(ERROR, "could not map filenode \"%s\" to relation OID",
+ relpathperm(change->data.sequence.relnode,
+ MAIN_FORKNUM));
+
+ relation = RelationIdGetRelation(reloid);
+
+ if (!RelationIsValid(relation))
+ elog(ERROR, "could not open relation with OID %u (for filenode \"%s\")",
+ reloid,
+ relpathperm(change->data.sequence.relnode,
+ MAIN_FORKNUM));
+
+ if (RelationIsLogicallyLogged(relation))
+ ReorderBufferApplySequence(rb, txn, relation, change, streaming);
+
+ RelationClose(relation);
+ break;
}
}
@@ -3776,6 +4106,39 @@ ReorderBufferSerializeChange(ReorderBuffer *rb, ReorderBufferTXN *txn,
memcpy(data, change->data.truncate.relids, size);
data += size;
+ break;
+ }
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ {
+ char *data;
+ ReorderBufferTupleBuf *tup;
+ Size len = 0;
+
+ tup = change->data.sequence.tuple;
+
+ if (tup)
+ {
+ sz += sizeof(HeapTupleData);
+ len = tup->tuple.t_len;
+ sz += len;
+ }
+
+ /* make sure we have enough space */
+ ReorderBufferSerializeReserve(rb, sz);
+
+ data = ((char *) rb->outbuf) + sizeof(ReorderBufferDiskChange);
+ /* might have been reallocated above */
+ ondisk = (ReorderBufferDiskChange *) rb->outbuf;
+
+ if (len)
+ {
+ memcpy(data, &tup->tuple, sizeof(HeapTupleData));
+ data += sizeof(HeapTupleData);
+
+ memcpy(data, tup->tuple.t_data, len);
+ data += len;
+ }
+
break;
}
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
@@ -4040,6 +4403,22 @@ ReorderBufferChangeSize(ReorderBufferChange *change)
{
sz += sizeof(Oid) * change->data.truncate.nrelids;
+ break;
+ }
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ {
+ ReorderBufferTupleBuf *tup;
+ Size len = 0;
+
+ tup = change->data.sequence.tuple;
+
+ if (tup)
+ {
+ sz += sizeof(HeapTupleData);
+ len = tup->tuple.t_len;
+ sz += len;
+ }
+
break;
}
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
@@ -4341,6 +4720,30 @@ ReorderBufferRestoreChange(ReorderBuffer *rb, ReorderBufferTXN *txn,
break;
}
+
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ if (change->data.sequence.tuple)
+ {
+ uint32 tuplelen = ((HeapTuple) data)->t_len;
+
+ change->data.sequence.tuple =
+ ReorderBufferGetTupleBuf(rb, tuplelen - SizeofHeapTupleHeader);
+
+ /* restore ->tuple */
+ memcpy(&change->data.sequence.tuple->tuple, data,
+ sizeof(HeapTupleData));
+ data += sizeof(HeapTupleData);
+
+ /* reset t_data pointer into the new tuplebuf */
+ change->data.sequence.tuple->tuple.t_data =
+ ReorderBufferTupleBufData(change->data.sequence.tuple);
+
+ /* restore tuple data itself */
+ memcpy(change->data.sequence.tuple->tuple.t_data, data, tuplelen);
+ data += tuplelen;
+ }
+ break;
+
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT:
case REORDER_BUFFER_CHANGE_INTERNAL_COMMAND_ID:
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 40544dd4c7..5919fb90ee 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -48,6 +48,7 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
typedef struct xl_seq_rec
{
RelFileNode node;
+ bool created; /* is this a CREATE SEQUENCE */
/* SEQUENCE TUPLE DATA FOLLOWS AT THE END */
} xl_seq_rec;
diff --git a/src/include/replication/output_plugin.h b/src/include/replication/output_plugin.h
index 810495ed0e..57bc13f11c 100644
--- a/src/include/replication/output_plugin.h
+++ b/src/include/replication/output_plugin.h
@@ -88,6 +88,19 @@ typedef void (*LogicalDecodeMessageCB) (struct LogicalDecodingContext *ctx,
Size message_size,
const char *message);
+/*
+ * Called for logical decoding of sequence changes.
+ */
+typedef void (*LogicalDecodeSequenceCB) (struct LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional,
+ bool created,
+ int64 last_value,
+ int64 log_cnt,
+ bool is_called);
+
/*
* Filter changes by origin.
*/
@@ -199,6 +212,20 @@ typedef void (*LogicalDecodeStreamMessageCB) (struct LogicalDecodingContext *ctx
Size message_size,
const char *message);
+/*
+ * Called for streaming logical decoding of sequence changes from in-progress
+ * transactions.
+ */
+typedef void (*LogicalDecodeStreamSequenceCB) (struct LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional,
+ bool created,
+ int64 last_value,
+ int64 log_cnt,
+ bool is_called);
+
/*
* Callback for streaming truncates from in-progress transactions.
*/
@@ -219,6 +246,7 @@ typedef struct OutputPluginCallbacks
LogicalDecodeTruncateCB truncate_cb;
LogicalDecodeCommitCB commit_cb;
LogicalDecodeMessageCB message_cb;
+ LogicalDecodeSequenceCB sequence_cb;
LogicalDecodeFilterByOriginCB filter_by_origin_cb;
LogicalDecodeShutdownCB shutdown_cb;
@@ -237,6 +265,7 @@ typedef struct OutputPluginCallbacks
LogicalDecodeStreamCommitCB stream_commit_cb;
LogicalDecodeStreamChangeCB stream_change_cb;
LogicalDecodeStreamMessageCB stream_message_cb;
+ LogicalDecodeStreamSequenceCB stream_sequence_cb;
LogicalDecodeStreamTruncateCB stream_truncate_cb;
} OutputPluginCallbacks;
diff --git a/src/include/replication/reorderbuffer.h b/src/include/replication/reorderbuffer.h
index 5b40ff75f7..63e6ed037b 100644
--- a/src/include/replication/reorderbuffer.h
+++ b/src/include/replication/reorderbuffer.h
@@ -64,7 +64,8 @@ enum ReorderBufferChangeType
REORDER_BUFFER_CHANGE_INTERNAL_SPEC_INSERT,
REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM,
REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT,
- REORDER_BUFFER_CHANGE_TRUNCATE
+ REORDER_BUFFER_CHANGE_TRUNCATE,
+ REORDER_BUFFER_CHANGE_SEQUENCE
};
/* forward declaration */
@@ -158,6 +159,14 @@ typedef struct ReorderBufferChange
uint32 ninvalidations; /* Number of messages */
SharedInvalidationMessage *invalidations; /* invalidation message */
} inval;
+
+ /* Context data for Sequence changes */
+ struct
+ {
+ RelFileNode relnode;
+ bool created;
+ ReorderBufferTupleBuf *tuple;
+ } sequence;
} data;
/*
@@ -430,6 +439,15 @@ typedef void (*ReorderBufferMessageCB) (ReorderBuffer *rb,
const char *prefix, Size sz,
const char *message);
+/* sequence callback signature */
+typedef void (*ReorderBufferSequenceCB) (ReorderBuffer *rb,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional, bool created,
+ int64 last_value, int64 log_cnt,
+ bool is_called);
+
/* begin prepare callback signature */
typedef void (*ReorderBufferBeginPrepareCB) (ReorderBuffer *rb,
ReorderBufferTXN *txn);
@@ -496,6 +514,15 @@ typedef void (*ReorderBufferStreamMessageCB) (
const char *prefix, Size sz,
const char *message);
+/* stream sequence callback signature */
+typedef void (*ReorderBufferStreamSequenceCB) (ReorderBuffer *rb,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional, bool created,
+ int64 last_value, int64 log_cnt,
+ bool is_called);
+
/* stream truncate callback signature */
typedef void (*ReorderBufferStreamTruncateCB) (
ReorderBuffer *rb,
@@ -511,6 +538,12 @@ struct ReorderBuffer
*/
HTAB *by_txn;
+ /*
+ * relfilenode => XID lookup table for sequences created in a transaction
+ * (also includes altered sequences, which get assigned a new relfilenode)
+ */
+ HTAB *sequences;
+
/*
* Transactions that could be a toplevel xact, ordered by LSN of the first
* record bearing that xid.
@@ -541,6 +574,7 @@ struct ReorderBuffer
ReorderBufferApplyTruncateCB apply_truncate;
ReorderBufferCommitCB commit;
ReorderBufferMessageCB message;
+ ReorderBufferSequenceCB sequence;
/*
* Callbacks to be called when streaming a transaction at prepare time.
@@ -560,6 +594,7 @@ struct ReorderBuffer
ReorderBufferStreamCommitCB stream_commit;
ReorderBufferStreamChangeCB stream_change;
ReorderBufferStreamMessageCB stream_message;
+ ReorderBufferStreamSequenceCB stream_sequence;
ReorderBufferStreamTruncateCB stream_truncate;
/*
@@ -635,6 +670,10 @@ void ReorderBufferQueueChange(ReorderBuffer *, TransactionId,
void ReorderBufferQueueMessage(ReorderBuffer *, TransactionId, Snapshot snapshot, XLogRecPtr lsn,
bool transactional, const char *prefix,
Size message_size, const char *message);
+void ReorderBufferQueueSequence(ReorderBuffer *rb, TransactionId xid,
+ Snapshot snapshot, XLogRecPtr lsn, RepOriginId origin_id,
+ RelFileNode rnode, bool transactional, bool created,
+ ReorderBufferTupleBuf *tuplebuf);
void ReorderBufferCommit(ReorderBuffer *, TransactionId,
XLogRecPtr commit_lsn, XLogRecPtr end_lsn,
TimestampTz commit_time, RepOriginId origin_id, XLogRecPtr origin_lsn);
@@ -682,4 +721,7 @@ void ReorderBufferSetRestartPoint(ReorderBuffer *, XLogRecPtr ptr);
void StartupReorderBuffer(void);
+bool ReorderBufferSequenceIsTransactional(ReorderBuffer *rb,
+ RelFileNode rnode, bool created);
+
#endif
--
2.31.1
0002-Add-support-for-decoding-sequences-to-test_-20211208.patch
From 936470a0cf0eaa56c90b32b23c25dc317a1d3a55 Mon Sep 17 00:00:00 2001
From: Tomas Vondra <tomas.vondra@postgresql.org>
Date: Fri, 24 Sep 2021 00:42:04 +0200
Subject: [PATCH 2/2] Add support for decoding sequences to test_decoding
---
contrib/test_decoding/Makefile | 3 +-
contrib/test_decoding/expected/sequence.out | 327 ++++++++++++++++++++
contrib/test_decoding/sql/sequence.sql | 119 +++++++
contrib/test_decoding/test_decoding.c | 69 +++++
4 files changed, 517 insertions(+), 1 deletion(-)
create mode 100644 contrib/test_decoding/expected/sequence.out
create mode 100644 contrib/test_decoding/sql/sequence.sql
diff --git a/contrib/test_decoding/Makefile b/contrib/test_decoding/Makefile
index 9a31e0b879..56ddc3abae 100644
--- a/contrib/test_decoding/Makefile
+++ b/contrib/test_decoding/Makefile
@@ -5,7 +5,8 @@ PGFILEDESC = "test_decoding - example of a logical decoding output plugin"
REGRESS = ddl xact rewrite toast permissions decoding_in_xact \
decoding_into_rel binary prepared replorigin time messages \
- spill slot truncate stream stats twophase twophase_stream
+ spill slot truncate stream stats twophase twophase_stream \
+ sequence
ISOLATION = mxact delayed_startup ondisk_startup concurrent_ddl_dml \
oldest_xmin snapshot_transfer subxact_without_top concurrent_stream \
twophase_snapshot
diff --git a/contrib/test_decoding/expected/sequence.out b/contrib/test_decoding/expected/sequence.out
new file mode 100644
index 0000000000..d94e185fb9
--- /dev/null
+++ b/contrib/test_decoding/expected/sequence.out
@@ -0,0 +1,327 @@
+-- predictability
+SET synchronous_commit = on;
+SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'test_decoding');
+ ?column?
+----------
+ init
+(1 row)
+
+CREATE SEQUENCE test_sequence;
+-- test the sequence changes by several nextval() calls
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 1
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 2
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 4
+(1 row)
+
+-- test the sequence changes by several ALTER commands
+ALTER SEQUENCE test_sequence INCREMENT BY 10;
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 14
+(1 row)
+
+ALTER SEQUENCE test_sequence START WITH 3000;
+ALTER SEQUENCE test_sequence MAXVALUE 10000;
+ALTER SEQUENCE test_sequence RESTART WITH 4000;
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 4000
+(1 row)
+
+-- test the sequence changes by several setval() calls
+SELECT setval('test_sequence', 3500);
+ setval
+--------
+ 3500
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3510
+(1 row)
+
+SELECT setval('test_sequence', 3500, true);
+ setval
+--------
+ 3500
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3510
+(1 row)
+
+SELECT setval('test_sequence', 3500, false);
+ setval
+--------
+ 3500
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3500
+(1 row)
+
+-- show results and drop sequence
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+ data
+------------------------------------------------------------------------------------------------
+ BEGIN
+ sequence public.test_sequence: transactional:1 created:1 last_value:1 log_cnt:0 is_called:0
+ COMMIT
+ sequence public.test_sequence: transactional:0 created:0 last_value:33 log_cnt:0 is_called:1
+ BEGIN
+ sequence public.test_sequence: transactional:1 created:1 last_value:4 log_cnt:0 is_called:1
+ COMMIT
+ sequence public.test_sequence: transactional:0 created:0 last_value:334 log_cnt:0 is_called:1
+ BEGIN
+ sequence public.test_sequence: transactional:1 created:1 last_value:14 log_cnt:32 is_called:1
+ COMMIT
+ BEGIN
+ sequence public.test_sequence: transactional:1 created:1 last_value:14 log_cnt:0 is_called:1
+ COMMIT
+ BEGIN
+ sequence public.test_sequence: transactional:1 created:1 last_value:4000 log_cnt:0 is_called:0
+ COMMIT
+ sequence public.test_sequence: transactional:0 created:0 last_value:4320 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:0 created:0 last_value:3500 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:0 created:0 last_value:3830 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:0 created:0 last_value:3500 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:0 created:0 last_value:3830 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:0 created:0 last_value:3500 log_cnt:0 is_called:0
+ sequence public.test_sequence: transactional:0 created:0 last_value:3820 log_cnt:0 is_called:1
+(24 rows)
+
+DROP SEQUENCE test_sequence;
+-- rollback on sequence creation and update
+BEGIN;
+CREATE SEQUENCE test_sequence;
+CREATE TABLE test_table (a INT);
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 1
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 2
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3
+(1 row)
+
+SELECT setval('test_sequence', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3001
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3002
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3003
+(1 row)
+
+ALTER SEQUENCE test_sequence RESTART WITH 6000;
+INSERT INTO test_table VALUES( (SELECT nextval('test_sequence')) );
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 6001
+(1 row)
+
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '0');
+ data
+------
+(0 rows)
+
+-- rollback on table creation with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '0');
+ data
+------
+(0 rows)
+
+-- rollback on table with serial column
+CREATE TABLE test_table (a SERIAL, b INT);
+BEGIN;
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+-- check table and sequence values after rollback
+SELECT * from test_table_a_seq;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+ 3003 | 30 | t
+(1 row)
+
+SELECT nextval('test_table_a_seq');
+ nextval
+---------
+ 3004
+(1 row)
+
+DROP TABLE test_table;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+ data
+---------------------------------------------------------------------------------------------------
+ BEGIN
+ sequence public.test_table_a_seq: transactional:1 created:1 last_value:1 log_cnt:0 is_called:0
+ COMMIT
+ sequence public.test_table_a_seq: transactional:0 created:0 last_value:33 log_cnt:0 is_called:1
+ sequence public.test_table_a_seq: transactional:0 created:0 last_value:3000 log_cnt:0 is_called:1
+ sequence public.test_table_a_seq: transactional:0 created:0 last_value:3033 log_cnt:0 is_called:1
+ BEGIN
+ COMMIT
+(8 rows)
+
+-- savepoint test on table with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+SAVEPOINT a;
+INSERT INTO test_table (b) VALUES (300);
+ROLLBACK TO SAVEPOINT a;
+DROP TABLE test_table;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+ data
+-------------------------------------------------------------------------------------------------
+ BEGIN
+ sequence public.test_table_a_seq: transactional:1 created:1 last_value:1 log_cnt:0 is_called:0
+ sequence public.test_table_a_seq: transactional:1 created:0 last_value:33 log_cnt:0 is_called:1
+ table public.test_table: INSERT: a[integer]:1 b[integer]:100
+ table public.test_table: INSERT: a[integer]:2 b[integer]:200
+ COMMIT
+(6 rows)
+
+-- savepoint test on sequence
+BEGIN;
+CREATE SEQUENCE test_sequence;
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 1
+(1 row)
+
+SELECT setval('test_sequence', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3001
+(1 row)
+
+SAVEPOINT a;
+ALTER SEQUENCE test_sequence START WITH 7000;
+SELECT setval('test_sequence', 5000);
+ setval
+--------
+ 5000
+(1 row)
+
+ROLLBACK TO SAVEPOINT a;
+SELECT * FROM test_sequence;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+ 3001 | 32 | t
+(1 row)
+
+DROP SEQUENCE test_sequence;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+ data
+------------------------------------------------------------------------------------------------
+ BEGIN
+ sequence public.test_sequence: transactional:1 created:1 last_value:1 log_cnt:0 is_called:0
+ sequence public.test_sequence: transactional:1 created:0 last_value:33 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:1 created:0 last_value:3000 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:1 created:0 last_value:3033 log_cnt:0 is_called:1
+ COMMIT
+(6 rows)
+
+SELECT pg_drop_replication_slot('regression_slot');
+ pg_drop_replication_slot
+--------------------------
+
+(1 row)
+
diff --git a/contrib/test_decoding/sql/sequence.sql b/contrib/test_decoding/sql/sequence.sql
new file mode 100644
index 0000000000..21c4b79222
--- /dev/null
+++ b/contrib/test_decoding/sql/sequence.sql
@@ -0,0 +1,119 @@
+-- predictability
+SET synchronous_commit = on;
+SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'test_decoding');
+
+CREATE SEQUENCE test_sequence;
+
+-- test the sequence changes by several nextval() calls
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+
+-- test the sequence changes by several ALTER commands
+ALTER SEQUENCE test_sequence INCREMENT BY 10;
+SELECT nextval('test_sequence');
+
+ALTER SEQUENCE test_sequence START WITH 3000;
+ALTER SEQUENCE test_sequence MAXVALUE 10000;
+ALTER SEQUENCE test_sequence RESTART WITH 4000;
+SELECT nextval('test_sequence');
+
+-- test the sequence changes by several setval() calls
+SELECT setval('test_sequence', 3500);
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3500, true);
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3500, false);
+SELECT nextval('test_sequence');
+
+-- show results and drop sequence
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+DROP SEQUENCE test_sequence;
+
+-- rollback on sequence creation and update
+BEGIN;
+CREATE SEQUENCE test_sequence;
+CREATE TABLE test_table (a INT);
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3000);
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+ALTER SEQUENCE test_sequence RESTART WITH 6000;
+INSERT INTO test_table VALUES( (SELECT nextval('test_sequence')) );
+SELECT nextval('test_sequence');
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '0');
+
+-- rollback on table creation with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '0');
+
+-- rollback on table with serial column
+CREATE TABLE test_table (a SERIAL, b INT);
+
+BEGIN;
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+
+-- check table and sequence values after rollback
+SELECT * from test_table_a_seq;
+SELECT nextval('test_table_a_seq');
+
+DROP TABLE test_table;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+
+-- savepoint test on table with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+SAVEPOINT a;
+INSERT INTO test_table (b) VALUES (300);
+ROLLBACK TO SAVEPOINT a;
+DROP TABLE test_table;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+
+-- savepoint test on sequence
+BEGIN;
+CREATE SEQUENCE test_sequence;
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3000);
+SELECT nextval('test_sequence');
+SAVEPOINT a;
+ALTER SEQUENCE test_sequence START WITH 7000;
+SELECT setval('test_sequence', 5000);
+ROLLBACK TO SAVEPOINT a;
+SELECT * FROM test_sequence;
+DROP SEQUENCE test_sequence;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+
+SELECT pg_drop_replication_slot('regression_slot');
diff --git a/contrib/test_decoding/test_decoding.c b/contrib/test_decoding/test_decoding.c
index e5cd84e85e..45765d299e 100644
--- a/contrib/test_decoding/test_decoding.c
+++ b/contrib/test_decoding/test_decoding.c
@@ -35,6 +35,7 @@ typedef struct
bool include_timestamp;
bool skip_empty_xacts;
bool only_local;
+ bool skip_sequences;
} TestDecodingData;
/*
@@ -76,6 +77,10 @@ static void pg_decode_message(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn, XLogRecPtr message_lsn,
bool transactional, const char *prefix,
Size sz, const char *message);
+static void pg_decode_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation rel, bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called);
static bool pg_decode_filter_prepare(LogicalDecodingContext *ctx,
TransactionId xid,
const char *gid);
@@ -116,6 +121,10 @@ static void pg_decode_stream_message(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn, XLogRecPtr message_lsn,
bool transactional, const char *prefix,
Size sz, const char *message);
+static void pg_decode_stream_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation rel, bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called);
static void pg_decode_stream_truncate(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn,
int nrelations, Relation relations[],
@@ -141,6 +150,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->filter_by_origin_cb = pg_decode_filter;
cb->shutdown_cb = pg_decode_shutdown;
cb->message_cb = pg_decode_message;
+ cb->sequence_cb = pg_decode_sequence;
cb->filter_prepare_cb = pg_decode_filter_prepare;
cb->begin_prepare_cb = pg_decode_begin_prepare_txn;
cb->prepare_cb = pg_decode_prepare_txn;
@@ -153,6 +163,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->stream_commit_cb = pg_decode_stream_commit;
cb->stream_change_cb = pg_decode_stream_change;
cb->stream_message_cb = pg_decode_stream_message;
+ cb->stream_sequence_cb = pg_decode_stream_sequence;
cb->stream_truncate_cb = pg_decode_stream_truncate;
}
@@ -175,6 +186,9 @@ pg_decode_startup(LogicalDecodingContext *ctx, OutputPluginOptions *opt,
data->skip_empty_xacts = false;
data->only_local = false;
+ /* skip sequences by default for backwards compatibility */
+ data->skip_sequences = true;
+
ctx->output_plugin_private = data;
opt->output_type = OUTPUT_PLUGIN_TEXTUAL_OUTPUT;
@@ -265,6 +279,17 @@ pg_decode_startup(LogicalDecodingContext *ctx, OutputPluginOptions *opt,
errmsg("could not parse value \"%s\" for parameter \"%s\"",
strVal(elem->arg), elem->defname)));
}
+ else if (strcmp(elem->defname, "skip-sequences") == 0)
+ {
+
+ if (elem->arg == NULL)
+ continue; /* true by default */
+ else if (!parse_bool(strVal(elem->arg), &data->skip_sequences))
+ ereport(ERROR,
+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ errmsg("could not parse value \"%s\" for parameter \"%s\"",
+ strVal(elem->arg), elem->defname)));
+ }
else
{
ereport(ERROR,
@@ -744,6 +769,28 @@ pg_decode_message(LogicalDecodingContext *ctx,
OutputPluginWrite(ctx, true);
}
+static void
+pg_decode_sequence(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ TestDecodingData *data = ctx->output_plugin_private;
+
+ /* return if requested to skip_sequences */
+ if (data->skip_sequences)
+ return;
+
+ OutputPluginPrepareWrite(ctx, true);
+ appendStringInfoString(ctx->out, "sequence ");
+ appendStringInfoString(ctx->out,
+ quote_qualified_identifier(get_namespace_name(get_rel_namespace(RelationGetRelid(rel))),
+ RelationGetRelationName(rel)));
+ appendStringInfo(ctx->out, ": transactional:%d created:%d last_value:%zu log_cnt:%zu is_called:%d",
+ transactional, created, last_value, log_cnt, is_called);
+ OutputPluginWrite(ctx, true);
+}
+
static void
pg_decode_stream_start(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn)
@@ -943,6 +990,28 @@ pg_decode_stream_message(LogicalDecodingContext *ctx,
OutputPluginWrite(ctx, true);
}
+static void
+pg_decode_stream_sequence(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional, bool created,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ TestDecodingData *data = ctx->output_plugin_private;
+
+ /* return if requested to skip_sequences */
+ if (data->skip_sequences)
+ return;
+
+ OutputPluginPrepareWrite(ctx, true);
+ appendStringInfoString(ctx->out, "streaming sequence ");
+ appendStringInfoString(ctx->out,
+ quote_qualified_identifier(get_namespace_name(get_rel_namespace(RelationGetRelid(rel))),
+ RelationGetRelationName(rel)));
+ appendStringInfo(ctx->out, ": transactional:%d created:%d last_value:%zu log_cnt:%zu is_called:%d",
+ transactional, created, last_value, log_cnt, is_called);
+ OutputPluginWrite(ctx, true);
+}
+
/*
* In streaming mode, we don't display the detailed information of Truncate.
* See pg_decode_stream_change.
--
2.31.1
On 08.12.21 01:23, Tomas Vondra wrote:
>> The argument "create" for fill_seq_with_data() is always true (and
>> patch 0002 removes it). So I'm not sure if it's needed. If it is, it
>> should be documented somewhere.
>
> Good point. I think it could be removed, but IIRC it'll be needed when
> adding sequence decoding to the built-in replication, and that patch is
> meant to be just an implementation of the API, without changing WAL.
>
> OTOH I don't see it in the last version of that patch (in ResetSequence2
> call) so maybe it's not really needed. I've kept it for now, but I'll
> investigate.

Ok, please check. If it is needed, perhaps then we need a way for
test_decoding() to simulate it, for testing. But perhaps it's not needed.

>> The order of the new fields sequence_cb and stream_sequence_cb is a
>> bit inconsistent compared to truncate_cb and stream_truncate_cb.
>> Maybe take another look to make the order more uniform.
>
> You mean in OutputPluginCallbacks? I'd actually argue it's the truncate
> callbacks that are inconsistent - in the regular section truncate_cb is
> right before commit_cb, while in the streaming section it's at the end.

Ok, that makes sense. Then leave yours.

When the question about fill_seq_with_data() is resolved, I have no more
comments on this part.
Hi,
here's an updated version of the patches, dealing with almost all of the
issues (at least in the 0001 and 0002 parts). The main changes:
1) I've removed the 'created' flag from fill_seq_with_data, as
discussed. I don't think it's needed by any of the parts (not even 0003,
AFAICS). We still need it in xl_seq_rec, though.
2) The GetCurrentTransactionId() calls added to sequence.c are made only
with wal_level=logical, to minimize the overhead.
There's still one remaining problem, which I already explained in [1].
The problem is that with this:
BEGIN;
SELECT nextval('s') FROM generate_series(1,100);
ROLLBACK;
the WAL records for those nextval() calls may not be covered by the LSN
returned by pg_current_wal_lsn(). The root cause is that
pg_current_wal_lsn() uses the LogwrtResult.Write position, which is updated
by XLogFlush() - but only in RecordTransactionCommit. That makes sense,
because only the committed stuff is "visible".
But the non-transactional behavior of sequence decoding disagrees with
this, because now some of the changes from aborted transactions may be
replicated. As a result, wait_for_catchup() ends up not waiting for the
sequence change to be replicated. This is an issue for the tests in
patch 0003, at least.
My concern is that this actually affects other places waiting for things
to get replicated :-/
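
To make the mismatch easier to see, here is a rough sketch of how it can be
observed from SQL (just an illustration, not part of the patches; it assumes
an otherwise idle server, so the exact LSNs and the size of the gap are
timing-dependent):

CREATE SEQUENCE s;

BEGIN;
SELECT nextval('s') FROM generate_series(1,100);
ROLLBACK;

-- pg_current_wal_insert_lsn() already covers the WAL for the nextval()
-- calls above, but pg_current_wal_lsn() (the write position, which is what
-- the wait_for_catchup() check is effectively based on) may still return a
-- smaller value, because the aborted transaction never forced XLogFlush().
SELECT pg_current_wal_insert_lsn(), pg_current_wal_lsn();

Any wait that targets the second of those two LSNs can therefore complete
before the decoded (non-transactional) sequence increments are replicated.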
I have outlined some ideas on how to deal with this in [1], but we got
sidetracked by exploring the alternative approach.
regards
[1] /messages/by-id/3d6df331-5532-6848-eb45-344b265e0238@enterprisedb.com
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Attachments:
0003-Add-support-for-decoding-sequences-to-built-20211214.patch
From 27522e687bca7c067611343e4d22f57deeb9d03c Mon Sep 17 00:00:00 2001
From: Tomas Vondra <tomas@2ndquadrant.com>
Date: Tue, 14 Dec 2021 00:29:09 +0100
Subject: [PATCH 3/3] Add support for decoding sequences to built-in
replication
---
doc/src/sgml/catalogs.sgml | 71 ++++
doc/src/sgml/ref/alter_publication.sgml | 24 +-
doc/src/sgml/ref/alter_subscription.sgml | 4 +-
src/backend/catalog/pg_publication.c | 149 ++++++++-
src/backend/catalog/system_views.sql | 10 +
src/backend/commands/publicationcmds.c | 350 +++++++++++++++++++-
src/backend/commands/sequence.c | 79 +++++
src/backend/commands/subscriptioncmds.c | 272 +++++++++++++++
src/backend/executor/execReplication.c | 2 +-
src/backend/nodes/copyfuncs.c | 1 +
src/backend/nodes/equalfuncs.c | 1 +
src/backend/parser/gram.y | 32 ++
src/backend/replication/logical/proto.c | 52 +++
src/backend/replication/logical/tablesync.c | 118 ++++++-
src/backend/replication/logical/worker.c | 60 ++++
src/backend/replication/pgoutput/pgoutput.c | 85 ++++-
src/backend/utils/cache/relcache.c | 4 +-
src/bin/psql/tab-complete.c | 6 +-
src/include/catalog/pg_proc.dat | 5 +
src/include/catalog/pg_publication.h | 14 +
src/include/commands/sequence.h | 1 +
src/include/nodes/parsenodes.h | 6 +
src/include/replication/logicalproto.h | 19 ++
src/include/replication/pgoutput.h | 1 +
src/test/regress/expected/rules.out | 8 +
src/test/subscription/t/027_sequences.pl | 196 +++++++++++
26 files changed, 1538 insertions(+), 32 deletions(-)
create mode 100644 src/test/subscription/t/027_sequences.pl
diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 03e2537b07d..0dfeb36f03f 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -9514,6 +9514,11 @@ SCRAM-SHA-256$<replaceable><iteration count></replaceable>:<replaceable>&l
<entry>prepared transactions</entry>
</row>
+ <row>
+ <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+ <entry>publications and their associated sequences</entry>
+ </row>
+
<row>
<entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
<entry>publications and their associated tables</entry>
@@ -11349,6 +11354,72 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
</sect1>
+ <sect1 id="view-pg-publication-sequences">
+ <title><structname>pg_publication_sequences</structname></title>
+
+ <indexterm zone="view-pg-publication-sequences">
+ <primary>pg_publication_sequences</primary>
+ </indexterm>
+
+ <para>
+ The view <structname>pg_publication_sequences</structname> provides
+ information about the mapping between publications and the sequences they
+ contain. Unlike the underlying catalog
+ <link linkend="catalog-pg-publication-rel"><structname>pg_publication_rel</structname></link>,
+ this view expands
+ publications defined as <literal>FOR ALL SEQUENCES</literal>, so for such
+ publications there will be a row for each eligible sequence.
+ </para>
+
+ <table>
+ <title><structname>pg_publication_sequences</structname> Columns</title>
+ <tgroup cols="1">
+ <thead>
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ Column Type
+ </para>
+ <para>
+ Description
+ </para></entry>
+ </row>
+ </thead>
+
+ <tbody>
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>pubname</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+ </para>
+ <para>
+ Name of publication
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>schemaname</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+ </para>
+ <para>
+ Name of schema containing sequence
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>sequencename</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+ </para>
+ <para>
+ Name of sequence
+ </para></entry>
+ </row>
+ </tbody>
+ </tgroup>
+ </table>
+ </sect1>
+
<sect1 id="view-pg-publication-tables">
<title><structname>pg_publication_tables</structname></title>
diff --git a/doc/src/sgml/ref/alter_publication.sgml b/doc/src/sgml/ref/alter_publication.sgml
index bb4ef5e5e22..4d166ad3f9c 100644
--- a/doc/src/sgml/ref/alter_publication.sgml
+++ b/doc/src/sgml/ref/alter_publication.sgml
@@ -31,7 +31,9 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
<phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [, ... ]
+ SEQUENCE <replaceable class="parameter">sequence_name</replaceable> [ * ] [, ... ]
ALL TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+ ALL SEQUENCES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
</synopsis>
</refsynopsisdiv>
@@ -56,7 +58,18 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
</para>
<para>
- The fourth variant of this command listed in the synopsis can change
+ The next three variants change which sequences are part of the publication.
+ The <literal>SET SEQUENCE</literal> clause will replace the list of sequences
+ in the publication with the specified one. The <literal>ADD SEQUENCE</literal>
+ and <literal>DROP SEQUENCE</literal> clauses will add and remove one or more
+ sequences from the publication. Note that adding sequences to a publication
+ that is already subscribed to will require an <literal>ALTER SUBSCRIPTION
+ ... REFRESH PUBLICATION</literal> action on the subscribing side in order
+ to become effective.
+ </para>
+
+ <para>
+ The seventh variant of this command listed in the synopsis can change
all of the publication properties specified in
<xref linkend="sql-createpublication"/>. Properties not mentioned in the
command retain their previous settings.
@@ -123,6 +136,15 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><replaceable class="parameter">sequence_name</replaceable></term>
+ <listitem>
+ <para>
+ Name of an existing sequence.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><literal>SET ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
<listitem>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 0b027cc3462..8f28cf03f40 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -147,7 +147,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
<listitem>
<para>
Fetch missing table information from publisher. This will start
- replication of tables that were added to the subscribed-to publications
+ replication of tables and sequences that were added to the subscribed-to publications
since <command>CREATE SUBSCRIPTION</command> or
the last invocation of <command>REFRESH PUBLICATION</command>.
</para>
@@ -164,7 +164,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
Specifies whether to copy pre-existing data in the publications
that are being subscribed to when the replication starts.
The default is <literal>true</literal>. (Previously-subscribed
- tables are not copied.)
+ tables and sequences are not copied.)
</para>
</listitem>
</varlistentry>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 62f10bcbd28..6844ac61897 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -54,7 +54,8 @@ check_publication_add_relation(Relation targetrel)
{
/* Must be a regular or partitioned table */
if (RelationGetForm(targetrel)->relkind != RELKIND_RELATION &&
- RelationGetForm(targetrel)->relkind != RELKIND_PARTITIONED_TABLE)
+ RelationGetForm(targetrel)->relkind != RELKIND_PARTITIONED_TABLE &&
+ RelationGetForm(targetrel)->relkind != RELKIND_SEQUENCE)
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
errmsg("cannot add relation \"%s\" to publication",
@@ -131,7 +132,8 @@ static bool
is_publishable_class(Oid relid, Form_pg_class reltuple)
{
return (reltuple->relkind == RELKIND_RELATION ||
- reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+ reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+ reltuple->relkind == RELKIND_SEQUENCE) &&
!IsCatalogRelationOid(relid) &&
reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
relid >= FirstNormalObjectId;
@@ -502,6 +504,11 @@ GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
Form_pg_publication_rel pubrel;
pubrel = (Form_pg_publication_rel) GETSTRUCT(tup);
+
+ /* skip sequences here */
+ if (get_rel_relkind(pubrel->prrelid) == RELKIND_SEQUENCE)
+ continue;
+
result = GetPubPartitionOptionRelations(result, pub_partopt,
pubrel->prrelid);
}
@@ -516,6 +523,49 @@ GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
return result;
}
+/*
+ * Gets list of relation oids for a publication (sequences only).
+ *
+ * This should only be used for normal publications; FOR ALL SEQUENCES
+ * publications should use GetAllSequencesPublicationRelations().
+ */
+List *
+GetPublicationSequenceRelations(Oid pubid)
+{
+ List *result;
+ Relation pubrelsrel;
+ ScanKeyData scankey;
+ SysScanDesc scan;
+ HeapTuple tup;
+
+ /* Find all sequences associated with the publication. */
+ pubrelsrel = table_open(PublicationRelRelationId, AccessShareLock);
+
+ ScanKeyInit(&scankey,
+ Anum_pg_publication_rel_prpubid,
+ BTEqualStrategyNumber, F_OIDEQ,
+ ObjectIdGetDatum(pubid));
+
+ scan = systable_beginscan(pubrelsrel, PublicationRelPrrelidPrpubidIndexId,
+ true, NULL, 1, &scankey);
+
+ result = NIL;
+ while (HeapTupleIsValid(tup = systable_getnext(scan)))
+ {
+ Form_pg_publication_rel pubrel;
+
+ pubrel = (Form_pg_publication_rel) GETSTRUCT(tup);
+
+ if (get_rel_relkind(pubrel->prrelid) == RELKIND_SEQUENCE)
+ result = lappend_oid(result, pubrel->prrelid);
+ }
+
+ systable_endscan(scan);
+ table_close(pubrelsrel, AccessShareLock);
+
+ return result;
+}
+
/*
* Gets list of publication oids for publications marked as FOR ALL TABLES.
*/
@@ -761,6 +811,46 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
return result;
}
+/*
+ * Gets list of all sequences published by FOR ALL SEQUENCES publication(s).
+ *
+ * This simply returns all publishable sequences in the database. Unlike
+ * tables, sequences have no partitions, so there is nothing to exclude
+ * in favor of a root relation here.
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+ Relation classRel;
+ ScanKeyData key[1];
+ TableScanDesc scan;
+ HeapTuple tuple;
+ List *result = NIL;
+
+ classRel = table_open(RelationRelationId, AccessShareLock);
+
+ ScanKeyInit(&key[0],
+ Anum_pg_class_relkind,
+ BTEqualStrategyNumber, F_CHAREQ,
+ CharGetDatum(RELKIND_SEQUENCE));
+
+ scan = table_beginscan_catalog(classRel, 1, key);
+
+ while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+ {
+ Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+ Oid relid = relForm->oid;
+
+ if (is_publishable_class(relid, relForm))
+ result = lappend_oid(result, relid);
+ }
+
+ table_endscan(scan);
+
+ table_close(classRel, AccessShareLock);
+ return result;
+}
+
/*
* Get publication using oid
*
@@ -783,10 +873,12 @@ GetPublication(Oid pubid)
pub->oid = pubid;
pub->name = pstrdup(NameStr(pubform->pubname));
pub->alltables = pubform->puballtables;
+ pub->allsequences = pubform->puballsequences;
pub->pubactions.pubinsert = pubform->pubinsert;
pub->pubactions.pubupdate = pubform->pubupdate;
pub->pubactions.pubdelete = pubform->pubdelete;
pub->pubactions.pubtruncate = pubform->pubtruncate;
+ pub->pubactions.pubsequence = pubform->pubsequence;
pub->pubviaroot = pubform->pubviaroot;
ReleaseSysCache(tup);
@@ -936,3 +1028,56 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
SRF_RETURN_DONE(funcctx);
}
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+ FuncCallContext *funcctx;
+ char *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+ Publication *publication;
+ List *sequences;
+
+ /* stuff done only on the first call of the function */
+ if (SRF_IS_FIRSTCALL())
+ {
+ MemoryContext oldcontext;
+
+ /* create a function context for cross-call persistence */
+ funcctx = SRF_FIRSTCALL_INIT();
+
+ /* switch to memory context appropriate for multiple function calls */
+ oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+ publication = GetPublicationByName(pubname, false);
+
+ /*
+ * For a FOR ALL SEQUENCES publication fetch all publishable sequences
+ * in the database, otherwise just the sequences explicitly added to
+ * the publication.
+ */
+ if (publication->allsequences)
+ sequences = GetAllSequencesPublicationRelations();
+ else
+ sequences = GetPublicationSequenceRelations(publication->oid);
+
+ funcctx->user_fctx = (void *) sequences;
+
+ MemoryContextSwitchTo(oldcontext);
+ }
+
+ /* stuff done on every call of the function */
+ funcctx = SRF_PERCALL_SETUP();
+ sequences = (List *) funcctx->user_fctx;
+
+ if (funcctx->call_cntr < list_length(sequences))
+ {
+ Oid relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+ SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+ }
+
+ SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 61b515cdb85..0df91d7a5ad 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -372,6 +372,16 @@ CREATE VIEW pg_publication_tables AS
pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
WHERE C.oid = GPT.relid;
+CREATE VIEW pg_publication_sequences AS
+ SELECT
+ P.pubname AS pubname,
+ N.nspname AS schemaname,
+ C.relname AS sequencename
+ FROM pg_publication P,
+ LATERAL pg_get_publication_sequences(P.pubname) GPT,
+ pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+ WHERE C.oid = GPT.relid;
+
CREATE VIEW pg_locks AS
SELECT * FROM pg_lock_status() AS L;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 404bb5d0c87..2c183d63f7d 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -16,6 +16,7 @@
#include "access/genam.h"
#include "access/htup_details.h"
+#include "access/relation.h"
#include "access/table.h"
#include "access/xact.h"
#include "catalog/catalog.h"
@@ -59,6 +60,12 @@ static void PublicationAddSchemas(Oid pubid, List *schemas, bool if_not_exists,
AlterPublicationStmt *stmt);
static void PublicationDropSchemas(Oid pubid, List *schemas, bool missing_ok);
+static List *OpenSequenceList(List *sequences);
+static void CloseSequenceList(List *rels);
+static void PublicationAddSequences(Oid pubid, List *rels, bool if_not_exists,
+ AlterPublicationStmt *stmt);
+static void PublicationDropSequences(Oid pubid, List *rels, bool missing_ok);
+
static void
parse_publication_options(ParseState *pstate,
List *options,
@@ -77,6 +84,7 @@ parse_publication_options(ParseState *pstate,
pubactions->pubupdate = true;
pubactions->pubdelete = true;
pubactions->pubtruncate = true;
+ pubactions->pubsequence = true;
*publish_via_partition_root = false;
/* Parse options */
@@ -101,6 +109,7 @@ parse_publication_options(ParseState *pstate,
pubactions->pubupdate = false;
pubactions->pubdelete = false;
pubactions->pubtruncate = false;
+ pubactions->pubsequence = false;
*publish_given = true;
publish = defGetString(defel);
@@ -123,6 +132,8 @@ parse_publication_options(ParseState *pstate,
pubactions->pubdelete = true;
else if (strcmp(publish_opt, "truncate") == 0)
pubactions->pubtruncate = true;
+ else if (strcmp(publish_opt, "sequence") == 0)
+ pubactions->pubsequence = true;
else
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
@@ -149,7 +160,9 @@ parse_publication_options(ParseState *pstate,
*/
static void
ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
- List **rels, List **schemas)
+ List **tables, List **sequences,
+ List **tables_schemas, List **sequences_schemas,
+ List **schemas)
{
ListCell *cell;
PublicationObjSpec *pubobj;
@@ -167,12 +180,23 @@ ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
switch (pubobj->pubobjtype)
{
case PUBLICATIONOBJ_TABLE:
- *rels = lappend(*rels, pubobj->pubtable);
+ *tables = lappend(*tables, pubobj->pubtable);
+ break;
+ case PUBLICATIONOBJ_SEQUENCE:
+ *sequences = lappend(*sequences, pubobj->pubtable);
break;
case PUBLICATIONOBJ_TABLE_IN_SCHEMA:
schemaid = get_namespace_oid(pubobj->name, false);
/* Filter out duplicates if user specifies "sch1, sch1" */
+ *tables_schemas = list_append_unique_oid(*tables_schemas, schemaid);
+ *schemas = list_append_unique_oid(*schemas, schemaid);
+ break;
+ case PUBLICATIONOBJ_SEQUENCE_IN_SCHEMA:
+ schemaid = get_namespace_oid(pubobj->name, false);
+
+ /* Filter out duplicates if user specifies "sch1, sch1" */
+ *sequences_schemas = list_append_unique_oid(*sequences_schemas, schemaid);
*schemas = list_append_unique_oid(*schemas, schemaid);
break;
case PUBLICATIONOBJ_TABLE_IN_CUR_SCHEMA:
@@ -186,6 +210,21 @@ ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
list_free(search_path);
/* Filter out duplicates if user specifies "sch1, sch1" */
+ *tables_schemas = list_append_unique_oid(*tables_schemas, schemaid);
+ *schemas = list_append_unique_oid(*schemas, schemaid);
+ break;
+ case PUBLICATIONOBJ_SEQUENCE_IN_CUR_SCHEMA:
+ search_path = fetch_search_path(false);
+ if (search_path == NIL) /* nothing valid in search_path? */
+ ereport(ERROR,
+ errcode(ERRCODE_UNDEFINED_SCHEMA),
+ errmsg("no schema has been selected for CURRENT_SCHEMA"));
+
+ schemaid = linitial_oid(search_path);
+ list_free(search_path);
+
+ /* Filter out duplicates if user specifies "sch1, sch1" */
+ *sequences_schemas = list_append_unique_oid(*sequences_schemas, schemaid);
*schemas = list_append_unique_oid(*schemas, schemaid);
break;
default:
@@ -251,7 +290,10 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
bool publish_via_partition_root_given;
bool publish_via_partition_root;
AclResult aclresult;
- List *relations = NIL;
+ List *tables = NIL;
+ List *sequences = NIL;
+ List *tables_schemaidlist = NIL;
+ List *sequences_schemaidlist = NIL;
List *schemaidlist = NIL;
/* must have CREATE privilege on database */
@@ -306,6 +348,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
BoolGetDatum(pubactions.pubdelete);
values[Anum_pg_publication_pubtruncate - 1] =
BoolGetDatum(pubactions.pubtruncate);
+ values[Anum_pg_publication_pubsequence - 1] =
+ BoolGetDatum(pubactions.pubsequence);
values[Anum_pg_publication_pubviaroot - 1] =
BoolGetDatum(publish_via_partition_root);
@@ -330,26 +374,40 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
}
else
{
- ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
+ ObjectsInPublicationToOids(stmt->pubobjects, pstate,
+ &tables, &sequences,
+ &tables_schemaidlist,
+ &sequences_schemaidlist,
&schemaidlist);
/* FOR ALL TABLES IN SCHEMA requires superuser */
- if (list_length(schemaidlist) > 0 && !superuser())
+ if (list_length(tables_schemaidlist) > 0 && !superuser())
ereport(ERROR,
errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
errmsg("must be superuser to create FOR ALL TABLES IN SCHEMA publication"));
- if (list_length(relations) > 0)
+ if (list_length(tables) > 0)
{
List *rels;
- rels = OpenTableList(relations);
- CheckObjSchemaNotAlreadyInPublication(rels, schemaidlist,
+ rels = OpenTableList(tables);
+ CheckObjSchemaNotAlreadyInPublication(rels, tables_schemaidlist,
PUBLICATIONOBJ_TABLE);
PublicationAddTables(puboid, rels, true, NULL);
CloseTableList(rels);
}
+ if (list_length(sequences) > 0)
+ {
+ List *rels;
+
+ rels = OpenSequenceList(sequences);
+ CheckObjSchemaNotAlreadyInPublication(rels, sequences_schemaidlist,
+ PUBLICATIONOBJ_SEQUENCE);
+ PublicationAddTables(puboid, rels, true, NULL);
+ CloseSequenceList(rels);
+ }
+
if (list_length(schemaidlist) > 0)
{
/*
@@ -651,12 +709,13 @@ AlterPublicationSchemas(AlterPublicationStmt *stmt,
*/
static void
CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
- List *tables, List *schemaidlist)
+ List *tables, List *tables_schemaidlist,
+ List *sequences, List *sequences_schemaidlist)
{
Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
if ((stmt->action == DEFELEM_ADD || stmt->action == DEFELEM_SET) &&
- schemaidlist && !superuser())
+ (tables_schemaidlist || sequences_schemaidlist) && !superuser())
ereport(ERROR,
(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
errmsg("must be superuser to add or set schemas")));
@@ -665,13 +724,24 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
* Check that user is allowed to manipulate the publication tables in
* schema
*/
- if (schemaidlist && pubform->puballtables)
+ if (tables_schemaidlist && pubform->puballtables)
ereport(ERROR,
(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
errmsg("publication \"%s\" is defined as FOR ALL TABLES",
NameStr(pubform->pubname)),
errdetail("Tables from schema cannot be added to, dropped from, or set on FOR ALL TABLES publications.")));
+ /*
+ * Check that user is allowed to manipulate the publication sequences in
+ * schema
+ */
+ if (sequences_schemaidlist && pubform->puballsequences)
+ ereport(ERROR,
+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("publication \"%s\" is defined as FOR ALL SEQUENCES",
+ NameStr(pubform->pubname)),
+ errdetail("Sequences from schema cannot be added to, dropped from, or set on FOR ALL SEQUENCES publications.")));
+
/* Check that user is allowed to manipulate the publication tables. */
if (tables && pubform->puballtables)
ereport(ERROR,
@@ -679,6 +749,108 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
errmsg("publication \"%s\" is defined as FOR ALL TABLES",
NameStr(pubform->pubname)),
errdetail("Tables cannot be added to or dropped from FOR ALL TABLES publications.")));
+
+ /* Check that user is allowed to manipulate the publication sequences. */
+ if (sequences && pubform->puballsequences)
+ ereport(ERROR,
+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("publication \"%s\" is defined as FOR ALL SEQUENCES",
+ NameStr(pubform->pubname)),
+ errdetail("Sequences cannot be added to or dropped from FOR ALL SEQUENCES publications.")));
+}
+
+/*
+ * Add or remove sequence to/from publication.
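+ *
+ * This implements the sequence part of ALTER PUBLICATION ... ADD / DROP /
+ * SET, mirroring AlterPublicationTables.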
+ */
+static void
+AlterPublicationSequences(AlterPublicationStmt *stmt, HeapTuple tup,
+ List *sequences, List *schemaidlist)
+{
+ List *rels = NIL;
+ Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
+ Oid pubid = pubform->oid;
+
+ /*
+ * It is quite possible that for the SET case the user has not specified
+ * any sequences, in which case we need to remove all the existing ones.
+ */
+ if (!sequences && stmt->action != DEFELEM_SET)
+ return;
+
+ rels = OpenSequenceList(sequences);
+
+ if (stmt->action == DEFELEM_ADD)
+ {
+ List *schemas = NIL;
+
+ /*
+ * Check if the relation is member of the existing schema in the
+ * publication or member of the schema list specified.
+ */
+ schemas = list_concat_copy(schemaidlist, GetPublicationSchemas(pubid));
+ CheckObjSchemaNotAlreadyInPublication(rels, schemas,
+ PUBLICATIONOBJ_SEQUENCE);
+ PublicationAddSequences(pubid, rels, false, stmt);
+ }
+ else if (stmt->action == DEFELEM_DROP)
+ PublicationDropSequences(pubid, rels, false);
+ else /* DEFELEM_SET */
+ {
+ List *oldrelids = GetPublicationRelations(pubid,
+ PUBLICATION_PART_ROOT);
+ List *delrels = NIL;
+ ListCell *oldlc;
+
+ CheckObjSchemaNotAlreadyInPublication(rels, schemaidlist,
+ PUBLICATIONOBJ_SEQUENCE);
+
+ /* Calculate which relations to drop. */
+ foreach(oldlc, oldrelids)
+ {
+ Oid oldrelid = lfirst_oid(oldlc);
+ ListCell *newlc;
+ bool found = false;
+
+ foreach(newlc, rels)
+ {
+ PublicationRelInfo *newpubrel;
+
+ newpubrel = (PublicationRelInfo *) lfirst(newlc);
+ if (RelationGetRelid(newpubrel->relation) == oldrelid)
+ {
+ found = true;
+ break;
+ }
+ }
+ /* Not yet in the list, open it and add to the list */
+ if (!found)
+ {
+ Relation oldrel;
+ PublicationRelInfo *pubrel;
+
+ /* Wrap relation into PublicationRelInfo */
+ oldrel = table_open(oldrelid, ShareUpdateExclusiveLock);
+
+ pubrel = palloc(sizeof(PublicationRelInfo));
+ pubrel->relation = oldrel;
+
+ delrels = lappend(delrels, pubrel);
+ }
+ }
+
+ /* And drop them. */
+ PublicationDropSequences(pubid, delrels, true);
+
+ /*
+ * Don't bother calculating the difference for adding, we'll catch and
+ * skip existing ones when doing catalog update.
+ */
+ PublicationAddSequences(pubid, rels, true, stmt);
+
+ CloseSequenceList(delrels);
+ }
+
+ CloseSequenceList(rels);
}
/*
@@ -716,13 +888,21 @@ AlterPublication(ParseState *pstate, AlterPublicationStmt *stmt)
AlterPublicationOptions(pstate, stmt, rel, tup);
else
{
- List *relations = NIL;
+ List *tables = NIL;
+ List *sequences = NIL;
+ List *tables_schemaidlist = NIL;
+ List *sequences_schemaidlist = NIL;
List *schemaidlist = NIL;
- ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
+ ObjectsInPublicationToOids(stmt->pubobjects, pstate,
+ &tables, &sequences,
+ &tables_schemaidlist,
+ &sequences_schemaidlist,
&schemaidlist);
- CheckAlterPublication(stmt, tup, relations, schemaidlist);
+ CheckAlterPublication(stmt, tup,
+ tables, tables_schemaidlist,
+ sequences, sequences_schemaidlist);
/*
* Lock the publication so nobody else can do anything with it. This
@@ -747,7 +927,9 @@ AlterPublication(ParseState *pstate, AlterPublicationStmt *stmt)
errmsg("publication \"%s\" does not exist",
stmt->pubname));
- AlterPublicationTables(stmt, tup, relations, schemaidlist);
+ AlterPublicationTables(stmt, tup, tables, tables_schemaidlist);
+ AlterPublicationSequences(stmt, tup, sequences, sequences_schemaidlist);
+
AlterPublicationSchemas(stmt, tup, schemaidlist);
}
@@ -1155,6 +1337,144 @@ PublicationDropSchemas(Oid pubid, List *schemas, bool missing_ok)
}
}
+/*
+ * Open sequences specified by a PublicationTable list.
+ * In the returned list of PublicationRelInfo, sequences are locked
+ * in ShareUpdateExclusiveLock mode in order to add them to a publication.
+ */
+static List *
+OpenSequenceList(List *sequences)
+{
+ List *relids = NIL;
+ List *rels = NIL;
+ ListCell *lc;
+
+ /*
+ * Open, share-lock, and check all the explicitly-specified relations
+ */
+ foreach(lc, sequences)
+ {
+ PublicationTable *s = lfirst_node(PublicationTable, lc);
+ Relation rel;
+ Oid myrelid;
+ PublicationRelInfo *pub_rel;
+
+ /* Allow query cancel in case this takes a long time */
+ CHECK_FOR_INTERRUPTS();
+
+ rel = table_openrv(s->relation, ShareUpdateExclusiveLock);
+ myrelid = RelationGetRelid(rel);
+
+ /*
+ * Filter out duplicates if user specifies "foo, foo".
+ *
+ * Note that this algorithm is known to not be very efficient (O(N^2))
+ * but given that it only works on the list of sequences given by the user
+ * it's deemed acceptable.
+ */
+ if (list_member_oid(relids, myrelid))
+ {
+ table_close(rel, ShareUpdateExclusiveLock);
+ continue;
+ }
+
+ pub_rel = palloc(sizeof(PublicationRelInfo));
+ pub_rel->relation = rel;
+ rels = lappend(rels, pub_rel);
+ relids = lappend_oid(relids, myrelid);
+ }
+
+ list_free(relids);
+
+ return rels;
+}
+
+/*
+ * Close all relations in the list.
+ */
+static void
+CloseSequenceList(List *rels)
+{
+ ListCell *lc;
+
+ foreach(lc, rels)
+ {
+ PublicationRelInfo *pub_rel;
+
+ pub_rel = (PublicationRelInfo *) lfirst(lc);
+ table_close(pub_rel->relation, NoLock);
+ }
+}
+
+/*
+ * Add listed sequences to the publication.
+ */
+static void
+PublicationAddSequences(Oid pubid, List *rels, bool if_not_exists,
+ AlterPublicationStmt *stmt)
+{
+ ListCell *lc;
+
+ Assert(!stmt || !stmt->for_all_sequences);
+
+ foreach(lc, rels)
+ {
+ PublicationRelInfo *pub_rel = (PublicationRelInfo *) lfirst(lc);
+ Relation rel = pub_rel->relation;
+ ObjectAddress obj;
+
+ /* Must be owner of the sequence or superuser. */
+ if (!pg_class_ownercheck(RelationGetRelid(rel), GetUserId()))
+ aclcheck_error(ACLCHECK_NOT_OWNER, get_relkind_objtype(rel->rd_rel->relkind),
+ RelationGetRelationName(rel));
+
+ obj = publication_add_relation(pubid, pub_rel, if_not_exists);
+ if (stmt)
+ {
+ EventTriggerCollectSimpleCommand(obj, InvalidObjectAddress,
+ (Node *) stmt);
+
+ InvokeObjectPostCreateHook(PublicationRelRelationId,
+ obj.objectId, 0);
+ }
+ }
+}
+
+/*
+ * Remove listed sequences from the publication.
+ */
+static void
+PublicationDropSequences(Oid pubid, List *rels, bool missing_ok)
+{
+ ObjectAddress obj;
+ ListCell *lc;
+ Oid prid;
+
+ foreach(lc, rels)
+ {
+ PublicationRelInfo *pubrel = (PublicationRelInfo *) lfirst(lc);
+ Relation rel = pubrel->relation;
+ Oid relid = RelationGetRelid(rel);
+
+ prid = GetSysCacheOid2(PUBLICATIONRELMAP, Anum_pg_publication_rel_oid,
+ ObjectIdGetDatum(relid),
+ ObjectIdGetDatum(pubid));
+ if (!OidIsValid(prid))
+ {
+ if (missing_ok)
+ continue;
+
+ ereport(ERROR,
+ (errcode(ERRCODE_UNDEFINED_OBJECT),
+ errmsg("relation \"%s\" is not part of the publication",
+ RelationGetRelationName(rel))));
+ }
+
+ ObjectAddressSet(obj, PublicationRelRelationId, prid);
+ performDeletion(&obj, DROP_CASCADE, 0);
+ }
+}
+
+
/*
* Internal workhorse for changing a publication owner
*/
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 626b5b252f4..b8888213f54 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -336,6 +336,85 @@ ResetSequence(Oid seq_relid)
relation_close(seq_rel, NoLock);
}
+/*
+ * Set a sequence to the specified state (last_value, log_cnt, is_called).
+ *
+ * The change is made transactionally, so that on failure of the current
+ * transaction, the sequence will be restored to its previous state.
+ * We do that by creating a whole new relfilenode for the sequence; so this
+ * works much like the rewriting forms of ALTER TABLE.
+ *
+ * Caller is assumed to have acquired AccessExclusiveLock on the sequence,
+ * which must not be released until end of transaction. Caller is also
+ * responsible for permissions checking.
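+ *
+ * For example (with hypothetical values), ResetSequence2(relid, 132, 0, true)
+ * leaves the sequence reporting last_value = 132 with is_called = true, so
+ * the next nextval() returns 132 plus the sequence's increment.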
+ */
+void
+ResetSequence2(Oid seq_relid, int64 last_value, int64 log_cnt, bool is_called)
+{
+ Relation seq_rel;
+ SeqTable elm;
+ Form_pg_sequence_data seq;
+ Buffer buf;
+ HeapTupleData seqdatatuple;
+ HeapTuple tuple;
+
+ /*
+ * Read the old sequence. This does a bit more work than really
+ * necessary, but it's simple, and we do want to double-check that it's
+ * indeed a sequence.
+ */
+ init_sequence(seq_relid, &elm, &seq_rel);
+ (void) read_seq_tuple(seq_rel, &buf, &seqdatatuple);
+
+ /*
+ * Copy the existing sequence tuple.
+ */
+ tuple = heap_copytuple(&seqdatatuple);
+
+ /* Now we're done with the old page */
+ UnlockReleaseBuffer(buf);
+
+ /*
+ * Modify the copied tuple to execute the restart (compare the RESTART
+ * action in AlterSequence)
+ */
+ seq = (Form_pg_sequence_data) GETSTRUCT(tuple);
+ seq->last_value = last_value;
+ seq->is_called = is_called;
+ seq->log_cnt = log_cnt;
+
+ /*
+ * Create a new storage file for the sequence.
+ */
+ RelationSetNewRelfilenode(seq_rel, seq_rel->rd_rel->relpersistence);
+
+ /*
+ * Ensure sequence's relfrozenxid is at 0, since it won't contain any
+ * unfrozen XIDs. Same with relminmxid, since a sequence will never
+ * contain multixacts.
+ */
+ Assert(seq_rel->rd_rel->relfrozenxid == InvalidTransactionId);
+ Assert(seq_rel->rd_rel->relminmxid == InvalidMultiXactId);
+
+ /*
+ * Insert the modified tuple into the new storage file.
+ *
+ * XXX Maybe this should also use created=true, just like the other places
+ * calling fill_seq_with_data. That's probably needed for correct cascading
+ * replication.
+ *
+ * XXX That'd mean all fill_seq_with_data callers use created=true, making
+ * the parameter unnecessary.
+ */
+ fill_seq_with_data(seq_rel, tuple);
+
+ /* Clear local cache so that we don't think we have cached numbers */
+ /* Note that we do not change the currval() state */
+ elm->cached = elm->last;
+
+ relation_close(seq_rel, NoLock);
+}
+
/*
* Initialize a sequence's relation with the specified tuple as content
*/
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 2b658080fee..fc6374c4450 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -85,6 +85,7 @@ typedef struct SubOpts
} SubOpts;
static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
static void check_duplicates_in_publist(List *publist, Datum *datums);
static List *merge_publications(List *oldpublist, List *newpublist, bool addpub, const char *subname);
static void ReportSlotConnectionError(List *rstates, Oid subid, char *slotname, char *err);
@@ -496,6 +497,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
char *err;
WalReceiverConn *wrconn;
List *tables;
+ List *sequences;
ListCell *lc;
char table_state;
@@ -534,6 +536,26 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
InvalidXLogRecPtr);
}
+ /*
+ * Get the sequence list from publisher and build local sequence
+ * status info.
+ */
+ sequences = fetch_sequence_list(wrconn, publications);
+ foreach(lc, sequences)
+ {
+ RangeVar *rv = (RangeVar *) lfirst(lc);
+ Oid relid;
+
+ relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+ /* Check for supported relkind. */
+ CheckSubscriptionRelkind(get_rel_relkind(relid),
+ rv->schemaname, rv->relname);
+
+ AddSubscriptionRelState(subid, relid, table_state,
+ InvalidXLogRecPtr);
+ }
+
/*
* If requested, create permanent slot for the subscription. We
* won't use the initial snapshot for anything, so no need to
@@ -706,6 +728,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data)
{
Oid relid = subrel_local_oids[off];
+ /* XXX ignore sequences - maybe do this in GetSubscriptionRelations? */
+ if (get_rel_relkind(relid) == RELKIND_SEQUENCE)
+ continue;
+
if (!bsearch(&relid, pubrel_local_oids,
list_length(pubrel_names), sizeof(Oid), oid_cmp))
{
@@ -797,6 +823,183 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data)
ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
}
}
+
+ /*
+ * XXX now do the same thing for sequences, maybe before the preceding
+ * block, or earlier?
+ */
+
+ /* Get the sequence list from the publisher. */
+ pubrel_names = fetch_sequence_list(wrconn, sub->publications);
+
+ /* Get local table list. */
+ subrel_states = GetSubscriptionRelations(sub->oid);
+
+ /*
+ * Build qsorted array of local table oids for faster lookup. This can
+ * potentially contain all tables in the database so speed of lookup
+ * is important.
+ */
+ subrel_local_oids = palloc(list_length(subrel_states) * sizeof(Oid));
+ off = 0;
+ foreach(lc, subrel_states)
+ {
+ SubscriptionRelState *relstate = (SubscriptionRelState *) lfirst(lc);
+
+ subrel_local_oids[off++] = relstate->relid;
+ }
+ qsort(subrel_local_oids, list_length(subrel_states),
+ sizeof(Oid), oid_cmp);
+
+ /*
+ * Rels that we want to remove from subscription and drop any slots
+ * and origins corresponding to them.
+ */
+ sub_remove_rels = palloc(list_length(subrel_states) * sizeof(SubRemoveRels));
+
+ /*
+ * Walk over the remote sequences and try to match them to locally known
+ * relations. If the sequence is not known locally, create a new state
+ * for it.
+ *
+ * Also builds array of local oids of remote sequences for the next step.
+ */
+ off = 0;
+ pubrel_local_oids = palloc(list_length(pubrel_names) * sizeof(Oid));
+
+ foreach(lc, pubrel_names)
+ {
+ RangeVar *rv = (RangeVar *) lfirst(lc);
+ Oid relid;
+
+ relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+ /* Check for supported relkind. */
+ CheckSubscriptionRelkind(get_rel_relkind(relid),
+ rv->schemaname, rv->relname);
+
+ pubrel_local_oids[off++] = relid;
+
+ if (!bsearch(&relid, subrel_local_oids,
+ list_length(subrel_states), sizeof(Oid), oid_cmp))
+ {
+ AddSubscriptionRelState(sub->oid, relid,
+ copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
+ InvalidXLogRecPtr);
+ ereport(DEBUG1,
+ (errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
+ rv->schemaname, rv->relname, sub->name)));
+ }
+ }
+
+ /*
+ * Next remove state for sequences we should not care about anymore using
+ * the data we collected above
+ */
+ qsort(pubrel_local_oids, list_length(pubrel_names),
+ sizeof(Oid), oid_cmp);
+
+ remove_rel_len = 0;
+ for (off = 0; off < list_length(subrel_states); off++)
+ {
+ Oid relid = subrel_local_oids[off];
+
+ /* XXX ignore non-sequences - maybe do this in GetSubscriptionRelations? */
+ if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+ continue;
+
+ if (!bsearch(&relid, pubrel_local_oids,
+ list_length(pubrel_names), sizeof(Oid), oid_cmp))
+ {
+ char state;
+ XLogRecPtr statelsn;
+
+ /*
+ * Lock pg_subscription_rel with AccessExclusiveLock to
+ * prevent any race conditions with the apply worker
+ * re-launching workers at the same time this code is trying
+ * to remove those tables.
+ *
+ * Even if new worker for this particular rel is restarted it
+ * won't be able to make any progress as we hold exclusive
+ * lock on subscription_rel till the transaction end. It will
+ * simply exit as there is no corresponding rel entry.
+ *
+ * This locking also ensures that the state of rels won't
+ * change till we are done with this refresh operation.
+ */
+ if (!rel)
+ rel = table_open(SubscriptionRelRelationId, AccessExclusiveLock);
+
+ /* Last known rel state. */
+ state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
+
+ sub_remove_rels[remove_rel_len].relid = relid;
+ sub_remove_rels[remove_rel_len++].state = state;
+
+ RemoveSubscriptionRel(sub->oid, relid);
+
+ logicalrep_worker_stop(sub->oid, relid);
+
+ /*
+ * For READY state, we would have already dropped the
+ * tablesync origin.
+ */
+ if (state != SUBREL_STATE_READY)
+ {
+ char originname[NAMEDATALEN];
+
+ /*
+ * Drop the tablesync's origin tracking if exists.
+ *
+ * It is possible that the origin is not yet created for
+ * tablesync worker, this can happen for the states before
+ * SUBREL_STATE_FINISHEDCOPY. The apply worker can also
+ * concurrently try to drop the origin and by this time
+ * the origin might be already removed. For these reasons,
+ * passing missing_ok = true.
+ */
+ ReplicationOriginNameForTablesync(sub->oid, relid, originname,
+ sizeof(originname));
+ replorigin_drop_by_name(originname, true, false);
+ }
+
+ ereport(DEBUG1,
+ (errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
+ get_namespace_name(get_rel_namespace(relid)),
+ get_rel_name(relid),
+ sub->name)));
+ }
+ }
+
+ /*
+ * Drop the tablesync slots associated with removed tables. This has
+ * to be at the end because otherwise if there is an error while doing
+ * the database operations we won't be able to rollback dropped slots.
+ */
+ for (off = 0; off < remove_rel_len; off++)
+ {
+ if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
+ sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+ {
+ char syncslotname[NAMEDATALEN] = {0};
+
+ /*
+ * For READY/SYNCDONE states we know the tablesync slot has
+ * already been dropped by the tablesync worker.
+ *
+ * For other states, there is no certainty, maybe the slot
+ * does not exist yet. Also, if we fail after removing some of
+ * the slots, next time, it will again try to drop already
+ * dropped slots and fail. For these reasons, we allow
+ * missing_ok = true for the drop.
+ */
+ ReplicationSlotNameForTablesync(sub->oid, sub_remove_rels[off].relid,
+ syncslotname, sizeof(syncslotname));
+ ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
+ }
+ }
+
}
PG_FINALLY();
{
@@ -1614,6 +1817,75 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
return tablelist;
}
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+ WalRcvExecResult *res;
+ StringInfoData cmd;
+ TupleTableSlot *slot;
+ Oid tableRow[2] = {TEXTOID, TEXTOID};
+ ListCell *lc;
+ bool first;
+ List *tablelist = NIL;
+
+ Assert(list_length(publications) > 0);
+
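+ /*
+ * The query built below looks like this (with hypothetical publications
+ * "p1" and "p2"):
+ *
+ * SELECT DISTINCT s.schemaname, s.sequencename
+ * FROM pg_catalog.pg_publication_sequences s
+ * WHERE s.pubname IN ('p1', 'p2')
+ */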
+ initStringInfo(&cmd);
+ appendStringInfoString(&cmd, "SELECT DISTINCT s.schemaname, s.sequencename\n"
+ " FROM pg_catalog.pg_publication_sequences s\n"
+ " WHERE s.pubname IN (");
+ first = true;
+ foreach(lc, publications)
+ {
+ char *pubname = strVal(lfirst(lc));
+
+ if (first)
+ first = false;
+ else
+ appendStringInfoString(&cmd, ", ");
+
+ appendStringInfoString(&cmd, quote_literal_cstr(pubname));
+ }
+ appendStringInfoChar(&cmd, ')');
+
+ res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+ pfree(cmd.data);
+
+ if (res->status != WALRCV_OK_TUPLES)
+ ereport(ERROR,
+ (errmsg("could not receive list of replicated tables from the publisher: %s",
+ res->err)));
+
+ /* Process sequences. */
+ slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+ while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+ {
+ char *nspname;
+ char *relname;
+ bool isnull;
+ RangeVar *rv;
+
+ nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+ Assert(!isnull);
+ relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+ Assert(!isnull);
+
+ rv = makeRangeVar(nspname, relname, -1);
+ tablelist = lappend(tablelist, rv);
+
+ ExecClearTuple(slot);
+ }
+ ExecDropSingleTupleTableSlot(slot);
+
+ walrcv_clear_result(res);
+
+ return tablelist;
+}
+
/*
* This is to report the connection failure while dropping replication slots.
* Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index 574d7d27fd9..6f0d42d5519 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -608,7 +608,7 @@ void
CheckSubscriptionRelkind(char relkind, const char *nspname,
const char *relname)
{
- if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+ if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE && relkind != RELKIND_SEQUENCE)
ereport(ERROR,
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index df0b7478838..6a5718572f9 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -4846,6 +4846,7 @@ _copyCreatePublicationStmt(const CreatePublicationStmt *from)
COPY_NODE_FIELD(options);
COPY_NODE_FIELD(pubobjects);
COPY_SCALAR_FIELD(for_all_tables);
+ COPY_SCALAR_FIELD(for_all_sequences);
return newnode;
}
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index cb7ddd463cb..5dccc0d0adc 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -2336,6 +2336,7 @@ _equalAlterPublicationStmt(const AlterPublicationStmt *a,
COMPARE_NODE_FIELD(options);
COMPARE_NODE_FIELD(pubobjects);
COMPARE_SCALAR_FIELD(for_all_tables);
+ COMPARE_SCALAR_FIELD(for_all_sequences);
COMPARE_SCALAR_FIELD(action);
return true;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 3d4dd43e47b..f037c17985b 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -9762,6 +9762,26 @@ PublicationObjSpec:
$$->pubobjtype = PUBLICATIONOBJ_TABLE_IN_CUR_SCHEMA;
$$->location = @5;
}
+ | SEQUENCE relation_expr
+ {
+ $$ = makeNode(PublicationObjSpec);
+ $$->pubobjtype = PUBLICATIONOBJ_SEQUENCE;
+ $$->pubtable = makeNode(PublicationTable);
+ $$->pubtable->relation = $2;
+ }
+ | ALL SEQUENCES IN_P SCHEMA ColId
+ {
+ $$ = makeNode(PublicationObjSpec);
+ $$->pubobjtype = PUBLICATIONOBJ_SEQUENCE_IN_SCHEMA;
+ $$->name = $5;
+ $$->location = @5;
+ }
+ | ALL SEQUENCES IN_P SCHEMA CURRENT_SCHEMA
+ {
+ $$ = makeNode(PublicationObjSpec);
+ $$->pubobjtype = PUBLICATIONOBJ_SEQUENCE_IN_CUR_SCHEMA;
+ $$->location = @5;
+ }
| ColId
{
$$ = makeNode(PublicationObjSpec);
@@ -10097,6 +10117,12 @@ UnlistenStmt:
}
;
+/*
+ * FIXME
+ *
+ * opt_publication_for_sequences and publication_for_sequences should be
+ * added as sequence counterparts of the existing table rules
+ */
/*****************************************************************************
*
@@ -10105,6 +10131,12 @@ UnlistenStmt:
* BEGIN / COMMIT / ROLLBACK
* (also older versions END / ABORT)
*
+ * ALTER PUBLICATION name ADD SEQUENCE sequence [, sequence2]
+ *
+ * ALTER PUBLICATION name DROP SEQUENCE sequence [, sequence2]
+ *
+ * ALTER PUBLICATION name SET SEQUENCE sequence [, sequence2]
+ *
*****************************************************************************/
TransactionStmt:
diff --git a/src/backend/replication/logical/proto.c b/src/backend/replication/logical/proto.c
index 9f5bf4b639f..79c6f626388 100644
--- a/src/backend/replication/logical/proto.c
+++ b/src/backend/replication/logical/proto.c
@@ -647,6 +647,56 @@ logicalrep_write_message(StringInfo out, TransactionId xid, XLogRecPtr lsn,
pq_sendbytes(out, message, sz);
}
+/*
+ * Write SEQUENCE to stream
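+ *
+ * After the message type byte, an xid (int32) is included only when
+ * streaming; the rest is flags (int8, currently unused), lsn (int64), the
+ * namespace and sequence name (strings), the transactional flag (int8),
+ * last_value (int64), log_cnt (int64) and is_called (int8).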
+ */
+void
+logicalrep_write_sequence(StringInfo out, Relation rel, TransactionId xid,
+ XLogRecPtr lsn, bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ uint8 flags = 0;
+ char *relname;
+
+ pq_sendbyte(out, LOGICAL_REP_MSG_SEQUENCE);
+
+ /* transaction ID (if not valid, we're not streaming) */
+ if (TransactionIdIsValid(xid))
+ pq_sendint32(out, xid);
+
+ pq_sendint8(out, flags);
+ pq_sendint64(out, lsn);
+
+ logicalrep_write_namespace(out, RelationGetNamespace(rel));
+ relname = RelationGetRelationName(rel);
+ pq_sendstring(out, relname);
+
+ pq_sendint8(out, transactional);
+ pq_sendint64(out, last_value);
+ pq_sendint64(out, log_cnt);
+ pq_sendint8(out, is_called);
+}
+
+/*
+ * Read SEQUENCE from the stream.
+ */
+void
+logicalrep_read_sequence(StringInfo in, LogicalRepSequence *seqdata)
+{
+ /* XXX skipping flags and lsn */
+ pq_getmsgint(in, 1);
+ pq_getmsgint64(in);
+
+ /* Read relation name from stream */
+ seqdata->nspname = pstrdup(logicalrep_read_namespace(in));
+ seqdata->seqname = pstrdup(pq_getmsgstring(in));
+
+ seqdata->transactional = pq_getmsgint(in, 1);
+ seqdata->last_value = pq_getmsgint64(in);
+ seqdata->log_cnt = pq_getmsgint64(in);
+ seqdata->is_called = pq_getmsgint(in, 1);
+}
+
/*
* Write relation description to the output stream.
*/
@@ -1203,6 +1253,8 @@ logicalrep_message_type(LogicalRepMsgType action)
return "STREAM ABORT";
case LOGICAL_REP_MSG_STREAM_PREPARE:
return "STREAM PREPARE";
+ case LOGICAL_REP_MSG_SEQUENCE:
+ return "SEQUENCE";
}
elog(ERROR, "invalid logical replication message type \"%c\"", action);
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index f07983a43cb..24369a1522e 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -100,6 +100,7 @@
#include "catalog/pg_subscription_rel.h"
#include "catalog/pg_type.h"
#include "commands/copy.h"
+#include "commands/sequence.h"
#include "miscadmin.h"
#include "parser/parse_relation.h"
#include "pgstat.h"
@@ -357,6 +358,12 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
*
* If the synchronization position is reached (SYNCDONE), then the table can
* be marked as READY and is no longer tracked.
+ *
+ * XXX This needs to handle sequences too - after AlterSubscription_refresh
+ * starts caring about sequences, GetSubscriptionNotReadyRelations won't
+ * return just tables, and we'll have to sync them here. Not sure it's worth
+ * creating a new "sync" worker per sequence, maybe we should just sync them
+ * in the current process (it's pretty light-weight).
*/
static void
process_syncing_tables_for_apply(XLogRecPtr current_lsn)
@@ -871,6 +878,99 @@ copy_table(Relation rel)
logicalrep_rel_close(relmapentry, NoLock);
}
+
+
+/*
+ * Fetch a sequence's current state (last_value, log_cnt, is_called) from the publisher.
+ */
+static void
+fetch_sequence_data(char *nspname, char *relname,
+ int64 *last_value, int64 *log_cnt, bool *is_called)
+{
+ WalRcvExecResult *res;
+ StringInfoData cmd;
+ TupleTableSlot *slot;
+ Oid tableRow[3] = {INT8OID, INT8OID, BOOLOID};
+
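+ /*
+ * Selecting directly from the sequence relation yields a single row with
+ * these three columns, e.g. "SELECT last_value, log_cnt, is_called FROM
+ * public.s" for a hypothetical sequence public.s.
+ */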
+ initStringInfo(&cmd);
+ appendStringInfo(&cmd, "SELECT last_value, log_cnt, is_called\n"
+ " FROM %s", quote_qualified_identifier(nspname, relname));
+
+ res = walrcv_exec(LogRepWorkerWalRcvConn, cmd.data, 3, tableRow);
+ pfree(cmd.data);
+
+ if (res->status != WALRCV_OK_TUPLES)
+ ereport(ERROR,
+ (errmsg("could not receive list of replicated tables from the publisher: %s",
+ res->err)));
+
+ /* Process the sequence. */
+ slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+ while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+ {
+ bool isnull;
+
+ *last_value = DatumGetInt64(slot_getattr(slot, 1, &isnull));
+ Assert(!isnull);
+
+ *log_cnt = DatumGetInt64(slot_getattr(slot, 2, &isnull));
+ Assert(!isnull);
+
+ *is_called = DatumGetBool(slot_getattr(slot, 3, &isnull));
+ Assert(!isnull);
+
+ ExecClearTuple(slot);
+ }
+ ExecDropSingleTupleTableSlot(slot);
+
+ walrcv_clear_result(res);
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Caller is responsible for locking the local relation.
+ */
+static void
+copy_sequence(Relation rel)
+{
+ LogicalRepRelMapEntry *relmapentry;
+ LogicalRepRelation lrel;
+ StringInfoData cmd;
+ int64 last_value = 0,
+ log_cnt = 0;
+ bool is_called = 0;
+
+ /* Get the publisher relation info. */
+ fetch_remote_table_info(get_namespace_name(RelationGetNamespace(rel)),
+ RelationGetRelationName(rel), &lrel);
+
+ /* Put the relation into relmap. */
+ logicalrep_relmap_update(&lrel);
+
+ /* Map the publisher relation to local one. */
+ relmapentry = logicalrep_rel_open(lrel.remoteid, NoLock);
+ Assert(rel == relmapentry->localrel);
+
+ /* Start copy on the publisher. */
+ initStringInfo(&cmd);
+
+ Assert(lrel.relkind == RELKIND_SEQUENCE);
+
+ fetch_sequence_data(lrel.nspname, lrel.relname, &last_value, &log_cnt, &is_called);
+
+ elog(WARNING, "sequence %s info last_value %ld log_cnt %ld is_called %d",
+ quote_qualified_identifier(lrel.nspname, lrel.relname),
+ last_value, log_cnt, is_called);
+
+ ResetSequence2(RelationGetRelid(rel), last_value, log_cnt, is_called);
+
+ logicalrep_rel_close(relmapentry, NoLock);
+}
+
+
+
+
/*
* Determine the tablesync slot name.
*
@@ -1106,10 +1206,20 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
originname)));
}
- /* Now do the initial data copy */
- PushActiveSnapshot(GetTransactionSnapshot());
- copy_table(rel);
- PopActiveSnapshot();
+ if (get_rel_relkind(RelationGetRelid(rel)) == RELKIND_SEQUENCE)
+ {
+ /* Now do the initial sequence copy */
+ PushActiveSnapshot(GetTransactionSnapshot());
+ copy_sequence(rel);
+ PopActiveSnapshot();
+ }
+ else
+ {
+ /* Now do the initial data copy */
+ PushActiveSnapshot(GetTransactionSnapshot());
+ copy_table(rel);
+ PopActiveSnapshot();
+ }
res = walrcv_exec(LogRepWorkerWalRcvConn, "COMMIT", 0, NULL);
if (res->status != WALRCV_OK_COMMAND)
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 2e79302a48a..089d989dd73 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -144,6 +144,7 @@
#include "catalog/pg_tablespace.h"
#include "commands/tablecmds.h"
#include "commands/tablespace.h"
+#include "commands/sequence.h"
#include "commands/trigger.h"
#include "executor/executor.h"
#include "executor/execPartition.h"
@@ -1091,6 +1092,61 @@ apply_handle_origin(StringInfo s)
errmsg_internal("ORIGIN message sent out of order")));
}
+/*
+ * Handle SEQUENCE message.
+ */
+static void
+apply_handle_sequence(StringInfo s)
+{
+ LogicalRepSequence seq;
+ Oid relid;
+
+ if (handle_streamed_transaction(LOGICAL_REP_MSG_SEQUENCE, s))
+ return;
+
+ logicalrep_read_sequence(s, &seq);
+
+ /*
+ * Transactional sequence updates arrive as part of a remote transaction;
+ * non-transactional ones arrive on their own, with no transaction in progress.
+ */
+ Assert((!seq.transactional) || in_remote_transaction);
+ Assert(!(!seq.transactional && in_remote_transaction));
+ Assert(!(!seq.transactional && IsTransactionState()));
+
+ /*
+ * Make sure we're in a transaction (ResetSequence2 needs one). For
+ * non-transactional updates no transaction is in progress, so start a
+ * new one here and commit it at the end.
+ */
+ if (!IsTransactionState())
+ {
+ StartTransactionCommand();
+ maybe_reread_subscription();
+ }
+
+ relid = RangeVarGetRelid(makeRangeVar(seq.nspname,
+ seq.seqname, -1),
+ RowExclusiveLock, false);
+
+ /* lock the sequence in AccessExclusiveLock, as expected by ResetSequence2 */
+ elog(WARNING, "locking sequence %d in exclusive mode", relid);
+ LockRelationOid(relid, AccessExclusiveLock);
+
+ elog(WARNING, "applying sequence %s.%s transactional %d last_value %ld log_cnt %ld is_called %d",
+ seq.nspname, seq.seqname, seq.transactional, seq.last_value, seq.log_cnt, seq.is_called);
+
+ /* apply the sequence change */
+ ResetSequence2(relid, seq.last_value, seq.log_cnt, seq.is_called);
+
+ /*
+ * Commit the transaction started above (we only do this when not in a
+ * remote transaction, i.e. for non-transactional sequence updates).
+ */
+ if (!in_remote_transaction)
+ CommitTransactionCommand();
+}
+
/*
* Handle STREAM START message.
*/
@@ -2374,6 +2430,10 @@ apply_dispatch(StringInfo s)
*/
break;
+ case LOGICAL_REP_MSG_SEQUENCE:
+ apply_handle_sequence(s);
+ return;
+
case LOGICAL_REP_MSG_STREAM_START:
apply_handle_stream_start(s);
break;
diff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c
index 6f6a203dea7..9ab8b1d4ec4 100644
--- a/src/backend/replication/pgoutput/pgoutput.c
+++ b/src/backend/replication/pgoutput/pgoutput.c
@@ -49,6 +49,10 @@ static void pgoutput_message(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn, XLogRecPtr message_lsn,
bool transactional, const char *prefix,
Size sz, const char *message);
+static void pgoutput_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation rel, bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called);
static bool pgoutput_origin_filter(LogicalDecodingContext *ctx,
RepOriginId origin_id);
static void pgoutput_begin_prepare_txn(LogicalDecodingContext *ctx,
@@ -159,6 +163,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->change_cb = pgoutput_change;
cb->truncate_cb = pgoutput_truncate;
cb->message_cb = pgoutput_message;
+ cb->sequence_cb = pgoutput_sequence;
cb->commit_cb = pgoutput_commit_txn;
cb->begin_prepare_cb = pgoutput_begin_prepare_txn;
@@ -175,6 +180,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->stream_commit_cb = pgoutput_stream_commit;
cb->stream_change_cb = pgoutput_change;
cb->stream_message_cb = pgoutput_message;
+ cb->stream_sequence_cb = pgoutput_sequence;
cb->stream_truncate_cb = pgoutput_truncate;
/* transaction streaming - two-phase commit */
cb->stream_prepare_cb = pgoutput_stream_prepare_txn;
@@ -188,6 +194,7 @@ parse_output_parameters(List *options, PGOutputData *data)
bool publication_names_given = false;
bool binary_option_given = false;
bool messages_option_given = false;
+ bool sequences_option_given = false;
bool streaming_given = false;
bool two_phase_option_given = false;
@@ -195,6 +202,7 @@ parse_output_parameters(List *options, PGOutputData *data)
data->streaming = false;
data->messages = false;
data->two_phase = false;
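+ /* unlike the options above, sequence decoding is enabled by default */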
+ data->sequences = true;
foreach(lc, options)
{
@@ -260,6 +268,16 @@ parse_output_parameters(List *options, PGOutputData *data)
data->messages = defGetBoolean(defel);
}
+ else if (strcmp(defel->defname, "sequences") == 0)
+ {
+ if (sequences_option_given)
+ ereport(ERROR,
+ (errcode(ERRCODE_SYNTAX_ERROR),
+ errmsg("conflicting or redundant options")));
+ sequences_option_given = true;
+
+ data->sequences = defGetBoolean(defel);
+ }
else if (strcmp(defel->defname, "streaming") == 0)
{
if (streaming_given)
@@ -856,6 +874,51 @@ pgoutput_message(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
OutputPluginWrite(ctx, true);
}
+static void
+pgoutput_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation rel, bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ PGOutputData *data = (PGOutputData *) ctx->output_plugin_private;
+ TransactionId xid = InvalidTransactionId;
+ RelationSyncEntry *relentry;
+
+ if (!data->sequences)
+ return;
+
+ if (!is_publishable_relation(rel))
+ return;
+
+ /*
+ * Remember the xid for the message in streaming mode. See
+ * pgoutput_change.
+ */
+ if (in_streaming)
+ xid = txn->xid;
+
+ relentry = get_rel_sync_entry(data, RelationGetRelid(rel));
+
+ /*
+ * First check the sequence filter.
+ *
+ * We handle just REORDER_BUFFER_CHANGE_SEQUENCE here.
+ */
+ if (!relentry->pubactions.pubsequence)
+ return;
+
+ OutputPluginPrepareWrite(ctx, true);
+ logicalrep_write_sequence(ctx->out,
+ rel,
+ xid,
+ sequence_lsn,
+ transactional,
+ last_value,
+ log_cnt,
+ is_called);
+ OutputPluginWrite(ctx, true);
+}
+
/*
* Currently we always forward.
*/
@@ -1140,7 +1203,8 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
entry->streamed_txns = NIL;
entry->replicate_valid = false;
entry->pubactions.pubinsert = entry->pubactions.pubupdate =
- entry->pubactions.pubdelete = entry->pubactions.pubtruncate = false;
+ entry->pubactions.pubdelete = entry->pubactions.pubtruncate =
+ entry->pubactions.pubsequence = false;
entry->publish_as_relid = InvalidOid;
entry->map = NULL; /* will be set by maybe_send_schema() if
* needed */
@@ -1160,6 +1224,7 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
List *schemaPubids = GetSchemaPublications(schemaId);
ListCell *lc;
Oid publish_as_relid = relid;
+ bool is_sequence = (get_rel_relkind(relid) == RELKIND_SEQUENCE);
/* Reload publications if needed before use. */
if (!publications_valid)
@@ -1183,12 +1248,23 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
Publication *pub = lfirst(lc);
bool publish = false;
- if (pub->alltables)
+ if (pub->alltables && (!is_sequence))
{
publish = true;
if (pub->pubviaroot && am_partition)
publish_as_relid = llast_oid(get_partition_ancestors(relid));
}
+ else if (pub->allsequences && is_sequence)
+ {
+ publish = true;
+ }
+
+ /* if a sequence, just cross-check the list of publications */
+ if (!publish && is_sequence)
+ {
+ if (list_member_oid(pubids, pub->oid))
+ publish = true;
+ }
if (!publish)
{
@@ -1243,10 +1319,12 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
entry->pubactions.pubupdate |= pub->pubactions.pubupdate;
entry->pubactions.pubdelete |= pub->pubactions.pubdelete;
entry->pubactions.pubtruncate |= pub->pubactions.pubtruncate;
+ entry->pubactions.pubsequence |= pub->pubactions.pubsequence;
}
if (entry->pubactions.pubinsert && entry->pubactions.pubupdate &&
- entry->pubactions.pubdelete && entry->pubactions.pubtruncate)
+ entry->pubactions.pubdelete && entry->pubactions.pubtruncate &&
+ entry->pubactions.pubsequence)
break;
}
@@ -1391,6 +1469,7 @@ rel_sync_cache_publication_cb(Datum arg, int cacheid, uint32 hashvalue)
entry->pubactions.pubupdate = false;
entry->pubactions.pubdelete = false;
entry->pubactions.pubtruncate = false;
+ entry->pubactions.pubsequence = false;
}
}
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index 105d8d4601c..1f8634b50a1 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -5585,6 +5585,7 @@ GetRelationPublicationActions(Relation relation)
pubactions->pubupdate |= pubform->pubupdate;
pubactions->pubdelete |= pubform->pubdelete;
pubactions->pubtruncate |= pubform->pubtruncate;
+ pubactions->pubsequence |= pubform->pubsequence;
ReleaseSysCache(tup);
@@ -5593,7 +5594,8 @@ GetRelationPublicationActions(Relation relation)
* other publications.
*/
if (pubactions->pubinsert && pubactions->pubupdate &&
- pubactions->pubdelete && pubactions->pubtruncate)
+ pubactions->pubdelete && pubactions->pubtruncate &&
+ pubactions->pubsequence)
break;
}
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 2f412ca3db3..e30bf7b1b55 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1647,13 +1647,13 @@ psql_completion(const char *text, int start, int end)
COMPLETE_WITH("ADD", "DROP", "OWNER TO", "RENAME TO", "SET");
/* ALTER PUBLICATION <name> ADD */
else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD"))
- COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE");
+ COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE|SEQUENCE");
/* ALTER PUBLICATION <name> DROP */
else if (Matches("ALTER", "PUBLICATION", MatchAny, "DROP"))
- COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE");
+ COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE|SEQUENCE");
/* ALTER PUBLICATION <name> SET */
else if (Matches("ALTER", "PUBLICATION", MatchAny, "SET"))
- COMPLETE_WITH("(", "ALL TABLES IN SCHEMA", "TABLE");
+ COMPLETE_WITH("(", "ALL TABLES IN SCHEMA", "TABLE|SEQUENCE");
else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD|DROP|SET", "ALL", "TABLES", "IN", "SCHEMA"))
COMPLETE_WITH_QUERY(Query_for_list_of_schemas
" AND nspname != 'pg_catalog' "
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 79d787cd26a..cd929f6e0e2 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -11531,6 +11531,11 @@
provolatile => 's', prorettype => 'oid', proargtypes => 'text',
proallargtypes => '{text,oid}', proargmodes => '{i,o}',
proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_tables' },
+{ oid => '8000', descr => 'get OIDs of sequences in a publication',
+ proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+ provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+ proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+ proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
{ oid => '6121',
descr => 'returns whether a relation can be part of a publication',
proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 902f2f2f0da..b44bb0e5e06 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
*/
bool puballtables;
+ /*
+ * indicates that this is special publication which should encompass all
+ * sequences in the database (except for the unlogged and temp ones)
+ */
+ bool puballsequences;
+
/* true if inserts are published */
bool pubinsert;
@@ -52,6 +58,9 @@ CATALOG(pg_publication,6104,PublicationRelationId)
/* true if truncates are published */
bool pubtruncate;
+ /* true if sequences are published */
+ bool pubsequence;
+
/* true if partition changes are published using root schema */
bool pubviaroot;
} FormData_pg_publication;
@@ -72,6 +81,7 @@ typedef struct PublicationActions
bool pubupdate;
bool pubdelete;
bool pubtruncate;
+ bool pubsequence;
} PublicationActions;
typedef struct Publication
@@ -79,6 +89,7 @@ typedef struct Publication
Oid oid;
char *name;
bool alltables;
+ bool allsequences;
bool pubviaroot;
PublicationActions pubactions;
} Publication;
@@ -121,6 +132,9 @@ extern List *GetPubPartitionOptionRelations(List *result,
PublicationPartOpt pub_partopt,
Oid relid);
+extern List *GetAllSequencesPublicationRelations(void);
+extern List *GetPublicationSequenceRelations(Oid pubid);
+
extern bool is_publishable_relation(Relation rel);
extern bool is_schema_publication(Oid pubid);
extern ObjectAddress publication_add_relation(Oid pubid, PublicationRelInfo *targetrel,
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 5919fb90ee1..c28e8695cb8 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -60,6 +60,7 @@ extern ObjectAddress DefineSequence(ParseState *pstate, CreateSeqStmt *stmt);
extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
extern void DeleteSequenceTuple(Oid relid);
extern void ResetSequence(Oid seq_relid);
+extern void ResetSequence2(Oid seq_relid, int64 last_value, int64 log_cnt, bool is_called);
extern void ResetSequenceCaches(void);
extern void seq_redo(XLogReaderState *rptr);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 4c5a8a39bf1..d32ba498a7c 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -3653,6 +3653,10 @@ typedef enum PublicationObjSpecType
PUBLICATIONOBJ_TABLE_IN_SCHEMA, /* Tables in schema type */
PUBLICATIONOBJ_TABLE_IN_CUR_SCHEMA, /* Get the first element from
* search_path */
+ PUBLICATIONOBJ_SEQUENCE, /* Sequence type */
+ PUBLICATIONOBJ_SEQUENCE_IN_SCHEMA, /* Sequences in schema type */
+ PUBLICATIONOBJ_SEQUENCE_IN_CUR_SCHEMA, /* Get the first element from
+ * search_path */
PUBLICATIONOBJ_CONTINUATION /* Continuation of previous type */
} PublicationObjSpecType;
@@ -3672,6 +3676,7 @@ typedef struct CreatePublicationStmt
List *options; /* List of DefElem nodes */
List *pubobjects; /* Optional list of publication objects */
bool for_all_tables; /* Special publication for all tables in db */
+ bool for_all_sequences; /* Special publication for all sequences in db */
} CreatePublicationStmt;
typedef struct AlterPublicationStmt
@@ -3688,6 +3693,7 @@ typedef struct AlterPublicationStmt
*/
List *pubobjects; /* Optional list of publication objects */
bool for_all_tables; /* Special publication for all tables in db */
+ bool for_all_sequences; /* Special publication for all sequences in db */
DefElemAction action; /* What action to perform with the
* tables/schemas */
} AlterPublicationStmt;
diff --git a/src/include/replication/logicalproto.h b/src/include/replication/logicalproto.h
index 83741dcf424..fc1782c125f 100644
--- a/src/include/replication/logicalproto.h
+++ b/src/include/replication/logicalproto.h
@@ -60,6 +60,7 @@ typedef enum LogicalRepMsgType
LOGICAL_REP_MSG_RELATION = 'R',
LOGICAL_REP_MSG_TYPE = 'Y',
LOGICAL_REP_MSG_MESSAGE = 'M',
+ LOGICAL_REP_MSG_SEQUENCE = 'X', /* FIXME change */
LOGICAL_REP_MSG_BEGIN_PREPARE = 'b',
LOGICAL_REP_MSG_PREPARE = 'P',
LOGICAL_REP_MSG_COMMIT_PREPARED = 'K',
@@ -117,6 +118,18 @@ typedef struct LogicalRepTyp
char *typname; /* name of the remote type */
} LogicalRepTyp;
+/* Sequence info */
+typedef struct LogicalRepSequence
+{
+ Oid remoteid; /* unique id of the remote sequence */
+ char *nspname; /* schema name of remote sequence */
+ char *seqname; /* name of the remote sequence */
+ bool transactional;
+ int64 last_value;
+ int64 log_cnt;
+ bool is_called;
+} LogicalRepSequence;
+
/* Transaction info */
typedef struct LogicalRepBeginData
{
@@ -227,6 +240,12 @@ extern List *logicalrep_read_truncate(StringInfo in,
bool *cascade, bool *restart_seqs);
extern void logicalrep_write_message(StringInfo out, TransactionId xid, XLogRecPtr lsn,
bool transactional, const char *prefix, Size sz, const char *message);
+extern void logicalrep_write_sequence(StringInfo out, Relation rel,
+ TransactionId xid, XLogRecPtr lsn,
+ bool transactional,
+ int64 last_value, int64 log_cnt,
+ bool is_called);
+extern void logicalrep_read_sequence(StringInfo in, LogicalRepSequence *seqdata);
extern void logicalrep_write_rel(StringInfo out, TransactionId xid,
Relation rel);
extern LogicalRepRelation *logicalrep_read_rel(StringInfo in);
diff --git a/src/include/replication/pgoutput.h b/src/include/replication/pgoutput.h
index 0dc460fb70a..97eb2a7ef15 100644
--- a/src/include/replication/pgoutput.h
+++ b/src/include/replication/pgoutput.h
@@ -28,6 +28,7 @@ typedef struct PGOutputData
bool streaming;
bool messages;
bool two_phase;
+ bool sequences;
} PGOutputData;
#endif /* PGOUTPUT_H */
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index b58b062b10d..6eb4f110038 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1451,6 +1451,14 @@ pg_prepared_xacts| SELECT p.transaction,
FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+ n.nspname AS schemaname,
+ c.relname AS sequencename
+ FROM pg_publication p,
+ LATERAL pg_get_publication_sequences((p.pubname)::text) gpt(relid),
+ (pg_class c
+ JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+ WHERE (c.oid = gpt.relid);
pg_publication_tables| SELECT p.pubname,
n.nspname AS schemaname,
c.relname AS tablename
diff --git a/src/test/subscription/t/027_sequences.pl b/src/test/subscription/t/027_sequences.pl
new file mode 100644
index 00000000000..fd941619071
--- /dev/null
+++ b/src/test/subscription/t/027_sequences.pl
@@ -0,0 +1,196 @@
+
+# Copyright (c) 2021, PostgreSQL Global Development Group
+
+# This tests that sequences are replicated correctly by logical replication
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More tests => 6;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->start;
+
+# Create subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Create some preexisting content on publisher
+my $ddl = qq(
+ CREATE SEQUENCE s;
+);
+
+# Setup structure on the publisher
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Create the same structure on the subscriber, plus an extra sequence that
+# we'll create on the publisher later
+$ddl = qq(
+ CREATE SEQUENCE s;
+ CREATE SEQUENCE s2;
+);
+
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Setup logical replication
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+ "CREATE PUBLICATION seq_pub");
+
+$node_publisher->safe_psql('postgres',
+ "ALTER PUBLICATION seq_pub ADD SEQUENCE s");
+
+$node_subscriber->safe_psql('postgres',
+ "CREATE SUBSCRIPTION seq_sub CONNECTION '$publisher_connstr' PUBLICATION seq_pub WITH (slot_name = seq_sub_slot)"
+);
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Wait for initial sync to finish as well
+my $synced_query =
+ "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('s', 'r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+ or die "Timed out while waiting for subscriber to synchronize data";
+
+# Insert initial test data
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ -- generate a number of values using the sequence
+ SELECT nextval('s') FROM generate_series(1,100);
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+my $result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s;
+));
+
+is( $result, '132|0|t',
+ 'check replicated sequence values on subscriber');
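+
+# Note: the replicated state (132) is ahead of the last value returned on the
+# publisher (100), because nextval() WAL-logs 32 values in advance
+# (SEQ_LOG_VALS) and the decoded state comes from those WAL records.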
+
+
+# advance the sequence in a rolled-back transaction - the increments are
+# non-transactional, so they are still replicated
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ SELECT nextval('s') FROM generate_series(1,100);
+ ROLLBACK;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s;
+));
+
+is( $result, '231|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+# create a new sequence and roll it back - should not be replicated, due to
+# the transactional behavior
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ CREATE SEQUENCE s2;
+ ALTER PUBLICATION seq_pub ADD SEQUENCE s2;
+ SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '1|0|f',
+ 'check replicated sequence values on subscriber');
+
+
+# create a new sequence, advance it in a rolled-back transaction, but commit
+# the create - the advance should be replicated nevertheless
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ CREATE SEQUENCE s2;
+ ALTER PUBLICATION seq_pub ADD SEQUENCE s2;
+ SAVEPOINT sp1;
+ SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK TO sp1;
+ COMMIT;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Wait for sync of the second sequence we just added to finish
+$synced_query =
+ "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('s', 'r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+ or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '132|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+# advance the new sequence in a transaction, and roll it back - in this case
+# it should not be replicated at commit
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '132|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+# advance the sequence in a subtransaction - the subtransaction gets rolled
+# back, but commit the main one - the changes should still be replicated
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ SAVEPOINT s1;
+ SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK TO s1;
+ COMMIT;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '330|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+$node_subscriber->stop('fast');
+$node_publisher->stop('fast');
--
2.31.1
Attachment: 0002-Add-support-for-decoding-sequences-to-test_-20211214.patch (text/x-patch)
From 71e95aa7ea701e305c2a95516b6b37c22e0b5e7b Mon Sep 17 00:00:00 2001
From: Tomas Vondra <tomas.vondra@postgresql.org>
Date: Fri, 24 Sep 2021 00:42:04 +0200
Subject: [PATCH 2/3] Add support for decoding sequences to test_decoding
---
contrib/test_decoding/Makefile | 3 +-
contrib/test_decoding/expected/sequence.out | 327 ++++++++++++++++++++
contrib/test_decoding/sql/sequence.sql | 119 +++++++
contrib/test_decoding/test_decoding.c | 69 +++++
4 files changed, 517 insertions(+), 1 deletion(-)
create mode 100644 contrib/test_decoding/expected/sequence.out
create mode 100644 contrib/test_decoding/sql/sequence.sql
diff --git a/contrib/test_decoding/Makefile b/contrib/test_decoding/Makefile
index 9a31e0b8795..56ddc3abaeb 100644
--- a/contrib/test_decoding/Makefile
+++ b/contrib/test_decoding/Makefile
@@ -5,7 +5,8 @@ PGFILEDESC = "test_decoding - example of a logical decoding output plugin"
REGRESS = ddl xact rewrite toast permissions decoding_in_xact \
decoding_into_rel binary prepared replorigin time messages \
- spill slot truncate stream stats twophase twophase_stream
+ spill slot truncate stream stats twophase twophase_stream \
+ sequence
ISOLATION = mxact delayed_startup ondisk_startup concurrent_ddl_dml \
oldest_xmin snapshot_transfer subxact_without_top concurrent_stream \
twophase_snapshot
diff --git a/contrib/test_decoding/expected/sequence.out b/contrib/test_decoding/expected/sequence.out
new file mode 100644
index 00000000000..c3c2a18a35b
--- /dev/null
+++ b/contrib/test_decoding/expected/sequence.out
@@ -0,0 +1,327 @@
+-- predictability
+SET synchronous_commit = on;
+SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'test_decoding');
+ ?column?
+----------
+ init
+(1 row)
+
+CREATE SEQUENCE test_sequence;
+-- test the sequence changes by several nextval() calls
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 1
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 2
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 4
+(1 row)
+
+-- test the sequence changes by several ALTER commands
+ALTER SEQUENCE test_sequence INCREMENT BY 10;
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 14
+(1 row)
+
+ALTER SEQUENCE test_sequence START WITH 3000;
+ALTER SEQUENCE test_sequence MAXVALUE 10000;
+ALTER SEQUENCE test_sequence RESTART WITH 4000;
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 4000
+(1 row)
+
+-- test the sequence changes by several setval() calls
+SELECT setval('test_sequence', 3500);
+ setval
+--------
+ 3500
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3510
+(1 row)
+
+SELECT setval('test_sequence', 3500, true);
+ setval
+--------
+ 3500
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3510
+(1 row)
+
+SELECT setval('test_sequence', 3500, false);
+ setval
+--------
+ 3500
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3500
+(1 row)
+
+-- show results and drop sequence
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+ data
+--------------------------------------------------------------------------------------
+ BEGIN
+ sequence public.test_sequence: transactional:1 last_value:1 log_cnt:0 is_called:0
+ COMMIT
+ sequence public.test_sequence: transactional:0 last_value:33 log_cnt:0 is_called:1
+ BEGIN
+ sequence public.test_sequence: transactional:1 last_value:4 log_cnt:0 is_called:1
+ COMMIT
+ sequence public.test_sequence: transactional:0 last_value:334 log_cnt:0 is_called:1
+ BEGIN
+ sequence public.test_sequence: transactional:1 last_value:14 log_cnt:32 is_called:1
+ COMMIT
+ BEGIN
+ sequence public.test_sequence: transactional:1 last_value:14 log_cnt:0 is_called:1
+ COMMIT
+ BEGIN
+ sequence public.test_sequence: transactional:1 last_value:4000 log_cnt:0 is_called:0
+ COMMIT
+ sequence public.test_sequence: transactional:0 last_value:4320 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:0 last_value:3500 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:0 last_value:3830 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:0 last_value:3500 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:0 last_value:3830 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:0 last_value:3500 log_cnt:0 is_called:0
+ sequence public.test_sequence: transactional:0 last_value:3820 log_cnt:0 is_called:1
+(24 rows)
+
+DROP SEQUENCE test_sequence;
+-- rollback on sequence creation and update
+BEGIN;
+CREATE SEQUENCE test_sequence;
+CREATE TABLE test_table (a INT);
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 1
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 2
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3
+(1 row)
+
+SELECT setval('test_sequence', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3001
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3002
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3003
+(1 row)
+
+ALTER SEQUENCE test_sequence RESTART WITH 6000;
+INSERT INTO test_table VALUES( (SELECT nextval('test_sequence')) );
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 6001
+(1 row)
+
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '0');
+ data
+------
+(0 rows)
+
+-- rollback on table creation with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '0');
+ data
+------
+(0 rows)
+
+-- rollback on table with serial column
+CREATE TABLE test_table (a SERIAL, b INT);
+BEGIN;
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+-- check table and sequence values after rollback
+SELECT * from test_table_a_seq;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+ 3003 | 30 | t
+(1 row)
+
+SELECT nextval('test_table_a_seq');
+ nextval
+---------
+ 3004
+(1 row)
+
+DROP TABLE test_table;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+ data
+-----------------------------------------------------------------------------------------
+ BEGIN
+ sequence public.test_table_a_seq: transactional:1 last_value:1 log_cnt:0 is_called:0
+ COMMIT
+ sequence public.test_table_a_seq: transactional:0 last_value:33 log_cnt:0 is_called:1
+ sequence public.test_table_a_seq: transactional:0 last_value:3000 log_cnt:0 is_called:1
+ sequence public.test_table_a_seq: transactional:0 last_value:3033 log_cnt:0 is_called:1
+ BEGIN
+ COMMIT
+(8 rows)
+
+-- savepoint test on table with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+SAVEPOINT a;
+INSERT INTO test_table (b) VALUES (300);
+ROLLBACK TO SAVEPOINT a;
+DROP TABLE test_table;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+ data
+---------------------------------------------------------------------------------------
+ BEGIN
+ sequence public.test_table_a_seq: transactional:1 last_value:1 log_cnt:0 is_called:0
+ sequence public.test_table_a_seq: transactional:1 last_value:33 log_cnt:0 is_called:1
+ table public.test_table: INSERT: a[integer]:1 b[integer]:100
+ table public.test_table: INSERT: a[integer]:2 b[integer]:200
+ COMMIT
+(6 rows)
+
+-- savepoint test on sequence
+BEGIN;
+CREATE SEQUENCE test_sequence;
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 1
+(1 row)
+
+SELECT setval('test_sequence', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3001
+(1 row)
+
+SAVEPOINT a;
+ALTER SEQUENCE test_sequence START WITH 7000;
+SELECT setval('test_sequence', 5000);
+ setval
+--------
+ 5000
+(1 row)
+
+ROLLBACK TO SAVEPOINT a;
+SELECT * FROM test_sequence;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+ 3001 | 32 | t
+(1 row)
+
+DROP SEQUENCE test_sequence;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+ data
+--------------------------------------------------------------------------------------
+ BEGIN
+ sequence public.test_sequence: transactional:1 last_value:1 log_cnt:0 is_called:0
+ sequence public.test_sequence: transactional:1 last_value:33 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:1 last_value:3000 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:1 last_value:3033 log_cnt:0 is_called:1
+ COMMIT
+(6 rows)
+
+SELECT pg_drop_replication_slot('regression_slot');
+ pg_drop_replication_slot
+--------------------------
+
+(1 row)
+
diff --git a/contrib/test_decoding/sql/sequence.sql b/contrib/test_decoding/sql/sequence.sql
new file mode 100644
index 00000000000..21c4b792224
--- /dev/null
+++ b/contrib/test_decoding/sql/sequence.sql
@@ -0,0 +1,119 @@
+-- predictability
+SET synchronous_commit = on;
+SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'test_decoding');
+
+CREATE SEQUENCE test_sequence;
+
+-- test the sequence changes by several nextval() calls
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+
+-- test the sequence changes by several ALTER commands
+ALTER SEQUENCE test_sequence INCREMENT BY 10;
+SELECT nextval('test_sequence');
+
+ALTER SEQUENCE test_sequence START WITH 3000;
+ALTER SEQUENCE test_sequence MAXVALUE 10000;
+ALTER SEQUENCE test_sequence RESTART WITH 4000;
+SELECT nextval('test_sequence');
+
+-- test the sequence changes by several setval() calls
+SELECT setval('test_sequence', 3500);
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3500, true);
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3500, false);
+SELECT nextval('test_sequence');
+
+-- show results and drop sequence
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+DROP SEQUENCE test_sequence;
+
+-- rollback on sequence creation and update
+BEGIN;
+CREATE SEQUENCE test_sequence;
+CREATE TABLE test_table (a INT);
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3000);
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+ALTER SEQUENCE test_sequence RESTART WITH 6000;
+INSERT INTO test_table VALUES( (SELECT nextval('test_sequence')) );
+SELECT nextval('test_sequence');
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '0');
+
+-- rollback on table creation with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '0');
+
+-- rollback on table with serial column
+CREATE TABLE test_table (a SERIAL, b INT);
+
+BEGIN;
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+
+-- check table and sequence values after rollback
+SELECT * from test_table_a_seq;
+SELECT nextval('test_table_a_seq');
+
+DROP TABLE test_table;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+
+-- savepoint test on table with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+SAVEPOINT a;
+INSERT INTO test_table (b) VALUES (300);
+ROLLBACK TO SAVEPOINT a;
+DROP TABLE test_table;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+
+-- savepoint test on sequence
+BEGIN;
+CREATE SEQUENCE test_sequence;
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3000);
+SELECT nextval('test_sequence');
+SAVEPOINT a;
+ALTER SEQUENCE test_sequence START WITH 7000;
+SELECT setval('test_sequence', 5000);
+ROLLBACK TO SAVEPOINT a;
+SELECT * FROM test_sequence;
+DROP SEQUENCE test_sequence;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+
+SELECT pg_drop_replication_slot('regression_slot');
diff --git a/contrib/test_decoding/test_decoding.c b/contrib/test_decoding/test_decoding.c
index e5cd84e85e4..bc3250b8f4a 100644
--- a/contrib/test_decoding/test_decoding.c
+++ b/contrib/test_decoding/test_decoding.c
@@ -35,6 +35,7 @@ typedef struct
bool include_timestamp;
bool skip_empty_xacts;
bool only_local;
+ bool skip_sequences;
} TestDecodingData;
/*
@@ -76,6 +77,10 @@ static void pg_decode_message(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn, XLogRecPtr message_lsn,
bool transactional, const char *prefix,
Size sz, const char *message);
+static void pg_decode_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation rel, bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called);
static bool pg_decode_filter_prepare(LogicalDecodingContext *ctx,
TransactionId xid,
const char *gid);
@@ -116,6 +121,10 @@ static void pg_decode_stream_message(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn, XLogRecPtr message_lsn,
bool transactional, const char *prefix,
Size sz, const char *message);
+static void pg_decode_stream_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation rel, bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called);
static void pg_decode_stream_truncate(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn,
int nrelations, Relation relations[],
@@ -141,6 +150,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->filter_by_origin_cb = pg_decode_filter;
cb->shutdown_cb = pg_decode_shutdown;
cb->message_cb = pg_decode_message;
+ cb->sequence_cb = pg_decode_sequence;
cb->filter_prepare_cb = pg_decode_filter_prepare;
cb->begin_prepare_cb = pg_decode_begin_prepare_txn;
cb->prepare_cb = pg_decode_prepare_txn;
@@ -153,6 +163,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->stream_commit_cb = pg_decode_stream_commit;
cb->stream_change_cb = pg_decode_stream_change;
cb->stream_message_cb = pg_decode_stream_message;
+ cb->stream_sequence_cb = pg_decode_stream_sequence;
cb->stream_truncate_cb = pg_decode_stream_truncate;
}
@@ -175,6 +186,9 @@ pg_decode_startup(LogicalDecodingContext *ctx, OutputPluginOptions *opt,
data->skip_empty_xacts = false;
data->only_local = false;
+ /* skip sequences by default for backwards compatibility */
+ data->skip_sequences = true;
+
ctx->output_plugin_private = data;
opt->output_type = OUTPUT_PLUGIN_TEXTUAL_OUTPUT;
@@ -265,6 +279,17 @@ pg_decode_startup(LogicalDecodingContext *ctx, OutputPluginOptions *opt,
errmsg("could not parse value \"%s\" for parameter \"%s\"",
strVal(elem->arg), elem->defname)));
}
+ else if (strcmp(elem->defname, "skip-sequences") == 0)
+ {
+
+ if (elem->arg == NULL)
+ continue; /* true by default */
+ else if (!parse_bool(strVal(elem->arg), &data->skip_sequences))
+ ereport(ERROR,
+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ errmsg("could not parse value \"%s\" for parameter \"%s\"",
+ strVal(elem->arg), elem->defname)));
+ }
else
{
ereport(ERROR,
@@ -744,6 +769,28 @@ pg_decode_message(LogicalDecodingContext *ctx,
OutputPluginWrite(ctx, true);
}
+static void
+pg_decode_sequence(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ TestDecodingData *data = ctx->output_plugin_private;
+
+ /* do nothing if asked to skip sequences */
+ if (data->skip_sequences)
+ return;
+
+ OutputPluginPrepareWrite(ctx, true);
+ appendStringInfoString(ctx->out, "sequence ");
+ appendStringInfoString(ctx->out,
+ quote_qualified_identifier(get_namespace_name(get_rel_namespace(RelationGetRelid(rel))),
+ RelationGetRelationName(rel)));
+ appendStringInfo(ctx->out, ": transactional:%d last_value:" INT64_FORMAT " log_cnt:" INT64_FORMAT " is_called:%d",
+ transactional, last_value, log_cnt, is_called);
+ OutputPluginWrite(ctx, true);
+}
+
static void
pg_decode_stream_start(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn)
@@ -943,6 +990,28 @@ pg_decode_stream_message(LogicalDecodingContext *ctx,
OutputPluginWrite(ctx, true);
}
+static void
+pg_decode_stream_sequence(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ TestDecodingData *data = ctx->output_plugin_private;
+
+ /* do nothing if asked to skip sequences */
+ if (data->skip_sequences)
+ return;
+
+ OutputPluginPrepareWrite(ctx, true);
+ appendStringInfoString(ctx->out, "streaming sequence ");
+ appendStringInfoString(ctx->out,
+ quote_qualified_identifier(get_namespace_name(get_rel_namespace(RelationGetRelid(rel))),
+ RelationGetRelationName(rel)));
+ appendStringInfo(ctx->out, ": transactional:%d last_value:" INT64_FORMAT " log_cnt:" INT64_FORMAT " is_called:%d",
+ transactional, last_value, log_cnt, is_called);
+ OutputPluginWrite(ctx, true);
+}
+
/*
* In streaming mode, we don't display the detailed information of Truncate.
* See pg_decode_stream_change.
--
2.31.1
Attachment: 0001-Logical-decoding-of-sequences-20211214.patch (text/x-patch)
From 58bf0d2abb4fa45bdf67c271cbd43f6afb354342 Mon Sep 17 00:00:00 2001
From: Tomas Vondra <tomas.vondra@postgresql.org>
Date: Fri, 24 Sep 2021 00:41:33 +0200
Subject: [PATCH 1/3] Logical decoding of sequences
This adds the infrastructure for logical decoding of sequences. Support
for built-in logical replication and test_decoding is added separately.
When decoding sequences, we differentiate between sequences created in
a (running) transaction and sequences created in other (already
committed) transactions. Changes for sequences created in the same
top-level transaction are treated as "transactional", i.e. just like any
other change from that transaction (and discarded in case of a
rollback). Changes for sequences created earlier are treated as
non-transactional - they are processed immediately, as if performed
outside any transaction (and thus not rolled back).
This mixed behavior is necessary - sequences are non-transactional (e.g.
ROLLBACK does not undo the sequence increments). But for new sequences,
we need to handle them in a transactional way, because if we ever get
some DDL support, the sequence won't exist until the transaction gets
applied. So we need to ensure the increments don't happen until the
sequence gets created.
To differentiate which sequences are "old" and which were created in a
still-running transaction, we track sequences created in running
transactions in a hash table. Each sequence is identified by its
relfilenode, and at transaction commit we discard all sequences created
in that particular transaction. For each sequence we track the XID of
the (sub)transaction that created it, and we clean up the sequences for
each subtransaction when it completes (commit/rollback).
We don't use the XID to check if it's the same top-level transaction.
It's enough to know it was created in an in-progress transaction, and we
know it must be the current one because otherwise it wouldn't see the
sequence object.
The main changes in this patch are:
1) ensure WAL-logging of all necessary info for sequence advances
We need to be able to associate the advance with an XID, but until now
a sequence advance might have XID 0 if it was the first thing the
transaction did. So we ensure the transaction has an XID assigned.
Note: This is needed because of subxacts. Even with XID 0 the sequence
might have been created in a different subxact of the same top-level xact.
Then there's a "created" flag added to the xl_seq_rec, which is
necessary to differentiate WAL for created/reset sequences.
2) decoding / queueing of sequences into ReorderBuffer
This is mostly copy-paste of existing code for other decoded events.
3) tracking sequences created in in-progress transactions
We use a simple hash table, indexed by relfilenode.
---
doc/src/sgml/logicaldecoding.sgml | 62 ++-
src/backend/commands/sequence.c | 32 ++
src/backend/replication/logical/decode.c | 131 +++++-
src/backend/replication/logical/logical.c | 88 ++++
.../replication/logical/reorderbuffer.c | 400 ++++++++++++++++++
src/include/commands/sequence.h | 1 +
src/include/replication/output_plugin.h | 27 ++
src/include/replication/reorderbuffer.h | 43 +-
8 files changed, 778 insertions(+), 6 deletions(-)
diff --git a/doc/src/sgml/logicaldecoding.sgml b/doc/src/sgml/logicaldecoding.sgml
index b6353c7a125..60a2b7d1ccb 100644
--- a/doc/src/sgml/logicaldecoding.sgml
+++ b/doc/src/sgml/logicaldecoding.sgml
@@ -458,6 +458,7 @@ typedef struct OutputPluginCallbacks
LogicalDecodeTruncateCB truncate_cb;
LogicalDecodeCommitCB commit_cb;
LogicalDecodeMessageCB message_cb;
+ LogicalDecodeSequenceCB sequence_cb;
LogicalDecodeFilterByOriginCB filter_by_origin_cb;
LogicalDecodeShutdownCB shutdown_cb;
LogicalDecodeFilterPrepareCB filter_prepare_cb;
@@ -472,6 +473,7 @@ typedef struct OutputPluginCallbacks
LogicalDecodeStreamCommitCB stream_commit_cb;
LogicalDecodeStreamChangeCB stream_change_cb;
LogicalDecodeStreamMessageCB stream_message_cb;
+ LogicalDecodeStreamSequenceCB stream_sequence_cb;
LogicalDecodeStreamTruncateCB stream_truncate_cb;
} OutputPluginCallbacks;
@@ -481,9 +483,12 @@ typedef void (*LogicalOutputPluginInit) (struct OutputPluginCallbacks *cb);
and <function>commit_cb</function> callbacks are required,
while <function>startup_cb</function>,
<function>filter_by_origin_cb</function>, <function>truncate_cb</function>,
+ <function>sequence_cb</function>,
and <function>shutdown_cb</function> are optional.
If <function>truncate_cb</function> is not set but a
<command>TRUNCATE</command> is to be decoded, the action will be ignored.
+ Similarly, if <function>sequence_cb</function> is not set and a sequence
+ change is to be decoded, the action will be ignored.
</para>
<para>
@@ -492,7 +497,8 @@ typedef void (*LogicalOutputPluginInit) (struct OutputPluginCallbacks *cb);
<function>stream_stop_cb</function>, <function>stream_abort_cb</function>,
<function>stream_commit_cb</function>, <function>stream_change_cb</function>,
and <function>stream_prepare_cb</function>
- are required, while <function>stream_message_cb</function> and
+ are required, while <function>stream_message_cb</function>,
+ <function>stream_sequence_cb</function>, and
<function>stream_truncate_cb</function> are optional.
</para>
@@ -808,6 +814,35 @@ typedef void (*LogicalDecodeMessageCB) (struct LogicalDecodingContext *ctx,
</para>
</sect3>
+ <sect3 id="logicaldecoding-output-plugin-sequence">
+ <title>Sequence Callback</title>
+
+ <para>
+ The optional <function>sequence_cb</function> callback is called for
+ actions that update a sequence value.
+<programlisting>
+typedef void (*LogicalDecodeSequenceCB) (struct LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional,
+ int64 last_value,
+ int64 log_cnt,
+ bool is_called);
+</programlisting>
+ The <parameter>txn</parameter> parameter contains meta information about
+ the transaction the sequence change is part of. Note however that it can
+ be NULL when the sequence change is non-transactional and no XID has been
+ assigned yet to the transaction which updated the sequence value.
+ The <parameter>sequence_lsn</parameter> has the WAL location of the sequence
+ update. The <parameter>transactional</parameter> flag says whether the change
+ must be replayed as part of the transaction or applied immediately.
+
+ The <parameter>last_value</parameter>, <parameter>log_cnt</parameter> and
+ <parameter>is_called</parameter> parameters describe the sequence change.
+ </para>
+ </sect3>
+
<sect3 id="logicaldecoding-output-plugin-filter-prepare">
<title>Prepare Filter Callback</title>
@@ -1017,6 +1052,26 @@ typedef void (*LogicalDecodeStreamMessageCB) (struct LogicalDecodingContext *ctx
</para>
</sect3>
+ <sect3 id="logicaldecoding-output-plugin-stream-sequence">
+ <title>Stream Sequence Callback</title>
+ <para>
+ The optional <function>stream_sequence_cb</function> callback is called
+ for actions that change a sequence in a block of streamed changes
+ (demarcated by <function>stream_start_cb</function> and
+ <function>stream_stop_cb</function> calls).
+<programlisting>
+typedef void (*LogicalDecodeStreamSequenceCB) (struct LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional,
+ int64 last_value,
+ int64 log_cnt,
+ bool is_called);
+</programlisting>
+ </para>
+ </sect3>
+
<sect3 id="logicaldecoding-output-plugin-stream-truncate">
<title>Stream Truncate Callback</title>
<para>
@@ -1197,8 +1252,9 @@ OutputPluginWrite(ctx, true);
in-progress transactions. There are multiple required streaming callbacks
(<function>stream_start_cb</function>, <function>stream_stop_cb</function>,
<function>stream_abort_cb</function>, <function>stream_commit_cb</function>
- and <function>stream_change_cb</function>) and two optional callbacks
- (<function>stream_message_cb</function> and <function>stream_truncate_cb</function>).
+ and <function>stream_change_cb</function>) and multiple optional callbacks
+ (<function>stream_message_cb</function>, <function>stream_sequence_cb</function>,
+ and <function>stream_truncate_cb</function>).
Also, if streaming of two-phase commands is to be supported, then additional
callbacks must be provided. (See <xref linkend="logicaldecoding-two-phase-commits"/>
for details).
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 72bfdc07a49..626b5b252f4 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -378,8 +378,13 @@ fill_seq_with_data(Relation rel, HeapTuple tuple)
/* check the comment above nextval_internal()'s equivalent call. */
if (RelationNeedsWAL(rel))
+ {
GetTopTransactionId();
+ if (XLogLogicalInfoActive())
+ GetCurrentTransactionId();
+ }
+
START_CRIT_SECTION();
MarkBufferDirty(buf);
@@ -399,6 +404,7 @@ fill_seq_with_data(Relation rel, HeapTuple tuple)
XLogRegisterBuffer(0, buf, REGBUF_WILL_INIT);
xlrec.node = rel->rd_node;
+ xlrec.created = true;
XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
XLogRegisterData((char *) tuple->t_data, tuple->t_len);
@@ -764,10 +770,28 @@ nextval_internal(Oid relid, bool check_permissions)
* It's sufficient to ensure the toplevel transaction has an xid, no need
* to assign xids subxacts, that'll already trigger an appropriate wait.
* (Have to do that here, so we're outside the critical section)
+ *
+ * We have to ensure we have a proper XID, which will be included in
+ * the XLOG record by XLogRecordAssemble. Otherwise the first nextval()
+ * in a subxact (without any preceding changes) would get XID 0, and it
+ * would then be impossible to decide which top xact it belongs to.
+ * It'd also trigger an assert in DecodeSequence. We only do that with
+ * wal_level=logical, though.
+ *
+ * XXX This might seem unnecessary, because if there's no XID the xact
+ * couldn't have done anything important yet, e.g. it could not have
+ * created a sequence. But that's incorrect, as it ignores subxacts. The
+ * current subtransaction might not have done anything yet (thus no XID),
+ * but an earlier one might have created the sequence.
*/
if (logit && RelationNeedsWAL(seqrel))
+ {
GetTopTransactionId();
+ if (XLogLogicalInfoActive())
+ GetCurrentTransactionId();
+ }
+
/* ready to change the on-disk (or really, in-buffer) tuple */
START_CRIT_SECTION();
@@ -803,6 +827,7 @@ nextval_internal(Oid relid, bool check_permissions)
seq->log_cnt = 0;
xlrec.node = seqrel->rd_node;
+ xlrec.created = false;
XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
XLogRegisterData((char *) seqdatatuple.t_data, seqdatatuple.t_len);
@@ -977,8 +1002,13 @@ do_setval(Oid relid, int64 next, bool iscalled)
/* check the comment above nextval_internal()'s equivalent call. */
if (RelationNeedsWAL(seqrel))
+ {
GetTopTransactionId();
+ if (XLogLogicalInfoActive())
+ GetCurrentTransactionId();
+ }
+
/* ready to change the on-disk (or really, in-buffer) tuple */
START_CRIT_SECTION();
@@ -999,6 +1029,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
XLogRegisterBuffer(0, buf, REGBUF_WILL_INIT);
xlrec.node = seqrel->rd_node;
+ xlrec.created = false;
+
XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
XLogRegisterData((char *) seqdatatuple.t_data, seqdatatuple.t_len);
diff --git a/src/backend/replication/logical/decode.c b/src/backend/replication/logical/decode.c
index 59aed6cee6c..6965be53caa 100644
--- a/src/backend/replication/logical/decode.c
+++ b/src/backend/replication/logical/decode.c
@@ -42,6 +42,7 @@
#include "replication/reorderbuffer.h"
#include "replication/snapbuild.h"
#include "storage/standby.h"
+#include "commands/sequence.h"
typedef struct XLogRecordBuffer
{
@@ -74,10 +75,11 @@ static void DecodeAbort(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
bool two_phase);
static void DecodePrepare(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
xl_xact_parsed_prepare *parsed);
-
+static void DecodeSequence(LogicalDecodingContext *ctx, XLogRecordBuffer *buf);
/* common function to decode tuples */
static void DecodeXLogTuple(char *data, Size len, ReorderBufferTupleBuf *tup);
+static void DecodeSeqTuple(char *data, Size len, ReorderBufferTupleBuf *tuple);
/* helper functions for decoding transactions */
static inline bool FilterPrepare(LogicalDecodingContext *ctx,
@@ -158,6 +160,10 @@ LogicalDecodingProcessRecord(LogicalDecodingContext *ctx, XLogReaderState *recor
DecodeLogicalMsgOp(ctx, &buf);
break;
+ case RM_SEQ_ID:
+ DecodeSequence(ctx, &buf);
+ break;
+
/*
* Rmgrs irrelevant for logical decoding; they describe stuff not
* represented in logical decoding. Add new rmgrs in rmgrlist.h's
@@ -173,7 +179,6 @@ LogicalDecodingProcessRecord(LogicalDecodingContext *ctx, XLogReaderState *recor
case RM_HASH_ID:
case RM_GIN_ID:
case RM_GIST_ID:
- case RM_SEQ_ID:
case RM_SPGIST_ID:
case RM_BRIN_ID:
case RM_COMMIT_TS_ID:
@@ -1313,3 +1318,125 @@ DecodeTXNNeedSkip(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
(txn_dbid != InvalidOid && txn_dbid != ctx->slot->data.database) ||
ctx->fast_forward || FilterByOrigin(ctx, origin_id));
}
+
+/*
+ * Decode Sequence Tuple
+ */
+static void
+DecodeSeqTuple(char *data, Size len, ReorderBufferTupleBuf *tuple)
+{
+ int datalen = len - sizeof(xl_seq_rec) - SizeofHeapTupleHeader;
+
+ Assert(datalen >= 0);
+
+ tuple->tuple.t_len = datalen + SizeofHeapTupleHeader;
+
+ ItemPointerSetInvalid(&tuple->tuple.t_self);
+
+ tuple->tuple.t_tableOid = InvalidOid;
+
+ memcpy(((char *) tuple->tuple.t_data),
+ data + sizeof(xl_seq_rec),
+ SizeofHeapTupleHeader);
+
+ memcpy(((char *) tuple->tuple.t_data) + SizeofHeapTupleHeader,
+ data + sizeof(xl_seq_rec) + SizeofHeapTupleHeader,
+ datalen);
+}
+
+/*
+ * Handle sequence decode
+ *
+ * Decoding sequences is a bit tricky, because while most sequence actions
+ * are non-transactional (not subject to rollback), some need to be handled
+ * as transactional.
+ *
+ * By default, a sequence increment is non-transactional - we must not queue
+ * it in the transaction like other changes, because the transaction might get
+ * rolled back and we'd discard the increment. The downstream would not be
+ * notified about the increment, which is wrong.
+ *
+ * On the other hand, the sequence may be created in a transaction. In this
+ * case we *should* queue the change like other changes in the transaction,
+ * because we don't want to send increments for a sequence unknown to the
+ * plugin - it might get confused about which sequence they relate to, etc.
+ */
+static void
+DecodeSequence(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
+{
+ SnapBuild *builder = ctx->snapshot_builder;
+ ReorderBufferTupleBuf *tuplebuf;
+ RelFileNode target_node;
+ XLogReaderState *r = buf->record;
+ char *tupledata = NULL;
+ Size tuplelen;
+ Size datalen = 0;
+ TransactionId xid = XLogRecGetXid(r);
+ uint8 info = XLogRecGetInfo(buf->record) & ~XLR_INFO_MASK;
+ xl_seq_rec *xlrec;
+ Snapshot snapshot;
+ RepOriginId origin_id = XLogRecGetOrigin(r);
+ bool transactional;
+
+ /* only decode changes flagged with XLOG_SEQ_LOG */
+ if (info != XLOG_SEQ_LOG)
+ elog(ERROR, "unexpected RM_SEQ_ID record type: %u", info);
+
+ /*
+ * If we don't have a snapshot or we are just fast-forwarding, there is no
+ * point in decoding sequences.
+ */
+ if (SnapBuildCurrentState(builder) < SNAPBUILD_FULL_SNAPSHOT ||
+ ctx->fast_forward)
+ return;
+
+ /* only interested in our database */
+ XLogRecGetBlockTag(r, 0, &target_node, NULL, NULL);
+ if (target_node.dbNode != ctx->slot->data.database)
+ return;
+
+ /* output plugin doesn't look for this origin, no need to queue */
+ if (FilterByOrigin(ctx, XLogRecGetOrigin(r)))
+ return;
+
+ tupledata = XLogRecGetData(r);
+ datalen = XLogRecGetDataLen(r);
+ tuplelen = datalen - SizeOfHeapHeader - sizeof(xl_seq_rec);
+
+ /* extract the WAL record, with "created" flag */
+ xlrec = (xl_seq_rec *) XLogRecGetData(r);
+
+ /* XXX how could we have sequence change without data? */
+ if (!datalen || !tupledata)
+ return;
+
+ tuplebuf = ReorderBufferGetTupleBuf(ctx->reorder, tuplelen);
+ DecodeSeqTuple(tupledata, datalen, tuplebuf);
+
+ /*
+ * Should we handle the sequence increment as transactional or not?
+ *
+ * If the sequence was created in a still-running transaction, treat
+ * it as transactional and queue the increments. Otherwise it needs
+ * to be treated as non-transactional, in which case we send it to
+ * the plugin right away.
+ */
+ transactional = ReorderBufferSequenceIsTransactional(ctx->reorder,
+ target_node,
+ xlrec->created);
+
+ /* Skip the change if already processed (per the snapshot). */
+ if (transactional &&
+ !SnapBuildProcessChange(builder, xid, buf->origptr))
+ return;
+ else if (!transactional &&
+ (SnapBuildCurrentState(builder) != SNAPBUILD_CONSISTENT ||
+ SnapBuildXactNeedsSkip(builder, buf->origptr)))
+ return;
+
+ /* Queue the increment (or send immediately if not transactional). */
+ snapshot = SnapBuildGetOrBuildSnapshot(builder, xid);
+ ReorderBufferQueueSequence(ctx->reorder, xid, snapshot, buf->endptr,
+ origin_id, target_node, transactional,
+ xlrec->created, tuplebuf);
+}
diff --git a/src/backend/replication/logical/logical.c b/src/backend/replication/logical/logical.c
index 10cbdea124c..aab5e1226d4 100644
--- a/src/backend/replication/logical/logical.c
+++ b/src/backend/replication/logical/logical.c
@@ -73,6 +73,10 @@ static void truncate_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
static void message_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
XLogRecPtr message_lsn, bool transactional,
const char *prefix, Size message_size, const char *message);
+static void sequence_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called);
/* streaming callbacks */
static void stream_start_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
@@ -90,6 +94,10 @@ static void stream_change_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn
static void stream_message_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
XLogRecPtr message_lsn, bool transactional,
const char *prefix, Size message_size, const char *message);
+static void stream_sequence_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called);
static void stream_truncate_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
int nrelations, Relation relations[], ReorderBufferChange *change);
@@ -218,6 +226,7 @@ StartupDecodingContext(List *output_plugin_options,
ctx->reorder->apply_truncate = truncate_cb_wrapper;
ctx->reorder->commit = commit_cb_wrapper;
ctx->reorder->message = message_cb_wrapper;
+ ctx->reorder->sequence = sequence_cb_wrapper;
/*
* To support streaming, we require start/stop/abort/commit/change
@@ -234,6 +243,7 @@ StartupDecodingContext(List *output_plugin_options,
(ctx->callbacks.stream_commit_cb != NULL) ||
(ctx->callbacks.stream_change_cb != NULL) ||
(ctx->callbacks.stream_message_cb != NULL) ||
+ (ctx->callbacks.stream_sequence_cb != NULL) ||
(ctx->callbacks.stream_truncate_cb != NULL);
/*
@@ -251,6 +261,7 @@ StartupDecodingContext(List *output_plugin_options,
ctx->reorder->stream_commit = stream_commit_cb_wrapper;
ctx->reorder->stream_change = stream_change_cb_wrapper;
ctx->reorder->stream_message = stream_message_cb_wrapper;
+ ctx->reorder->stream_sequence = stream_sequence_cb_wrapper;
ctx->reorder->stream_truncate = stream_truncate_cb_wrapper;
@@ -1203,6 +1214,42 @@ message_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
error_context_stack = errcallback.previous;
}
+static void
+sequence_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel, bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ LogicalDecodingContext *ctx = cache->private_data;
+ LogicalErrorCallbackState state;
+ ErrorContextCallback errcallback;
+
+ Assert(!ctx->fast_forward);
+
+ if (ctx->callbacks.sequence_cb == NULL)
+ return;
+
+ /* Push callback + info on the error context stack */
+ state.ctx = ctx;
+ state.callback_name = "sequence";
+ state.report_location = sequence_lsn;
+ errcallback.callback = output_plugin_error_callback;
+ errcallback.arg = (void *) &state;
+ errcallback.previous = error_context_stack;
+ error_context_stack = &errcallback;
+
+ /* set output state */
+ ctx->accept_writes = true;
+ ctx->write_xid = txn != NULL ? txn->xid : InvalidTransactionId;
+ ctx->write_location = sequence_lsn;
+
+ /* do the actual work: call callback */
+ ctx->callbacks.sequence_cb(ctx, txn, sequence_lsn, rel, transactional,
+ last_value, log_cnt, is_called);
+
+ /* Pop the error context stack */
+ error_context_stack = errcallback.previous;
+}
+
static void
stream_start_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
XLogRecPtr first_lsn)
@@ -1508,6 +1555,47 @@ stream_message_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
error_context_stack = errcallback.previous;
}
+static void
+stream_sequence_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ LogicalDecodingContext *ctx = cache->private_data;
+ LogicalErrorCallbackState state;
+ ErrorContextCallback errcallback;
+
+ Assert(!ctx->fast_forward);
+
+ /* We're only supposed to call this when streaming is supported. */
+ Assert(ctx->streaming);
+
+ /* this callback is optional */
+ if (ctx->callbacks.stream_sequence_cb == NULL)
+ return;
+
+ /* Push callback + info on the error context stack */
+ state.ctx = ctx;
+ state.callback_name = "stream_sequence";
+ state.report_location = sequence_lsn;
+ errcallback.callback = output_plugin_error_callback;
+ errcallback.arg = (void *) &state;
+ errcallback.previous = error_context_stack;
+ error_context_stack = &errcallback;
+
+ /* set output state */
+ ctx->accept_writes = true;
+ ctx->write_xid = txn != NULL ? txn->xid : InvalidTransactionId;
+ ctx->write_location = sequence_lsn;
+
+ /* do the actual work: call callback */
+ ctx->callbacks.stream_sequence_cb(ctx, txn, sequence_lsn, rel, transactional,
+ last_value, log_cnt, is_called);
+
+ /* Pop the error context stack */
+ error_context_stack = errcallback.previous;
+}
+
static void
stream_truncate_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
int nrelations, Relation relations[],
diff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c
index 7aa5647a2c6..6b6d0d9b955 100644
--- a/src/backend/replication/logical/reorderbuffer.c
+++ b/src/backend/replication/logical/reorderbuffer.c
@@ -77,6 +77,35 @@
* a bit more memory to the oldest subtransactions, because it's likely
* they are the source for the next sequence of changes.
*
+ * When decoding sequences, we differentiate between sequences created
+ * in a (running) transaction and sequences created in other (already
+ * committed) transactions. Changes for sequences created in the same
+ * top-level transaction are treated as "transactional", i.e. just like
+ * any other change from that transaction (and discarded in case of a
+ * rollback). Changes for sequences created earlier are treated as
+ * non-transactional - they are processed immediately, as if performed
+ * outside any transaction (and thus not rolled back).
+ *
+ * This mixed behavior is necessary - sequences are non-transactional
+ * (e.g. ROLLBACK does not undo the sequence increments). But for new
+ * sequences, we need to handle them in a transactional way, because if
+ * we ever get some DDL support, the sequence won't exist until the
+ * transaction gets applied. So we need to ensure the increments don't
+ * happen until the sequence gets created.
+ *
+ * To differentiate which sequences are "old" and which were created
+ * in a still-running transaction, we track sequences created in running
+ * transactions in a hash table. Each sequence is identified by its
+ * relfilenode, and at transaction commit we discard all sequences
+ * created in that particular transaction. For each sequence we track
+ * the XID of the (sub)transaction that created it, and we clean up the
+ * sequences for each subtransaction when it completes (commit/rollback).
+ *
+ * We don't use the XID to check if it's the same top-level transaction.
+ * It's enough to know it was created in an in-progress transaction,
+ * and we know it must be the current one because otherwise it wouldn't
+ * see the sequence object.
+ *
* -------------------------------------------------------------------------
*/
#include "postgres.h"
@@ -91,6 +120,7 @@
#include "access/xact.h"
#include "access/xlog_internal.h"
#include "catalog/catalog.h"
+#include "commands/sequence.h"
#include "lib/binaryheap.h"
#include "miscadmin.h"
#include "pgstat.h"
@@ -116,6 +146,13 @@ typedef struct ReorderBufferTXNByIdEnt
ReorderBufferTXN *txn;
} ReorderBufferTXNByIdEnt;
+/* entry for hash table we use to track sequences created in running xacts */
+typedef struct ReorderBufferSequenceEnt
+{
+ RelFileNode rnode;
+ TransactionId xid;
+} ReorderBufferSequenceEnt;
+
/* data structures for (relfilenode, ctid) => (cmin, cmax) mapping */
typedef struct ReorderBufferTupleCidKey
{
@@ -339,6 +376,14 @@ ReorderBufferAllocate(void)
buffer->by_txn = hash_create("ReorderBufferByXid", 1000, &hash_ctl,
HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+ /* hash table of sequences, mapping relfilenode to XID of transaction */
+ hash_ctl.keysize = sizeof(RelFileNode);
+ hash_ctl.entrysize = sizeof(ReorderBufferSequenceEnt);
+ hash_ctl.hcxt = buffer->context;
+
+ buffer->sequences = hash_create("ReorderBufferSequenceHash", 1000, &hash_ctl,
+ HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+
buffer->by_txn_last_xid = InvalidTransactionId;
buffer->by_txn_last_txn = NULL;
@@ -525,6 +570,13 @@ ReorderBufferReturnChange(ReorderBuffer *rb, ReorderBufferChange *change,
change->data.truncate.relids = NULL;
}
break;
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ if (change->data.sequence.tuple)
+ {
+ ReorderBufferReturnTupleBuf(rb, change->data.sequence.tuple);
+ change->data.sequence.tuple = NULL;
+ }
+ break;
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT:
case REORDER_BUFFER_CHANGE_INTERNAL_COMMAND_ID:
@@ -859,6 +911,230 @@ ReorderBufferQueueMessage(ReorderBuffer *rb, TransactionId xid,
}
}
+/*
+ * Treat the sequence increment as transactional?
+ *
+ * The hash table tracks all sequences created in in-progress transactions,
+ * so we simply do a lookup (the sequence is identified by relfilenode). If
+ * we find a match, the increment should be handled as transactional.
+ */
+bool
+ReorderBufferSequenceIsTransactional(ReorderBuffer *rb,
+ RelFileNode rnode, bool created)
+{
+ bool found = false;
+
+ if (created)
+ return true;
+
+ hash_search(rb->sequences,
+ (void *) &rnode,
+ HASH_FIND,
+ &found);
+
+ return found;
+}
+
+/*
+ * Clean up sequences created in in-progress transactions.
+ *
+ * There's no way to search by XID, so we simply do a seqscan of all
+ * the entries in the hash table. Hopefully there are only a couple
+ * entries in most cases - people generally don't create many new
+ * sequences over and over.
+ */
+static void
+ReorderBufferSequenceCleanup(ReorderBuffer *rb, TransactionId xid)
+{
+ HASH_SEQ_STATUS scan_status;
+ ReorderBufferSequenceEnt *ent;
+
+ hash_seq_init(&scan_status, rb->sequences);
+ while ((ent = (ReorderBufferSequenceEnt *) hash_seq_search(&scan_status)) != NULL)
+ {
+ /* skip sequences not from this transaction */
+ if (ent->xid != xid)
+ continue;
+
+ (void) hash_search(rb->sequences,
+ (void *) &(ent->rnode),
+ HASH_REMOVE, NULL);
+ }
+}
+
+/*
+ * A transactional sequence increment is queued to be processed upon commit
+ * and a non-transactional increment gets processed immediately.
+ *
+ * A sequence update may be either transactional or non-transactional. When
+ * created in a running transaction, treat it as transactional and queue
+ * the change in it. Otherwise treat it as non-transactional, so that we
+ * don't forget the increment in case of a rollback.
+ */
+void
+ReorderBufferQueueSequence(ReorderBuffer *rb, TransactionId xid,
+ Snapshot snapshot, XLogRecPtr lsn, RepOriginId origin_id,
+ RelFileNode rnode, bool transactional, bool created,
+ ReorderBufferTupleBuf *tuplebuf)
+{
+ /*
+ * Change needs to be handled as transactional, because the sequence was
+ * created in a transaction that is still running. In that case all the
+ * changes need to be queued in that transaction, we must not send them
+ * to the downstream until the transaction commits.
+ *
+ * There's a bit of trouble with subtransactions - we can't queue it
+ * into the subxact, because it might be rolled back and we'd lose the
+ * increment. We need to queue it into the same (sub)xact that created
+ * the sequence, which is why we track the XID in the hash table.
+ */
+ if (transactional)
+ {
+ MemoryContext oldcontext;
+ ReorderBufferChange *change;
+
+ /* lookup sequence by relfilenode */
+ ReorderBufferSequenceEnt *ent;
+ bool found;
+
+ /* transactional changes require a transaction */
+ Assert(xid != InvalidTransactionId);
+
+ /* search the lookup table (may create a new entry when 'created' is set) */
+ ent = hash_search(rb->sequences,
+ (void *) &rnode,
+ created ? HASH_ENTER : HASH_FIND,
+ &found);
+
+ /*
+ * If this is the "create" increment, we must not have found any
+ * pre-existing entry in the hash table (i.e. there must not be
+ * any conflicting sequence).
+ */
+ Assert(!(created && found));
+
+ /* But we must have either created or found an existing entry. */
+ Assert(created || found);
+
+ /*
+ * When creating the sequence, remember the XID of the transaction
+ * that created it.
+ */
+ if (created)
+ ent->xid = xid;
+
+ /* XXX Maybe check that we're still in the same top-level xact? */
+
+ /* OK, allocate and queue the change */
+ oldcontext = MemoryContextSwitchTo(rb->context);
+
+ change = ReorderBufferGetChange(rb);
+
+ change->action = REORDER_BUFFER_CHANGE_SEQUENCE;
+ change->origin_id = origin_id;
+
+ memcpy(&change->data.sequence.relnode, &rnode, sizeof(RelFileNode));
+
+ change->data.sequence.tuple = tuplebuf;
+
+ /* add it to the same subxact that created the sequence */
+ ReorderBufferQueueChange(rb, ent->xid, lsn, change, false);
+
+ MemoryContextSwitchTo(oldcontext);
+ }
+ else
+ {
+ /*
+ * This increment is for a sequence that was not created in any
+ * running transaction, so we treat it as non-transactional and
+ * just send it to the output plugin directly.
+ */
+ ReorderBufferTXN *txn = NULL;
+ volatile Snapshot snapshot_now = snapshot;
+ bool using_subtxn;
+
+#ifdef USE_ASSERT_CHECKING
+ /* All "creates" have to be handled as transactional. */
+ Assert(!created);
+
+ /* Make sure the sequence is not in the hash table. */
+ {
+ bool found;
+ hash_search(rb->sequences,
+ (void *) &rnode,
+ HASH_FIND, &found);
+ Assert(!found);
+ }
+#endif
+
+ if (xid != InvalidTransactionId)
+ txn = ReorderBufferTXNByXid(rb, xid, true, NULL, lsn, true);
+
+ /* setup snapshot to allow catalog access */
+ SetupHistoricSnapshot(snapshot_now, NULL);
+
+ /*
+ * Decoding needs access to syscaches et al., which in turn use
+ * heavyweight locks and such. Thus we need to have enough state around to
+ * keep track of those. The easiest way is to simply use a transaction
+ * internally. That also allows us to easily enforce that nothing writes
+ * to the database by checking for xid assignments.
+ *
+ * When we're called via the SQL SRF there's already a transaction
+ * started, so start an explicit subtransaction there.
+ */
+ using_subtxn = IsTransactionOrTransactionBlock();
+
+ PG_TRY();
+ {
+ Relation relation;
+ HeapTuple tuple;
+ Form_pg_sequence_data seq;
+ Oid reloid;
+
+ if (using_subtxn)
+ BeginInternalSubTransaction("sequence");
+ else
+ StartTransactionCommand();
+
+ reloid = RelidByRelfilenode(rnode.spcNode, rnode.relNode);
+
+ if (reloid == InvalidOid)
+ elog(ERROR, "could not map filenode \"%s\" to relation OID",
+ relpathperm(rnode,
+ MAIN_FORKNUM));
+
+ relation = RelationIdGetRelation(reloid);
+ tuple = &tuplebuf->tuple;
+ seq = (Form_pg_sequence_data) GETSTRUCT(tuple);
+
+ rb->sequence(rb, txn, lsn, relation, transactional,
+ seq->last_value, seq->log_cnt, seq->is_called);
+
+ RelationClose(relation);
+
+ TeardownHistoricSnapshot(false);
+
+ AbortCurrentTransaction();
+
+ if (using_subtxn)
+ RollbackAndReleaseCurrentSubTransaction();
+ }
+ PG_CATCH();
+ {
+ TeardownHistoricSnapshot(true);
+
+ AbortCurrentTransaction();
+
+ if (using_subtxn)
+ RollbackAndReleaseCurrentSubTransaction();
+
+ PG_RE_THROW();
+ }
+ PG_END_TRY();
+ }
+}
+
/*
* AssertTXNLsnOrder
* Verify LSN ordering of transaction lists in the reorderbuffer
@@ -1535,6 +1811,9 @@ ReorderBufferCleanupTXN(ReorderBuffer *rb, ReorderBufferTXN *txn)
&found);
Assert(found);
+ /* Remove sequences created in this transaction (if any). */
+ ReorderBufferSequenceCleanup(rb, txn->xid);
+
/* remove entries spilled to disk */
if (rbtxn_is_serialized(txn))
ReorderBufferRestoreCleanup(rb, txn);
@@ -1950,6 +2229,29 @@ ReorderBufferApplyMessage(ReorderBuffer *rb, ReorderBufferTXN *txn,
change->data.msg.message);
}
+/*
+ * Helper function for ReorderBufferProcessTXN for applying sequences.
+ */
+static inline void
+ReorderBufferApplySequence(ReorderBuffer *rb, ReorderBufferTXN *txn,
+ Relation relation, ReorderBufferChange *change,
+ bool streaming)
+{
+ HeapTuple tuple;
+ Form_pg_sequence_data seq;
+
+ tuple = &change->data.sequence.tuple->tuple;
+ seq = (Form_pg_sequence_data) GETSTRUCT(tuple);
+
+ /* Only ever called from ReorderBufferProcessTXN, so transactional. */
+ if (streaming)
+ rb->stream_sequence(rb, txn, change->lsn, relation, true,
+ seq->last_value, seq->log_cnt, seq->is_called);
+ else
+ rb->sequence(rb, txn, change->lsn, relation, true,
+ seq->last_value, seq->log_cnt, seq->is_called);
+}
+
/*
* Function to store the command id and snapshot at the end of the current
* stream so that we can reuse the same while sending the next stream.
@@ -2392,6 +2694,31 @@ ReorderBufferProcessTXN(ReorderBuffer *rb, ReorderBufferTXN *txn,
case REORDER_BUFFER_CHANGE_INTERNAL_TUPLECID:
elog(ERROR, "tuplecid value in changequeue");
break;
+
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ Assert(snapshot_now);
+
+ reloid = RelidByRelfilenode(change->data.sequence.relnode.spcNode,
+ change->data.sequence.relnode.relNode);
+
+ if (reloid == InvalidOid)
+ elog(ERROR, "could not map filenode \"%s\" to relation OID",
+ relpathperm(change->data.sequence.relnode,
+ MAIN_FORKNUM));
+
+ relation = RelationIdGetRelation(reloid);
+
+ if (!RelationIsValid(relation))
+ elog(ERROR, "could not open relation with OID %u (for filenode \"%s\")",
+ reloid,
+ relpathperm(change->data.sequence.relnode,
+ MAIN_FORKNUM));
+
+ if (RelationIsLogicallyLogged(relation))
+ ReorderBufferApplySequence(rb, txn, relation, change, streaming);
+
+ RelationClose(relation);
+ break;
}
}
@@ -3776,6 +4103,39 @@ ReorderBufferSerializeChange(ReorderBuffer *rb, ReorderBufferTXN *txn,
memcpy(data, change->data.truncate.relids, size);
data += size;
+ break;
+ }
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ {
+ char *data;
+ ReorderBufferTupleBuf *tup;
+ Size len = 0;
+
+ tup = change->data.sequence.tuple;
+
+ if (tup)
+ {
+ sz += sizeof(HeapTupleData);
+ len = tup->tuple.t_len;
+ sz += len;
+ }
+
+ /* make sure we have enough space */
+ ReorderBufferSerializeReserve(rb, sz);
+
+ data = ((char *) rb->outbuf) + sizeof(ReorderBufferDiskChange);
+ /* might have been reallocated above */
+ ondisk = (ReorderBufferDiskChange *) rb->outbuf;
+
+ if (len)
+ {
+ memcpy(data, &tup->tuple, sizeof(HeapTupleData));
+ data += sizeof(HeapTupleData);
+
+ memcpy(data, tup->tuple.t_data, len);
+ data += len;
+ }
+
break;
}
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
@@ -4040,6 +4400,22 @@ ReorderBufferChangeSize(ReorderBufferChange *change)
{
sz += sizeof(Oid) * change->data.truncate.nrelids;
+ break;
+ }
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ {
+ ReorderBufferTupleBuf *tup;
+ Size len = 0;
+
+ tup = change->data.sequence.tuple;
+
+ if (tup)
+ {
+ sz += sizeof(HeapTupleData);
+ len = tup->tuple.t_len;
+ sz += len;
+ }
+
break;
}
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
@@ -4341,6 +4717,30 @@ ReorderBufferRestoreChange(ReorderBuffer *rb, ReorderBufferTXN *txn,
break;
}
+
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ if (change->data.sequence.tuple)
+ {
+ uint32 tuplelen = ((HeapTuple) data)->t_len;
+
+ change->data.sequence.tuple =
+ ReorderBufferGetTupleBuf(rb, tuplelen - SizeofHeapTupleHeader);
+
+ /* restore ->tuple */
+ memcpy(&change->data.sequence.tuple->tuple, data,
+ sizeof(HeapTupleData));
+ data += sizeof(HeapTupleData);
+
+ /* reset t_data pointer into the new tuplebuf */
+ change->data.sequence.tuple->tuple.t_data =
+ ReorderBufferTupleBufData(change->data.sequence.tuple);
+
+ /* restore tuple data itself */
+ memcpy(change->data.sequence.tuple->tuple.t_data, data, tuplelen);
+ data += tuplelen;
+ }
+ break;
+
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT:
case REORDER_BUFFER_CHANGE_INTERNAL_COMMAND_ID:
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 40544dd4c70..5919fb90ee1 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -48,6 +48,7 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
typedef struct xl_seq_rec
{
RelFileNode node;
+ bool created; /* is this a CREATE SEQUENCE */
/* SEQUENCE TUPLE DATA FOLLOWS AT THE END */
} xl_seq_rec;
diff --git a/src/include/replication/output_plugin.h b/src/include/replication/output_plugin.h
index 810495ed0e4..43b8a63ee4f 100644
--- a/src/include/replication/output_plugin.h
+++ b/src/include/replication/output_plugin.h
@@ -88,6 +88,18 @@ typedef void (*LogicalDecodeMessageCB) (struct LogicalDecodingContext *ctx,
Size message_size,
const char *message);
+/*
+ * Called for the generic logical decoding sequences.
+ */
+typedef void (*LogicalDecodeSequenceCB) (struct LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional,
+ int64 last_value,
+ int64 log_cnt,
+ bool is_called);
+
/*
* Filter changes by origin.
*/
@@ -199,6 +211,19 @@ typedef void (*LogicalDecodeStreamMessageCB) (struct LogicalDecodingContext *ctx
Size message_size,
const char *message);
+/*
+ * Called for the streaming generic logical decoding sequences from in-progress
+ * transactions.
+ */
+typedef void (*LogicalDecodeStreamSequenceCB) (struct LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional,
+ int64 last_value,
+ int64 log_cnt,
+ bool is_called);
+
/*
* Callback for streaming truncates from in-progress transactions.
*/
@@ -219,6 +244,7 @@ typedef struct OutputPluginCallbacks
LogicalDecodeTruncateCB truncate_cb;
LogicalDecodeCommitCB commit_cb;
LogicalDecodeMessageCB message_cb;
+ LogicalDecodeSequenceCB sequence_cb;
LogicalDecodeFilterByOriginCB filter_by_origin_cb;
LogicalDecodeShutdownCB shutdown_cb;
@@ -237,6 +263,7 @@ typedef struct OutputPluginCallbacks
LogicalDecodeStreamCommitCB stream_commit_cb;
LogicalDecodeStreamChangeCB stream_change_cb;
LogicalDecodeStreamMessageCB stream_message_cb;
+ LogicalDecodeStreamSequenceCB stream_sequence_cb;
LogicalDecodeStreamTruncateCB stream_truncate_cb;
} OutputPluginCallbacks;
diff --git a/src/include/replication/reorderbuffer.h b/src/include/replication/reorderbuffer.h
index 5b40ff75f79..ab469fb2109 100644
--- a/src/include/replication/reorderbuffer.h
+++ b/src/include/replication/reorderbuffer.h
@@ -64,7 +64,8 @@ enum ReorderBufferChangeType
REORDER_BUFFER_CHANGE_INTERNAL_SPEC_INSERT,
REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM,
REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT,
- REORDER_BUFFER_CHANGE_TRUNCATE
+ REORDER_BUFFER_CHANGE_TRUNCATE,
+ REORDER_BUFFER_CHANGE_SEQUENCE
};
/* forward declaration */
@@ -158,6 +159,13 @@ typedef struct ReorderBufferChange
uint32 ninvalidations; /* Number of messages */
SharedInvalidationMessage *invalidations; /* invalidation message */
} inval;
+
+ /* Context data for Sequence changes */
+ struct
+ {
+ RelFileNode relnode;
+ ReorderBufferTupleBuf *tuple;
+ } sequence;
} data;
/*
@@ -430,6 +438,15 @@ typedef void (*ReorderBufferMessageCB) (ReorderBuffer *rb,
const char *prefix, Size sz,
const char *message);
+/* sequence callback signature */
+typedef void (*ReorderBufferSequenceCB) (ReorderBuffer *rb,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional,
+ int64 last_value, int64 log_cnt,
+ bool is_called);
+
/* begin prepare callback signature */
typedef void (*ReorderBufferBeginPrepareCB) (ReorderBuffer *rb,
ReorderBufferTXN *txn);
@@ -496,6 +513,15 @@ typedef void (*ReorderBufferStreamMessageCB) (
const char *prefix, Size sz,
const char *message);
+/* stream sequence callback signature */
+typedef void (*ReorderBufferStreamSequenceCB) (ReorderBuffer *rb,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional,
+ int64 last_value, int64 log_cnt,
+ bool is_called);
+
/* stream truncate callback signature */
typedef void (*ReorderBufferStreamTruncateCB) (
ReorderBuffer *rb,
@@ -511,6 +537,12 @@ struct ReorderBuffer
*/
HTAB *by_txn;
+ /*
+ * relfilenode => XID lookup table for sequences created in a transaction
+ * (also includes altered sequences, which assigns new relfilenode)
+ */
+ HTAB *sequences;
+
/*
* Transactions that could be a toplevel xact, ordered by LSN of the first
* record bearing that xid.
@@ -541,6 +573,7 @@ struct ReorderBuffer
ReorderBufferApplyTruncateCB apply_truncate;
ReorderBufferCommitCB commit;
ReorderBufferMessageCB message;
+ ReorderBufferSequenceCB sequence;
/*
* Callbacks to be called when streaming a transaction at prepare time.
@@ -560,6 +593,7 @@ struct ReorderBuffer
ReorderBufferStreamCommitCB stream_commit;
ReorderBufferStreamChangeCB stream_change;
ReorderBufferStreamMessageCB stream_message;
+ ReorderBufferStreamSequenceCB stream_sequence;
ReorderBufferStreamTruncateCB stream_truncate;
/*
@@ -635,6 +669,10 @@ void ReorderBufferQueueChange(ReorderBuffer *, TransactionId,
void ReorderBufferQueueMessage(ReorderBuffer *, TransactionId, Snapshot snapshot, XLogRecPtr lsn,
bool transactional, const char *prefix,
Size message_size, const char *message);
+void ReorderBufferQueueSequence(ReorderBuffer *rb, TransactionId xid,
+ Snapshot snapshot, XLogRecPtr lsn, RepOriginId origin_id,
+ RelFileNode rnode, bool transactional, bool created,
+ ReorderBufferTupleBuf *tuplebuf);
void ReorderBufferCommit(ReorderBuffer *, TransactionId,
XLogRecPtr commit_lsn, XLogRecPtr end_lsn,
TimestampTz commit_time, RepOriginId origin_id, XLogRecPtr origin_lsn);
@@ -682,4 +720,7 @@ void ReorderBufferSetRestartPoint(ReorderBuffer *, XLogRecPtr ptr);
void StartupReorderBuffer(void);
+bool ReorderBufferSequenceIsTransactional(ReorderBuffer *rb,
+ RelFileNode rnode, bool created);
+
#endif
--
2.31.1
On Tue, Dec 14, 2021 at 7:02 AM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:
Hi,
here's an updated version of the patches, dealing with almost all of the
issues (at least in the 0001 and 0002 parts). The main changes:

1) I've removed the 'created' flag from fill_seq_with_data, as
discussed. I don't think it's needed by any of the parts (not even 0003,
AFAICS). We still need it in xl_seq_rec, though.

2) GetCurrentTransactionId() added to sequence.c are called only with
wal_level=logical, to minimize the overhead.

There's still one remaining problem, that I already explained in [1].
The problem is that with this:

BEGIN;
SELECT nextval('s') FROM generate_series(1,100);
ROLLBACK;

The root cause is that pg_current_wal_lsn() uses the LogwrtResult.Write,
which is updated by XLogFlush() - but only in RecordTransactionCommit.
Which makes sense, because only the committed stuff is "visible".

But the non-transactional behavior of sequence decoding disagrees with
this, because now some of the changes from aborted transactions may be
replicated. Which means the wait_for_catchup() ends up not waiting for
the sequence change to be replicated. This is an issue for tests in
patch 0003, at least.

My concern is this actually affects other places waiting for things
getting replicated :-/

By any chance, will this impact synchronous replication as well which
waits for commits to be replicated?
How is this patch dealing with prepared transaction case where at
prepare we will send transactional changes and then later if rollback
prepared happens then the publisher will rollback changes related to
new relfilenode but subscriber would have already replayed the updated
seqval change which won't be rolled back?
--
With Regards,
Amit Kapila.
On 14.12.21 02:31, Tomas Vondra wrote:
There's still one remaining problem, that I already explained in [1].
The problem is that with this:

BEGIN;
SELECT nextval('s') FROM generate_series(1,100);
ROLLBACK;

The root cause is that pg_current_wal_lsn() uses the LogwrtResult.Write,
which is updated by XLogFlush() - but only in RecordTransactionCommit.
Which makes sense, because only the committed stuff is "visible".

But the non-transactional behavior of sequence decoding disagrees with
this, because now some of the changes from aborted transactions may be
replicated. Which means the wait_for_catchup() ends up not waiting for
the sequence change to be replicated.

I can't think of a reason why this might be a problem.
On 12/15/21 14:20, Amit Kapila wrote:
On Tue, Dec 14, 2021 at 7:02 AM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:

Hi,

here's an updated version of the patches, dealing with almost all of the
issues (at least in the 0001 and 0002 parts). The main changes:

1) I've removed the 'created' flag from fill_seq_with_data, as
discussed. I don't think it's needed by any of the parts (not even 0003,
AFAICS). We still need it in xl_seq_rec, though.

2) GetCurrentTransactionId() added to sequence.c are called only with
wal_level=logical, to minimize the overhead.

There's still one remaining problem, that I already explained in [1].
The problem is that with this:

BEGIN;
SELECT nextval('s') FROM generate_series(1,100);
ROLLBACK;

The root cause is that pg_current_wal_lsn() uses the LogwrtResult.Write,
which is updated by XLogFlush() - but only in RecordTransactionCommit.
Which makes sense, because only the committed stuff is "visible".

But the non-transactional behavior of sequence decoding disagrees with
this, because now some of the changes from aborted transactions may be
replicated. Which means the wait_for_catchup() ends up not waiting for
the sequence change to be replicated. This is an issue for tests in
patch 0003, at least.

My concern is this actually affects other places waiting for things
getting replicated :-/

By any chance, will this impact synchronous replication as well which
waits for commits to be replicated?

Physical or logical replication? Physical is certainly not replicated.
For logical replication, it's more complicated.
How is this patch dealing with prepared transaction case where at
prepare we will send transactional changes and then later if rollback
prepared happens then the publisher will rollback changes related to
new relfilenode but subscriber would have already replayed the updated
seqval change which won't be rolled back?
I'm not sure what exact scenario you are describing, but in general we
don't rollback sequence changes anyway, so this should not cause any
divergence between the publisher and subscriber.
Or are you talking about something like ALTER SEQUENCE? I think that
should trigger the transactional behavior for the new relfilenode, so
the subscriber won't see those changes.
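For concreteness, the scenario being discussed might look like this (an
illustrative sketch, not taken from the patch or its tests; PREPARE
TRANSACTION assumes max_prepared_transactions > 0):

CREATE SEQUENCE s;

BEGIN;
ALTER SEQUENCE s RESTART WITH 1000;   -- assigns a new relfilenode
SELECT nextval('s');                  -- decoded as transactional (new relfilenode)
PREPARE TRANSACTION 'p1';             -- transactional changes get decoded at prepare
ROLLBACK PREPARED 'p1';               -- the new relfilenode is discarded on the publisher

A plain nextval() on a pre-existing sequence, by contrast, is decoded
non-transactionally and is not rolled back on either side.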
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 12/15/21 14:51, Tomas Vondra wrote:
On 12/15/21 14:20, Amit Kapila wrote:
On Tue, Dec 14, 2021 at 7:02 AM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:

Hi,

here's an updated version of the patches, dealing with almost all of the
issues (at least in the 0001 and 0002 parts). The main changes:

1) I've removed the 'created' flag from fill_seq_with_data, as
discussed. I don't think it's needed by any of the parts (not even 0003,
AFAICS). We still need it in xl_seq_rec, though.

2) GetCurrentTransactionId() added to sequence.c are called only with
wal_level=logical, to minimize the overhead.

There's still one remaining problem, that I already explained in [1].
The problem is that with this:

BEGIN;
SELECT nextval('s') FROM generate_series(1,100);
ROLLBACK;

The root cause is that pg_current_wal_lsn() uses the LogwrtResult.Write,
which is updated by XLogFlush() - but only in RecordTransactionCommit.
Which makes sense, because only the committed stuff is "visible".

But the non-transactional behavior of sequence decoding disagrees with
this, because now some of the changes from aborted transactions may be
replicated. Which means the wait_for_catchup() ends up not waiting for
the sequence change to be replicated. This is an issue for tests in
patch 0003, at least.

My concern is this actually affects other places waiting for things
getting replicated :-/

By any chance, will this impact synchronous replication as well which
waits for commits to be replicated?

Physical or logical replication? Physical is certainly not replicated.
For logical replication, it's more complicated.

Apologies, sent too early ... I think it's more complicated for logical
sync replication, because of a scenario like this:
BEGIN;
SELECT nextval('s') FROM generate_series(1,100); <-- writes WAL
ROLLBACK;
SELECT nextval('s');
The first transaction advances the sequence enough to generate a WAL record,
which we do every 32 values. But it's rolled back, so it does not update
LogwrtResult.Write, because that happens only at commit.
And then the nextval() generates a value from the sequence without
generating WAL, so it doesn't update the LSN either (IIRC). That'd mean
a sync replication may not wait for this change to reach the subscriber.
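Spelled out, the sequence of events might look like this (a sketch only;
the sequence name and row count are illustrative):

CREATE SEQUENCE s;

BEGIN;
SELECT nextval('s') FROM generate_series(1,100);   -- crosses the 32-value prefetch, writes WAL
ROLLBACK;

SELECT nextval('s');              -- may be served from the pre-logged range, no new WAL
SELECT pg_current_wal_lsn();      -- may still point before the aborted transaction's WAL

So anything that waits for pg_current_wal_lsn() to be confirmed by the
standby/subscriber may not actually wait for the WAL covering the sequence
advance.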
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Looking at 0003,
On 2021-Dec-14, Tomas Vondra wrote:
diff --git a/doc/src/sgml/ref/alter_publication.sgml b/doc/src/sgml/ref/alter_publication.sgml
index bb4ef5e5e22..4d166ad3f9c 100644
--- a/doc/src/sgml/ref/alter_publication.sgml
+++ b/doc/src/sgml/ref/alter_publication.sgml
@@ -31,7 +31,9 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
 <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>

     TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [, ... ]
+    SEQUENCE <replaceable class="parameter">sequence_name</replaceable> [ * ] [, ... ]
     ALL TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+    ALL SEQUENCE IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
Note that this says ALL SEQUENCE; I think it should be ALL SEQUENCES.
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 3d4dd43e47b..f037c17985b 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -9762,6 +9762,26 @@ PublicationObjSpec:
...
+			| ALL SEQUENCE IN_P SCHEMA ColId
+				{
+					$$ = makeNode(PublicationObjSpec);
+					$$->pubobjtype = PUBLICATIONOBJ_SEQUENCE_IN_SCHEMA;
+					$$->name = $5;
+					$$->location = @5;
+				}
+			| ALL SEQUENCES IN_P SCHEMA CURRENT_SCHEMA
+				{
+					$$ = makeNode(PublicationObjSpec);
+					$$->pubobjtype = PUBLICATIONOBJ_SEQUENCE_IN_CUR_SCHEMA;
+					$$->location = @5;
+				}
And here you have ALL SEQUENCE in one spot and ALL SEQUENCES in the
other.
BTW I think these enum values should use the plural too,
PUBLICATIONOBJ_SEQUENCES_IN_CUR_SCHEMA (not SEQUENCE). I suppose you
copied from PUBLICATIONOBJ_TABLE_IN_CUR_SCHEMA, but that too seems to be
a mistake: should be PUBLICATIONOBJ_TABLES_IN_CUR_SCHEMA.
@@ -10097,6 +10117,12 @@ UnlistenStmt:
}
 		;
+/*
+ * FIXME
+ *
+ * opt_publication_for_sequences and publication_for_sequences should be
+ * copies for sequences
+ */
Not sure if this FIXME is relevant or should just be removed.
@@ -10105,6 +10131,12 @@ UnlistenStmt:
  *	BEGIN / COMMIT / ROLLBACK
  *	(also older versions END / ABORT)
  *
+ *	ALTER PUBLICATION name ADD SEQUENCE sequence [, sequence2]
+ *
+ *	ALTER PUBLICATION name DROP SEQUENCE sequence [, sequence2]
+ *
+ *	ALTER PUBLICATION name SET SEQUENCE sequence [, sequence2]
+ *
This comment addition seems misplaced?
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 2f412ca3db3..e30bf7b1b55 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1647,13 +1647,13 @@ psql_completion(const char *text, int start, int end)
 		COMPLETE_WITH("ADD", "DROP", "OWNER TO", "RENAME TO", "SET");
 	/* ALTER PUBLICATION <name> ADD */
 	else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD"))
-		COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE");
+		COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE|SEQUENCE");
 	/* ALTER PUBLICATION <name> DROP */
 	else if (Matches("ALTER", "PUBLICATION", MatchAny, "DROP"))
-		COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE");
+		COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE|SEQUENCE");
 	/* ALTER PUBLICATION <name> SET */
 	else if (Matches("ALTER", "PUBLICATION", MatchAny, "SET"))
-		COMPLETE_WITH("(", "ALL TABLES IN SCHEMA", "TABLE");
+		COMPLETE_WITH("(", "ALL TABLES IN SCHEMA", "TABLE|SEQUENCE");
I think you should also add "ALL SEQUENCES IN SCHEMA" to these lists.
else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD|DROP|SET", "ALL", "TABLES", "IN", "SCHEMA"))
... and perhaps make this "ALL", "TABLES|SEQUENCES", "IN", "SCHEMA".
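For reference, the statements this grammar is meant to accept would look
something like this (a sketch using illustrative publication and sequence
names, with the plural spelling suggested above):

ALTER PUBLICATION seq_pub ADD SEQUENCE public.s;
ALTER PUBLICATION seq_pub DROP SEQUENCE public.s;
ALTER PUBLICATION seq_pub SET ALL SEQUENCES IN SCHEMA CURRENT_SCHEMA;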
--
Álvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/
"Nunca confiaré en un traidor. Ni siquiera si el traidor lo he creado yo"
(Barón Vladimir Harkonnen)
On 12/15/21 17:42, Alvaro Herrera wrote:
Looking at 0003,
On 2021-Dec-14, Tomas Vondra wrote:
diff --git a/doc/src/sgml/ref/alter_publication.sgml b/doc/src/sgml/ref/alter_publication.sgml index bb4ef5e5e22..4d166ad3f9c 100644 --- a/doc/src/sgml/ref/alter_publication.sgml +++ b/doc/src/sgml/ref/alter_publication.sgml @@ -31,7 +31,9 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r <phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [, ... ] + SEQUENCE <replaceable class="parameter">sequence_name</replaceable> [ * ] [, ... ] ALL TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ] + ALL SEQUENCE IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]Note that this says ALL SEQUENCE; I think it should be ALL SEQUENCES.
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y index 3d4dd43e47b..f037c17985b 100644 --- a/src/backend/parser/gram.y +++ b/src/backend/parser/gram.y @@ -9762,6 +9762,26 @@ PublicationObjSpec:...
+ | ALL SEQUENCE IN_P SCHEMA ColId + { + $$ = makeNode(PublicationObjSpec); + $$->pubobjtype = PUBLICATIONOBJ_SEQUENCE_IN_SCHEMA; + $$->name = $5; + $$->location = @5; + } + | ALL SEQUENCES IN_P SCHEMA CURRENT_SCHEMA + { + $$ = makeNode(PublicationObjSpec); + $$->pubobjtype = PUBLICATIONOBJ_SEQUENCE_IN_CUR_SCHEMA; + $$->location = @5; + }And here you have ALL SEQUENCE in one spot and ALL SEQUENCES in the
other.BTW I think these enum values should use the plural too,
PUBLICATIONOBJ_SEQUENCES_IN_CUR_SCHEMA (not SEQUENCE). I suppose you
copied from PUBLICATIONOBJ_TABLE_IN_CUR_SCHEMA, but that too seems to be
a mistake: should be PUBLICATIONOBJ_TABLES_IN_CUR_SCHEMA.@@ -10097,6 +10117,12 @@ UnlistenStmt:
}
;+/* + * FIXME + * + * opt_publication_for_sequences and publication_for_sequences should be + * copies for sequences + */Not sure if this FIXME is relevant or should just be removed.
@@ -10105,6 +10131,12 @@ UnlistenStmt: * BEGIN / COMMIT / ROLLBACK * (also older versions END / ABORT) * + * ALTER PUBLICATION name ADD SEQUENCE sequence [, sequence2] + * + * ALTER PUBLICATION name DROP SEQUENCE sequence [, sequence2] + * + * ALTER PUBLICATION name SET SEQUENCE sequence [, sequence2] + *This comment addition seems misplaced?
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c index 2f412ca3db3..e30bf7b1b55 100644 --- a/src/bin/psql/tab-complete.c +++ b/src/bin/psql/tab-complete.c @@ -1647,13 +1647,13 @@ psql_completion(const char *text, int start, int end) COMPLETE_WITH("ADD", "DROP", "OWNER TO", "RENAME TO", "SET"); /* ALTER PUBLICATION <name> ADD */ else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD")) - COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE"); + COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE|SEQUENCE"); /* ALTER PUBLICATION <name> DROP */ else if (Matches("ALTER", "PUBLICATION", MatchAny, "DROP")) - COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE"); + COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE|SEQUENCE"); /* ALTER PUBLICATION <name> SET */ else if (Matches("ALTER", "PUBLICATION", MatchAny, "SET")) - COMPLETE_WITH("(", "ALL TABLES IN SCHEMA", "TABLE"); + COMPLETE_WITH("(", "ALL TABLES IN SCHEMA", "TABLE|SEQUENCE");I think you should also add "ALL SEQUENCES IN SCHEMA" to these lists.
else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD|DROP|SET", "ALL", "TABLES", "IN", "SCHEMA"))
... and perhaps make this "ALL", "TABLES|SEQUENCES", "IN", "SCHEMA".
Thanks for the review. I'm aware 0003 is still incomplete and subject to
change - it's certainly not meant for commit yet. The current 0003 patch
is sufficient for testing the infrastructure, but we need to figure out
how to make it easier to use, what to do with implicit sequences and
similar things. Peter had some ideas in [1].
[1]: /messages/by-id/359bf6d0-413d-292a-4305-e99eeafead39@enterprisedb.com
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Wed, Dec 15, 2021 at 7:21 PM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:
On 12/15/21 14:20, Amit Kapila wrote:
On Tue, Dec 14, 2021 at 7:02 AM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:

Hi,

here's an updated version of the patches, dealing with almost all of the
issues (at least in the 0001 and 0002 parts). The main changes:

1) I've removed the 'created' flag from fill_seq_with_data, as
discussed. I don't think it's needed by any of the parts (not even 0003,
AFAICS). We still need it in xl_seq_rec, though.

2) GetCurrentTransactionId() added to sequence.c are called only with
wal_level=logical, to minimize the overhead.

There's still one remaining problem, that I already explained in [1].
The problem is that with this:

BEGIN;
SELECT nextval('s') FROM generate_series(1,100);
ROLLBACK;

The root cause is that pg_current_wal_lsn() uses the LogwrtResult.Write,
which is updated by XLogFlush() - but only in RecordTransactionCommit.
Which makes sense, because only the committed stuff is "visible".

But the non-transactional behavior of sequence decoding disagrees with
this, because now some of the changes from aborted transactions may be
replicated. Which means the wait_for_catchup() ends up not waiting for
the sequence change to be replicated. This is an issue for tests in
patch 0003, at least.

My concern is this actually affects other places waiting for things
getting replicated :-/

By any chance, will this impact synchronous replication as well which
waits for commits to be replicated?

Physical or logical replication?

logical replication.

Physical is certainly not replicated.
For logical replication, it's more complicated.

How is this patch dealing with prepared transaction case where at
prepare we will send transactional changes and then later if rollback
prepared happens then the publisher will rollback changes related to
new relfilenode but subscriber would have already replayed the updated
seqval change which won't be rolled back?

I'm not sure what exact scenario you are describing, but in general we
don't rollback sequence changes anyway, so this should not cause any
divergence between the publisher and subscriber.

Or are you talking about something like ALTER SEQUENCE? I think that
should trigger the transactional behavior for the new relfilenode, so
the subscriber won't see that changes.

Yeah, something like Alter Sequence and nextval combination. I see
that it will be handled because of the transactional behavior for the
new relfilenode as for applying each sequence change, a new
relfilenode is created. I think on apply side, the patch always
creates a new relfilenode irrespective of whether the sequence message
is transactional or not. Do we need to do that for the
non-transactional messages as well?
--
With Regards,
Amit Kapila.
On 12/16/21 12:59, Amit Kapila wrote:
On Wed, Dec 15, 2021 at 7:21 PM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:

On 12/15/21 14:20, Amit Kapila wrote:

On Tue, Dec 14, 2021 at 7:02 AM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:

Hi,

here's an updated version of the patches, dealing with almost all of the
issues (at least in the 0001 and 0002 parts). The main changes:

1) I've removed the 'created' flag from fill_seq_with_data, as
discussed. I don't think it's needed by any of the parts (not even 0003,
AFAICS). We still need it in xl_seq_rec, though.

2) GetCurrentTransactionId() added to sequence.c are called only with
wal_level=logical, to minimize the overhead.

There's still one remaining problem, that I already explained in [1].
The problem is that with this:

BEGIN;
SELECT nextval('s') FROM generate_series(1,100);
ROLLBACK;

The root cause is that pg_current_wal_lsn() uses the LogwrtResult.Write,
which is updated by XLogFlush() - but only in RecordTransactionCommit.
Which makes sense, because only the committed stuff is "visible".

But the non-transactional behavior of sequence decoding disagrees with
this, because now some of the changes from aborted transactions may be
replicated. Which means the wait_for_catchup() ends up not waiting for
the sequence change to be replicated. This is an issue for tests in
patch 0003, at least.

My concern is this actually affects other places waiting for things
getting replicated :-/

By any chance, will this impact synchronous replication as well which
waits for commits to be replicated?

Physical or logical replication?

logical replication.

Physical is certainly not replicated.
For logical replication, it's more complicated.

How is this patch dealing with prepared transaction case where at
prepare we will send transactional changes and then later if rollback
prepared happens then the publisher will rollback changes related to
new relfilenode but subscriber would have already replayed the updated
seqval change which won't be rolled back?

I'm not sure what exact scenario you are describing, but in general we
don't rollback sequence changes anyway, so this should not cause any
divergence between the publisher and subscriber.

Or are you talking about something like ALTER SEQUENCE? I think that
should trigger the transactional behavior for the new relfilenode, so
the subscriber won't see that changes.

Yeah, something like Alter Sequence and nextval combination. I see
that it will be handled because of the transactional behavior for the
new relfilenode as for applying each sequence change, a new
relfilenode is created.

Right.
I think on apply side, the patch always creates a new relfilenode
irrespective of whether the sequence message is transactional or not.
Do we need to do that for the non-transactional messages as well?
Good question. I don't think that's necessary, I'll see if we can simply
update the tuple (mostly just like redo).
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 12/15/21 14:51, Tomas Vondra wrote:
On 12/15/21 14:20, Amit Kapila wrote:
On Tue, Dec 14, 2021 at 7:02 AM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:

Hi,

here's an updated version of the patches, dealing with almost all of the
issues (at least in the 0001 and 0002 parts). The main changes:

1) I've removed the 'created' flag from fill_seq_with_data, as
discussed. I don't think it's needed by any of the parts (not even 0003,
AFAICS). We still need it in xl_seq_rec, though.

2) GetCurrentTransactionId() added to sequence.c are called only with
wal_level=logical, to minimize the overhead.

There's still one remaining problem, that I already explained in [1].
The problem is that with this:

BEGIN;
SELECT nextval('s') FROM generate_series(1,100);
ROLLBACK;

The root cause is that pg_current_wal_lsn() uses the LogwrtResult.Write,
which is updated by XLogFlush() - but only in RecordTransactionCommit.
Which makes sense, because only the committed stuff is "visible".

But the non-transactional behavior of sequence decoding disagrees with
this, because now some of the changes from aborted transactions may be
replicated. Which means the wait_for_catchup() ends up not waiting for
the sequence change to be replicated. This is an issue for tests in
patch 0003, at least.

My concern is this actually affects other places waiting for things
getting replicated :-/

By any chance, will this impact synchronous replication as well which
waits for commits to be replicated?

Physical or logical replication? Physical is certainly not replicated.

Actually, I take that back. It does affect physical (sync) replication
just as well, and I think it might be considered a data loss issue. I
started a new thread to discuss that, so that it's not buried in this
thread about logical replication.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Hi,
Here's an updated version of the patch series. The primary change is
tweaking the WAL-logging of sequences modified per [1]. This changes
test output in test_decoding and built-in replication patches, and to
make it clearer I left the changes in separate patches.
Assuming the WAL logging changes are acceptable, that resolves the data
loss issue.
I'm wondering what to do about changes with is_called=false, i.e.
changes generated by ALTER SEQUENCE etc. The current patch does decode
them and passes them to the output plugin, but I'm starting to think
that may not be the right behavior - if we haven't generated any data
from the sequence, there's no point in replicating that, I think.
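For context, these are the kinds of is_called=false changes meant here (an
illustrative sketch of stock sequence behavior, independent of the patch):

CREATE SEQUENCE s;                    -- freshly created, is_called = false
ALTER SEQUENCE s RESTART WITH 1000;   -- reset, is_called = false again
SELECT setval('s', 1000, false);      -- explicit setval with is_called = false
SELECT last_value, is_called FROM s;

None of these has handed out a value from the sequence yet, which is why
replicating them may be unnecessary.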
regards
[1]: /messages/by-id/712cad46-a9c8-1389-aef8-faf0203c9be9@enterprisedb.com
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Attachments:
0001-WAL-log-individual-sequence-fetches-20211222.patch (text/x-patch)
From eeaa7cb36c69af048f0321e4883864ebe2542429 Mon Sep 17 00:00:00 2001
From: Tomas Vondra <tomas.vondra@postgresql.org>
Date: Wed, 22 Dec 2021 03:18:46 +0100
Subject: [PATCH 1/6] WAL-log individual sequence fetches
---
src/backend/commands/sequence.c | 18 ++++++++++++++++--
1 file changed, 16 insertions(+), 2 deletions(-)
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 72bfdc07a4..0f309d0a4e 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -52,6 +52,9 @@
* We don't want to log each fetching of a value from a sequence,
* so we pre-log a few fetches in advance. In the event of
* crash we can lose (skip over) as many values as we pre-logged.
+ *
+ * We only pre-log fetches in wal_level=minimal. For higher levels we
+ * WAL-log every individual sequence increment, as if this was 0.
*/
#define SEQ_LOG_VALS 32
@@ -666,11 +669,18 @@ nextval_internal(Oid relid, bool check_permissions)
* WAL record to be written anyway, else replay starting from the
* checkpoint would fail to advance the sequence past the logged values.
* In this case we may as well fetch extra values.
+ *
+ * We only pre-log fetches in wal_level=minimal. For higher levels we
+ * WAL-log every individual sequence increment.
*/
if (log < fetch || !seq->is_called)
{
/* forced log to satisfy local demand for values */
- fetch = log = fetch + SEQ_LOG_VALS;
+ if (XLogIsNeeded())
+ fetch = log = fetch;
+ else
+ fetch = log = fetch + SEQ_LOG_VALS;
+
logit = true;
}
else
@@ -680,7 +690,11 @@ nextval_internal(Oid relid, bool check_permissions)
if (PageGetLSN(page) <= redoptr)
{
/* last update of seq was before checkpoint */
- fetch = log = fetch + SEQ_LOG_VALS;
+ if (XLogIsNeeded())
+ fetch = log = fetch;
+ else
+ fetch = log = fetch + SEQ_LOG_VALS;
+
logit = true;
}
}
--
2.31.1
0002-Logical-decoding-of-sequences-20211222.patch (text/x-patch)
From b5da8cbba4a10db4ff4b7cdc7ed6555bc88d6232 Mon Sep 17 00:00:00 2001
From: Tomas Vondra <tomas.vondra@postgresql.org>
Date: Fri, 24 Sep 2021 00:41:33 +0200
Subject: [PATCH 2/6] Logical decoding of sequences
This adds the infrastructure for logical decoding of sequences. Support
for built-in logical replication and test_decoding is added separately.
When decoding sequences, we differentiate between sequences created in
a (running) transaction, and sequences created in other (already
committed) transactions. Changes for sequences created in the same
top-level transaction are treated as "transactional" i.e. just like any
other change from that transaction (and discarded in case of a
rollback). Changes for sequences created earlier are treated as not
transactional - are processed immediately, as if performed outside any
transaction (and thus not rolled back).
This mixed behavior is necessary - sequences are non-transactional (e.g.
ROLLBACK does not undo the sequence increments). But for new sequences,
we need to handle them in a transactional way, because if we ever get
some DDL support, the sequence won't exist until the transaction gets
applied. So we need to ensure the increments don't happen until the
sequence gets created.
To differentiate which sequences are "old" and which were created in a
still-running transaction, we track sequences created in running
transactions in a hash table. Each sequence is identified by its
relfilenode, and at transaction commit we discard all sequences created
in that particular transaction. For each sequence we track the XID of
the (sub)transaction that created it, and we cleanup the sequences for
each subtransaction when it completes (commit/rollback).
We don't use the XID to check if it's the same top-level transaction.
It's enough to know it was created in an in-progress transaction, and we
know it must be the current one because otherwise it wouldn't see the
sequence object.
The main changes in this patch are:
1) ensure WAL-logging of all necessary info for sequence advances
We need to be able to associate the advance with a XID, but until now
sequence advance might have XID 0 if it was the first thing the
transaction did. So we ensure the transaction has XID.
Note: This is needed because of subxacts. A XID 0 might still have the
sequence created in a different subxact of the same top-level xact.
Then there's a "created" flag added to the xl_seq_rec, which is
necessary to differentiate WAL for created/reset sequences.
2) decoding / queueing of sequences into ReorderBuffer
This is mostly copy-paste of existing code for other decoded events.
3) tracking sequences created in in-progress transactions
We use a simple hash table, indexed by relfilenode.
---
doc/src/sgml/logicaldecoding.sgml | 62 ++-
src/backend/commands/sequence.c | 32 ++
src/backend/replication/logical/decode.c | 131 +++++-
src/backend/replication/logical/logical.c | 88 ++++
.../replication/logical/reorderbuffer.c | 400 ++++++++++++++++++
src/include/commands/sequence.h | 1 +
src/include/replication/output_plugin.h | 27 ++
src/include/replication/reorderbuffer.h | 43 +-
8 files changed, 778 insertions(+), 6 deletions(-)
diff --git a/doc/src/sgml/logicaldecoding.sgml b/doc/src/sgml/logicaldecoding.sgml
index b6353c7a12..60a2b7d1cc 100644
--- a/doc/src/sgml/logicaldecoding.sgml
+++ b/doc/src/sgml/logicaldecoding.sgml
@@ -458,6 +458,7 @@ typedef struct OutputPluginCallbacks
LogicalDecodeTruncateCB truncate_cb;
LogicalDecodeCommitCB commit_cb;
LogicalDecodeMessageCB message_cb;
+ LogicalDecodeSequenceCB sequence_cb;
LogicalDecodeFilterByOriginCB filter_by_origin_cb;
LogicalDecodeShutdownCB shutdown_cb;
LogicalDecodeFilterPrepareCB filter_prepare_cb;
@@ -472,6 +473,7 @@ typedef struct OutputPluginCallbacks
LogicalDecodeStreamCommitCB stream_commit_cb;
LogicalDecodeStreamChangeCB stream_change_cb;
LogicalDecodeStreamMessageCB stream_message_cb;
+ LogicalDecodeStreamSequenceCB stream_sequence_cb;
LogicalDecodeStreamTruncateCB stream_truncate_cb;
} OutputPluginCallbacks;
@@ -481,9 +483,12 @@ typedef void (*LogicalOutputPluginInit) (struct OutputPluginCallbacks *cb);
and <function>commit_cb</function> callbacks are required,
while <function>startup_cb</function>,
<function>filter_by_origin_cb</function>, <function>truncate_cb</function>,
+ <function>sequence_cb</function>,
and <function>shutdown_cb</function> are optional.
If <function>truncate_cb</function> is not set but a
<command>TRUNCATE</command> is to be decoded, the action will be ignored.
+ Similarly, if <function>sequence_cb</function> is not set and a sequence
+ change is to be decoded, the action will be ignored.
</para>
<para>
@@ -492,7 +497,8 @@ typedef void (*LogicalOutputPluginInit) (struct OutputPluginCallbacks *cb);
<function>stream_stop_cb</function>, <function>stream_abort_cb</function>,
<function>stream_commit_cb</function>, <function>stream_change_cb</function>,
and <function>stream_prepare_cb</function>
- are required, while <function>stream_message_cb</function> and
+ are required, while <function>stream_message_cb</function>,
+ <function>stream_sequence_cb</function>, and
<function>stream_truncate_cb</function> are optional.
</para>
@@ -808,6 +814,35 @@ typedef void (*LogicalDecodeMessageCB) (struct LogicalDecodingContext *ctx,
</para>
</sect3>
+ <sect3 id="logicaldecoding-output-plugin-sequence">
+ <title>Sequence Callback</title>
+
+ <para>
+ The optional <function>sequence_cb</function> callback is called for
+ actions that update a sequence value.
+<programlisting>
+typedef void (*LogicalDecodeSequenceCB) (struct LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional,
+ int64 last_value,
+ int64 log_cnt,
+ bool is_called);
+</programlisting>
+ The <parameter>txn</parameter> parameter contains meta information about
+ the transaction the sequence change is part of. Note however that it can
+ be NULL when the sequence change is non-transactional and the XID was not
+ assigned yet in the transaction which updated the sequence value.
+ The <parameter>sequence_lsn</parameter> parameter has the WAL location of the
+ sequence update. The <parameter>transactional</parameter> parameter says whether
+ the sequence change is replayed as part of the transaction or applied immediately.
+
+ The <parameter>last_value</parameter>, <parameter>log_cnt</parameter> and
+ <parameter>is_called</parameter> parameters describe the sequence change.
+ </para>
+ </sect3>
+
<sect3 id="logicaldecoding-output-plugin-filter-prepare">
<title>Prepare Filter Callback</title>
@@ -1017,6 +1052,26 @@ typedef void (*LogicalDecodeStreamMessageCB) (struct LogicalDecodingContext *ctx
</para>
</sect3>
+ <sect3 id="logicaldecoding-output-plugin-stream-sequence">
+ <title>Stream Sequence Callback</title>
+ <para>
+ The optional <function>stream_sequence_cb</function> callback is called
+ for actions that change a sequence in a block of streamed changes
+ (demarcated by <function>stream_start_cb</function> and
+ <function>stream_stop_cb</function> calls).
+<programlisting>
+typedef void (*LogicalDecodeStreamSequenceCB) (struct LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional,
+ int64 last_value,
+ int64 log_cnt,
+ bool is_called);
+</programlisting>
+ </para>
+ </sect3>
+
<sect3 id="logicaldecoding-output-plugin-stream-truncate">
<title>Stream Truncate Callback</title>
<para>
@@ -1197,8 +1252,9 @@ OutputPluginWrite(ctx, true);
in-progress transactions. There are multiple required streaming callbacks
(<function>stream_start_cb</function>, <function>stream_stop_cb</function>,
<function>stream_abort_cb</function>, <function>stream_commit_cb</function>
- and <function>stream_change_cb</function>) and two optional callbacks
- (<function>stream_message_cb</function> and <function>stream_truncate_cb</function>).
+ and <function>stream_change_cb</function>) and multiple optional callbacks
+ (<function>stream_message_cb</function>, <function>stream_sequence_cb</function>,
+ and <function>stream_truncate_cb</function>).
Also, if streaming of two-phase commands is to be supported, then additional
callbacks must be provided. (See <xref linkend="logicaldecoding-two-phase-commits"/>
for details).
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 0f309d0a4e..a5435910a2 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -381,8 +381,13 @@ fill_seq_with_data(Relation rel, HeapTuple tuple)
/* check the comment above nextval_internal()'s equivalent call. */
if (RelationNeedsWAL(rel))
+ {
GetTopTransactionId();
+ if (XLogLogicalInfoActive())
+ GetCurrentTransactionId();
+ }
+
START_CRIT_SECTION();
MarkBufferDirty(buf);
@@ -402,6 +407,7 @@ fill_seq_with_data(Relation rel, HeapTuple tuple)
XLogRegisterBuffer(0, buf, REGBUF_WILL_INIT);
xlrec.node = rel->rd_node;
+ xlrec.created = true;
XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
XLogRegisterData((char *) tuple->t_data, tuple->t_len);
@@ -778,10 +784,28 @@ nextval_internal(Oid relid, bool check_permissions)
* It's sufficient to ensure the toplevel transaction has an xid, no need
* to assign xids subxacts, that'll already trigger an appropriate wait.
* (Have to do that here, so we're outside the critical section)
+ *
+ * We have to ensure we have a proper XID, which will be included in
+ * the XLOG record by XLogRecordAssemble. Otherwise the first nextval()
+ * in a subxact (without any preceding changes) would get XID 0, and it
+ * would then be impossible to decide which top xact it belongs to.
+ * It'd also trigger assert in DecodeSequence. We only do that with
+ * wal_level=logical, though.
+ *
+ * XXX This might seem unnecessary, because if there's no XID the xact
+ * couldn't have done anything important yet, e.g. it could not have
+ * created a sequence. But that's incorrect, as it ignores subxacts. The
+ * current subtransaction might not have done anything yet (thus no XID),
+ * but an earlier one might have created the sequence.
*/
if (logit && RelationNeedsWAL(seqrel))
+ {
GetTopTransactionId();
+ if (XLogLogicalInfoActive())
+ GetCurrentTransactionId();
+ }
+
/* ready to change the on-disk (or really, in-buffer) tuple */
START_CRIT_SECTION();
@@ -817,6 +841,7 @@ nextval_internal(Oid relid, bool check_permissions)
seq->log_cnt = 0;
xlrec.node = seqrel->rd_node;
+ xlrec.created = false;
XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
XLogRegisterData((char *) seqdatatuple.t_data, seqdatatuple.t_len);
@@ -991,8 +1016,13 @@ do_setval(Oid relid, int64 next, bool iscalled)
/* check the comment above nextval_internal()'s equivalent call. */
if (RelationNeedsWAL(seqrel))
+ {
GetTopTransactionId();
+ if (XLogLogicalInfoActive())
+ GetCurrentTransactionId();
+ }
+
/* ready to change the on-disk (or really, in-buffer) tuple */
START_CRIT_SECTION();
@@ -1013,6 +1043,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
XLogRegisterBuffer(0, buf, REGBUF_WILL_INIT);
xlrec.node = seqrel->rd_node;
+ xlrec.created = false;
+
XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
XLogRegisterData((char *) seqdatatuple.t_data, seqdatatuple.t_len);
diff --git a/src/backend/replication/logical/decode.c b/src/backend/replication/logical/decode.c
index 59aed6cee6..6965be53ca 100644
--- a/src/backend/replication/logical/decode.c
+++ b/src/backend/replication/logical/decode.c
@@ -42,6 +42,7 @@
#include "replication/reorderbuffer.h"
#include "replication/snapbuild.h"
#include "storage/standby.h"
+#include "commands/sequence.h"
typedef struct XLogRecordBuffer
{
@@ -74,10 +75,11 @@ static void DecodeAbort(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
bool two_phase);
static void DecodePrepare(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
xl_xact_parsed_prepare *parsed);
-
+static void DecodeSequence(LogicalDecodingContext *ctx, XLogRecordBuffer *buf);
/* common function to decode tuples */
static void DecodeXLogTuple(char *data, Size len, ReorderBufferTupleBuf *tup);
+static void DecodeSeqTuple(char *data, Size len, ReorderBufferTupleBuf *tuple);
/* helper functions for decoding transactions */
static inline bool FilterPrepare(LogicalDecodingContext *ctx,
@@ -158,6 +160,10 @@ LogicalDecodingProcessRecord(LogicalDecodingContext *ctx, XLogReaderState *recor
DecodeLogicalMsgOp(ctx, &buf);
break;
+ case RM_SEQ_ID:
+ DecodeSequence(ctx, &buf);
+ break;
+
/*
* Rmgrs irrelevant for logical decoding; they describe stuff not
* represented in logical decoding. Add new rmgrs in rmgrlist.h's
@@ -173,7 +179,6 @@ LogicalDecodingProcessRecord(LogicalDecodingContext *ctx, XLogReaderState *recor
case RM_HASH_ID:
case RM_GIN_ID:
case RM_GIST_ID:
- case RM_SEQ_ID:
case RM_SPGIST_ID:
case RM_BRIN_ID:
case RM_COMMIT_TS_ID:
@@ -1313,3 +1318,125 @@ DecodeTXNNeedSkip(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
(txn_dbid != InvalidOid && txn_dbid != ctx->slot->data.database) ||
ctx->fast_forward || FilterByOrigin(ctx, origin_id));
}
+
+/*
+ * Decode Sequence Tuple
+ */
+static void
+DecodeSeqTuple(char *data, Size len, ReorderBufferTupleBuf *tuple)
+{
+ int datalen = len - sizeof(xl_seq_rec) - SizeofHeapTupleHeader;
+
+ Assert(datalen >= 0);
+
+ tuple->tuple.t_len = datalen + SizeofHeapTupleHeader;
+
+ ItemPointerSetInvalid(&tuple->tuple.t_self);
+
+ tuple->tuple.t_tableOid = InvalidOid;
+
+ memcpy(((char *) tuple->tuple.t_data),
+ data + sizeof(xl_seq_rec),
+ SizeofHeapTupleHeader);
+
+ memcpy(((char *) tuple->tuple.t_data) + SizeofHeapTupleHeader,
+ data + sizeof(xl_seq_rec) + SizeofHeapTupleHeader,
+ datalen);
+}
+
+/*
+ * Handle sequence decode
+ *
+ * Decoding sequences is a bit tricky, because while most sequence actions
+ * are non-transactional (not subject to rollback), some need to be handled
+ * as transactional.
+ *
+ * By default, a sequence increment is non-transactional - we must not queue
+ * it in a transaction as other changes, because the transaction might get
+ * rolled back and we'd discard the increment. The downstream would not be
+ * notified about the increment, which is wrong.
+ *
+ * On the other hand, the sequence may be created in a transaction. In this
+ * case we *should* queue the change as other changes in the transaction,
+ * because we don't want to send the increments for unknown sequence to the
+ * plugin - it might get confused about which sequence it's related to etc.
+ */
+static void
+DecodeSequence(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
+{
+ SnapBuild *builder = ctx->snapshot_builder;
+ ReorderBufferTupleBuf *tuplebuf;
+ RelFileNode target_node;
+ XLogReaderState *r = buf->record;
+ char *tupledata = NULL;
+ Size tuplelen;
+ Size datalen = 0;
+ TransactionId xid = XLogRecGetXid(r);
+ uint8 info = XLogRecGetInfo(buf->record) & ~XLR_INFO_MASK;
+ xl_seq_rec *xlrec;
+ Snapshot snapshot;
+ RepOriginId origin_id = XLogRecGetOrigin(r);
+ bool transactional;
+
+ /* only decode changes flagged with XLOG_SEQ_LOG */
+ if (info != XLOG_SEQ_LOG)
+ elog(ERROR, "unexpected RM_SEQ_ID record type: %u", info);
+
+ /*
+ * If we don't have snapshot or we are just fast-forwarding, there is no
+ * point in decoding messages.
+ */
+ if (SnapBuildCurrentState(builder) < SNAPBUILD_FULL_SNAPSHOT ||
+ ctx->fast_forward)
+ return;
+
+ /* only interested in our database */
+ XLogRecGetBlockTag(r, 0, &target_node, NULL, NULL);
+ if (target_node.dbNode != ctx->slot->data.database)
+ return;
+
+ /* output plugin doesn't look for this origin, no need to queue */
+ if (FilterByOrigin(ctx, XLogRecGetOrigin(r)))
+ return;
+
+ tupledata = XLogRecGetData(r);
+ datalen = XLogRecGetDataLen(r);
+ tuplelen = datalen - SizeOfHeapHeader - sizeof(xl_seq_rec);
+
+ /* extract the WAL record, with "created" flag */
+ xlrec = (xl_seq_rec *) XLogRecGetData(r);
+
+ /* XXX how could we have sequence change without data? */
+ if (!datalen || !tupledata)
+ return;
+
+ tuplebuf = ReorderBufferGetTupleBuf(ctx->reorder, tuplelen);
+ DecodeSeqTuple(tupledata, datalen, tuplebuf);
+
+ /*
+ * Should we handle the sequence increment as transactional or not?
+ *
+ * If the sequence was created in a still-running transaction, treat
+ * it as transactional and queue the increments. Otherwise it needs
+ * to be treated as non-transactional, in which case we send it to
+ * the plugin right away.
+ */
+ transactional = ReorderBufferSequenceIsTransactional(ctx->reorder,
+ target_node,
+ xlrec->created);
+
+ /* Skip the change if already processed (per the snapshot). */
+ if (transactional &&
+ !SnapBuildProcessChange(builder, xid, buf->origptr))
+ return;
+ else if (!transactional &&
+ (SnapBuildCurrentState(builder) != SNAPBUILD_CONSISTENT ||
+ SnapBuildXactNeedsSkip(builder, buf->origptr)))
+ return;
+
+ /* Queue the increment (or send immediately if not transactional). */
+ snapshot = SnapBuildGetOrBuildSnapshot(builder, xid);
+ ReorderBufferQueueSequence(ctx->reorder, xid, snapshot, buf->endptr,
+ origin_id, target_node, transactional,
+ xlrec->created, tuplebuf);
+}
diff --git a/src/backend/replication/logical/logical.c b/src/backend/replication/logical/logical.c
index 10cbdea124..aab5e1226d 100644
--- a/src/backend/replication/logical/logical.c
+++ b/src/backend/replication/logical/logical.c
@@ -73,6 +73,10 @@ static void truncate_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
static void message_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
XLogRecPtr message_lsn, bool transactional,
const char *prefix, Size message_size, const char *message);
+static void sequence_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called);
/* streaming callbacks */
static void stream_start_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
@@ -90,6 +94,10 @@ static void stream_change_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn
static void stream_message_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
XLogRecPtr message_lsn, bool transactional,
const char *prefix, Size message_size, const char *message);
+static void stream_sequence_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called);
static void stream_truncate_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
int nrelations, Relation relations[], ReorderBufferChange *change);
@@ -218,6 +226,7 @@ StartupDecodingContext(List *output_plugin_options,
ctx->reorder->apply_truncate = truncate_cb_wrapper;
ctx->reorder->commit = commit_cb_wrapper;
ctx->reorder->message = message_cb_wrapper;
+ ctx->reorder->sequence = sequence_cb_wrapper;
/*
* To support streaming, we require start/stop/abort/commit/change
@@ -234,6 +243,7 @@ StartupDecodingContext(List *output_plugin_options,
(ctx->callbacks.stream_commit_cb != NULL) ||
(ctx->callbacks.stream_change_cb != NULL) ||
(ctx->callbacks.stream_message_cb != NULL) ||
+ (ctx->callbacks.stream_sequence_cb != NULL) ||
(ctx->callbacks.stream_truncate_cb != NULL);
/*
@@ -251,6 +261,7 @@ StartupDecodingContext(List *output_plugin_options,
ctx->reorder->stream_commit = stream_commit_cb_wrapper;
ctx->reorder->stream_change = stream_change_cb_wrapper;
ctx->reorder->stream_message = stream_message_cb_wrapper;
+ ctx->reorder->stream_sequence = stream_sequence_cb_wrapper;
ctx->reorder->stream_truncate = stream_truncate_cb_wrapper;
@@ -1203,6 +1214,42 @@ message_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
error_context_stack = errcallback.previous;
}
+static void
+sequence_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel, bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ LogicalDecodingContext *ctx = cache->private_data;
+ LogicalErrorCallbackState state;
+ ErrorContextCallback errcallback;
+
+ Assert(!ctx->fast_forward);
+
+ if (ctx->callbacks.sequence_cb == NULL)
+ return;
+
+ /* Push callback + info on the error context stack */
+ state.ctx = ctx;
+ state.callback_name = "sequence";
+ state.report_location = sequence_lsn;
+ errcallback.callback = output_plugin_error_callback;
+ errcallback.arg = (void *) &state;
+ errcallback.previous = error_context_stack;
+ error_context_stack = &errcallback;
+
+ /* set output state */
+ ctx->accept_writes = true;
+ ctx->write_xid = txn != NULL ? txn->xid : InvalidTransactionId;
+ ctx->write_location = sequence_lsn;
+
+ /* do the actual work: call callback */
+ ctx->callbacks.sequence_cb(ctx, txn, sequence_lsn, rel, transactional,
+ last_value, log_cnt, is_called);
+
+ /* Pop the error context stack */
+ error_context_stack = errcallback.previous;
+}
+
static void
stream_start_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
XLogRecPtr first_lsn)
@@ -1508,6 +1555,47 @@ stream_message_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
error_context_stack = errcallback.previous;
}
+static void
+stream_sequence_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ LogicalDecodingContext *ctx = cache->private_data;
+ LogicalErrorCallbackState state;
+ ErrorContextCallback errcallback;
+
+ Assert(!ctx->fast_forward);
+
+ /* We're only supposed to call this when streaming is supported. */
+ Assert(ctx->streaming);
+
+ /* this callback is optional */
+ if (ctx->callbacks.stream_sequence_cb == NULL)
+ return;
+
+ /* Push callback + info on the error context stack */
+ state.ctx = ctx;
+ state.callback_name = "stream_sequence";
+ state.report_location = sequence_lsn;
+ errcallback.callback = output_plugin_error_callback;
+ errcallback.arg = (void *) &state;
+ errcallback.previous = error_context_stack;
+ error_context_stack = &errcallback;
+
+ /* set output state */
+ ctx->accept_writes = true;
+ ctx->write_xid = txn != NULL ? txn->xid : InvalidTransactionId;
+ ctx->write_location = sequence_lsn;
+
+ /* do the actual work: call callback */
+ ctx->callbacks.stream_sequence_cb(ctx, txn, sequence_lsn, rel, transactional,
+ last_value, log_cnt, is_called);
+
+ /* Pop the error context stack */
+ error_context_stack = errcallback.previous;
+}
+
static void
stream_truncate_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
int nrelations, Relation relations[],
diff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c
index 7aa5647a2c..6b6d0d9b95 100644
--- a/src/backend/replication/logical/reorderbuffer.c
+++ b/src/backend/replication/logical/reorderbuffer.c
@@ -77,6 +77,35 @@
* a bit more memory to the oldest subtransactions, because it's likely
* they are the source for the next sequence of changes.
*
+ * When decoding sequences, we differentiate between sequences created
+ * in a (running) transaction, and sequences created in other (already
+ * committed) transactions. Changes for sequences created in the same
+ * top-level transaction are treated as "transactional" i.e. just like
+ * any other change from that transaction (and discarded in case of a
+ * rollback). Changes for sequences created earlier are treated as not
+ * transactional - are processed immediately, as if performed outside
+ * any transaction (and thus not rolled back).
+ *
+ * This mixed behavior is necessary - sequences are non-transactional
+ * (e.g. ROLLBACK does not undo the sequence increments). But for new
+ * sequences, we need to handle them in a transactional way, because if
+ * we ever get some DDL support, the sequence won't exist until the
+ * transaction gets applied. So we need to ensure the increments don't
+ * happen until the sequence gets created.
+ *
+ * To differentiate which sequences are "old" and which were created
+ * in a still-running transaction, we track sequences created in running
+ * transactions in a hash table. Each sequence is identified by its
+ * relfilenode, and at transaction commit we discard all sequences
+ * created in that particular transaction. For each sequence we track
+ * the XID of the (sub)transaction that created it, and we cleanup the
+ * sequences for each subtransaction when it completes (commit/rollback).
+ *
+ * We don't use the XID to check if it's the same top-level transaction.
+ * It's enough to know it was created in an in-progress transaction,
+ * and we know it must be the current one because otherwise it wouldn't
+ * see the sequence object.
+ *
* -------------------------------------------------------------------------
*/
#include "postgres.h"
@@ -91,6 +120,7 @@
#include "access/xact.h"
#include "access/xlog_internal.h"
#include "catalog/catalog.h"
+#include "commands/sequence.h"
#include "lib/binaryheap.h"
#include "miscadmin.h"
#include "pgstat.h"
@@ -116,6 +146,13 @@ typedef struct ReorderBufferTXNByIdEnt
ReorderBufferTXN *txn;
} ReorderBufferTXNByIdEnt;
+/* entry for hash table we use to track sequences created in running xacts */
+typedef struct ReorderBufferSequenceEnt
+{
+ RelFileNode rnode;
+ TransactionId xid;
+} ReorderBufferSequenceEnt;
+
/* data structures for (relfilenode, ctid) => (cmin, cmax) mapping */
typedef struct ReorderBufferTupleCidKey
{
@@ -339,6 +376,14 @@ ReorderBufferAllocate(void)
buffer->by_txn = hash_create("ReorderBufferByXid", 1000, &hash_ctl,
HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+ /* hash table of sequences, mapping relfilenode to XID of transaction */
+ hash_ctl.keysize = sizeof(RelFileNode);
+ hash_ctl.entrysize = sizeof(ReorderBufferSequenceEnt);
+ hash_ctl.hcxt = buffer->context;
+
+ buffer->sequences = hash_create("ReorderBufferSequenceHash", 1000, &hash_ctl,
+ HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+
buffer->by_txn_last_xid = InvalidTransactionId;
buffer->by_txn_last_txn = NULL;
@@ -525,6 +570,13 @@ ReorderBufferReturnChange(ReorderBuffer *rb, ReorderBufferChange *change,
change->data.truncate.relids = NULL;
}
break;
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ if (change->data.sequence.tuple)
+ {
+ ReorderBufferReturnTupleBuf(rb, change->data.sequence.tuple);
+ change->data.sequence.tuple = NULL;
+ }
+ break;
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT:
case REORDER_BUFFER_CHANGE_INTERNAL_COMMAND_ID:
@@ -859,6 +911,230 @@ ReorderBufferQueueMessage(ReorderBuffer *rb, TransactionId xid,
}
}
+/*
+ * Treat the sequence increment as transactional?
+ *
+ * The hash table tracks all sequences created in in-progress transactions,
+ * so we simply do a lookup (the sequence is identified by relfilenode). If
+ * we find a match, the increment should be handled as transactional.
+ */
+bool
+ReorderBufferSequenceIsTransactional(ReorderBuffer *rb,
+ RelFileNode rnode, bool created)
+{
+ bool found = false;
+
+ if (created)
+ return true;
+
+ hash_search(rb->sequences,
+ (void *) &rnode,
+ HASH_FIND,
+ &found);
+
+ return found;
+}
+
+/*
+ * Cleanup sequences created in in-progress transactions.
+ *
+ * There's no way to search by XID, so we simply do a seqscan of all
+ * the entries in the hash table. Hopefully there are only a couple
+ * entries in most cases - people generally don't create many new
+ * sequences over and over.
+ */
+static void
+ReorderBufferSequenceCleanup(ReorderBuffer *rb, TransactionId xid)
+{
+ HASH_SEQ_STATUS scan_status;
+ ReorderBufferSequenceEnt *ent;
+
+ hash_seq_init(&scan_status, rb->sequences);
+ while ((ent = (ReorderBufferSequenceEnt *) hash_seq_search(&scan_status)) != NULL)
+ {
+ /* skip sequences not from this transaction */
+ if (ent->xid != xid)
+ continue;
+
+ (void) hash_search(rb->sequences,
+ (void *) &(ent->rnode),
+ HASH_REMOVE, NULL);
+ }
+}
+
+/*
+ * A transactional sequence increment is queued to be processed upon commit
+ * and a non-transactional increment gets processed immediately.
+ *
+ * A sequence update may be both transactional and non-transactional. When
+ * created in a running transaction, treat it as transactional and queue
+ * the change in it. Otherwise treat it as non-transactional, so that we
+ * don't forget the increment in case of a rollback.
+ */
+void
+ReorderBufferQueueSequence(ReorderBuffer *rb, TransactionId xid,
+ Snapshot snapshot, XLogRecPtr lsn, RepOriginId origin_id,
+ RelFileNode rnode, bool transactional, bool created,
+ ReorderBufferTupleBuf *tuplebuf)
+{
+ /*
+ * The change needs to be handled as transactional, because the sequence
+ * was created in a transaction that is still running. In that case all
+ * the changes need to be queued in that transaction; we must not send
+ * them downstream until the transaction commits.
+ *
+ * There's a bit of trouble with subtransactions - we can't queue the
+ * change into the current subxact, because it might be rolled back and
+ * we'd lose the increment. We need to queue it into the same (sub)xact
+ * that created the sequence, which is why we track the XID in the hash table.
+ */
+ if (transactional)
+ {
+ MemoryContext oldcontext;
+ ReorderBufferChange *change;
+
+ /* lookup sequence by relfilenode */
+ ReorderBufferSequenceEnt *ent;
+ bool found;
+
+ /* transactional changes require a transaction */
+ Assert(xid != InvalidTransactionId);
+
+ /* search the lookup table (or create a new entry if the sequence was just created) */
+ ent = hash_search(rb->sequences,
+ (void *) &rnode,
+ created ? HASH_ENTER : HASH_FIND,
+ &found);
+
+ /*
+ * If this is the "create" increment, we must not have found any
+ * pre-existing entry in the hash table (i.e. there must not be
+ * any conflicting sequence).
+ */
+ Assert(!(created && found));
+
+ /* But we must have either created or found an existing entry. */
+ Assert(created || found);
+
+ /*
+ * When creating the sequence, remember the XID of the transaction
+ * that created it.
+ */
+ if (created)
+ ent->xid = xid;
+
+ /* XXX Maybe check that we're still in the same top-level xact? */
+
+ /* OK, allocate and queue the change */
+ oldcontext = MemoryContextSwitchTo(rb->context);
+
+ change = ReorderBufferGetChange(rb);
+
+ change->action = REORDER_BUFFER_CHANGE_SEQUENCE;
+ change->origin_id = origin_id;
+
+ memcpy(&change->data.sequence.relnode, &rnode, sizeof(RelFileNode));
+
+ change->data.sequence.tuple = tuplebuf;
+
+ /* add it to the same subxact that created the sequence */
+ ReorderBufferQueueChange(rb, ent->xid, lsn, change, false);
+
+ MemoryContextSwitchTo(oldcontext);
+ }
+ else
+ {
+ /*
+ * This increment is for a sequence that was not created in any
+ * running transaction, so we treat it as non-transactional and
+ * just send it to the output plugin directly.
+ */
+ ReorderBufferTXN *txn = NULL;
+ volatile Snapshot snapshot_now = snapshot;
+ bool using_subtxn;
+
+#ifdef USE_ASSERT_CHECKING
+ /* All "creates" have to be handled as transactional. */
+ Assert(!created);
+
+ /* Make sure the sequence is not in the hash table. */
+ {
+ bool found;
+ hash_search(rb->sequences,
+ (void *) &rnode,
+ HASH_FIND, &found);
+ Assert(!found);
+ }
+#endif
+
+ if (xid != InvalidTransactionId)
+ txn = ReorderBufferTXNByXid(rb, xid, true, NULL, lsn, true);
+
+ /* setup snapshot to allow catalog access */
+ SetupHistoricSnapshot(snapshot_now, NULL);
+
+ /*
+ * Decoding needs access to syscaches et al., which in turn use
+ * heavyweight locks and such. Thus we need to have enough state around to
+ * keep track of those. The easiest way is to simply use a transaction
+ * internally. That also allows us to easily enforce that nothing writes
+ * to the database by checking for xid assignments.
+ *
+ * When we're called via the SQL SRF there's already a transaction
+ * started, so start an explicit subtransaction there.
+ */
+ using_subtxn = IsTransactionOrTransactionBlock();
+
+ PG_TRY();
+ {
+ Relation relation;
+ HeapTuple tuple;
+ Form_pg_sequence_data seq;
+ Oid reloid;
+
+ if (using_subtxn)
+ BeginInternalSubTransaction("sequence");
+ else
+ StartTransactionCommand();
+
+ reloid = RelidByRelfilenode(rnode.spcNode, rnode.relNode);
+
+ if (reloid == InvalidOid)
+ elog(ERROR, "could not map filenode \"%s\" to relation OID",
+ relpathperm(rnode,
+ MAIN_FORKNUM));
+
+ relation = RelationIdGetRelation(reloid);
+ tuple = &tuplebuf->tuple;
+ seq = (Form_pg_sequence_data) GETSTRUCT(tuple);
+
+ rb->sequence(rb, txn, lsn, relation, transactional,
+ seq->last_value, seq->log_cnt, seq->is_called);
+
+ RelationClose(relation);
+
+ TeardownHistoricSnapshot(false);
+
+ AbortCurrentTransaction();
+
+ if (using_subtxn)
+ RollbackAndReleaseCurrentSubTransaction();
+ }
+ PG_CATCH();
+ {
+ TeardownHistoricSnapshot(true);
+
+ AbortCurrentTransaction();
+
+ if (using_subtxn)
+ RollbackAndReleaseCurrentSubTransaction();
+
+ PG_RE_THROW();
+ }
+ PG_END_TRY();
+ }
+}
+
/*
* AssertTXNLsnOrder
* Verify LSN ordering of transaction lists in the reorderbuffer
@@ -1535,6 +1811,9 @@ ReorderBufferCleanupTXN(ReorderBuffer *rb, ReorderBufferTXN *txn)
&found);
Assert(found);
+ /* Remove sequences created in this transaction (if any). */
+ ReorderBufferSequenceCleanup(rb, txn->xid);
+
/* remove entries spilled to disk */
if (rbtxn_is_serialized(txn))
ReorderBufferRestoreCleanup(rb, txn);
@@ -1950,6 +2229,29 @@ ReorderBufferApplyMessage(ReorderBuffer *rb, ReorderBufferTXN *txn,
change->data.msg.message);
}
+/*
+ * Helper function for ReorderBufferProcessTXN for applying sequences.
+ */
+static inline void
+ReorderBufferApplySequence(ReorderBuffer *rb, ReorderBufferTXN *txn,
+ Relation relation, ReorderBufferChange *change,
+ bool streaming)
+{
+ HeapTuple tuple;
+ Form_pg_sequence_data seq;
+
+ tuple = &change->data.sequence.tuple->tuple;
+ seq = (Form_pg_sequence_data) GETSTRUCT(tuple);
+
+ /* Only ever called from ReorderBufferProcessTXN, so the change is transactional. */
+ if (streaming)
+ rb->stream_sequence(rb, txn, change->lsn, relation, true,
+ seq->last_value, seq->log_cnt, seq->is_called);
+ else
+ rb->sequence(rb, txn, change->lsn, relation, true,
+ seq->last_value, seq->log_cnt, seq->is_called);
+}
+
/*
* Function to store the command id and snapshot at the end of the current
* stream so that we can reuse the same while sending the next stream.
@@ -2392,6 +2694,31 @@ ReorderBufferProcessTXN(ReorderBuffer *rb, ReorderBufferTXN *txn,
case REORDER_BUFFER_CHANGE_INTERNAL_TUPLECID:
elog(ERROR, "tuplecid value in changequeue");
break;
+
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ Assert(snapshot_now);
+
+ reloid = RelidByRelfilenode(change->data.sequence.relnode.spcNode,
+ change->data.sequence.relnode.relNode);
+
+ if (reloid == InvalidOid)
+ elog(ERROR, "could not map filenode \"%s\" to relation OID",
+ relpathperm(change->data.sequence.relnode,
+ MAIN_FORKNUM));
+
+ relation = RelationIdGetRelation(reloid);
+
+ if (!RelationIsValid(relation))
+ elog(ERROR, "could not open relation with OID %u (for filenode \"%s\")",
+ reloid,
+ relpathperm(change->data.sequence.relnode,
+ MAIN_FORKNUM));
+
+ if (RelationIsLogicallyLogged(relation))
+ ReorderBufferApplySequence(rb, txn, relation, change, streaming);
+
+ RelationClose(relation);
+ break;
}
}
@@ -3776,6 +4103,39 @@ ReorderBufferSerializeChange(ReorderBuffer *rb, ReorderBufferTXN *txn,
memcpy(data, change->data.truncate.relids, size);
data += size;
+ break;
+ }
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ {
+ char *data;
+ ReorderBufferTupleBuf *tup;
+ Size len = 0;
+
+ tup = change->data.sequence.tuple;
+
+ if (tup)
+ {
+ sz += sizeof(HeapTupleData);
+ len = tup->tuple.t_len;
+ sz += len;
+ }
+
+ /* make sure we have enough space */
+ ReorderBufferSerializeReserve(rb, sz);
+
+ data = ((char *) rb->outbuf) + sizeof(ReorderBufferDiskChange);
+ /* might have been reallocated above */
+ ondisk = (ReorderBufferDiskChange *) rb->outbuf;
+
+ if (len)
+ {
+ memcpy(data, &tup->tuple, sizeof(HeapTupleData));
+ data += sizeof(HeapTupleData);
+
+ memcpy(data, tup->tuple.t_data, len);
+ data += len;
+ }
+
break;
}
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
@@ -4040,6 +4400,22 @@ ReorderBufferChangeSize(ReorderBufferChange *change)
{
sz += sizeof(Oid) * change->data.truncate.nrelids;
+ break;
+ }
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ {
+ ReorderBufferTupleBuf *tup;
+ Size len = 0;
+
+ tup = change->data.sequence.tuple;
+
+ if (tup)
+ {
+ sz += sizeof(HeapTupleData);
+ len = tup->tuple.t_len;
+ sz += len;
+ }
+
break;
}
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
@@ -4341,6 +4717,30 @@ ReorderBufferRestoreChange(ReorderBuffer *rb, ReorderBufferTXN *txn,
break;
}
+
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ if (change->data.sequence.tuple)
+ {
+ uint32 tuplelen = ((HeapTuple) data)->t_len;
+
+ change->data.sequence.tuple =
+ ReorderBufferGetTupleBuf(rb, tuplelen - SizeofHeapTupleHeader);
+
+ /* restore ->tuple */
+ memcpy(&change->data.sequence.tuple->tuple, data,
+ sizeof(HeapTupleData));
+ data += sizeof(HeapTupleData);
+
+ /* reset t_data pointer into the new tuplebuf */
+ change->data.sequence.tuple->tuple.t_data =
+ ReorderBufferTupleBufData(change->data.sequence.tuple);
+
+ /* restore tuple data itself */
+ memcpy(change->data.sequence.tuple->tuple.t_data, data, tuplelen);
+ data += tuplelen;
+ }
+ break;
+
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT:
case REORDER_BUFFER_CHANGE_INTERNAL_COMMAND_ID:
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 40544dd4c7..5919fb90ee 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -48,6 +48,7 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
typedef struct xl_seq_rec
{
RelFileNode node;
+ bool created; /* is this a CREATE SEQUENCE */
/* SEQUENCE TUPLE DATA FOLLOWS AT THE END */
} xl_seq_rec;
diff --git a/src/include/replication/output_plugin.h b/src/include/replication/output_plugin.h
index 810495ed0e..43b8a63ee4 100644
--- a/src/include/replication/output_plugin.h
+++ b/src/include/replication/output_plugin.h
@@ -88,6 +88,18 @@ typedef void (*LogicalDecodeMessageCB) (struct LogicalDecodingContext *ctx,
Size message_size,
const char *message);
+/*
+ * Called for logical decoding of sequence changes.
+ */
+typedef void (*LogicalDecodeSequenceCB) (struct LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional,
+ int64 last_value,
+ int64 log_cnt,
+ bool is_called);
+
/*
* Filter changes by origin.
*/
@@ -199,6 +211,19 @@ typedef void (*LogicalDecodeStreamMessageCB) (struct LogicalDecodingContext *ctx
Size message_size,
const char *message);
+/*
+ * Callback for streaming sequence changes from in-progress
+ * transactions.
+ */
+typedef void (*LogicalDecodeStreamSequenceCB) (struct LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional,
+ int64 last_value,
+ int64 log_cnt,
+ bool is_called);
+
/*
* Callback for streaming truncates from in-progress transactions.
*/
@@ -219,6 +244,7 @@ typedef struct OutputPluginCallbacks
LogicalDecodeTruncateCB truncate_cb;
LogicalDecodeCommitCB commit_cb;
LogicalDecodeMessageCB message_cb;
+ LogicalDecodeSequenceCB sequence_cb;
LogicalDecodeFilterByOriginCB filter_by_origin_cb;
LogicalDecodeShutdownCB shutdown_cb;
@@ -237,6 +263,7 @@ typedef struct OutputPluginCallbacks
LogicalDecodeStreamCommitCB stream_commit_cb;
LogicalDecodeStreamChangeCB stream_change_cb;
LogicalDecodeStreamMessageCB stream_message_cb;
+ LogicalDecodeStreamSequenceCB stream_sequence_cb;
LogicalDecodeStreamTruncateCB stream_truncate_cb;
} OutputPluginCallbacks;
diff --git a/src/include/replication/reorderbuffer.h b/src/include/replication/reorderbuffer.h
index 5b40ff75f7..ab469fb210 100644
--- a/src/include/replication/reorderbuffer.h
+++ b/src/include/replication/reorderbuffer.h
@@ -64,7 +64,8 @@ enum ReorderBufferChangeType
REORDER_BUFFER_CHANGE_INTERNAL_SPEC_INSERT,
REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM,
REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT,
- REORDER_BUFFER_CHANGE_TRUNCATE
+ REORDER_BUFFER_CHANGE_TRUNCATE,
+ REORDER_BUFFER_CHANGE_SEQUENCE
};
/* forward declaration */
@@ -158,6 +159,13 @@ typedef struct ReorderBufferChange
uint32 ninvalidations; /* Number of messages */
SharedInvalidationMessage *invalidations; /* invalidation message */
} inval;
+
+ /* Context data for Sequence changes */
+ struct
+ {
+ RelFileNode relnode;
+ ReorderBufferTupleBuf *tuple;
+ } sequence;
} data;
/*
@@ -430,6 +438,15 @@ typedef void (*ReorderBufferMessageCB) (ReorderBuffer *rb,
const char *prefix, Size sz,
const char *message);
+/* sequence callback signature */
+typedef void (*ReorderBufferSequenceCB) (ReorderBuffer *rb,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional,
+ int64 last_value, int64 log_cnt,
+ bool is_called);
+
/* begin prepare callback signature */
typedef void (*ReorderBufferBeginPrepareCB) (ReorderBuffer *rb,
ReorderBufferTXN *txn);
@@ -496,6 +513,15 @@ typedef void (*ReorderBufferStreamMessageCB) (
const char *prefix, Size sz,
const char *message);
+/* stream sequence callback signature */
+typedef void (*ReorderBufferStreamSequenceCB) (ReorderBuffer *rb,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional,
+ int64 last_value, int64 log_cnt,
+ bool is_called);
+
/* stream truncate callback signature */
typedef void (*ReorderBufferStreamTruncateCB) (
ReorderBuffer *rb,
@@ -511,6 +537,12 @@ struct ReorderBuffer
*/
HTAB *by_txn;
+ /*
+ * relfilenode => XID lookup table for sequences created in a transaction
+ * (this also covers altered sequences, since ALTER SEQUENCE assigns a new relfilenode)
+ */
+ HTAB *sequences;
+
/*
* Transactions that could be a toplevel xact, ordered by LSN of the first
* record bearing that xid.
@@ -541,6 +573,7 @@ struct ReorderBuffer
ReorderBufferApplyTruncateCB apply_truncate;
ReorderBufferCommitCB commit;
ReorderBufferMessageCB message;
+ ReorderBufferSequenceCB sequence;
/*
* Callbacks to be called when streaming a transaction at prepare time.
@@ -560,6 +593,7 @@ struct ReorderBuffer
ReorderBufferStreamCommitCB stream_commit;
ReorderBufferStreamChangeCB stream_change;
ReorderBufferStreamMessageCB stream_message;
+ ReorderBufferStreamSequenceCB stream_sequence;
ReorderBufferStreamTruncateCB stream_truncate;
/*
@@ -635,6 +669,10 @@ void ReorderBufferQueueChange(ReorderBuffer *, TransactionId,
void ReorderBufferQueueMessage(ReorderBuffer *, TransactionId, Snapshot snapshot, XLogRecPtr lsn,
bool transactional, const char *prefix,
Size message_size, const char *message);
+void ReorderBufferQueueSequence(ReorderBuffer *rb, TransactionId xid,
+ Snapshot snapshot, XLogRecPtr lsn, RepOriginId origin_id,
+ RelFileNode rnode, bool transactional, bool created,
+ ReorderBufferTupleBuf *tuplebuf);
void ReorderBufferCommit(ReorderBuffer *, TransactionId,
XLogRecPtr commit_lsn, XLogRecPtr end_lsn,
TimestampTz commit_time, RepOriginId origin_id, XLogRecPtr origin_lsn);
@@ -682,4 +720,7 @@ void ReorderBufferSetRestartPoint(ReorderBuffer *, XLogRecPtr ptr);
void StartupReorderBuffer(void);
+bool ReorderBufferSequenceIsTransactional(ReorderBuffer *rb,
+ RelFileNode rnode, bool created);
+
#endif
--
2.31.1
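A quick illustrative note on the output plugin API added above (not part of
the patch): a plugin adds support by setting sequence_cb (and
stream_sequence_cb for streamed transactions) in _PG_output_plugin_init, and
the callback receives the already-decoded sequence state. A minimal sketch,
with a made-up name my_decode_sequence and the usual output-plugin includes
assumed - the real implementation is in the test_decoding patch below:

static void
my_decode_sequence(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
                   XLogRecPtr sequence_lsn, Relation rel,
                   bool transactional,
                   int64 last_value, int64 log_cnt, bool is_called)
{
    /* emit one output line per decoded sequence state (illustration only) */
    OutputPluginPrepareWrite(ctx, true);
    appendStringInfo(ctx->out, "sequence %s: last_value " INT64_FORMAT " is_called %d",
                     RelationGetRelationName(rel), last_value, is_called);
    OutputPluginWrite(ctx, true);
}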
Attachment: 0003-Add-support-for-decoding-sequences-to-test_-20211222.patch (text/x-patch)
From 4b249e65b55d5e9fb7b08135ead68bfa3c58034c Mon Sep 17 00:00:00 2001
From: Tomas Vondra <tomas.vondra@postgresql.org>
Date: Fri, 24 Sep 2021 00:42:04 +0200
Subject: [PATCH 3/6] Add support for decoding sequences to test_decoding
---
contrib/test_decoding/Makefile | 3 +-
contrib/test_decoding/expected/sequence.out | 327 ++++++++++++++++++++
contrib/test_decoding/sql/sequence.sql | 119 +++++++
contrib/test_decoding/test_decoding.c | 69 +++++
4 files changed, 517 insertions(+), 1 deletion(-)
create mode 100644 contrib/test_decoding/expected/sequence.out
create mode 100644 contrib/test_decoding/sql/sequence.sql
diff --git a/contrib/test_decoding/Makefile b/contrib/test_decoding/Makefile
index 9a31e0b879..56ddc3abae 100644
--- a/contrib/test_decoding/Makefile
+++ b/contrib/test_decoding/Makefile
@@ -5,7 +5,8 @@ PGFILEDESC = "test_decoding - example of a logical decoding output plugin"
REGRESS = ddl xact rewrite toast permissions decoding_in_xact \
decoding_into_rel binary prepared replorigin time messages \
- spill slot truncate stream stats twophase twophase_stream
+ spill slot truncate stream stats twophase twophase_stream \
+ sequence
ISOLATION = mxact delayed_startup ondisk_startup concurrent_ddl_dml \
oldest_xmin snapshot_transfer subxact_without_top concurrent_stream \
twophase_snapshot
diff --git a/contrib/test_decoding/expected/sequence.out b/contrib/test_decoding/expected/sequence.out
new file mode 100644
index 0000000000..c3c2a18a35
--- /dev/null
+++ b/contrib/test_decoding/expected/sequence.out
@@ -0,0 +1,327 @@
+-- predictability
+SET synchronous_commit = on;
+SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'test_decoding');
+ ?column?
+----------
+ init
+(1 row)
+
+CREATE SEQUENCE test_sequence;
+-- test the sequence changes by several nextval() calls
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 1
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 2
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 4
+(1 row)
+
+-- test the sequence changes by several ALTER commands
+ALTER SEQUENCE test_sequence INCREMENT BY 10;
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 14
+(1 row)
+
+ALTER SEQUENCE test_sequence START WITH 3000;
+ALTER SEQUENCE test_sequence MAXVALUE 10000;
+ALTER SEQUENCE test_sequence RESTART WITH 4000;
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 4000
+(1 row)
+
+-- test the sequence changes by several setval() calls
+SELECT setval('test_sequence', 3500);
+ setval
+--------
+ 3500
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3510
+(1 row)
+
+SELECT setval('test_sequence', 3500, true);
+ setval
+--------
+ 3500
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3510
+(1 row)
+
+SELECT setval('test_sequence', 3500, false);
+ setval
+--------
+ 3500
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3500
+(1 row)
+
+-- show results and drop sequence
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+ data
+--------------------------------------------------------------------------------------
+ BEGIN
+ sequence public.test_sequence: transactional:1 last_value:1 log_cnt:0 is_called:0
+ COMMIT
+ sequence public.test_sequence: transactional:0 last_value:33 log_cnt:0 is_called:1
+ BEGIN
+ sequence public.test_sequence: transactional:1 last_value:4 log_cnt:0 is_called:1
+ COMMIT
+ sequence public.test_sequence: transactional:0 last_value:334 log_cnt:0 is_called:1
+ BEGIN
+ sequence public.test_sequence: transactional:1 last_value:14 log_cnt:32 is_called:1
+ COMMIT
+ BEGIN
+ sequence public.test_sequence: transactional:1 last_value:14 log_cnt:0 is_called:1
+ COMMIT
+ BEGIN
+ sequence public.test_sequence: transactional:1 last_value:4000 log_cnt:0 is_called:0
+ COMMIT
+ sequence public.test_sequence: transactional:0 last_value:4320 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:0 last_value:3500 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:0 last_value:3830 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:0 last_value:3500 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:0 last_value:3830 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:0 last_value:3500 log_cnt:0 is_called:0
+ sequence public.test_sequence: transactional:0 last_value:3820 log_cnt:0 is_called:1
+(24 rows)
+
+DROP SEQUENCE test_sequence;
+-- rollback on sequence creation and update
+BEGIN;
+CREATE SEQUENCE test_sequence;
+CREATE TABLE test_table (a INT);
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 1
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 2
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3
+(1 row)
+
+SELECT setval('test_sequence', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3001
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3002
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3003
+(1 row)
+
+ALTER SEQUENCE test_sequence RESTART WITH 6000;
+INSERT INTO test_table VALUES( (SELECT nextval('test_sequence')) );
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 6001
+(1 row)
+
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '0');
+ data
+------
+(0 rows)
+
+-- rollback on table creation with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '0');
+ data
+------
+(0 rows)
+
+-- rollback on table with serial column
+CREATE TABLE test_table (a SERIAL, b INT);
+BEGIN;
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+-- check table and sequence values after rollback
+SELECT * from test_table_a_seq;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+ 3003 | 30 | t
+(1 row)
+
+SELECT nextval('test_table_a_seq');
+ nextval
+---------
+ 3004
+(1 row)
+
+DROP TABLE test_table;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+ data
+-----------------------------------------------------------------------------------------
+ BEGIN
+ sequence public.test_table_a_seq: transactional:1 last_value:1 log_cnt:0 is_called:0
+ COMMIT
+ sequence public.test_table_a_seq: transactional:0 last_value:33 log_cnt:0 is_called:1
+ sequence public.test_table_a_seq: transactional:0 last_value:3000 log_cnt:0 is_called:1
+ sequence public.test_table_a_seq: transactional:0 last_value:3033 log_cnt:0 is_called:1
+ BEGIN
+ COMMIT
+(8 rows)
+
+-- savepoint test on table with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+SAVEPOINT a;
+INSERT INTO test_table (b) VALUES (300);
+ROLLBACK TO SAVEPOINT a;
+DROP TABLE test_table;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+ data
+---------------------------------------------------------------------------------------
+ BEGIN
+ sequence public.test_table_a_seq: transactional:1 last_value:1 log_cnt:0 is_called:0
+ sequence public.test_table_a_seq: transactional:1 last_value:33 log_cnt:0 is_called:1
+ table public.test_table: INSERT: a[integer]:1 b[integer]:100
+ table public.test_table: INSERT: a[integer]:2 b[integer]:200
+ COMMIT
+(6 rows)
+
+-- savepoint test on sequence
+BEGIN;
+CREATE SEQUENCE test_sequence;
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 1
+(1 row)
+
+SELECT setval('test_sequence', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3001
+(1 row)
+
+SAVEPOINT a;
+ALTER SEQUENCE test_sequence START WITH 7000;
+SELECT setval('test_sequence', 5000);
+ setval
+--------
+ 5000
+(1 row)
+
+ROLLBACK TO SAVEPOINT a;
+SELECT * FROM test_sequence;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+ 3001 | 32 | t
+(1 row)
+
+DROP SEQUENCE test_sequence;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+ data
+--------------------------------------------------------------------------------------
+ BEGIN
+ sequence public.test_sequence: transactional:1 last_value:1 log_cnt:0 is_called:0
+ sequence public.test_sequence: transactional:1 last_value:33 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:1 last_value:3000 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:1 last_value:3033 log_cnt:0 is_called:1
+ COMMIT
+(6 rows)
+
+SELECT pg_drop_replication_slot('regression_slot');
+ pg_drop_replication_slot
+--------------------------
+
+(1 row)
+
diff --git a/contrib/test_decoding/sql/sequence.sql b/contrib/test_decoding/sql/sequence.sql
new file mode 100644
index 0000000000..21c4b79222
--- /dev/null
+++ b/contrib/test_decoding/sql/sequence.sql
@@ -0,0 +1,119 @@
+-- predictability
+SET synchronous_commit = on;
+SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'test_decoding');
+
+CREATE SEQUENCE test_sequence;
+
+-- test the sequence changes by several nextval() calls
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+
+-- test the sequence changes by several ALTER commands
+ALTER SEQUENCE test_sequence INCREMENT BY 10;
+SELECT nextval('test_sequence');
+
+ALTER SEQUENCE test_sequence START WITH 3000;
+ALTER SEQUENCE test_sequence MAXVALUE 10000;
+ALTER SEQUENCE test_sequence RESTART WITH 4000;
+SELECT nextval('test_sequence');
+
+-- test the sequence changes by several setval() calls
+SELECT setval('test_sequence', 3500);
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3500, true);
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3500, false);
+SELECT nextval('test_sequence');
+
+-- show results and drop sequence
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+DROP SEQUENCE test_sequence;
+
+-- rollback on sequence creation and update
+BEGIN;
+CREATE SEQUENCE test_sequence;
+CREATE TABLE test_table (a INT);
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3000);
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+ALTER SEQUENCE test_sequence RESTART WITH 6000;
+INSERT INTO test_table VALUES( (SELECT nextval('test_sequence')) );
+SELECT nextval('test_sequence');
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '0');
+
+-- rollback on table creation with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '0');
+
+-- rollback on table with serial column
+CREATE TABLE test_table (a SERIAL, b INT);
+
+BEGIN;
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+
+-- check table and sequence values after rollback
+SELECT * from test_table_a_seq;
+SELECT nextval('test_table_a_seq');
+
+DROP TABLE test_table;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+
+-- savepoint test on table with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+SAVEPOINT a;
+INSERT INTO test_table (b) VALUES (300);
+ROLLBACK TO SAVEPOINT a;
+DROP TABLE test_table;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+
+-- savepoint test on sequence
+BEGIN;
+CREATE SEQUENCE test_sequence;
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3000);
+SELECT nextval('test_sequence');
+SAVEPOINT a;
+ALTER SEQUENCE test_sequence START WITH 7000;
+SELECT setval('test_sequence', 5000);
+ROLLBACK TO SAVEPOINT a;
+SELECT * FROM test_sequence;
+DROP SEQUENCE test_sequence;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+
+SELECT pg_drop_replication_slot('regression_slot');
diff --git a/contrib/test_decoding/test_decoding.c b/contrib/test_decoding/test_decoding.c
index e5cd84e85e..bc3250b8f4 100644
--- a/contrib/test_decoding/test_decoding.c
+++ b/contrib/test_decoding/test_decoding.c
@@ -35,6 +35,7 @@ typedef struct
bool include_timestamp;
bool skip_empty_xacts;
bool only_local;
+ bool skip_sequences;
} TestDecodingData;
/*
@@ -76,6 +77,10 @@ static void pg_decode_message(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn, XLogRecPtr message_lsn,
bool transactional, const char *prefix,
Size sz, const char *message);
+static void pg_decode_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation rel, bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called);
static bool pg_decode_filter_prepare(LogicalDecodingContext *ctx,
TransactionId xid,
const char *gid);
@@ -116,6 +121,10 @@ static void pg_decode_stream_message(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn, XLogRecPtr message_lsn,
bool transactional, const char *prefix,
Size sz, const char *message);
+static void pg_decode_stream_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation rel, bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called);
static void pg_decode_stream_truncate(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn,
int nrelations, Relation relations[],
@@ -141,6 +150,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->filter_by_origin_cb = pg_decode_filter;
cb->shutdown_cb = pg_decode_shutdown;
cb->message_cb = pg_decode_message;
+ cb->sequence_cb = pg_decode_sequence;
cb->filter_prepare_cb = pg_decode_filter_prepare;
cb->begin_prepare_cb = pg_decode_begin_prepare_txn;
cb->prepare_cb = pg_decode_prepare_txn;
@@ -153,6 +163,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->stream_commit_cb = pg_decode_stream_commit;
cb->stream_change_cb = pg_decode_stream_change;
cb->stream_message_cb = pg_decode_stream_message;
+ cb->stream_sequence_cb = pg_decode_stream_sequence;
cb->stream_truncate_cb = pg_decode_stream_truncate;
}
@@ -175,6 +186,9 @@ pg_decode_startup(LogicalDecodingContext *ctx, OutputPluginOptions *opt,
data->skip_empty_xacts = false;
data->only_local = false;
+ /* skip sequences by default for backwards compatibility */
+ data->skip_sequences = true;
+
ctx->output_plugin_private = data;
opt->output_type = OUTPUT_PLUGIN_TEXTUAL_OUTPUT;
@@ -265,6 +279,17 @@ pg_decode_startup(LogicalDecodingContext *ctx, OutputPluginOptions *opt,
errmsg("could not parse value \"%s\" for parameter \"%s\"",
strVal(elem->arg), elem->defname)));
}
+ else if (strcmp(elem->defname, "skip-sequences") == 0)
+ {
+
+ if (elem->arg == NULL)
+ continue; /* true by default */
+ else if (!parse_bool(strVal(elem->arg), &data->skip_sequences))
+ ereport(ERROR,
+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ errmsg("could not parse value \"%s\" for parameter \"%s\"",
+ strVal(elem->arg), elem->defname)));
+ }
else
{
ereport(ERROR,
@@ -744,6 +769,28 @@ pg_decode_message(LogicalDecodingContext *ctx,
OutputPluginWrite(ctx, true);
}
+static void
+pg_decode_sequence(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ TestDecodingData *data = ctx->output_plugin_private;
+
+ /* do nothing if we're asked to skip sequences */
+ if (data->skip_sequences)
+ return;
+
+ OutputPluginPrepareWrite(ctx, true);
+ appendStringInfoString(ctx->out, "sequence ");
+ appendStringInfoString(ctx->out,
+ quote_qualified_identifier(get_namespace_name(get_rel_namespace(RelationGetRelid(rel))),
+ RelationGetRelationName(rel)));
+ appendStringInfo(ctx->out, ": transactional:%d last_value:" INT64_FORMAT " log_cnt:" INT64_FORMAT " is_called:%d",
+ transactional, last_value, log_cnt, is_called);
+ OutputPluginWrite(ctx, true);
+}
+
static void
pg_decode_stream_start(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn)
@@ -943,6 +990,28 @@ pg_decode_stream_message(LogicalDecodingContext *ctx,
OutputPluginWrite(ctx, true);
}
+static void
+pg_decode_stream_sequence(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ TestDecodingData *data = ctx->output_plugin_private;
+
+ /* do nothing if we're asked to skip sequences */
+ if (data->skip_sequences)
+ return;
+
+ OutputPluginPrepareWrite(ctx, true);
+ appendStringInfoString(ctx->out, "streaming sequence ");
+ appendStringInfoString(ctx->out,
+ quote_qualified_identifier(get_namespace_name(get_rel_namespace(RelationGetRelid(rel))),
+ RelationGetRelationName(rel)));
+ appendStringInfo(ctx->out, ": transactional:%d last_value:" INT64_FORMAT " log_cnt:" INT64_FORMAT " is_called:%d",
+ transactional, last_value, log_cnt, is_called);
+ OutputPluginWrite(ctx, true);
+}
+
/*
* In streaming mode, we don't display the detailed information of Truncate.
* See pg_decode_stream_change.
--
2.31.1
0004-tweak-test_decoding-tests-20211222.patchtext/x-patch; charset=UTF-8; name=0004-tweak-test_decoding-tests-20211222.patchDownload
From ee0f4ffe25d6ced57cd2822616ceb131407aeb5d Mon Sep 17 00:00:00 2001
From: Tomas Vondra <tomas.vondra@postgresql.org>
Date: Wed, 22 Dec 2021 03:53:10 +0100
Subject: [PATCH 4/6] tweak test_decoding tests
---
contrib/test_decoding/expected/sequence.out | 48 +++++++++++++--------
1 file changed, 29 insertions(+), 19 deletions(-)
diff --git a/contrib/test_decoding/expected/sequence.out b/contrib/test_decoding/expected/sequence.out
index c3c2a18a35..971373e083 100644
--- a/contrib/test_decoding/expected/sequence.out
+++ b/contrib/test_decoding/expected/sequence.out
@@ -93,13 +93,16 @@ SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'inc
BEGIN
sequence public.test_sequence: transactional:1 last_value:1 log_cnt:0 is_called:0
COMMIT
- sequence public.test_sequence: transactional:0 last_value:33 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:0 last_value:1 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:0 last_value:2 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:0 last_value:3 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:0 last_value:4 log_cnt:0 is_called:1
BEGIN
sequence public.test_sequence: transactional:1 last_value:4 log_cnt:0 is_called:1
COMMIT
- sequence public.test_sequence: transactional:0 last_value:334 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:0 last_value:14 log_cnt:0 is_called:1
BEGIN
- sequence public.test_sequence: transactional:1 last_value:14 log_cnt:32 is_called:1
+ sequence public.test_sequence: transactional:1 last_value:14 log_cnt:0 is_called:1
COMMIT
BEGIN
sequence public.test_sequence: transactional:1 last_value:14 log_cnt:0 is_called:1
@@ -107,14 +110,14 @@ SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'inc
BEGIN
sequence public.test_sequence: transactional:1 last_value:4000 log_cnt:0 is_called:0
COMMIT
- sequence public.test_sequence: transactional:0 last_value:4320 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:0 last_value:4000 log_cnt:0 is_called:1
sequence public.test_sequence: transactional:0 last_value:3500 log_cnt:0 is_called:1
- sequence public.test_sequence: transactional:0 last_value:3830 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:0 last_value:3510 log_cnt:0 is_called:1
sequence public.test_sequence: transactional:0 last_value:3500 log_cnt:0 is_called:1
- sequence public.test_sequence: transactional:0 last_value:3830 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:0 last_value:3510 log_cnt:0 is_called:1
sequence public.test_sequence: transactional:0 last_value:3500 log_cnt:0 is_called:0
- sequence public.test_sequence: transactional:0 last_value:3820 log_cnt:0 is_called:1
-(24 rows)
+ sequence public.test_sequence: transactional:0 last_value:3500 log_cnt:0 is_called:1
+(27 rows)
DROP SEQUENCE test_sequence;
-- rollback on sequence creation and update
@@ -226,7 +229,7 @@ ROLLBACK;
SELECT * from test_table_a_seq;
last_value | log_cnt | is_called
------------+---------+-----------
- 3003 | 30 | t
+ 3003 | 0 | t
(1 row)
SELECT nextval('test_table_a_seq');
@@ -242,12 +245,17 @@ SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'inc
BEGIN
sequence public.test_table_a_seq: transactional:1 last_value:1 log_cnt:0 is_called:0
COMMIT
- sequence public.test_table_a_seq: transactional:0 last_value:33 log_cnt:0 is_called:1
+ sequence public.test_table_a_seq: transactional:0 last_value:1 log_cnt:0 is_called:1
+ sequence public.test_table_a_seq: transactional:0 last_value:2 log_cnt:0 is_called:1
+ sequence public.test_table_a_seq: transactional:0 last_value:3 log_cnt:0 is_called:1
sequence public.test_table_a_seq: transactional:0 last_value:3000 log_cnt:0 is_called:1
- sequence public.test_table_a_seq: transactional:0 last_value:3033 log_cnt:0 is_called:1
+ sequence public.test_table_a_seq: transactional:0 last_value:3001 log_cnt:0 is_called:1
+ sequence public.test_table_a_seq: transactional:0 last_value:3002 log_cnt:0 is_called:1
+ sequence public.test_table_a_seq: transactional:0 last_value:3003 log_cnt:0 is_called:1
+ sequence public.test_table_a_seq: transactional:0 last_value:3004 log_cnt:0 is_called:1
BEGIN
COMMIT
-(8 rows)
+(13 rows)
-- savepoint test on table with serial column
BEGIN;
@@ -260,15 +268,17 @@ ROLLBACK TO SAVEPOINT a;
DROP TABLE test_table;
COMMIT;
SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
- data
----------------------------------------------------------------------------------------
+ data
+--------------------------------------------------------------------------------------
BEGIN
sequence public.test_table_a_seq: transactional:1 last_value:1 log_cnt:0 is_called:0
- sequence public.test_table_a_seq: transactional:1 last_value:33 log_cnt:0 is_called:1
+ sequence public.test_table_a_seq: transactional:1 last_value:1 log_cnt:0 is_called:1
table public.test_table: INSERT: a[integer]:1 b[integer]:100
+ sequence public.test_table_a_seq: transactional:1 last_value:2 log_cnt:0 is_called:1
table public.test_table: INSERT: a[integer]:2 b[integer]:200
+ sequence public.test_table_a_seq: transactional:1 last_value:3 log_cnt:0 is_called:1
COMMIT
-(6 rows)
+(8 rows)
-- savepoint test on table with serial column
BEGIN;
@@ -303,7 +313,7 @@ ROLLBACK TO SAVEPOINT a;
SELECT * FROM test_sequence;
last_value | log_cnt | is_called
------------+---------+-----------
- 3001 | 32 | t
+ 3001 | 0 | t
(1 row)
DROP SEQUENCE test_sequence;
@@ -313,9 +323,9 @@ SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'inc
--------------------------------------------------------------------------------------
BEGIN
sequence public.test_sequence: transactional:1 last_value:1 log_cnt:0 is_called:0
- sequence public.test_sequence: transactional:1 last_value:33 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:1 last_value:1 log_cnt:0 is_called:1
sequence public.test_sequence: transactional:1 last_value:3000 log_cnt:0 is_called:1
- sequence public.test_sequence: transactional:1 last_value:3033 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:1 last_value:3001 log_cnt:0 is_called:1
COMMIT
(6 rows)
--
2.31.1
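To give an idea of the user-facing side the next patch (0005) adds, the
grammar and catalog changes roughly translate to statements like these
(illustrative object names, not taken from the patch's tests):

CREATE PUBLICATION seq_pub FOR ALL SEQUENCES;
ALTER PUBLICATION seq_pub ADD SEQUENCE public.test_sequence;
SELECT * FROM pg_publication_sequences WHERE pubname = 'seq_pub';

As with tables, sequences added to an already-subscribed publication start
replicating only after ALTER SUBSCRIPTION ... REFRESH PUBLICATION on the
subscriber.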
Attachment: 0005-Add-support-for-decoding-sequences-to-built-20211222.patch (text/x-patch)
From 4eab8773976137234e9fec46b0bc01b5fdde7e79 Mon Sep 17 00:00:00 2001
From: Tomas Vondra <tomas@2ndquadrant.com>
Date: Tue, 14 Dec 2021 00:29:09 +0100
Subject: [PATCH 5/6] Add support for decoding sequences to built-in
replication
---
doc/src/sgml/catalogs.sgml | 71 ++++
doc/src/sgml/ref/alter_publication.sgml | 24 +-
doc/src/sgml/ref/alter_subscription.sgml | 4 +-
src/backend/catalog/pg_publication.c | 149 ++++++++-
src/backend/catalog/system_views.sql | 10 +
src/backend/commands/publicationcmds.c | 350 +++++++++++++++++++-
src/backend/commands/sequence.c | 79 +++++
src/backend/commands/subscriptioncmds.c | 272 +++++++++++++++
src/backend/executor/execReplication.c | 2 +-
src/backend/nodes/copyfuncs.c | 1 +
src/backend/nodes/equalfuncs.c | 1 +
src/backend/parser/gram.y | 32 ++
src/backend/replication/logical/proto.c | 52 +++
src/backend/replication/logical/tablesync.c | 118 ++++++-
src/backend/replication/logical/worker.c | 60 ++++
src/backend/replication/pgoutput/pgoutput.c | 85 ++++-
src/backend/utils/cache/relcache.c | 4 +-
src/bin/psql/tab-complete.c | 6 +-
src/include/catalog/pg_proc.dat | 5 +
src/include/catalog/pg_publication.h | 14 +
src/include/commands/sequence.h | 1 +
src/include/nodes/parsenodes.h | 6 +
src/include/replication/logicalproto.h | 19 ++
src/include/replication/pgoutput.h | 1 +
src/test/regress/expected/rules.out | 8 +
src/test/subscription/t/027_sequences.pl | 196 +++++++++++
26 files changed, 1538 insertions(+), 32 deletions(-)
create mode 100644 src/test/subscription/t/027_sequences.pl
diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 03e2537b07..0dfeb36f03 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -9514,6 +9514,11 @@ SCRAM-SHA-256$<replaceable><iteration count></replaceable>:<replaceable>&l
<entry>prepared transactions</entry>
</row>
+ <row>
+ <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+ <entry>publications and their associated sequences</entry>
+ </row>
+
<row>
<entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
<entry>publications and their associated tables</entry>
@@ -11349,6 +11354,72 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
</sect1>
+ <sect1 id="view-pg-publication-sequences">
+ <title><structname>pg_publication_sequences</structname></title>
+
+ <indexterm zone="view-pg-publication-sequences">
+ <primary>pg_publication_sequences</primary>
+ </indexterm>
+
+ <para>
+ The view <structname>pg_publication_sequences</structname> provides
+ information about the mapping between publications and the sequences they
+ contain. Unlike the underlying catalog
+ <link linkend="catalog-pg-publication-rel"><structname>pg_publication_rel</structname></link>,
+ this view expands
+ publications defined as <literal>FOR ALL SEQUENCES</literal>, so for such
+ publications there will be a row for each eligible sequence.
+ </para>
+
+ <table>
+ <title><structname>pg_publication_sequences</structname> Columns</title>
+ <tgroup cols="1">
+ <thead>
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ Column Type
+ </para>
+ <para>
+ Description
+ </para></entry>
+ </row>
+ </thead>
+
+ <tbody>
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>pubname</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+ </para>
+ <para>
+ Name of publication
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>schemaname</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+ </para>
+ <para>
+ Name of schema containing sequence
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>sequencename</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+ </para>
+ <para>
+ Name of sequence
+ </para></entry>
+ </row>
+ </tbody>
+ </tgroup>
+ </table>
+ </sect1>
+
<sect1 id="view-pg-publication-tables">
<title><structname>pg_publication_tables</structname></title>
diff --git a/doc/src/sgml/ref/alter_publication.sgml b/doc/src/sgml/ref/alter_publication.sgml
index bb4ef5e5e2..4d166ad3f9 100644
--- a/doc/src/sgml/ref/alter_publication.sgml
+++ b/doc/src/sgml/ref/alter_publication.sgml
@@ -31,7 +31,9 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
<phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [, ... ]
+ SEQUENCE <replaceable class="parameter">sequence_name</replaceable> [ * ] [, ... ]
ALL TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+ ALL SEQUENCES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
</synopsis>
</refsynopsisdiv>
@@ -56,7 +58,18 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
</para>
<para>
- The fourth variant of this command listed in the synopsis can change
+ The next three variants change which sequences are part of the publication.
+ The <literal>SET SEQUENCE</literal> clause will replace the list of sequences
+ in the publication with the specified one. The <literal>ADD SEQUENCE</literal>
+ and <literal>DROP SEQUENCE</literal> clauses will add and remove one or more
+ sequences from the publication. Note that adding sequences to a publication
+ that is already subscribed to will require an <literal>ALTER SUBSCRIPTION
+ ... REFRESH PUBLICATION</literal> action on the subscribing side in order
+ to become effective.
+ </para>
+
+ <para>
+ The seventh variant of this command listed in the synopsis can change
all of the publication properties specified in
<xref linkend="sql-createpublication"/>. Properties not mentioned in the
command retain their previous settings.
@@ -123,6 +136,15 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><replaceable class="parameter">sequence_name</replaceable></term>
+ <listitem>
+ <para>
+ Name of an existing sequence.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><literal>SET ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
<listitem>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 0b027cc346..8f28cf03f4 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -147,7 +147,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
<listitem>
<para>
Fetch missing table information from publisher. This will start
- replication of tables that were added to the subscribed-to publications
+ replication of tables and sequences that were added to the subscribed-to publications
since <command>CREATE SUBSCRIPTION</command> or
the last invocation of <command>REFRESH PUBLICATION</command>.
</para>
@@ -164,7 +164,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
Specifies whether to copy pre-existing data in the publications
that are being subscribed to when the replication starts.
The default is <literal>true</literal>. (Previously-subscribed
- tables are not copied.)
+ tables and sequences are not copied.)
</para>
</listitem>
</varlistentry>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 62f10bcbd2..6844ac6189 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -54,7 +54,8 @@ check_publication_add_relation(Relation targetrel)
{
- /* Must be a regular or partitioned table */
+ /* Must be a regular or partitioned table, or a sequence */
if (RelationGetForm(targetrel)->relkind != RELKIND_RELATION &&
- RelationGetForm(targetrel)->relkind != RELKIND_PARTITIONED_TABLE)
+ RelationGetForm(targetrel)->relkind != RELKIND_PARTITIONED_TABLE &&
+ RelationGetForm(targetrel)->relkind != RELKIND_SEQUENCE)
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
errmsg("cannot add relation \"%s\" to publication",
@@ -131,7 +132,8 @@ static bool
is_publishable_class(Oid relid, Form_pg_class reltuple)
{
return (reltuple->relkind == RELKIND_RELATION ||
- reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+ reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+ reltuple->relkind == RELKIND_SEQUENCE) &&
!IsCatalogRelationOid(relid) &&
reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
relid >= FirstNormalObjectId;
@@ -502,6 +504,11 @@ GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
Form_pg_publication_rel pubrel;
pubrel = (Form_pg_publication_rel) GETSTRUCT(tup);
+
+ /* skip sequences here */
+ if (get_rel_relkind(pubrel->prrelid) == RELKIND_SEQUENCE)
+ continue;
+
result = GetPubPartitionOptionRelations(result, pub_partopt,
pubrel->prrelid);
}
@@ -516,6 +523,49 @@ GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
return result;
}
+/*
+ * Gets list of relation oids for a publication (sequences only).
+ *
+ * This should only be used for normal publications, the FOR ALL TABLES
+ * should use GetAllSequencesPublicationRelations().
+ */
+List *
+GetPublicationSequenceRelations(Oid pubid)
+{
+ List *result;
+ Relation pubrelsrel;
+ ScanKeyData scankey;
+ SysScanDesc scan;
+ HeapTuple tup;
+
+ /* Find all sequences associated with the publication. */
+ pubrelsrel = table_open(PublicationRelRelationId, AccessShareLock);
+
+ ScanKeyInit(&scankey,
+ Anum_pg_publication_rel_prpubid,
+ BTEqualStrategyNumber, F_OIDEQ,
+ ObjectIdGetDatum(pubid));
+
+ scan = systable_beginscan(pubrelsrel, PublicationRelPrrelidPrpubidIndexId,
+ true, NULL, 1, &scankey);
+
+ result = NIL;
+ while (HeapTupleIsValid(tup = systable_getnext(scan)))
+ {
+ Form_pg_publication_rel pubrel;
+
+ pubrel = (Form_pg_publication_rel) GETSTRUCT(tup);
+
+ if (get_rel_relkind(pubrel->prrelid) == RELKIND_SEQUENCE)
+ result = lappend_oid(result, pubrel->prrelid);
+ }
+
+ systable_endscan(scan);
+ table_close(pubrelsrel, AccessShareLock);
+
+ return result;
+}
+
/*
* Gets list of publication oids for publications marked as FOR ALL TABLES.
*/
@@ -761,6 +811,46 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
return result;
}
+/*
+ * Gets the list of all sequences published by FOR ALL SEQUENCES
+ * publication(s), i.e. all publishable sequences in the database.
+ *
+ * Unlike the FOR ALL TABLES case there is no partitioning to deal with,
+ * so this is a plain scan of pg_class for publishable sequences.
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+ Relation classRel;
+ ScanKeyData key[1];
+ TableScanDesc scan;
+ HeapTuple tuple;
+ List *result = NIL;
+
+ classRel = table_open(RelationRelationId, AccessShareLock);
+
+ ScanKeyInit(&key[0],
+ Anum_pg_class_relkind,
+ BTEqualStrategyNumber, F_CHAREQ,
+ CharGetDatum(RELKIND_SEQUENCE));
+
+ scan = table_beginscan_catalog(classRel, 1, key);
+
+ while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+ {
+ Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+ Oid relid = relForm->oid;
+
+ if (is_publishable_class(relid, relForm))
+ result = lappend_oid(result, relid);
+ }
+
+ table_endscan(scan);
+
+ table_close(classRel, AccessShareLock);
+ return result;
+}
+
/*
* Get publication using oid
*
@@ -783,10 +873,12 @@ GetPublication(Oid pubid)
pub->oid = pubid;
pub->name = pstrdup(NameStr(pubform->pubname));
pub->alltables = pubform->puballtables;
+ pub->allsequences = pubform->puballsequences;
pub->pubactions.pubinsert = pubform->pubinsert;
pub->pubactions.pubupdate = pubform->pubupdate;
pub->pubactions.pubdelete = pubform->pubdelete;
pub->pubactions.pubtruncate = pubform->pubtruncate;
+ pub->pubactions.pubsequence = pubform->pubsequence;
pub->pubviaroot = pubform->pubviaroot;
ReleaseSysCache(tup);
@@ -936,3 +1028,56 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
SRF_RETURN_DONE(funcctx);
}
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+ FuncCallContext *funcctx;
+ char *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+ Publication *publication;
+ List *sequences;
+
+ /* stuff done only on the first call of the function */
+ if (SRF_IS_FIRSTCALL())
+ {
+ MemoryContext oldcontext;
+
+ /* create a function context for cross-call persistence */
+ funcctx = SRF_FIRSTCALL_INIT();
+
+ /* switch to memory context appropriate for multiple function calls */
+ oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+ publication = GetPublicationByName(pubname, false);
+
+ /*
+ * For a FOR ALL SEQUENCES publication we return all publishable
+ * sequences in the database, otherwise only the sequences explicitly
+ * added to the publication.
+ */
+ if (publication->allsequences)
+ sequences = GetAllSequencesPublicationRelations();
+ else
+ sequences = GetPublicationSequenceRelations(publication->oid);
+
+ funcctx->user_fctx = (void *) sequences;
+
+ MemoryContextSwitchTo(oldcontext);
+ }
+
+ /* stuff done on every call of the function */
+ funcctx = SRF_PERCALL_SETUP();
+ sequences = (List *) funcctx->user_fctx;
+
+ if (funcctx->call_cntr < list_length(sequences))
+ {
+ Oid relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+ SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+ }
+
+ SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 61b515cdb8..0df91d7a5a 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -372,6 +372,16 @@ CREATE VIEW pg_publication_tables AS
pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
WHERE C.oid = GPT.relid;
+CREATE VIEW pg_publication_sequences AS
+ SELECT
+ P.pubname AS pubname,
+ N.nspname AS schemaname,
+ C.relname AS sequencename
+ FROM pg_publication P,
+ LATERAL pg_get_publication_sequences(P.pubname) GPT,
+ pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+ WHERE C.oid = GPT.relid;
+
CREATE VIEW pg_locks AS
SELECT * FROM pg_lock_status() AS L;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 404bb5d0c8..2c183d63f7 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -16,6 +16,7 @@
#include "access/genam.h"
#include "access/htup_details.h"
+#include "access/relation.h"
#include "access/table.h"
#include "access/xact.h"
#include "catalog/catalog.h"
@@ -59,6 +60,12 @@ static void PublicationAddSchemas(Oid pubid, List *schemas, bool if_not_exists,
AlterPublicationStmt *stmt);
static void PublicationDropSchemas(Oid pubid, List *schemas, bool missing_ok);
+static List *OpenSequenceList(List *sequences);
+static void CloseSequenceList(List *rels);
+static void PublicationAddSequences(Oid pubid, List *rels, bool if_not_exists,
+ AlterPublicationStmt *stmt);
+static void PublicationDropSequences(Oid pubid, List *rels, bool missing_ok);
+
static void
parse_publication_options(ParseState *pstate,
List *options,
@@ -77,6 +84,7 @@ parse_publication_options(ParseState *pstate,
pubactions->pubupdate = true;
pubactions->pubdelete = true;
pubactions->pubtruncate = true;
+ pubactions->pubsequence = true;
*publish_via_partition_root = false;
/* Parse options */
@@ -101,6 +109,7 @@ parse_publication_options(ParseState *pstate,
pubactions->pubupdate = false;
pubactions->pubdelete = false;
pubactions->pubtruncate = false;
+ pubactions->pubsequence = false;
*publish_given = true;
publish = defGetString(defel);
@@ -123,6 +132,8 @@ parse_publication_options(ParseState *pstate,
pubactions->pubdelete = true;
else if (strcmp(publish_opt, "truncate") == 0)
pubactions->pubtruncate = true;
+ else if (strcmp(publish_opt, "sequence") == 0)
+ pubactions->pubsequence = true;
else
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
@@ -149,7 +160,9 @@ parse_publication_options(ParseState *pstate,
*/
static void
ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
- List **rels, List **schemas)
+ List **tables, List **sequences,
+ List **tables_schemas, List **sequences_schemas,
+ List **schemas)
{
ListCell *cell;
PublicationObjSpec *pubobj;
@@ -167,12 +180,23 @@ ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
switch (pubobj->pubobjtype)
{
case PUBLICATIONOBJ_TABLE:
- *rels = lappend(*rels, pubobj->pubtable);
+ *tables = lappend(*tables, pubobj->pubtable);
+ break;
+ case PUBLICATIONOBJ_SEQUENCE:
+ *sequences = lappend(*sequences, pubobj->pubtable);
break;
case PUBLICATIONOBJ_TABLE_IN_SCHEMA:
schemaid = get_namespace_oid(pubobj->name, false);
/* Filter out duplicates if user specifies "sch1, sch1" */
+ *tables_schemas = list_append_unique_oid(*tables_schemas, schemaid);
+ *schemas = list_append_unique_oid(*schemas, schemaid);
+ break;
+ case PUBLICATIONOBJ_SEQUENCE_IN_SCHEMA:
+ schemaid = get_namespace_oid(pubobj->name, false);
+
+ /* Filter out duplicates if user specifies "sch1, sch1" */
+ *sequences_schemas = list_append_unique_oid(*sequences_schemas, schemaid);
*schemas = list_append_unique_oid(*schemas, schemaid);
break;
case PUBLICATIONOBJ_TABLE_IN_CUR_SCHEMA:
@@ -186,6 +210,21 @@ ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
list_free(search_path);
/* Filter out duplicates if user specifies "sch1, sch1" */
+ *tables_schemas = list_append_unique_oid(*tables_schemas, schemaid);
+ *schemas = list_append_unique_oid(*schemas, schemaid);
+ break;
+ case PUBLICATIONOBJ_SEQUENCE_IN_CUR_SCHEMA:
+ search_path = fetch_search_path(false);
+ if (search_path == NIL) /* nothing valid in search_path? */
+ ereport(ERROR,
+ errcode(ERRCODE_UNDEFINED_SCHEMA),
+ errmsg("no schema has been selected for CURRENT_SCHEMA"));
+
+ schemaid = linitial_oid(search_path);
+ list_free(search_path);
+
+ /* Filter out duplicates if user specifies "sch1, sch1" */
+ *sequences_schemas = list_append_unique_oid(*sequences_schemas, schemaid);
*schemas = list_append_unique_oid(*schemas, schemaid);
break;
default:
@@ -251,7 +290,10 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
bool publish_via_partition_root_given;
bool publish_via_partition_root;
AclResult aclresult;
- List *relations = NIL;
+ List *tables = NIL;
+ List *sequences = NIL;
+ List *tables_schemaidlist = NIL;
+ List *sequences_schemaidlist = NIL;
List *schemaidlist = NIL;
/* must have CREATE privilege on database */
@@ -306,6 +348,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
BoolGetDatum(pubactions.pubdelete);
values[Anum_pg_publication_pubtruncate - 1] =
BoolGetDatum(pubactions.pubtruncate);
+ values[Anum_pg_publication_pubsequence - 1] =
+ BoolGetDatum(pubactions.pubsequence);
values[Anum_pg_publication_pubviaroot - 1] =
BoolGetDatum(publish_via_partition_root);
@@ -330,26 +374,40 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
}
else
{
- ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
+ ObjectsInPublicationToOids(stmt->pubobjects, pstate,
+ &tables, &sequences,
+ &tables_schemaidlist,
+ &sequences_schemaidlist,
&schemaidlist);
/* FOR ALL TABLES IN SCHEMA requires superuser */
- if (list_length(schemaidlist) > 0 && !superuser())
+ if (list_length(tables_schemaidlist) > 0 && !superuser())
ereport(ERROR,
errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
errmsg("must be superuser to create FOR ALL TABLES IN SCHEMA publication"));
- if (list_length(relations) > 0)
+ if (list_length(tables) > 0)
{
List *rels;
- rels = OpenTableList(relations);
- CheckObjSchemaNotAlreadyInPublication(rels, schemaidlist,
+ rels = OpenTableList(tables);
+ CheckObjSchemaNotAlreadyInPublication(rels, tables_schemaidlist,
PUBLICATIONOBJ_TABLE);
PublicationAddTables(puboid, rels, true, NULL);
CloseTableList(rels);
}
+ if (list_length(sequences) > 0)
+ {
+ List *rels;
+
+ rels = OpenSequenceList(sequences);
+ CheckObjSchemaNotAlreadyInPublication(rels, sequences_schemaidlist,
+ PUBLICATIONOBJ_SEQUENCE);
+ PublicationAddSequences(puboid, rels, true, NULL);
+ CloseSequenceList(rels);
+ }
+
if (list_length(schemaidlist) > 0)
{
/*
@@ -651,12 +709,13 @@ AlterPublicationSchemas(AlterPublicationStmt *stmt,
*/
static void
CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
- List *tables, List *schemaidlist)
+ List *tables, List *tables_schemaidlist,
+ List *sequences, List *sequences_schemaidlist)
{
Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
if ((stmt->action == DEFELEM_ADD || stmt->action == DEFELEM_SET) &&
- schemaidlist && !superuser())
+ (tables_schemaidlist || sequences_schemaidlist) && !superuser())
ereport(ERROR,
(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
errmsg("must be superuser to add or set schemas")));
@@ -665,13 +724,24 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
* Check that user is allowed to manipulate the publication tables in
* schema
*/
- if (schemaidlist && pubform->puballtables)
+ if (tables_schemaidlist && pubform->puballtables)
ereport(ERROR,
(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
errmsg("publication \"%s\" is defined as FOR ALL TABLES",
NameStr(pubform->pubname)),
errdetail("Tables from schema cannot be added to, dropped from, or set on FOR ALL TABLES publications.")));
+ /*
+ * Check that user is allowed to manipulate the publication sequences in
+ * schema
+ */
+ if (sequences_schemaidlist && pubform->puballsequences)
+ ereport(ERROR,
+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("publication \"%s\" is defined as FOR ALL SEQUENCES",
+ NameStr(pubform->pubname)),
+ errdetail("Sequences from schema cannot be added to, dropped from, or set on FOR ALL SEQUENCES publications.")));
+
/* Check that user is allowed to manipulate the publication tables. */
if (tables && pubform->puballtables)
ereport(ERROR,
@@ -679,6 +749,108 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
errmsg("publication \"%s\" is defined as FOR ALL TABLES",
NameStr(pubform->pubname)),
errdetail("Tables cannot be added to or dropped from FOR ALL TABLES publications.")));
+
+ /* Check that user is allowed to manipulate the publication sequences. */
+ if (sequences && pubform->puballsequences)
+ ereport(ERROR,
+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("publication \"%s\" is defined as FOR ALL SEQUENCES",
+ NameStr(pubform->pubname)),
+ errdetail("Sequences cannot be added to or dropped from FOR ALL SEQUENCES publications.")));
+}
+
+/*
+ * Add or remove sequence to/from publication.
+ */
+static void
+AlterPublicationSequences(AlterPublicationStmt *stmt, HeapTuple tup,
+ List *sequences, List *schemaidlist)
+{
+ List *rels = NIL;
+ Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
+ Oid pubid = pubform->oid;
+
+ /*
+ * It is quite possible that for the SET case the user has not specified
+ * any sequences, in which case we need to remove all the existing ones.
+ */
+ if (!sequences && stmt->action != DEFELEM_SET)
+ return;
+
+ rels = OpenSequenceList(sequences);
+
+ if (stmt->action == DEFELEM_ADD)
+ {
+ List *schemas = NIL;
+
+ /*
+ * Check if the relation is member of the existing schema in the
+ * publication or member of the schema list specified.
+ */
+ schemas = list_concat_copy(schemaidlist, GetPublicationSchemas(pubid));
+ CheckObjSchemaNotAlreadyInPublication(rels, schemas,
+ PUBLICATIONOBJ_SEQUENCE);
+ PublicationAddSequences(pubid, rels, false, stmt);
+ }
+ else if (stmt->action == DEFELEM_DROP)
+ PublicationDropSequences(pubid, rels, false);
+ else /* DEFELEM_SET */
+ {
+ List *oldrelids = GetPublicationRelations(pubid,
+ PUBLICATION_PART_ROOT);
+ List *delrels = NIL;
+ ListCell *oldlc;
+
+ CheckObjSchemaNotAlreadyInPublication(rels, schemaidlist,
+ PUBLICATIONOBJ_SEQUENCE);
+
+ /* Calculate which relations to drop. */
+ foreach(oldlc, oldrelids)
+ {
+ Oid oldrelid = lfirst_oid(oldlc);
+ ListCell *newlc;
+ bool found = false;
+
+ foreach(newlc, rels)
+ {
+ PublicationRelInfo *newpubrel;
+
+ newpubrel = (PublicationRelInfo *) lfirst(newlc);
+ if (RelationGetRelid(newpubrel->relation) == oldrelid)
+ {
+ found = true;
+ break;
+ }
+ }
+ /* Not in the new list, so open it and add it to the list to drop */
+ if (!found)
+ {
+ Relation oldrel;
+ PublicationRelInfo *pubrel;
+
+ /* Wrap relation into PublicationRelInfo */
+ oldrel = table_open(oldrelid, ShareUpdateExclusiveLock);
+
+ pubrel = palloc(sizeof(PublicationRelInfo));
+ pubrel->relation = oldrel;
+
+ delrels = lappend(delrels, pubrel);
+ }
+ }
+
+ /* And drop them. */
+ PublicationDropSequences(pubid, delrels, true);
+
+ /*
+ * Don't bother calculating the difference for adding, we'll catch and
+ * skip existing ones when doing catalog update.
+ */
+ PublicationAddSequences(pubid, rels, true, stmt);
+
+ CloseSequenceList(delrels);
+ }
+
+ CloseSequenceList(rels);
}
/*
@@ -716,13 +888,21 @@ AlterPublication(ParseState *pstate, AlterPublicationStmt *stmt)
AlterPublicationOptions(pstate, stmt, rel, tup);
else
{
- List *relations = NIL;
+ List *tables = NIL;
+ List *sequences = NIL;
+ List *tables_schemaidlist = NIL;
+ List *sequences_schemaidlist = NIL;
List *schemaidlist = NIL;
- ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
+ ObjectsInPublicationToOids(stmt->pubobjects, pstate,
+ &tables, &sequences,
+ &tables_schemaidlist,
+ &sequences_schemaidlist,
&schemaidlist);
- CheckAlterPublication(stmt, tup, relations, schemaidlist);
+ CheckAlterPublication(stmt, tup,
+ tables, tables_schemaidlist,
+ sequences, sequences_schemaidlist);
/*
* Lock the publication so nobody else can do anything with it. This
@@ -747,7 +927,9 @@ AlterPublication(ParseState *pstate, AlterPublicationStmt *stmt)
errmsg("publication \"%s\" does not exist",
stmt->pubname));
- AlterPublicationTables(stmt, tup, relations, schemaidlist);
+ AlterPublicationTables(stmt, tup, tables, tables_schemaidlist);
+ AlterPublicationSequences(stmt, tup, sequences, sequences_schemaidlist);
+
AlterPublicationSchemas(stmt, tup, schemaidlist);
}
@@ -1155,6 +1337,144 @@ PublicationDropSchemas(Oid pubid, List *schemas, bool missing_ok)
}
}
+/*
+ * Open sequences specified by a PublicationTable list.
+ * In the returned list of PublicationRelInfo, sequences are locked
+ * in ShareUpdateExclusiveLock mode in order to add them to a publication.
+ */
+static List *
+OpenSequenceList(List *sequences)
+{
+ List *relids = NIL;
+ List *rels = NIL;
+ ListCell *lc;
+
+ /*
+ * Open, share-lock, and check all the explicitly-specified relations
+ */
+ foreach(lc, sequences)
+ {
+ PublicationTable *s = lfirst_node(PublicationTable, lc);
+ Relation rel;
+ Oid myrelid;
+ PublicationRelInfo *pub_rel;
+
+ /* Allow query cancel in case this takes a long time */
+ CHECK_FOR_INTERRUPTS();
+
+ rel = table_openrv(s->relation, ShareUpdateExclusiveLock);
+ myrelid = RelationGetRelid(rel);
+
+ /*
+ * Filter out duplicates if user specifies "foo, foo".
+ *
+ * Note that this algorithm is known to not be very efficient (O(N^2))
+ * but given that it only works on the list of sequences given to us by
+ * the user, it's deemed acceptable.
+ */
+ if (list_member_oid(relids, myrelid))
+ {
+ table_close(rel, ShareUpdateExclusiveLock);
+ continue;
+ }
+
+ pub_rel = palloc(sizeof(PublicationRelInfo));
+ pub_rel->relation = rel;
+ rels = lappend(rels, pub_rel);
+ relids = lappend_oid(relids, myrelid);
+ }
+
+ list_free(relids);
+
+ return rels;
+}
+
+/*
+ * Close all relations in the list.
+ */
+static void
+CloseSequenceList(List *rels)
+{
+ ListCell *lc;
+
+ foreach(lc, rels)
+ {
+ PublicationRelInfo *pub_rel;
+
+ pub_rel = (PublicationRelInfo *) lfirst(lc);
+ table_close(pub_rel->relation, NoLock);
+ }
+}
+
+/*
+ * Add listed sequences to the publication.
+ */
+static void
+PublicationAddSequences(Oid pubid, List *rels, bool if_not_exists,
+ AlterPublicationStmt *stmt)
+{
+ ListCell *lc;
+
+ Assert(!stmt || !stmt->for_all_sequences);
+
+ foreach(lc, rels)
+ {
+ PublicationRelInfo *pub_rel = (PublicationRelInfo *) lfirst(lc);
+ Relation rel = pub_rel->relation;
+ ObjectAddress obj;
+
+ /* Must be owner of the sequence or superuser. */
+ if (!pg_class_ownercheck(RelationGetRelid(rel), GetUserId()))
+ aclcheck_error(ACLCHECK_NOT_OWNER, get_relkind_objtype(rel->rd_rel->relkind),
+ RelationGetRelationName(rel));
+
+ obj = publication_add_relation(pubid, pub_rel, if_not_exists);
+ if (stmt)
+ {
+ EventTriggerCollectSimpleCommand(obj, InvalidObjectAddress,
+ (Node *) stmt);
+
+ InvokeObjectPostCreateHook(PublicationRelRelationId,
+ obj.objectId, 0);
+ }
+ }
+}
+
+/*
+ * Remove listed sequences from the publication.
+ */
+static void
+PublicationDropSequences(Oid pubid, List *rels, bool missing_ok)
+{
+ ObjectAddress obj;
+ ListCell *lc;
+ Oid prid;
+
+ foreach(lc, rels)
+ {
+ PublicationRelInfo *pubrel = (PublicationRelInfo *) lfirst(lc);
+ Relation rel = pubrel->relation;
+ Oid relid = RelationGetRelid(rel);
+
+ prid = GetSysCacheOid2(PUBLICATIONRELMAP, Anum_pg_publication_rel_oid,
+ ObjectIdGetDatum(relid),
+ ObjectIdGetDatum(pubid));
+ if (!OidIsValid(prid))
+ {
+ if (missing_ok)
+ continue;
+
+ ereport(ERROR,
+ (errcode(ERRCODE_UNDEFINED_OBJECT),
+ errmsg("relation \"%s\" is not part of the publication",
+ RelationGetRelationName(rel))));
+ }
+
+ ObjectAddressSet(obj, PublicationRelRelationId, prid);
+ performDeletion(&obj, DROP_CASCADE, 0);
+ }
+}
+
+
/*
* Internal workhorse for changing a publication owner
*/
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index a5435910a2..9a689e6305 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -339,6 +339,85 @@ ResetSequence(Oid seq_relid)
relation_close(seq_rel, NoLock);
}
+/*
+ * Set a sequence to the given state (last_value, log_cnt, is_called).
+ *
+ * The change is made transactionally, so that on failure of the current
+ * transaction, the sequence will be restored to its previous state.
+ * We do that by creating a whole new relfilenode for the sequence; so this
+ * works much like the rewriting forms of ALTER TABLE.
+ *
+ * Caller is assumed to have acquired AccessExclusiveLock on the sequence,
+ * which must not be released until end of transaction. Caller is also
+ * responsible for permissions checking.
+ */
+void
+ResetSequence2(Oid seq_relid, int64 last_value, int64 log_cnt, bool is_called)
+{
+ Relation seq_rel;
+ SeqTable elm;
+ Form_pg_sequence_data seq;
+ Buffer buf;
+ HeapTupleData seqdatatuple;
+ HeapTuple tuple;
+
+ /*
+ * Read the old sequence. This does a bit more work than really
+ * necessary, but it's simple, and we do want to double-check that it's
+ * indeed a sequence.
+ */
+ init_sequence(seq_relid, &elm, &seq_rel);
+ (void) read_seq_tuple(seq_rel, &buf, &seqdatatuple);
+
+ /*
+ * Copy the existing sequence tuple.
+ */
+ tuple = heap_copytuple(&seqdatatuple);
+
+ /* Now we're done with the old page */
+ UnlockReleaseBuffer(buf);
+
+ /*
+ * Modify the copied tuple to execute the restart (compare the RESTART
+ * action in AlterSequence)
+ */
+ seq = (Form_pg_sequence_data) GETSTRUCT(tuple);
+ seq->last_value = last_value;
+ seq->is_called = is_called;
+ seq->log_cnt = log_cnt;
+
+ /*
+ * Create a new storage file for the sequence.
+ */
+ RelationSetNewRelfilenode(seq_rel, seq_rel->rd_rel->relpersistence);
+
+ /*
+ * Ensure sequence's relfrozenxid is at 0, since it won't contain any
+ * unfrozen XIDs. Same with relminmxid, since a sequence will never
+ * contain multixacts.
+ */
+ Assert(seq_rel->rd_rel->relfrozenxid == InvalidTransactionId);
+ Assert(seq_rel->rd_rel->relminmxid == InvalidMultiXactId);
+
+ /*
+ * Insert the modified tuple into the new storage file.
+ *
+ * XXX Maybe this should also use created=true, just like the other places
+ * calling fill_seq_with_data. That's probably needed for correct cascading
+ * replication.
+ *
+ * XXX That'd mean all fill_seq_with_data callers use created=true, making
+ * the parameter unnecessary.
+ */
+ fill_seq_with_data(seq_rel, tuple);
+
+ /* Clear local cache so that we don't think we have cached numbers */
+ /* Note that we do not change the currval() state */
+ elm->cached = elm->last;
+
+ relation_close(seq_rel, NoLock);
+}
+
/*
* Initialize a sequence's relation with the specified tuple as content
*/
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 2b658080fe..fc6374c445 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -85,6 +85,7 @@ typedef struct SubOpts
} SubOpts;
static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
static void check_duplicates_in_publist(List *publist, Datum *datums);
static List *merge_publications(List *oldpublist, List *newpublist, bool addpub, const char *subname);
static void ReportSlotConnectionError(List *rstates, Oid subid, char *slotname, char *err);
@@ -496,6 +497,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
char *err;
WalReceiverConn *wrconn;
List *tables;
+ List *sequences;
ListCell *lc;
char table_state;
@@ -534,6 +536,26 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
InvalidXLogRecPtr);
}
+ /*
+ * Get the sequence list from publisher and build local sequence
+ * status info.
+ */
+ sequences = fetch_sequence_list(wrconn, publications);
+ foreach(lc, sequences)
+ {
+ RangeVar *rv = (RangeVar *) lfirst(lc);
+ Oid relid;
+
+ relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+ /* Check for supported relkind. */
+ CheckSubscriptionRelkind(get_rel_relkind(relid),
+ rv->schemaname, rv->relname);
+
+ AddSubscriptionRelState(subid, relid, table_state,
+ InvalidXLogRecPtr);
+ }
+
/*
* If requested, create permanent slot for the subscription. We
* won't use the initial snapshot for anything, so no need to
@@ -706,6 +728,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data)
{
Oid relid = subrel_local_oids[off];
+ /* XXX ignore sequences - maybe do this in GetSubscriptionRelations? */
+ if (get_rel_relkind(relid) == RELKIND_SEQUENCE)
+ continue;
+
if (!bsearch(&relid, pubrel_local_oids,
list_length(pubrel_names), sizeof(Oid), oid_cmp))
{
@@ -797,6 +823,183 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data)
ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
}
}
+
+ /*
+ * XXX now do the same thing for sequences, maybe before the preceding
+ * block, or earlier?
+ */
+
+ /* Get the sequence list from the publisher. */
+ pubrel_names = fetch_sequence_list(wrconn, sub->publications);
+
+ /* Get the local subscription relation list. */
+ subrel_states = GetSubscriptionRelations(sub->oid);
+
+ /*
+ * Build qsorted array of local table oids for faster lookup. This can
+ * potentially contain all tables in the database so speed of lookup
+ * is important.
+ */
+ subrel_local_oids = palloc(list_length(subrel_states) * sizeof(Oid));
+ off = 0;
+ foreach(lc, subrel_states)
+ {
+ SubscriptionRelState *relstate = (SubscriptionRelState *) lfirst(lc);
+
+ subrel_local_oids[off++] = relstate->relid;
+ }
+ qsort(subrel_local_oids, list_length(subrel_states),
+ sizeof(Oid), oid_cmp);
+
+ /*
+ * Rels that we want to remove from subscription and drop any slots
+ * and origins corresponding to them.
+ */
+ sub_remove_rels = palloc(list_length(subrel_states) * sizeof(SubRemoveRels));
+
+ /*
+ * Walk over the remote sequences and try to match them to locally
+ * known sequences. If a sequence is not known locally, create a new
+ * state for it.
+ *
+ * Also build an array of local OIDs of the remote sequences for the
+ * next step.
+ */
+ off = 0;
+ pubrel_local_oids = palloc(list_length(pubrel_names) * sizeof(Oid));
+
+ foreach(lc, pubrel_names)
+ {
+ RangeVar *rv = (RangeVar *) lfirst(lc);
+ Oid relid;
+
+ relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+ /* Check for supported relkind. */
+ CheckSubscriptionRelkind(get_rel_relkind(relid),
+ rv->schemaname, rv->relname);
+
+ pubrel_local_oids[off++] = relid;
+
+ if (!bsearch(&relid, subrel_local_oids,
+ list_length(subrel_states), sizeof(Oid), oid_cmp))
+ {
+ AddSubscriptionRelState(sub->oid, relid,
+ copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
+ InvalidXLogRecPtr);
+ ereport(DEBUG1,
+ (errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
+ rv->schemaname, rv->relname, sub->name)));
+ }
+ }
+
+ /*
+ * Next, remove state for sequences we no longer care about, using
+ * the data we collected above.
+ */
+ qsort(pubrel_local_oids, list_length(pubrel_names),
+ sizeof(Oid), oid_cmp);
+
+ remove_rel_len = 0;
+ for (off = 0; off < list_length(subrel_states); off++)
+ {
+ Oid relid = subrel_local_oids[off];
+
+ /* XXX ignore non-sequences - maybe do this in GetSubscriptionRelations? */
+ if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+ continue;
+
+ if (!bsearch(&relid, pubrel_local_oids,
+ list_length(pubrel_names), sizeof(Oid), oid_cmp))
+ {
+ char state;
+ XLogRecPtr statelsn;
+
+ /*
+ * Lock pg_subscription_rel with AccessExclusiveLock to
+ * prevent any race conditions with the apply worker
+ * re-launching workers at the same time this code is trying
+ * to remove those tables.
+ *
+ * Even if new worker for this particular rel is restarted it
+ * won't be able to make any progress as we hold exclusive
+ * lock on subscription_rel till the transaction end. It will
+ * simply exit as there is no corresponding rel entry.
+ *
+ * This locking also ensures that the state of rels won't
+ * change till we are done with this refresh operation.
+ */
+ if (!rel)
+ rel = table_open(SubscriptionRelRelationId, AccessExclusiveLock);
+
+ /* Last known rel state. */
+ state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
+
+ sub_remove_rels[remove_rel_len].relid = relid;
+ sub_remove_rels[remove_rel_len++].state = state;
+
+ RemoveSubscriptionRel(sub->oid, relid);
+
+ logicalrep_worker_stop(sub->oid, relid);
+
+ /*
+ * For READY state, we would have already dropped the
+ * tablesync origin.
+ */
+ if (state != SUBREL_STATE_READY)
+ {
+ char originname[NAMEDATALEN];
+
+ /*
+ * Drop the tablesync's origin tracking if exists.
+ *
+ * It is possible that the origin is not yet created for
+ * tablesync worker, this can happen for the states before
+ * SUBREL_STATE_FINISHEDCOPY. The apply worker can also
+ * concurrently try to drop the origin and by this time
+ * the origin might be already removed. For these reasons,
+ * passing missing_ok = true.
+ */
+ ReplicationOriginNameForTablesync(sub->oid, relid, originname,
+ sizeof(originname));
+ replorigin_drop_by_name(originname, true, false);
+ }
+
+ ereport(DEBUG1,
+ (errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
+ get_namespace_name(get_rel_namespace(relid)),
+ get_rel_name(relid),
+ sub->name)));
+ }
+ }
+
+ /*
+ * Drop the tablesync slots associated with removed tables. This has
+ * to be at the end because otherwise if there is an error while doing
+ * the database operations we won't be able to rollback dropped slots.
+ */
+ for (off = 0; off < remove_rel_len; off++)
+ {
+ if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
+ sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+ {
+ char syncslotname[NAMEDATALEN] = {0};
+
+ /*
+ * For READY/SYNCDONE states we know the tablesync slot has
+ * already been dropped by the tablesync worker.
+ *
+ * For other states, there is no certainty, maybe the slot
+ * does not exist yet. Also, if we fail after removing some of
+ * the slots, next time, it will again try to drop already
+ * dropped slots and fail. For these reasons, we allow
+ * missing_ok = true for the drop.
+ */
+ ReplicationSlotNameForTablesync(sub->oid, sub_remove_rels[off].relid,
+ syncslotname, sizeof(syncslotname));
+ ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
+ }
+ }
+
}
PG_FINALLY();
{
@@ -1614,6 +1817,75 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
return tablelist;
}
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+ WalRcvExecResult *res;
+ StringInfoData cmd;
+ TupleTableSlot *slot;
+ Oid tableRow[2] = {TEXTOID, TEXTOID};
+ ListCell *lc;
+ bool first;
+ List *tablelist = NIL;
+
+ Assert(list_length(publications) > 0);
+
+ initStringInfo(&cmd);
+ appendStringInfoString(&cmd, "SELECT DISTINCT s.schemaname, s.sequencename\n"
+ " FROM pg_catalog.pg_publication_sequences s\n"
+ " WHERE s.pubname IN (");
+ first = true;
+ foreach(lc, publications)
+ {
+ char *pubname = strVal(lfirst(lc));
+
+ if (first)
+ first = false;
+ else
+ appendStringInfoString(&cmd, ", ");
+
+ appendStringInfoString(&cmd, quote_literal_cstr(pubname));
+ }
+ appendStringInfoChar(&cmd, ')');
+
+ res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+ pfree(cmd.data);
+
+ if (res->status != WALRCV_OK_TUPLES)
+ ereport(ERROR,
+ (errmsg("could not receive list of replicated tables from the publisher: %s",
+ res->err)));
+
+ /* Process the returned sequences. */
+ slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+ while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+ {
+ char *nspname;
+ char *relname;
+ bool isnull;
+ RangeVar *rv;
+
+ nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+ Assert(!isnull);
+ relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+ Assert(!isnull);
+
+ rv = makeRangeVar(nspname, relname, -1);
+ tablelist = lappend(tablelist, rv);
+
+ ExecClearTuple(slot);
+ }
+ ExecDropSingleTupleTableSlot(slot);
+
+ walrcv_clear_result(res);
+
+ return tablelist;
+}
+
/*
* This is to report the connection failure while dropping replication slots.
* Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index 574d7d27fd..6f0d42d551 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -608,7 +608,7 @@ void
CheckSubscriptionRelkind(char relkind, const char *nspname,
const char *relname)
{
- if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+ if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE && relkind != RELKIND_SEQUENCE)
ereport(ERROR,
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index df0b747883..6a5718572f 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -4846,6 +4846,7 @@ _copyCreatePublicationStmt(const CreatePublicationStmt *from)
COPY_NODE_FIELD(options);
COPY_NODE_FIELD(pubobjects);
COPY_SCALAR_FIELD(for_all_tables);
+ COPY_SCALAR_FIELD(for_all_sequences);
return newnode;
}
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index cb7ddd463c..5dccc0d0ad 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -2336,6 +2336,7 @@ _equalAlterPublicationStmt(const AlterPublicationStmt *a,
COMPARE_NODE_FIELD(options);
COMPARE_NODE_FIELD(pubobjects);
COMPARE_SCALAR_FIELD(for_all_tables);
+ COMPARE_SCALAR_FIELD(for_all_sequences);
COMPARE_SCALAR_FIELD(action);
return true;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 3d4dd43e47..f037c17985 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -9762,6 +9762,26 @@ PublicationObjSpec:
$$->pubobjtype = PUBLICATIONOBJ_TABLE_IN_CUR_SCHEMA;
$$->location = @5;
}
+ | SEQUENCE relation_expr
+ {
+ $$ = makeNode(PublicationObjSpec);
+ $$->pubobjtype = PUBLICATIONOBJ_SEQUENCE;
+ $$->pubtable = makeNode(PublicationTable);
+ $$->pubtable->relation = $2;
+ }
+ | ALL SEQUENCES IN_P SCHEMA ColId
+ {
+ $$ = makeNode(PublicationObjSpec);
+ $$->pubobjtype = PUBLICATIONOBJ_SEQUENCE_IN_SCHEMA;
+ $$->name = $5;
+ $$->location = @5;
+ }
+ | ALL SEQUENCES IN_P SCHEMA CURRENT_SCHEMA
+ {
+ $$ = makeNode(PublicationObjSpec);
+ $$->pubobjtype = PUBLICATIONOBJ_SEQUENCE_IN_CUR_SCHEMA;
+ $$->location = @5;
+ }
| ColId
{
$$ = makeNode(PublicationObjSpec);
@@ -10097,6 +10117,12 @@ UnlistenStmt:
}
;
+/*
+ * FIXME
+ *
+ * opt_publication_for_sequences and publication_for_sequences should be
+ * added as sequence counterparts of the existing table rules
+ */
/*****************************************************************************
*
@@ -10105,6 +10131,12 @@ UnlistenStmt:
* BEGIN / COMMIT / ROLLBACK
* (also older versions END / ABORT)
*
+ * ALTER PUBLICATION name ADD SEQUENCE sequence [, sequence2]
+ *
+ * ALTER PUBLICATION name DROP SEQUENCE sequence [, sequence2]
+ *
+ * ALTER PUBLICATION name SET SEQUENCE sequence [, sequence2]
+ *
*****************************************************************************/
TransactionStmt:
diff --git a/src/backend/replication/logical/proto.c b/src/backend/replication/logical/proto.c
index 9f5bf4b639..79c6f62638 100644
--- a/src/backend/replication/logical/proto.c
+++ b/src/backend/replication/logical/proto.c
@@ -647,6 +647,56 @@ logicalrep_write_message(StringInfo out, TransactionId xid, XLogRecPtr lsn,
pq_sendbytes(out, message, sz);
}
+/*
+ * Write SEQUENCE to stream
+ */
+void
+logicalrep_write_sequence(StringInfo out, Relation rel, TransactionId xid,
+ XLogRecPtr lsn, bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ uint8 flags = 0;
+ char *relname;
+
+ pq_sendbyte(out, LOGICAL_REP_MSG_SEQUENCE);
+
+ /* transaction ID (if not valid, we're not streaming) */
+ if (TransactionIdIsValid(xid))
+ pq_sendint32(out, xid);
+
+ pq_sendint8(out, flags);
+ pq_sendint64(out, lsn);
+
+ logicalrep_write_namespace(out, RelationGetNamespace(rel));
+ relname = RelationGetRelationName(rel);
+ pq_sendstring(out, relname);
+
+ pq_sendint8(out, transactional);
+ pq_sendint64(out, last_value);
+ pq_sendint64(out, log_cnt);
+ pq_sendint8(out, is_called);
+}
+
+/*
+ * Read SEQUENCE from the stream.
+ */
+void
+logicalrep_read_sequence(StringInfo in, LogicalRepSequence *seqdata)
+{
+ /* XXX skipping flags and lsn */
+ pq_getmsgint(in, 1);
+ pq_getmsgint64(in);
+
+ /* Read relation name from stream */
+ seqdata->nspname = pstrdup(logicalrep_read_namespace(in));
+ seqdata->seqname = pstrdup(pq_getmsgstring(in));
+
+ seqdata->transactional = pq_getmsgint(in, 1);
+ seqdata->last_value = pq_getmsgint64(in);
+ seqdata->log_cnt = pq_getmsgint64(in);
+ seqdata->is_called = pq_getmsgint(in, 1);
+}
+
/*
* Write relation description to the output stream.
*/
@@ -1203,6 +1253,8 @@ logicalrep_message_type(LogicalRepMsgType action)
return "STREAM ABORT";
case LOGICAL_REP_MSG_STREAM_PREPARE:
return "STREAM PREPARE";
+ case LOGICAL_REP_MSG_SEQUENCE:
+ return "SEQUENCE";
}
elog(ERROR, "invalid logical replication message type \"%c\"", action);
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index f07983a43c..24369a1522 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -100,6 +100,7 @@
#include "catalog/pg_subscription_rel.h"
#include "catalog/pg_type.h"
#include "commands/copy.h"
+#include "commands/sequence.h"
#include "miscadmin.h"
#include "parser/parse_relation.h"
#include "pgstat.h"
@@ -357,6 +358,12 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
*
* If the synchronization position is reached (SYNCDONE), then the table can
* be marked as READY and is no longer tracked.
+ *
+ * XXX This needs to handle sequences too - after AlterSubscription_refresh
+ * starts caring about sequences, GetSubscriptionNotReadyRelations won't
+ * return just tables, and we'll have to sync them here. Not sure it's worth
+ * creating a new "sync" worker per sequence, maybe we should just sync them
+ * in the current process (it's pretty light-weight).
*/
static void
process_syncing_tables_for_apply(XLogRecPtr current_lsn)
@@ -871,6 +878,99 @@ copy_table(Relation rel)
logicalrep_rel_close(relmapentry, NoLock);
}
+
+
+/*
+ * Fetch the current state (last_value, log_cnt, is_called) of a sequence
+ * from the publisher, by querying the remote sequence relation directly.
+ */
+static void
+fetch_sequence_data(char *nspname, char *relname,
+ int64 *last_value, int64 *log_cnt, bool *is_called)
+{
+ WalRcvExecResult *res;
+ StringInfoData cmd;
+ TupleTableSlot *slot;
+ Oid tableRow[3] = {INT8OID, INT8OID, BOOLOID};
+
+ initStringInfo(&cmd);
+ appendStringInfo(&cmd, "SELECT last_value, log_cnt, is_called\n"
+ " FROM %s", quote_qualified_identifier(nspname, relname));
+
+ res = walrcv_exec(LogRepWorkerWalRcvConn, cmd.data, 3, tableRow);
+ pfree(cmd.data);
+
+ if (res->status != WALRCV_OK_TUPLES)
+ ereport(ERROR,
+ (errmsg("could not receive list of replicated tables from the publisher: %s",
+ res->err)));
+
+ /* Process the sequence. */
+ slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+ while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+ {
+ bool isnull;
+
+ *last_value = DatumGetInt64(slot_getattr(slot, 1, &isnull));
+ Assert(!isnull);
+
+ *log_cnt = DatumGetInt64(slot_getattr(slot, 2, &isnull));
+ Assert(!isnull);
+
+ *is_called = DatumGetBool(slot_getattr(slot, 3, &isnull));
+ Assert(!isnull);
+
+ ExecClearTuple(slot);
+ }
+ ExecDropSingleTupleTableSlot(slot);
+
+ walrcv_clear_result(res);
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Caller is responsible for locking the local relation.
+ */
+static void
+copy_sequence(Relation rel)
+{
+ LogicalRepRelMapEntry *relmapentry;
+ LogicalRepRelation lrel;
+ int64 last_value = 0,
+ log_cnt = 0;
+ bool is_called = 0;
+
+ /* Get the publisher relation info. */
+ fetch_remote_table_info(get_namespace_name(RelationGetNamespace(rel)),
+ RelationGetRelationName(rel), &lrel);
+
+ /* Put the relation into relmap. */
+ logicalrep_relmap_update(&lrel);
+
+ /* Map the publisher relation to local one. */
+ relmapentry = logicalrep_rel_open(lrel.remoteid, NoLock);
+ Assert(rel == relmapentry->localrel);
+
+ Assert(lrel.relkind == RELKIND_SEQUENCE);
+
+ fetch_sequence_data(lrel.nspname, lrel.relname, &last_value, &log_cnt, &is_called);
+
+ elog(WARNING, "sequence %s info last_value %ld log_cnt %ld is_called %d",
+ quote_qualified_identifier(lrel.nspname, lrel.relname),
+ last_value, log_cnt, is_called);
+
+ ResetSequence2(RelationGetRelid(rel), last_value, log_cnt, is_called);
+
+ logicalrep_rel_close(relmapentry, NoLock);
+}
+
+
+
+
/*
* Determine the tablesync slot name.
*
@@ -1106,10 +1206,20 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
originname)));
}
- /* Now do the initial data copy */
- PushActiveSnapshot(GetTransactionSnapshot());
- copy_table(rel);
- PopActiveSnapshot();
+ if (get_rel_relkind(RelationGetRelid(rel)) == RELKIND_SEQUENCE)
+ {
+ /* Now do the initial sequence copy */
+ PushActiveSnapshot(GetTransactionSnapshot());
+ copy_sequence(rel);
+ PopActiveSnapshot();
+ }
+ else
+ {
+ /* Now do the initial data copy */
+ PushActiveSnapshot(GetTransactionSnapshot());
+ copy_table(rel);
+ PopActiveSnapshot();
+ }
res = walrcv_exec(LogRepWorkerWalRcvConn, "COMMIT", 0, NULL);
if (res->status != WALRCV_OK_COMMAND)
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 2e79302a48..089d989dd7 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -144,6 +144,7 @@
#include "catalog/pg_tablespace.h"
#include "commands/tablecmds.h"
#include "commands/tablespace.h"
+#include "commands/sequence.h"
#include "commands/trigger.h"
#include "executor/executor.h"
#include "executor/execPartition.h"
@@ -1091,6 +1092,61 @@ apply_handle_origin(StringInfo s)
errmsg_internal("ORIGIN message sent out of order")));
}
+/*
+ * Handle SEQUENCE message.
+ */
+static void
+apply_handle_sequence(StringInfo s)
+{
+ LogicalRepSequence seq;
+ Oid relid;
+
+ if (handle_streamed_transaction(LOGICAL_REP_MSG_SEQUENCE, s))
+ return;
+
+ logicalrep_read_sequence(s, &seq);
+
+ /*
+ * Non-transactional sequence updates should not be part of a remote
+ * transaction. There should not be any running transaction.
+ */
+ Assert((!seq.transactional) || in_remote_transaction);
+ Assert(!(!seq.transactional && in_remote_transaction));
+ Assert(!(!seq.transactional && IsTransactionState()));
+
+ /*
+ * Make sure we're in a transaction (needed by ResetSequence2). For
+ * non-transactional updates we're guaranteed to start a new one,
+ * and we'll commit it at the end.
+ */
+ if (!IsTransactionState())
+ {
+ StartTransactionCommand();
+ maybe_reread_subscription();
+ }
+
+ relid = RangeVarGetRelid(makeRangeVar(seq.nspname,
+ seq.seqname, -1),
+ RowExclusiveLock, false);
+
+ /* lock the sequence in AccessExclusiveLock, as expected by ResetSequence2 */
+ elog(WARNING, "locking sequence %d in exclusive mode", relid);
+ LockRelationOid(relid, AccessExclusiveLock);
+
+ elog(WARNING, "applying sequence %s.%s transactional %d last_value %ld log_cnt %ld is_called %d",
+ seq.nspname, seq.seqname, seq.transactional, seq.last_value, seq.log_cnt, seq.is_called);
+
+ /* apply the sequence change */
+ ResetSequence2(relid, seq.last_value, seq.log_cnt, seq.is_called);
+
+ /*
+ * Commit the transaction we started above (we only do this when not in
+ * a remote transaction, i.e. for non-transactional sequence updates).
+ */
+ if (!in_remote_transaction)
+ CommitTransactionCommand();
+}
+
/*
* Handle STREAM START message.
*/
@@ -2374,6 +2430,10 @@ apply_dispatch(StringInfo s)
*/
break;
+ case LOGICAL_REP_MSG_SEQUENCE:
+ apply_handle_sequence(s);
+ return;
+
case LOGICAL_REP_MSG_STREAM_START:
apply_handle_stream_start(s);
break;
diff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c
index 6f6a203dea..9ab8b1d4ec 100644
--- a/src/backend/replication/pgoutput/pgoutput.c
+++ b/src/backend/replication/pgoutput/pgoutput.c
@@ -49,6 +49,10 @@ static void pgoutput_message(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn, XLogRecPtr message_lsn,
bool transactional, const char *prefix,
Size sz, const char *message);
+static void pgoutput_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation rel, bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called);
static bool pgoutput_origin_filter(LogicalDecodingContext *ctx,
RepOriginId origin_id);
static void pgoutput_begin_prepare_txn(LogicalDecodingContext *ctx,
@@ -159,6 +163,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->change_cb = pgoutput_change;
cb->truncate_cb = pgoutput_truncate;
cb->message_cb = pgoutput_message;
+ cb->sequence_cb = pgoutput_sequence;
cb->commit_cb = pgoutput_commit_txn;
cb->begin_prepare_cb = pgoutput_begin_prepare_txn;
@@ -175,6 +180,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->stream_commit_cb = pgoutput_stream_commit;
cb->stream_change_cb = pgoutput_change;
cb->stream_message_cb = pgoutput_message;
+ cb->stream_sequence_cb = pgoutput_sequence;
cb->stream_truncate_cb = pgoutput_truncate;
/* transaction streaming - two-phase commit */
cb->stream_prepare_cb = pgoutput_stream_prepare_txn;
@@ -188,6 +194,7 @@ parse_output_parameters(List *options, PGOutputData *data)
bool publication_names_given = false;
bool binary_option_given = false;
bool messages_option_given = false;
+ bool sequences_option_given = false;
bool streaming_given = false;
bool two_phase_option_given = false;
@@ -195,6 +202,7 @@ parse_output_parameters(List *options, PGOutputData *data)
data->streaming = false;
data->messages = false;
data->two_phase = false;
+ data->sequences = true;
foreach(lc, options)
{
@@ -260,6 +268,16 @@ parse_output_parameters(List *options, PGOutputData *data)
data->messages = defGetBoolean(defel);
}
+ else if (strcmp(defel->defname, "sequences") == 0)
+ {
+ if (sequences_option_given)
+ ereport(ERROR,
+ (errcode(ERRCODE_SYNTAX_ERROR),
+ errmsg("conflicting or redundant options")));
+ sequences_option_given = true;
+
+ data->sequences = defGetBoolean(defel);
+ }
else if (strcmp(defel->defname, "streaming") == 0)
{
if (streaming_given)
@@ -856,6 +874,51 @@ pgoutput_message(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
OutputPluginWrite(ctx, true);
}
+static void
+pgoutput_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation rel, bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ PGOutputData *data = (PGOutputData *) ctx->output_plugin_private;
+ TransactionId xid = InvalidTransactionId;
+ RelationSyncEntry *relentry;
+
+ if (!data->sequences)
+ return;
+
+ if (!is_publishable_relation(rel))
+ return;
+
+ /*
+ * Remember the xid for the message in streaming mode. See
+ * pgoutput_change.
+ */
+ if (in_streaming)
+ xid = txn->xid;
+
+ relentry = get_rel_sync_entry(data, RelationGetRelid(rel));
+
+ /*
+ * First check the sequence filter.
+ *
+ * We handle just REORDER_BUFFER_CHANGE_SEQUENCE here.
+ */
+ if (!relentry->pubactions.pubsequence)
+ return;
+
+ OutputPluginPrepareWrite(ctx, true);
+ logicalrep_write_sequence(ctx->out,
+ rel,
+ xid,
+ sequence_lsn,
+ transactional,
+ last_value,
+ log_cnt,
+ is_called);
+ OutputPluginWrite(ctx, true);
+}
+
/*
* Currently we always forward.
*/
@@ -1140,7 +1203,8 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
entry->streamed_txns = NIL;
entry->replicate_valid = false;
entry->pubactions.pubinsert = entry->pubactions.pubupdate =
- entry->pubactions.pubdelete = entry->pubactions.pubtruncate = false;
+ entry->pubactions.pubdelete = entry->pubactions.pubtruncate =
+ entry->pubactions.pubsequence = false;
entry->publish_as_relid = InvalidOid;
entry->map = NULL; /* will be set by maybe_send_schema() if
* needed */
@@ -1160,6 +1224,7 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
List *schemaPubids = GetSchemaPublications(schemaId);
ListCell *lc;
Oid publish_as_relid = relid;
+ bool is_sequence = (get_rel_relkind(relid) == RELKIND_SEQUENCE);
/* Reload publications if needed before use. */
if (!publications_valid)
@@ -1183,12 +1248,23 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
Publication *pub = lfirst(lc);
bool publish = false;
- if (pub->alltables)
+ if (pub->alltables && (!is_sequence))
{
publish = true;
if (pub->pubviaroot && am_partition)
publish_as_relid = llast_oid(get_partition_ancestors(relid));
}
+ else if (pub->allsequences && is_sequence)
+ {
+ publish = true;
+ }
+
+ /* if a sequence, just cross-check the list of publications */
+ if (!publish && is_sequence)
+ {
+ if (list_member_oid(pubids, pub->oid))
+ publish = true;
+ }
if (!publish)
{
@@ -1243,10 +1319,12 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
entry->pubactions.pubupdate |= pub->pubactions.pubupdate;
entry->pubactions.pubdelete |= pub->pubactions.pubdelete;
entry->pubactions.pubtruncate |= pub->pubactions.pubtruncate;
+ entry->pubactions.pubsequence |= pub->pubactions.pubsequence;
}
if (entry->pubactions.pubinsert && entry->pubactions.pubupdate &&
- entry->pubactions.pubdelete && entry->pubactions.pubtruncate)
+ entry->pubactions.pubdelete && entry->pubactions.pubtruncate &&
+ entry->pubactions.pubsequence)
break;
}
@@ -1391,6 +1469,7 @@ rel_sync_cache_publication_cb(Datum arg, int cacheid, uint32 hashvalue)
entry->pubactions.pubupdate = false;
entry->pubactions.pubdelete = false;
entry->pubactions.pubtruncate = false;
+ entry->pubactions.pubsequence = false;
}
}
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index 105d8d4601..1f8634b50a 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -5585,6 +5585,7 @@ GetRelationPublicationActions(Relation relation)
pubactions->pubupdate |= pubform->pubupdate;
pubactions->pubdelete |= pubform->pubdelete;
pubactions->pubtruncate |= pubform->pubtruncate;
+ pubactions->pubsequence |= pubform->pubsequence;
ReleaseSysCache(tup);
@@ -5593,7 +5594,8 @@ GetRelationPublicationActions(Relation relation)
* other publications.
*/
if (pubactions->pubinsert && pubactions->pubupdate &&
- pubactions->pubdelete && pubactions->pubtruncate)
+ pubactions->pubdelete && pubactions->pubtruncate &&
+ pubactions->pubsequence)
break;
}
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index cf30239f6d..c12e14e93e 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1647,13 +1647,13 @@ psql_completion(const char *text, int start, int end)
COMPLETE_WITH("ADD", "DROP", "OWNER TO", "RENAME TO", "SET");
/* ALTER PUBLICATION <name> ADD */
else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD"))
- COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE");
+ COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE|SEQUENCE");
/* ALTER PUBLICATION <name> DROP */
else if (Matches("ALTER", "PUBLICATION", MatchAny, "DROP"))
- COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE");
+ COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE|SEQUENCE");
/* ALTER PUBLICATION <name> SET */
else if (Matches("ALTER", "PUBLICATION", MatchAny, "SET"))
- COMPLETE_WITH("(", "ALL TABLES IN SCHEMA", "TABLE");
+ COMPLETE_WITH("(", "ALL TABLES IN SCHEMA", "TABLE|SEQUENCE");
else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD|DROP|SET", "ALL", "TABLES", "IN", "SCHEMA"))
COMPLETE_WITH_QUERY(Query_for_list_of_schemas
" AND nspname != 'pg_catalog' "
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 4d992dc224..349ffe1b51 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -11513,6 +11513,11 @@
provolatile => 's', prorettype => 'oid', proargtypes => 'text',
proallargtypes => '{text,oid}', proargmodes => '{i,o}',
proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_tables' },
+{ oid => '8000', descr => 'get OIDs of sequences in a publication',
+ proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+ provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+ proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+ proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
{ oid => '6121',
descr => 'returns whether a relation can be part of a publication',
proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 902f2f2f0d..b44bb0e5e0 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
*/
bool puballtables;
+ /*
+ * indicates that this is special publication which should encompass all
+ * sequences in the database (except for the unlogged and temp ones)
+ */
+ bool puballsequences;
+
/* true if inserts are published */
bool pubinsert;
@@ -52,6 +58,9 @@ CATALOG(pg_publication,6104,PublicationRelationId)
/* true if truncates are published */
bool pubtruncate;
+ /* true if sequences are published */
+ bool pubsequence;
+
/* true if partition changes are published using root schema */
bool pubviaroot;
} FormData_pg_publication;
@@ -72,6 +81,7 @@ typedef struct PublicationActions
bool pubupdate;
bool pubdelete;
bool pubtruncate;
+ bool pubsequence;
} PublicationActions;
typedef struct Publication
@@ -79,6 +89,7 @@ typedef struct Publication
Oid oid;
char *name;
bool alltables;
+ bool allsequences;
bool pubviaroot;
PublicationActions pubactions;
} Publication;
@@ -121,6 +132,9 @@ extern List *GetPubPartitionOptionRelations(List *result,
PublicationPartOpt pub_partopt,
Oid relid);
+extern List *GetAllSequencesPublicationRelations(void);
+extern List *GetPublicationSequenceRelations(Oid pubid);
+
extern bool is_publishable_relation(Relation rel);
extern bool is_schema_publication(Oid pubid);
extern ObjectAddress publication_add_relation(Oid pubid, PublicationRelInfo *targetrel,
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 5919fb90ee..c28e8695cb 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -60,6 +60,7 @@ extern ObjectAddress DefineSequence(ParseState *pstate, CreateSeqStmt *stmt);
extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
extern void DeleteSequenceTuple(Oid relid);
extern void ResetSequence(Oid seq_relid);
+extern void ResetSequence2(Oid seq_relid, int64 last_value, int64 log_cnt, bool is_called);
extern void ResetSequenceCaches(void);
extern void seq_redo(XLogReaderState *rptr);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 4c5a8a39bf..d32ba498a7 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -3653,6 +3653,10 @@ typedef enum PublicationObjSpecType
PUBLICATIONOBJ_TABLE_IN_SCHEMA, /* Tables in schema type */
PUBLICATIONOBJ_TABLE_IN_CUR_SCHEMA, /* Get the first element from
* search_path */
+ PUBLICATIONOBJ_SEQUENCE, /* Sequence type */
+ PUBLICATIONOBJ_SEQUENCE_IN_SCHEMA, /* Sequences in schema type */
+ PUBLICATIONOBJ_SEQUENCE_IN_CUR_SCHEMA, /* Get the first element from
+ * search_path */
PUBLICATIONOBJ_CONTINUATION /* Continuation of previous type */
} PublicationObjSpecType;
@@ -3672,6 +3676,7 @@ typedef struct CreatePublicationStmt
List *options; /* List of DefElem nodes */
List *pubobjects; /* Optional list of publication objects */
bool for_all_tables; /* Special publication for all tables in db */
+ bool for_all_sequences; /* Special publication for all sequences in db */
} CreatePublicationStmt;
typedef struct AlterPublicationStmt
@@ -3688,6 +3693,7 @@ typedef struct AlterPublicationStmt
*/
List *pubobjects; /* Optional list of publication objects */
bool for_all_tables; /* Special publication for all tables in db */
+ bool for_all_sequences; /* Special publication for all sequences in db */
DefElemAction action; /* What action to perform with the
* tables/schemas */
} AlterPublicationStmt;
diff --git a/src/include/replication/logicalproto.h b/src/include/replication/logicalproto.h
index 83741dcf42..fc1782c125 100644
--- a/src/include/replication/logicalproto.h
+++ b/src/include/replication/logicalproto.h
@@ -60,6 +60,7 @@ typedef enum LogicalRepMsgType
LOGICAL_REP_MSG_RELATION = 'R',
LOGICAL_REP_MSG_TYPE = 'Y',
LOGICAL_REP_MSG_MESSAGE = 'M',
+ LOGICAL_REP_MSG_SEQUENCE = 'X', /* FIXME change */
LOGICAL_REP_MSG_BEGIN_PREPARE = 'b',
LOGICAL_REP_MSG_PREPARE = 'P',
LOGICAL_REP_MSG_COMMIT_PREPARED = 'K',
@@ -117,6 +118,18 @@ typedef struct LogicalRepTyp
char *typname; /* name of the remote type */
} LogicalRepTyp;
+/* Sequence info */
+typedef struct LogicalRepSequence
+{
+ Oid remoteid; /* unique id of the remote sequence */
+ char *nspname; /* schema name of remote sequence */
+ char *seqname; /* name of the remote sequence */
+ bool transactional;
+ int64 last_value;
+ int64 log_cnt;
+ bool is_called;
+} LogicalRepSequence;
+
/* Transaction info */
typedef struct LogicalRepBeginData
{
@@ -227,6 +240,12 @@ extern List *logicalrep_read_truncate(StringInfo in,
bool *cascade, bool *restart_seqs);
extern void logicalrep_write_message(StringInfo out, TransactionId xid, XLogRecPtr lsn,
bool transactional, const char *prefix, Size sz, const char *message);
+extern void logicalrep_write_sequence(StringInfo out, Relation rel,
+ TransactionId xid, XLogRecPtr lsn,
+ bool transactional,
+ int64 last_value, int64 log_cnt,
+ bool is_called);
+extern void logicalrep_read_sequence(StringInfo in, LogicalRepSequence *seqdata);
extern void logicalrep_write_rel(StringInfo out, TransactionId xid,
Relation rel);
extern LogicalRepRelation *logicalrep_read_rel(StringInfo in);
diff --git a/src/include/replication/pgoutput.h b/src/include/replication/pgoutput.h
index 0dc460fb70..97eb2a7ef1 100644
--- a/src/include/replication/pgoutput.h
+++ b/src/include/replication/pgoutput.h
@@ -28,6 +28,7 @@ typedef struct PGOutputData
bool streaming;
bool messages;
bool two_phase;
+ bool sequences;
} PGOutputData;
#endif /* PGOUTPUT_H */
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index b58b062b10..6eb4f11003 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1451,6 +1451,14 @@ pg_prepared_xacts| SELECT p.transaction,
FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+ n.nspname AS schemaname,
+ c.relname AS sequencename
+ FROM pg_publication p,
+ LATERAL pg_get_publication_sequences((p.pubname)::text) gpt(relid),
+ (pg_class c
+ JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+ WHERE (c.oid = gpt.relid);
pg_publication_tables| SELECT p.pubname,
n.nspname AS schemaname,
c.relname AS tablename
diff --git a/src/test/subscription/t/027_sequences.pl b/src/test/subscription/t/027_sequences.pl
new file mode 100644
index 0000000000..fd94161907
--- /dev/null
+++ b/src/test/subscription/t/027_sequences.pl
@@ -0,0 +1,196 @@
+
+# Copyright (c) 2021, PostgreSQL Global Development Group
+
+# This tests that sequences are replicated correctly by logical replication
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More tests => 6;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->start;
+
+# Create subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Create some preexisting content on publisher
+my $ddl = qq(
+ CREATE SEQUENCE s;
+);
+
+# Setup structure on the publisher
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Create the same structure on the subscriber, plus an extra sequence that
+# we'll create on the publisher later
+$ddl = qq(
+ CREATE SEQUENCE s;
+ CREATE SEQUENCE s2;
+);
+
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Setup logical replication
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+ "CREATE PUBLICATION seq_pub");
+
+$node_publisher->safe_psql('postgres',
+ "ALTER PUBLICATION seq_pub ADD SEQUENCE s");
+
+$node_subscriber->safe_psql('postgres',
+ "CREATE SUBSCRIPTION seq_sub CONNECTION '$publisher_connstr' PUBLICATION seq_pub WITH (slot_name = seq_sub_slot)"
+);
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Wait for initial sync to finish as well
+my $synced_query =
+ "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('s', 'r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+ or die "Timed out while waiting for subscriber to synchronize data";
+
+# Generate some initial sequence data on the publisher
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ -- generate a number of values using the sequence
+ SELECT nextval('s') FROM generate_series(1,100);
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+my $result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s;
+));
+
+is( $result, '132|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+# advance the sequence in a rolled-back transaction - the increments are
+# non-transactional, so they still get replicated
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ SELECT nextval('s') FROM generate_series(1,100);
+ ROLLBACK;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s;
+));
+
+is( $result, '231|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+# create a new sequence and roll it back - should not be replicated, due to
+# the transactional behavior
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ CREATE SEQUENCE s2;
+ ALTER PUBLICATION seq_pub ADD SEQUENCE s2;
+ SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '1|0|f',
+ 'check replicated sequence values on subscriber');
+
+
+# create a new sequence, advance it in a rolled-back transaction, but commit
+# the create - the advance should be replicated nevertheless
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ CREATE SEQUENCE s2;
+ ALTER PUBLICATION seq_pub ADD SEQUENCE s2;
+ SAVEPOINT sp1;
+ SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK TO sp1;
+ COMMIT;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Wait for sync of the second sequence we just added to finish
+$synced_query =
+ "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('s', 'r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+ or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '132|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+# advance the new sequence in a transaction, and roll it back - the increments
+# are non-transactional now, so they should still be replicated
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '132|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+# advance the sequence in a subtransaction - the subtransaction gets rolled
+# back, but commit the main one - the changes should still be replicated
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ SAVEPOINT s1;
+ SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK TO s1;
+ COMMIT;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '330|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+$node_subscriber->stop('fast');
+$node_publisher->stop('fast');
--
2.31.1
0006-update-TAP-tests-20211222.patch
From 82c8457acfc96f93dab92881d75a349921049c8a Mon Sep 17 00:00:00 2001
From: Tomas Vondra <tomas.vondra@postgresql.org>
Date: Wed, 22 Dec 2021 03:19:04 +0100
Subject: [PATCH 6/6] update TAP tests
---
src/test/subscription/t/027_sequences.pl | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/src/test/subscription/t/027_sequences.pl b/src/test/subscription/t/027_sequences.pl
index fd94161907..b9c1757a64 100644
--- a/src/test/subscription/t/027_sequences.pl
+++ b/src/test/subscription/t/027_sequences.pl
@@ -70,7 +70,7 @@ my $result = $node_subscriber->safe_psql(
SELECT * FROM s;
));
-is( $result, '132|0|t',
+is( $result, '100|0|t',
'check replicated sequence values on subscriber');
@@ -90,7 +90,7 @@ $result = $node_subscriber->safe_psql(
SELECT * FROM s;
));
-is( $result, '231|0|t',
+is( $result, '200|0|t',
'check replicated sequence values on subscriber');
@@ -144,7 +144,7 @@ $result = $node_subscriber->safe_psql(
SELECT * FROM s2;
));
-is( $result, '132|0|t',
+is( $result, '100|0|t',
'check replicated sequence values on subscriber');
@@ -165,7 +165,7 @@ $result = $node_subscriber->safe_psql(
SELECT * FROM s2;
));
-is( $result, '132|0|t',
+is( $result, '200|0|t',
'check replicated sequence values on subscriber');
@@ -188,7 +188,7 @@ $result = $node_subscriber->safe_psql(
SELECT * FROM s2;
));
-is( $result, '330|0|t',
+is( $result, '300|0|t',
'check replicated sequence values on subscriber');
--
2.31.1
Hi,
Here's a rebased version of the patch series. I decided to go back to
the version from 2021/12/14, which does not include the changes to WAL
logging. So this has the same issue with nextval() not waiting for WAL
flush (and/or sync replica), as demonstrated by occasional failures of
the TAP test in 0003, but my reasoning is this:
1) This is a preexisting issue, affecting sequences in general. It's not
related to this patch, really. The fix will be independent of this, and
there's little reason to block the decoding until that happens.
2) We've discussed a couple of ways to fix this in [1] - logging individual
sequence increments, flushing everything right away, waiting for page
LSN, etc. My opinion is we'll use some form of waiting for page LSN, but
no matter what fix we use it'll have almost no impact on this patch.
There might be minor changes to the test, but that's about it.
3) There are ways to stabilize the tests even without that - it's enough
to generate a little bit of WAL / get XID, not just nextval(). But
that's only for 0003 which I don't intend to commit yet, and 0001/0002
have no problems at all.
regards
[1]: /messages/by-id/712cad46-a9c8-1389-aef8-faf0203c9be9@enterprisedb.com
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Attachments:
0001-Logical-decoding-of-sequences-20220126.patch
From 57f1a508c70654b6b93ba8c19da7a8a4fbb32f9f Mon Sep 17 00:00:00 2001
From: Tomas Vondra <tomas.vondra@postgresql.org>
Date: Wed, 26 Jan 2022 02:07:15 +0100
Subject: [PATCH 1/3] Logical decoding of sequences
This adds the infrastructure for logical decoding of sequences. Support
for built-in logical replication and test_decoding is added separately.
When decoding sequences, we differentiate between sequences created in a
(still-running) transaction and sequences created in other (already
committed) transactions. Changes for sequences created in the same
top-level transaction are treated as "transactional", i.e. just like any
other change from that transaction (and discarded in case of a
rollback). Changes for sequences created earlier are treated as
non-transactional - they are processed immediately, as if performed
outside any transaction (and thus not rolled back).
This mixed behavior is necessary - sequences are non-transactional (e.g.
ROLLBACK does not undo the sequence increments). But for new sequences,
we need to handle them in a transactional way, because if we ever get
some DDL support, the sequence won't exist until the transaction gets
applied. So we need to ensure the increments don't happen until the
sequence gets created.
To differentiate which sequences are "old" and which were created in a
still-running transaction, we track sequences created in running
transactions in a hash table. Each sequence is identified by its
relfilenode, and at transaction commit we discard all sequences created
in that particular transaction. For each sequence we track the XID of
the (sub)transaction that created it, and we clean up the sequences for
each subtransaction when it completes (commit/rollback).
We don't use the XID to check if it's the same top-level transaction.
It's enough to know it was created in an in-progress transaction, and we
know it must be the current one because otherwise it wouldn't see the
sequence object.
The main changes in this patch are:
1) ensure WAL-logging of all necessary info for sequence advances
We need to be able to associate the advance with an XID, but until now
a sequence advance might have XID 0 if it was the first thing the
transaction did. So we ensure the transaction has an XID.
Note: This is needed because of subxacts. An advance with XID 0 might
still be for a sequence created in a different subxact of the same
top-level xact.
Then there's a "created" flag added to the xl_seq_rec, which is
necessary to differentiate WAL for created/reset sequences.
2) decoding / queueing of sequences into ReorderBuffer
This is mostly copy-paste of existing code for other decoded events.
3) tracking sequences created in in-progress transactions
We use a simple hash table, indexed by relfilenode.
---
doc/src/sgml/logicaldecoding.sgml | 62 ++-
src/backend/commands/sequence.c | 32 ++
src/backend/replication/logical/decode.c | 124 ++++++
src/backend/replication/logical/logical.c | 88 ++++
.../replication/logical/reorderbuffer.c | 400 ++++++++++++++++++
src/include/access/rmgrlist.h | 2 +-
src/include/commands/sequence.h | 1 +
src/include/replication/decode.h | 1 +
src/include/replication/output_plugin.h | 27 ++
src/include/replication/reorderbuffer.h | 43 +-
10 files changed, 775 insertions(+), 5 deletions(-)
diff --git a/doc/src/sgml/logicaldecoding.sgml b/doc/src/sgml/logicaldecoding.sgml
index b6353c7a125..60a2b7d1ccb 100644
--- a/doc/src/sgml/logicaldecoding.sgml
+++ b/doc/src/sgml/logicaldecoding.sgml
@@ -458,6 +458,7 @@ typedef struct OutputPluginCallbacks
LogicalDecodeTruncateCB truncate_cb;
LogicalDecodeCommitCB commit_cb;
LogicalDecodeMessageCB message_cb;
+ LogicalDecodeSequenceCB sequence_cb;
LogicalDecodeFilterByOriginCB filter_by_origin_cb;
LogicalDecodeShutdownCB shutdown_cb;
LogicalDecodeFilterPrepareCB filter_prepare_cb;
@@ -472,6 +473,7 @@ typedef struct OutputPluginCallbacks
LogicalDecodeStreamCommitCB stream_commit_cb;
LogicalDecodeStreamChangeCB stream_change_cb;
LogicalDecodeStreamMessageCB stream_message_cb;
+ LogicalDecodeStreamSequenceCB stream_sequence_cb;
LogicalDecodeStreamTruncateCB stream_truncate_cb;
} OutputPluginCallbacks;
@@ -481,9 +483,12 @@ typedef void (*LogicalOutputPluginInit) (struct OutputPluginCallbacks *cb);
and <function>commit_cb</function> callbacks are required,
while <function>startup_cb</function>,
<function>filter_by_origin_cb</function>, <function>truncate_cb</function>,
+ <function>sequence_cb</function>,
and <function>shutdown_cb</function> are optional.
If <function>truncate_cb</function> is not set but a
<command>TRUNCATE</command> is to be decoded, the action will be ignored.
+ Similarly, if <function>sequence_cb</function> is not set and a sequence
+ change is to be decoded, the action will be ignored.
</para>
<para>
@@ -492,7 +497,8 @@ typedef void (*LogicalOutputPluginInit) (struct OutputPluginCallbacks *cb);
<function>stream_stop_cb</function>, <function>stream_abort_cb</function>,
<function>stream_commit_cb</function>, <function>stream_change_cb</function>,
and <function>stream_prepare_cb</function>
- are required, while <function>stream_message_cb</function> and
+ are required, while <function>stream_message_cb</function>,
+ <function>stream_sequence_cb</function>, and
<function>stream_truncate_cb</function> are optional.
</para>
@@ -808,6 +814,35 @@ typedef void (*LogicalDecodeMessageCB) (struct LogicalDecodingContext *ctx,
</para>
</sect3>
+ <sect3 id="logicaldecoding-output-plugin-sequence">
+ <title>Sequence Callback</title>
+
+ <para>
+ The optional <function>sequence_cb</function> callback is called for
+ actions that update a sequence value.
+<programlisting>
+typedef void (*LogicalDecodeSequenceCB) (struct LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional,
+ int64 last_value,
+ int64 log_cnt,
+ bool is_called);
+</programlisting>
+ The <parameter>txn</parameter> parameter contains meta information about
+ the transaction the sequence change is part of. Note however that it can
+ be NULL when the sequence change is non-transactional and no XID has been
+ assigned yet to the transaction which updated the sequence value.
+ The <parameter>sequence_lsn</parameter> parameter has the WAL location of the
+ sequence update. The <parameter>transactional</parameter> parameter says whether
+ the sequence change has to be replayed as part of the transaction or directly.
+
+ The <parameter>last_value</parameter>, <parameter>log_cnt</parameter> and
+ <parameter>is_called</parameter> parameters describe the sequence change.
+ </para>
+ </sect3>
+
<sect3 id="logicaldecoding-output-plugin-filter-prepare">
<title>Prepare Filter Callback</title>
@@ -1017,6 +1052,26 @@ typedef void (*LogicalDecodeStreamMessageCB) (struct LogicalDecodingContext *ctx
</para>
</sect3>
+ <sect3 id="logicaldecoding-output-plugin-stream-sequence">
+ <title>Stream Sequence Callback</title>
+ <para>
+ The optional <function>stream_sequence_cb</function> callback is called
+ for actions that change a sequence in a block of streamed changes
+ (demarcated by <function>stream_start_cb</function> and
+ <function>stream_stop_cb</function> calls).
+<programlisting>
+typedef void (*LogicalDecodeStreamSequenceCB) (struct LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional,
+ int64 last_value,
+ int64 log_cnt,
+ bool is_called);
+</programlisting>
+ </para>
+ </sect3>
+
<sect3 id="logicaldecoding-output-plugin-stream-truncate">
<title>Stream Truncate Callback</title>
<para>
@@ -1197,8 +1252,9 @@ OutputPluginWrite(ctx, true);
in-progress transactions. There are multiple required streaming callbacks
(<function>stream_start_cb</function>, <function>stream_stop_cb</function>,
<function>stream_abort_cb</function>, <function>stream_commit_cb</function>
- and <function>stream_change_cb</function>) and two optional callbacks
- (<function>stream_message_cb</function> and <function>stream_truncate_cb</function>).
+ and <function>stream_change_cb</function>) and multiple optional callbacks
+ (<function>stream_message_cb</function>, <function>stream_sequence_cb</function>,
+ and <function>stream_truncate_cb</function>).
Also, if streaming of two-phase commands is to be supported, then additional
callbacks must be provided. (See <xref linkend="logicaldecoding-two-phase-commits"/>
for details).
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 27cb6307581..358fb28b5bd 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -378,8 +378,13 @@ fill_seq_with_data(Relation rel, HeapTuple tuple)
/* check the comment above nextval_internal()'s equivalent call. */
if (RelationNeedsWAL(rel))
+ {
GetTopTransactionId();
+ if (XLogLogicalInfoActive())
+ GetCurrentTransactionId();
+ }
+
START_CRIT_SECTION();
MarkBufferDirty(buf);
@@ -399,6 +404,7 @@ fill_seq_with_data(Relation rel, HeapTuple tuple)
XLogRegisterBuffer(0, buf, REGBUF_WILL_INIT);
xlrec.node = rel->rd_node;
+ xlrec.created = true;
XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
XLogRegisterData((char *) tuple->t_data, tuple->t_len);
@@ -764,10 +770,28 @@ nextval_internal(Oid relid, bool check_permissions)
* It's sufficient to ensure the toplevel transaction has an xid, no need
* to assign xids subxacts, that'll already trigger an appropriate wait.
* (Have to do that here, so we're outside the critical section)
+ *
+ * We have to ensure we have a proper XID, which will be included in
+ * the XLOG record by XLogRecordAssemble. Otherwise the first nextval()
+ * in a subxact (without any preceding changes) would get XID 0, and it
+ * would then be impossible to decide which top xact it belongs to.
+ * It'd also trigger assert in DecodeSequence. We only do that with
+ * wal_level=logical, though.
+ *
+ * XXX This might seem unnecessary, because if there's no XID the xact
+ * couldn't have done anything important yet, e.g. it could not have
+ * created a sequence. But that's incorrect, as it ignores subxacts. The
+ * current subtransaction might not have done anything yet (thus no XID),
+ * but an earlier one might have created the sequence.
*/
if (logit && RelationNeedsWAL(seqrel))
+ {
GetTopTransactionId();
+ if (XLogLogicalInfoActive())
+ GetCurrentTransactionId();
+ }
+
/* ready to change the on-disk (or really, in-buffer) tuple */
START_CRIT_SECTION();
@@ -803,6 +827,7 @@ nextval_internal(Oid relid, bool check_permissions)
seq->log_cnt = 0;
xlrec.node = seqrel->rd_node;
+ xlrec.created = false;
XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
XLogRegisterData((char *) seqdatatuple.t_data, seqdatatuple.t_len);
@@ -977,8 +1002,13 @@ do_setval(Oid relid, int64 next, bool iscalled)
/* check the comment above nextval_internal()'s equivalent call. */
if (RelationNeedsWAL(seqrel))
+ {
GetTopTransactionId();
+ if (XLogLogicalInfoActive())
+ GetCurrentTransactionId();
+ }
+
/* ready to change the on-disk (or really, in-buffer) tuple */
START_CRIT_SECTION();
@@ -999,6 +1029,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
XLogRegisterBuffer(0, buf, REGBUF_WILL_INIT);
xlrec.node = seqrel->rd_node;
+ xlrec.created = false;
+
XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
XLogRegisterData((char *) seqdatatuple.t_data, seqdatatuple.t_len);
diff --git a/src/backend/replication/logical/decode.c b/src/backend/replication/logical/decode.c
index 3fb5a92a1a1..f0a00b3ab25 100644
--- a/src/backend/replication/logical/decode.c
+++ b/src/backend/replication/logical/decode.c
@@ -42,6 +42,7 @@
#include "replication/reorderbuffer.h"
#include "replication/snapbuild.h"
#include "storage/standby.h"
+#include "commands/sequence.h"
/* individual record(group)'s handlers */
static void DecodeInsert(LogicalDecodingContext *ctx, XLogRecordBuffer *buf);
@@ -63,6 +64,7 @@ static void DecodePrepare(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
/* common function to decode tuples */
static void DecodeXLogTuple(char *data, Size len, ReorderBufferTupleBuf *tup);
+static void DecodeSeqTuple(char *data, Size len, ReorderBufferTupleBuf *tuple);
/* helper functions for decoding transactions */
static inline bool FilterPrepare(LogicalDecodingContext *ctx,
@@ -1250,3 +1252,125 @@ DecodeTXNNeedSkip(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
(txn_dbid != InvalidOid && txn_dbid != ctx->slot->data.database) ||
ctx->fast_forward || FilterByOrigin(ctx, origin_id));
}
+
+/*
+ * Decode Sequence Tuple
+ */
+static void
+DecodeSeqTuple(char *data, Size len, ReorderBufferTupleBuf *tuple)
+{
+ int datalen = len - sizeof(xl_seq_rec) - SizeofHeapTupleHeader;
+
+ Assert(datalen >= 0);
+
+ tuple->tuple.t_len = datalen + SizeofHeapTupleHeader;
+
+ ItemPointerSetInvalid(&tuple->tuple.t_self);
+
+ tuple->tuple.t_tableOid = InvalidOid;
+
+ memcpy(((char *) tuple->tuple.t_data),
+ data + sizeof(xl_seq_rec),
+ SizeofHeapTupleHeader);
+
+ memcpy(((char *) tuple->tuple.t_data) + SizeofHeapTupleHeader,
+ data + sizeof(xl_seq_rec) + SizeofHeapTupleHeader,
+ datalen);
+}
+
+/*
+ * Handle sequence decode
+ *
+ * Decoding sequences is a bit tricky, because while most sequence actions
+ * are non-transactional (not subject to rollback), some need to be handled
+ * as transactional.
+ *
+ * By default, a sequence increment is non-transactional - we must not queue
+ * it in a transaction as other changes, because the transaction might get
+ * rolled back and we'd discard the increment. The downstream would not be
+ * notified about the increment, which is wrong.
+ *
+ * On the other hand, the sequence may be created in a transaction. In this
+ * case we *should* queue the change like other changes in the transaction,
+ * because we don't want to send increments for an unknown sequence to the
+ * plugin - it might get confused about which sequence it's related to etc.
+ */
+void
+sequence_decode(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
+{
+ SnapBuild *builder = ctx->snapshot_builder;
+ ReorderBufferTupleBuf *tuplebuf;
+ RelFileNode target_node;
+ XLogReaderState *r = buf->record;
+ char *tupledata = NULL;
+ Size tuplelen;
+ Size datalen = 0;
+ TransactionId xid = XLogRecGetXid(r);
+ uint8 info = XLogRecGetInfo(buf->record) & ~XLR_INFO_MASK;
+ xl_seq_rec *xlrec;
+ Snapshot snapshot;
+ RepOriginId origin_id = XLogRecGetOrigin(r);
+ bool transactional;
+
+ /* only decode changes flagged with XLOG_SEQ_LOG */
+ if (info != XLOG_SEQ_LOG)
+ elog(ERROR, "unexpected RM_SEQ_ID record type: %u", info);
+
+ /*
+ * If we don't have a snapshot or we are just fast-forwarding, there is no
+ * point in decoding sequence changes.
+ */
+ if (SnapBuildCurrentState(builder) < SNAPBUILD_FULL_SNAPSHOT ||
+ ctx->fast_forward)
+ return;
+
+ /* only interested in our database */
+ XLogRecGetBlockTag(r, 0, &target_node, NULL, NULL);
+ if (target_node.dbNode != ctx->slot->data.database)
+ return;
+
+ /* output plugin doesn't look for this origin, no need to queue */
+ if (FilterByOrigin(ctx, XLogRecGetOrigin(r)))
+ return;
+
+ tupledata = XLogRecGetData(r);
+ datalen = XLogRecGetDataLen(r);
+ tuplelen = datalen - SizeOfHeapHeader - sizeof(xl_seq_rec);
+
+ /* extract the WAL record, with "created" flag */
+ xlrec = (xl_seq_rec *) XLogRecGetData(r);
+
+ /* XXX how could we have a sequence change without data? */
+ if (!datalen || !tupledata)
+ return;
+
+ tuplebuf = ReorderBufferGetTupleBuf(ctx->reorder, tuplelen);
+ DecodeSeqTuple(tupledata, datalen, tuplebuf);
+
+ /*
+ * Should we handle the sequence increment as transactional or not?
+ *
+ * If the sequence was created in a still-running transaction, treat
+ * it as transactional and queue the increments. Otherwise it needs
+ * to be treated as non-transactional, in which case we send it to
+ * the plugin right away.
+ */
+ transactional = ReorderBufferSequenceIsTransactional(ctx->reorder,
+ target_node,
+ xlrec->created);
+
+ /* Skip the change if already processed (per the snapshot). */
+ if (transactional &&
+ !SnapBuildProcessChange(builder, xid, buf->origptr))
+ return;
+ else if (!transactional &&
+ (SnapBuildCurrentState(builder) != SNAPBUILD_CONSISTENT ||
+ SnapBuildXactNeedsSkip(builder, buf->origptr)))
+ return;
+
+ /* Queue the increment (or send immediately if not transactional). */
+ snapshot = SnapBuildGetOrBuildSnapshot(builder, xid);
+ ReorderBufferQueueSequence(ctx->reorder, xid, snapshot, buf->endptr,
+ origin_id, target_node, transactional,
+ xlrec->created, tuplebuf);
+}
diff --git a/src/backend/replication/logical/logical.c b/src/backend/replication/logical/logical.c
index 9bc3a2d8deb..934aa13f2d3 100644
--- a/src/backend/replication/logical/logical.c
+++ b/src/backend/replication/logical/logical.c
@@ -73,6 +73,10 @@ static void truncate_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
static void message_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
XLogRecPtr message_lsn, bool transactional,
const char *prefix, Size message_size, const char *message);
+static void sequence_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called);
/* streaming callbacks */
static void stream_start_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
@@ -90,6 +94,10 @@ static void stream_change_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn
static void stream_message_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
XLogRecPtr message_lsn, bool transactional,
const char *prefix, Size message_size, const char *message);
+static void stream_sequence_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called);
static void stream_truncate_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
int nrelations, Relation relations[], ReorderBufferChange *change);
@@ -218,6 +226,7 @@ StartupDecodingContext(List *output_plugin_options,
ctx->reorder->apply_truncate = truncate_cb_wrapper;
ctx->reorder->commit = commit_cb_wrapper;
ctx->reorder->message = message_cb_wrapper;
+ ctx->reorder->sequence = sequence_cb_wrapper;
/*
* To support streaming, we require start/stop/abort/commit/change
@@ -234,6 +243,7 @@ StartupDecodingContext(List *output_plugin_options,
(ctx->callbacks.stream_commit_cb != NULL) ||
(ctx->callbacks.stream_change_cb != NULL) ||
(ctx->callbacks.stream_message_cb != NULL) ||
+ (ctx->callbacks.stream_sequence_cb != NULL) ||
(ctx->callbacks.stream_truncate_cb != NULL);
/*
@@ -251,6 +261,7 @@ StartupDecodingContext(List *output_plugin_options,
ctx->reorder->stream_commit = stream_commit_cb_wrapper;
ctx->reorder->stream_change = stream_change_cb_wrapper;
ctx->reorder->stream_message = stream_message_cb_wrapper;
+ ctx->reorder->stream_sequence = stream_sequence_cb_wrapper;
ctx->reorder->stream_truncate = stream_truncate_cb_wrapper;
@@ -1203,6 +1214,42 @@ message_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
error_context_stack = errcallback.previous;
}
+static void
+sequence_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel, bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ LogicalDecodingContext *ctx = cache->private_data;
+ LogicalErrorCallbackState state;
+ ErrorContextCallback errcallback;
+
+ Assert(!ctx->fast_forward);
+
+ if (ctx->callbacks.sequence_cb == NULL)
+ return;
+
+ /* Push callback + info on the error context stack */
+ state.ctx = ctx;
+ state.callback_name = "sequence";
+ state.report_location = sequence_lsn;
+ errcallback.callback = output_plugin_error_callback;
+ errcallback.arg = (void *) &state;
+ errcallback.previous = error_context_stack;
+ error_context_stack = &errcallback;
+
+ /* set output state */
+ ctx->accept_writes = true;
+ ctx->write_xid = txn != NULL ? txn->xid : InvalidTransactionId;
+ ctx->write_location = sequence_lsn;
+
+ /* do the actual work: call callback */
+ ctx->callbacks.sequence_cb(ctx, txn, sequence_lsn, rel, transactional,
+ last_value, log_cnt, is_called);
+
+ /* Pop the error context stack */
+ error_context_stack = errcallback.previous;
+}
+
static void
stream_start_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
XLogRecPtr first_lsn)
@@ -1508,6 +1555,47 @@ stream_message_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
error_context_stack = errcallback.previous;
}
+static void
+stream_sequence_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ LogicalDecodingContext *ctx = cache->private_data;
+ LogicalErrorCallbackState state;
+ ErrorContextCallback errcallback;
+
+ Assert(!ctx->fast_forward);
+
+ /* We're only supposed to call this when streaming is supported. */
+ Assert(ctx->streaming);
+
+ /* this callback is optional */
+ if (ctx->callbacks.stream_sequence_cb == NULL)
+ return;
+
+ /* Push callback + info on the error context stack */
+ state.ctx = ctx;
+ state.callback_name = "stream_sequence";
+ state.report_location = sequence_lsn;
+ errcallback.callback = output_plugin_error_callback;
+ errcallback.arg = (void *) &state;
+ errcallback.previous = error_context_stack;
+ error_context_stack = &errcallback;
+
+ /* set output state */
+ ctx->accept_writes = true;
+ ctx->write_xid = txn != NULL ? txn->xid : InvalidTransactionId;
+ ctx->write_location = sequence_lsn;
+
+ /* do the actual work: call callback */
+ ctx->callbacks.stream_sequence_cb(ctx, txn, sequence_lsn, rel, transactional,
+ last_value, log_cnt, is_called);
+
+ /* Pop the error context stack */
+ error_context_stack = errcallback.previous;
+}
+
static void
stream_truncate_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
int nrelations, Relation relations[],
diff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c
index 19b2ba2000c..50fd9dc901d 100644
--- a/src/backend/replication/logical/reorderbuffer.c
+++ b/src/backend/replication/logical/reorderbuffer.c
@@ -77,6 +77,35 @@
* a bit more memory to the oldest subtransactions, because it's likely
* they are the source for the next sequence of changes.
*
+ * When decoding sequences, we differentiate between sequences created in
+ * a (still-running) transaction, and sequences created in other (already
+ * committed) transactions. Changes for sequences created in the same
+ * top-level transaction are treated as "transactional" i.e. just like
+ * any other change from that transaction (and discarded in case of a
+ * rollback). Changes for sequences created earlier are treated as
+ * non-transactional - they are processed immediately, as if performed
+ * outside any transaction (and thus not rolled back).
+ *
+ * This mixed behavior is necessary - sequences are non-transactional
+ * (e.g. ROLLBACK does not undo the sequence increments). But for new
+ * sequences, we need to handle them in a transactional way, because if
+ * we ever get some DDL support, the sequence won't exist until the
+ * transaction gets applied. So we need to ensure the increments don't
+ * happen until the sequence gets created.
+ *
+ * To differentiate which sequences are "old" and which were created
+ * in a still-running transaction, we track sequences created in running
+ * transactions in a hash table. Each sequence is identified by its
+ * relfilenode, and at transaction commit we discard all sequences
+ * created in that particular transaction. For each sequence we track
+ * the XID of the (sub)transaction that created it, and we clean up the
+ * sequences for each subtransaction when it completes (commit/rollback).
+ *
+ * We don't use the XID to check if it's the same top-level transaction.
+ * It's enough to know it was created in an in-progress transaction,
+ * and we know it must be the current one because otherwise it wouldn't
+ * see the sequence object.
+ *
* -------------------------------------------------------------------------
*/
#include "postgres.h"
@@ -91,6 +120,7 @@
#include "access/xact.h"
#include "access/xlog_internal.h"
#include "catalog/catalog.h"
+#include "commands/sequence.h"
#include "lib/binaryheap.h"
#include "miscadmin.h"
#include "pgstat.h"
@@ -116,6 +146,13 @@ typedef struct ReorderBufferTXNByIdEnt
ReorderBufferTXN *txn;
} ReorderBufferTXNByIdEnt;
+/* entry for hash table we use to track sequences created in running xacts */
+typedef struct ReorderBufferSequenceEnt
+{
+ RelFileNode rnode;
+ TransactionId xid;
+} ReorderBufferSequenceEnt;
+
/* data structures for (relfilenode, ctid) => (cmin, cmax) mapping */
typedef struct ReorderBufferTupleCidKey
{
@@ -339,6 +376,14 @@ ReorderBufferAllocate(void)
buffer->by_txn = hash_create("ReorderBufferByXid", 1000, &hash_ctl,
HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+ /* hash table of sequences, mapping relfilenode to XID of transaction */
+ hash_ctl.keysize = sizeof(RelFileNode);
+ hash_ctl.entrysize = sizeof(ReorderBufferSequenceEnt);
+ hash_ctl.hcxt = buffer->context;
+
+ buffer->sequences = hash_create("ReorderBufferSequenceHash", 1000, &hash_ctl,
+ HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+
buffer->by_txn_last_xid = InvalidTransactionId;
buffer->by_txn_last_txn = NULL;
@@ -525,6 +570,13 @@ ReorderBufferReturnChange(ReorderBuffer *rb, ReorderBufferChange *change,
change->data.truncate.relids = NULL;
}
break;
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ if (change->data.sequence.tuple)
+ {
+ ReorderBufferReturnTupleBuf(rb, change->data.sequence.tuple);
+ change->data.sequence.tuple = NULL;
+ }
+ break;
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT:
case REORDER_BUFFER_CHANGE_INTERNAL_COMMAND_ID:
@@ -859,6 +911,230 @@ ReorderBufferQueueMessage(ReorderBuffer *rb, TransactionId xid,
}
}
+/*
+ * Treat the sequence increment as transactional?
+ *
+ * The hash table tracks all sequences created in in-progress transactions,
+ * so we simply do a lookup (the sequence is identified by relfilenode). If
+ * we find a match, the increment should be handled as transactional.
+ */
+bool
+ReorderBufferSequenceIsTransactional(ReorderBuffer *rb,
+ RelFileNode rnode, bool created)
+{
+ bool found = false;
+
+ if (created)
+ return true;
+
+ hash_search(rb->sequences,
+ (void *) &rnode,
+ HASH_FIND,
+ &found);
+
+ return found;
+}
+
+/*
+ * Cleanup sequences created in in-progress transactions.
+ *
+ * There's no way to search by XID, so we simply do a seqscan of all
+ * the entries in the hash table. Hopefully there are only a couple
+ * entries in most cases - people generally don't create many new
+ * sequences over and over.
+ */
+static void
+ReorderBufferSequenceCleanup(ReorderBuffer *rb, TransactionId xid)
+{
+ HASH_SEQ_STATUS scan_status;
+ ReorderBufferSequenceEnt *ent;
+
+ hash_seq_init(&scan_status, rb->sequences);
+ while ((ent = (ReorderBufferSequenceEnt *) hash_seq_search(&scan_status)) != NULL)
+ {
+ /* skip sequences not from this transaction */
+ if (ent->xid != xid)
+ continue;
+
+ (void) hash_search(rb->sequences,
+ (void *) &(ent->rnode),
+ HASH_REMOVE, NULL);
+ }
+}
+
+/*
+ * A transactional sequence increment is queued to be processed upon commit
+ * and a non-transactional increment gets processed immediately.
+ *
+ * A sequence update may be either transactional or non-transactional. When
+ * created in a running transaction, treat it as transactional and queue
+ * the change in it. Otherwise treat it as non-transactional, so that we
+ * don't forget the increment in case of a rollback.
+ */
+void
+ReorderBufferQueueSequence(ReorderBuffer *rb, TransactionId xid,
+ Snapshot snapshot, XLogRecPtr lsn, RepOriginId origin_id,
+ RelFileNode rnode, bool transactional, bool created,
+ ReorderBufferTupleBuf *tuplebuf)
+{
+ /*
+ * Change needs to be handled as transactional, because the sequence was
+ * created in a transaction that is still running. In that case all the
+ * changes need to be queued in that transaction, we must not send them
+ * to the downstream until the transaction commits.
+ *
+ * There's a bit of trouble with subtransactions - we can't queue it
+ * into the subxact, because it might be rolled back and we'd lose the
+ * increment. We need to queue it into the same (sub)xact that created
+ * the sequence, which is why we track the XID in the hash table.
+ */
+ if (transactional)
+ {
+ MemoryContext oldcontext;
+ ReorderBufferChange *change;
+
+ /* lookup sequence by relfilenode */
+ ReorderBufferSequenceEnt *ent;
+ bool found;
+
+ /* transactional changes require a transaction */
+ Assert(xid != InvalidTransactionId);
+
+ /* look up the sequence (or enter a new entry, if this is the create) */
+ ent = hash_search(rb->sequences,
+ (void *) &rnode,
+ created ? HASH_ENTER : HASH_FIND,
+ &found);
+
+ /*
+ * If this is the "create" increment, we must not have found any
+ * pre-existing entry in the hash table (i.e. there must not be
+ * any conflicting sequence).
+ */
+ Assert(!(created && found));
+
+ /* But we must have either created or found an existing entry. */
+ Assert(created || found);
+
+ /*
+ * When creating the sequence, remember the XID of the transaction
+ * that created it.
+ */
+ if (created)
+ ent->xid = xid;
+
+ /* XXX Maybe check that we're still in the same top-level xact? */
+
+ /* OK, allocate and queue the change */
+ oldcontext = MemoryContextSwitchTo(rb->context);
+
+ change = ReorderBufferGetChange(rb);
+
+ change->action = REORDER_BUFFER_CHANGE_SEQUENCE;
+ change->origin_id = origin_id;
+
+ memcpy(&change->data.sequence.relnode, &rnode, sizeof(RelFileNode));
+
+ change->data.sequence.tuple = tuplebuf;
+
+ /* add it to the same subxact that created the sequence */
+ ReorderBufferQueueChange(rb, ent->xid, lsn, change, false);
+
+ MemoryContextSwitchTo(oldcontext);
+ }
+ else
+ {
+ /*
+ * This increment is for a sequence that was not created in any
+ * running transaction, so we treat it as non-transactional and
+ * just send it to the output plugin directly.
+ */
+ ReorderBufferTXN *txn = NULL;
+ volatile Snapshot snapshot_now = snapshot;
+ bool using_subtxn;
+
+#ifdef USE_ASSERT_CHECKING
+ /* All "creates" have to be handled as transactional. */
+ Assert(!created);
+
+ /* Make sure the sequence is not in the hash table. */
+ {
+ bool found;
+ hash_search(rb->sequences,
+ (void *) &rnode,
+ HASH_FIND, &found);
+ Assert(!found);
+ }
+#endif
+
+ if (xid != InvalidTransactionId)
+ txn = ReorderBufferTXNByXid(rb, xid, true, NULL, lsn, true);
+
+ /* setup snapshot to allow catalog access */
+ SetupHistoricSnapshot(snapshot_now, NULL);
+
+ /*
+ * Decoding needs access to syscaches et al., which in turn use
+ * heavyweight locks and such. Thus we need to have enough state around to
+ * keep track of those. The easiest way is to simply use a transaction
+ * internally. That also allows us to easily enforce that nothing writes
+ * to the database by checking for xid assignments.
+ *
+ * When we're called via the SQL SRF there's already a transaction
+ * started, so start an explicit subtransaction there.
+ */
+ using_subtxn = IsTransactionOrTransactionBlock();
+
+ PG_TRY();
+ {
+ Relation relation;
+ HeapTuple tuple;
+ Form_pg_sequence_data seq;
+ Oid reloid;
+
+ if (using_subtxn)
+ BeginInternalSubTransaction("sequence");
+ else
+ StartTransactionCommand();
+
+ reloid = RelidByRelfilenode(rnode.spcNode, rnode.relNode);
+
+ if (reloid == InvalidOid)
+ elog(ERROR, "could not map filenode \"%s\" to relation OID",
+ relpathperm(rnode,
+ MAIN_FORKNUM));
+
+ relation = RelationIdGetRelation(reloid);
+ tuple = &tuplebuf->tuple;
+ seq = (Form_pg_sequence_data) GETSTRUCT(tuple);
+
+ rb->sequence(rb, txn, lsn, relation, transactional,
+ seq->last_value, seq->log_cnt, seq->is_called);
+
+ RelationClose(relation);
+
+ TeardownHistoricSnapshot(false);
+
+ AbortCurrentTransaction();
+
+ if (using_subtxn)
+ RollbackAndReleaseCurrentSubTransaction();
+ }
+ PG_CATCH();
+ {
+ TeardownHistoricSnapshot(true);
+
+ AbortCurrentTransaction();
+
+ if (using_subtxn)
+ RollbackAndReleaseCurrentSubTransaction();
+
+ PG_RE_THROW();
+ }
+ PG_END_TRY();
+ }
+}
+
/*
* AssertTXNLsnOrder
* Verify LSN ordering of transaction lists in the reorderbuffer
@@ -1535,6 +1811,9 @@ ReorderBufferCleanupTXN(ReorderBuffer *rb, ReorderBufferTXN *txn)
&found);
Assert(found);
+ /* Remove sequences created in this transaction (if any). */
+ ReorderBufferSequenceCleanup(rb, txn->xid);
+
/* remove entries spilled to disk */
if (rbtxn_is_serialized(txn))
ReorderBufferRestoreCleanup(rb, txn);
@@ -1950,6 +2229,29 @@ ReorderBufferApplyMessage(ReorderBuffer *rb, ReorderBufferTXN *txn,
change->data.msg.message);
}
+/*
+ * Helper function for ReorderBufferProcessTXN for applying sequences.
+ */
+static inline void
+ReorderBufferApplySequence(ReorderBuffer *rb, ReorderBufferTXN *txn,
+ Relation relation, ReorderBufferChange *change,
+ bool streaming)
+{
+ HeapTuple tuple;
+ Form_pg_sequence_data seq;
+
+ tuple = &change->data.sequence.tuple->tuple;
+ seq = (Form_pg_sequence_data) GETSTRUCT(tuple);
+
+ /* Only ever called from ReorderBufferProcessTXN, so transactional. */
+ if (streaming)
+ rb->stream_sequence(rb, txn, change->lsn, relation, true,
+ seq->last_value, seq->log_cnt, seq->is_called);
+ else
+ rb->sequence(rb, txn, change->lsn, relation, true,
+ seq->last_value, seq->log_cnt, seq->is_called);
+}
+
/*
* Function to store the command id and snapshot at the end of the current
* stream so that we can reuse the same while sending the next stream.
@@ -2392,6 +2694,31 @@ ReorderBufferProcessTXN(ReorderBuffer *rb, ReorderBufferTXN *txn,
case REORDER_BUFFER_CHANGE_INTERNAL_TUPLECID:
elog(ERROR, "tuplecid value in changequeue");
break;
+
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ Assert(snapshot_now);
+
+ reloid = RelidByRelfilenode(change->data.sequence.relnode.spcNode,
+ change->data.sequence.relnode.relNode);
+
+ if (reloid == InvalidOid)
+ elog(ERROR, "could not map filenode \"%s\" to relation OID",
+ relpathperm(change->data.sequence.relnode,
+ MAIN_FORKNUM));
+
+ relation = RelationIdGetRelation(reloid);
+
+ if (!RelationIsValid(relation))
+ elog(ERROR, "could not open relation with OID %u (for filenode \"%s\")",
+ reloid,
+ relpathperm(change->data.sequence.relnode,
+ MAIN_FORKNUM));
+
+ if (RelationIsLogicallyLogged(relation))
+ ReorderBufferApplySequence(rb, txn, relation, change, streaming);
+
+ RelationClose(relation);
+ break;
}
}
@@ -3776,6 +4103,39 @@ ReorderBufferSerializeChange(ReorderBuffer *rb, ReorderBufferTXN *txn,
memcpy(data, change->data.truncate.relids, size);
data += size;
+ break;
+ }
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ {
+ char *data;
+ ReorderBufferTupleBuf *tup;
+ Size len = 0;
+
+ tup = change->data.sequence.tuple;
+
+ if (tup)
+ {
+ sz += sizeof(HeapTupleData);
+ len = tup->tuple.t_len;
+ sz += len;
+ }
+
+ /* make sure we have enough space */
+ ReorderBufferSerializeReserve(rb, sz);
+
+ data = ((char *) rb->outbuf) + sizeof(ReorderBufferDiskChange);
+ /* might have been reallocated above */
+ ondisk = (ReorderBufferDiskChange *) rb->outbuf;
+
+ if (len)
+ {
+ memcpy(data, &tup->tuple, sizeof(HeapTupleData));
+ data += sizeof(HeapTupleData);
+
+ memcpy(data, tup->tuple.t_data, len);
+ data += len;
+ }
+
break;
}
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
@@ -4040,6 +4400,22 @@ ReorderBufferChangeSize(ReorderBufferChange *change)
{
sz += sizeof(Oid) * change->data.truncate.nrelids;
+ break;
+ }
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ {
+ ReorderBufferTupleBuf *tup;
+ Size len = 0;
+
+ tup = change->data.sequence.tuple;
+
+ if (tup)
+ {
+ sz += sizeof(HeapTupleData);
+ len = tup->tuple.t_len;
+ sz += len;
+ }
+
break;
}
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
@@ -4341,6 +4717,30 @@ ReorderBufferRestoreChange(ReorderBuffer *rb, ReorderBufferTXN *txn,
break;
}
+
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ if (change->data.sequence.tuple)
+ {
+ uint32 tuplelen = ((HeapTuple) data)->t_len;
+
+ change->data.sequence.tuple =
+ ReorderBufferGetTupleBuf(rb, tuplelen - SizeofHeapTupleHeader);
+
+ /* restore ->tuple */
+ memcpy(&change->data.sequence.tuple->tuple, data,
+ sizeof(HeapTupleData));
+ data += sizeof(HeapTupleData);
+
+ /* reset t_data pointer into the new tuplebuf */
+ change->data.sequence.tuple->tuple.t_data =
+ ReorderBufferTupleBufData(change->data.sequence.tuple);
+
+ /* restore tuple data itself */
+ memcpy(change->data.sequence.tuple->tuple.t_data, data, tuplelen);
+ data += tuplelen;
+ }
+ break;
+
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT:
case REORDER_BUFFER_CHANGE_INTERNAL_COMMAND_ID:
diff --git a/src/include/access/rmgrlist.h b/src/include/access/rmgrlist.h
index 9a74721c97c..cf8b6d48193 100644
--- a/src/include/access/rmgrlist.h
+++ b/src/include/access/rmgrlist.h
@@ -40,7 +40,7 @@ PG_RMGR(RM_BTREE_ID, "Btree", btree_redo, btree_desc, btree_identify, btree_xlog
PG_RMGR(RM_HASH_ID, "Hash", hash_redo, hash_desc, hash_identify, NULL, NULL, hash_mask, NULL)
PG_RMGR(RM_GIN_ID, "Gin", gin_redo, gin_desc, gin_identify, gin_xlog_startup, gin_xlog_cleanup, gin_mask, NULL)
PG_RMGR(RM_GIST_ID, "Gist", gist_redo, gist_desc, gist_identify, gist_xlog_startup, gist_xlog_cleanup, gist_mask, NULL)
-PG_RMGR(RM_SEQ_ID, "Sequence", seq_redo, seq_desc, seq_identify, NULL, NULL, seq_mask, NULL)
+PG_RMGR(RM_SEQ_ID, "Sequence", seq_redo, seq_desc, seq_identify, NULL, NULL, seq_mask, sequence_decode)
PG_RMGR(RM_SPGIST_ID, "SPGist", spg_redo, spg_desc, spg_identify, spg_xlog_startup, spg_xlog_cleanup, spg_mask, NULL)
PG_RMGR(RM_BRIN_ID, "BRIN", brin_redo, brin_desc, brin_identify, NULL, NULL, brin_mask, NULL)
PG_RMGR(RM_COMMIT_TS_ID, "CommitTs", commit_ts_redo, commit_ts_desc, commit_ts_identify, NULL, NULL, NULL, NULL)
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index f7542f2bccb..b9015a6f710 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -48,6 +48,7 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
typedef struct xl_seq_rec
{
RelFileNode node;
+ bool created; /* is this a CREATE SEQUENCE */
/* SEQUENCE TUPLE DATA FOLLOWS AT THE END */
} xl_seq_rec;
diff --git a/src/include/replication/decode.h b/src/include/replication/decode.h
index a33c2a718a7..8e07bb7409a 100644
--- a/src/include/replication/decode.h
+++ b/src/include/replication/decode.h
@@ -27,6 +27,7 @@ extern void heap2_decode(LogicalDecodingContext *ctx, XLogRecordBuffer *buf);
extern void xact_decode(LogicalDecodingContext *ctx, XLogRecordBuffer *buf);
extern void standby_decode(LogicalDecodingContext *ctx, XLogRecordBuffer *buf);
extern void logicalmsg_decode(LogicalDecodingContext *ctx, XLogRecordBuffer *buf);
+extern void sequence_decode(LogicalDecodingContext *ctx, XLogRecordBuffer *buf);
extern void LogicalDecodingProcessRecord(LogicalDecodingContext *ctx,
XLogReaderState *record);
diff --git a/src/include/replication/output_plugin.h b/src/include/replication/output_plugin.h
index 41157fda7cc..a16bebf76ca 100644
--- a/src/include/replication/output_plugin.h
+++ b/src/include/replication/output_plugin.h
@@ -88,6 +88,18 @@ typedef void (*LogicalDecodeMessageCB) (struct LogicalDecodingContext *ctx,
Size message_size,
const char *message);
+/*
+ * Called for logical decoding of sequence changes.
+ */
+typedef void (*LogicalDecodeSequenceCB) (struct LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional,
+ int64 last_value,
+ int64 log_cnt,
+ bool is_called);
+
/*
* Filter changes by origin.
*/
@@ -199,6 +211,19 @@ typedef void (*LogicalDecodeStreamMessageCB) (struct LogicalDecodingContext *ctx
Size message_size,
const char *message);
+/*
+ * Called for the streaming generic logical decoding sequences from in-progress
+ * transactions.
+ */
+typedef void (*LogicalDecodeStreamSequenceCB) (struct LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional,
+ int64 last_value,
+ int64 log_cnt,
+ bool is_called);
+
/*
* Callback for streaming truncates from in-progress transactions.
*/
@@ -219,6 +244,7 @@ typedef struct OutputPluginCallbacks
LogicalDecodeTruncateCB truncate_cb;
LogicalDecodeCommitCB commit_cb;
LogicalDecodeMessageCB message_cb;
+ LogicalDecodeSequenceCB sequence_cb;
LogicalDecodeFilterByOriginCB filter_by_origin_cb;
LogicalDecodeShutdownCB shutdown_cb;
@@ -237,6 +263,7 @@ typedef struct OutputPluginCallbacks
LogicalDecodeStreamCommitCB stream_commit_cb;
LogicalDecodeStreamChangeCB stream_change_cb;
LogicalDecodeStreamMessageCB stream_message_cb;
+ LogicalDecodeStreamSequenceCB stream_sequence_cb;
LogicalDecodeStreamTruncateCB stream_truncate_cb;
} OutputPluginCallbacks;
diff --git a/src/include/replication/reorderbuffer.h b/src/include/replication/reorderbuffer.h
index aa0a73382f6..859424bbd9b 100644
--- a/src/include/replication/reorderbuffer.h
+++ b/src/include/replication/reorderbuffer.h
@@ -64,7 +64,8 @@ enum ReorderBufferChangeType
REORDER_BUFFER_CHANGE_INTERNAL_SPEC_INSERT,
REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM,
REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT,
- REORDER_BUFFER_CHANGE_TRUNCATE
+ REORDER_BUFFER_CHANGE_TRUNCATE,
+ REORDER_BUFFER_CHANGE_SEQUENCE
};
/* forward declaration */
@@ -158,6 +159,13 @@ typedef struct ReorderBufferChange
uint32 ninvalidations; /* Number of messages */
SharedInvalidationMessage *invalidations; /* invalidation message */
} inval;
+
+ /* Context data for Sequence changes */
+ struct
+ {
+ RelFileNode relnode;
+ ReorderBufferTupleBuf *tuple;
+ } sequence;
} data;
/*
@@ -430,6 +438,15 @@ typedef void (*ReorderBufferMessageCB) (ReorderBuffer *rb,
const char *prefix, Size sz,
const char *message);
+/* sequence callback signature */
+typedef void (*ReorderBufferSequenceCB) (ReorderBuffer *rb,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional,
+ int64 last_value, int64 log_cnt,
+ bool is_called);
+
/* begin prepare callback signature */
typedef void (*ReorderBufferBeginPrepareCB) (ReorderBuffer *rb,
ReorderBufferTXN *txn);
@@ -496,6 +513,15 @@ typedef void (*ReorderBufferStreamMessageCB) (
const char *prefix, Size sz,
const char *message);
+/* stream sequence callback signature */
+typedef void (*ReorderBufferStreamSequenceCB) (ReorderBuffer *rb,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional,
+ int64 last_value, int64 log_cnt,
+ bool is_called);
+
/* stream truncate callback signature */
typedef void (*ReorderBufferStreamTruncateCB) (
ReorderBuffer *rb,
@@ -511,6 +537,12 @@ struct ReorderBuffer
*/
HTAB *by_txn;
+ /*
+ * relfilenode => XID lookup table for sequences created in a transaction
+ * (also includes altered sequences, which get a new relfilenode assigned)
+ */
+ HTAB *sequences;
+
/*
* Transactions that could be a toplevel xact, ordered by LSN of the first
* record bearing that xid.
@@ -541,6 +573,7 @@ struct ReorderBuffer
ReorderBufferApplyTruncateCB apply_truncate;
ReorderBufferCommitCB commit;
ReorderBufferMessageCB message;
+ ReorderBufferSequenceCB sequence;
/*
* Callbacks to be called when streaming a transaction at prepare time.
@@ -560,6 +593,7 @@ struct ReorderBuffer
ReorderBufferStreamCommitCB stream_commit;
ReorderBufferStreamChangeCB stream_change;
ReorderBufferStreamMessageCB stream_message;
+ ReorderBufferStreamSequenceCB stream_sequence;
ReorderBufferStreamTruncateCB stream_truncate;
/*
@@ -635,6 +669,10 @@ void ReorderBufferQueueChange(ReorderBuffer *, TransactionId,
void ReorderBufferQueueMessage(ReorderBuffer *, TransactionId, Snapshot snapshot, XLogRecPtr lsn,
bool transactional, const char *prefix,
Size message_size, const char *message);
+void ReorderBufferQueueSequence(ReorderBuffer *rb, TransactionId xid,
+ Snapshot snapshot, XLogRecPtr lsn, RepOriginId origin_id,
+ RelFileNode rnode, bool transactional, bool created,
+ ReorderBufferTupleBuf *tuplebuf);
void ReorderBufferCommit(ReorderBuffer *, TransactionId,
XLogRecPtr commit_lsn, XLogRecPtr end_lsn,
TimestampTz commit_time, RepOriginId origin_id, XLogRecPtr origin_lsn);
@@ -682,4 +720,7 @@ void ReorderBufferSetRestartPoint(ReorderBuffer *, XLogRecPtr ptr);
void StartupReorderBuffer(void);
+bool ReorderBufferSequenceIsTransactional(ReorderBuffer *rb,
+ RelFileNode rnode, bool created);
+
#endif
--
2.31.1
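To make the new output plugin API a bit more concrete, here is a minimal sketch of how a plugin might hook up the sequence callback added by 0001. This is illustration only, not part of the attached patches - the file and function names are made up, and the required begin/change/commit callbacks are omitted:

/* minimal_seq_plugin.c - illustrative sketch, assumes the 0001 patch is applied */
#include "postgres.h"

#include "fmgr.h"
#include "lib/stringinfo.h"
#include "replication/logical.h"
#include "replication/output_plugin.h"
#include "utils/rel.h"

PG_MODULE_MAGIC;

extern void _PG_output_plugin_init(OutputPluginCallbacks *cb);

/* print one text line for every decoded sequence change */
static void
my_sequence_cb(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
			   XLogRecPtr sequence_lsn, Relation rel, bool transactional,
			   int64 last_value, int64 log_cnt, bool is_called)
{
	OutputPluginPrepareWrite(ctx, true);
	appendStringInfo(ctx->out,
					 "sequence %s: transactional:%d last_value:" INT64_FORMAT
					 " log_cnt:" INT64_FORMAT " is_called:%d",
					 RelationGetRelationName(rel), (int) transactional,
					 last_value, log_cnt, (int) is_called);
	OutputPluginWrite(ctx, true);
}

void
_PG_output_plugin_init(OutputPluginCallbacks *cb)
{
	/* begin_cb, change_cb, commit_cb etc. would be registered here as usual */
	cb->sequence_cb = my_sequence_cb;
}

test_decoding in 0002 does essentially this, just with a schema-qualified name and the skip-sequences option on top.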
0002-Add-support-for-decoding-sequences-to-test_-20220126.patch
From 0d31449dbd7464d89a0f5b103d5f1a0a3ac0556c Mon Sep 17 00:00:00 2001
From: Tomas Vondra <tomas.vondra@postgresql.org>
Date: Fri, 24 Sep 2021 00:42:04 +0200
Subject: [PATCH 2/3] Add support for decoding sequences to test_decoding
---
contrib/test_decoding/Makefile | 3 +-
contrib/test_decoding/expected/sequence.out | 327 ++++++++++++++++++++
contrib/test_decoding/sql/sequence.sql | 119 +++++++
contrib/test_decoding/test_decoding.c | 69 +++++
4 files changed, 517 insertions(+), 1 deletion(-)
create mode 100644 contrib/test_decoding/expected/sequence.out
create mode 100644 contrib/test_decoding/sql/sequence.sql
diff --git a/contrib/test_decoding/Makefile b/contrib/test_decoding/Makefile
index 9a31e0b8795..56ddc3abaeb 100644
--- a/contrib/test_decoding/Makefile
+++ b/contrib/test_decoding/Makefile
@@ -5,7 +5,8 @@ PGFILEDESC = "test_decoding - example of a logical decoding output plugin"
REGRESS = ddl xact rewrite toast permissions decoding_in_xact \
decoding_into_rel binary prepared replorigin time messages \
- spill slot truncate stream stats twophase twophase_stream
+ spill slot truncate stream stats twophase twophase_stream \
+ sequence
ISOLATION = mxact delayed_startup ondisk_startup concurrent_ddl_dml \
oldest_xmin snapshot_transfer subxact_without_top concurrent_stream \
twophase_snapshot
diff --git a/contrib/test_decoding/expected/sequence.out b/contrib/test_decoding/expected/sequence.out
new file mode 100644
index 00000000000..c3c2a18a35b
--- /dev/null
+++ b/contrib/test_decoding/expected/sequence.out
@@ -0,0 +1,327 @@
+-- predictability
+SET synchronous_commit = on;
+SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'test_decoding');
+ ?column?
+----------
+ init
+(1 row)
+
+CREATE SEQUENCE test_sequence;
+-- test the sequence changes by several nextval() calls
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 1
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 2
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 4
+(1 row)
+
+-- test the sequence changes by several ALTER commands
+ALTER SEQUENCE test_sequence INCREMENT BY 10;
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 14
+(1 row)
+
+ALTER SEQUENCE test_sequence START WITH 3000;
+ALTER SEQUENCE test_sequence MAXVALUE 10000;
+ALTER SEQUENCE test_sequence RESTART WITH 4000;
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 4000
+(1 row)
+
+-- test the sequence changes by several setval() calls
+SELECT setval('test_sequence', 3500);
+ setval
+--------
+ 3500
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3510
+(1 row)
+
+SELECT setval('test_sequence', 3500, true);
+ setval
+--------
+ 3500
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3510
+(1 row)
+
+SELECT setval('test_sequence', 3500, false);
+ setval
+--------
+ 3500
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3500
+(1 row)
+
+-- show results and drop sequence
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+ data
+--------------------------------------------------------------------------------------
+ BEGIN
+ sequence public.test_sequence: transactional:1 last_value:1 log_cnt:0 is_called:0
+ COMMIT
+ sequence public.test_sequence: transactional:0 last_value:33 log_cnt:0 is_called:1
+ BEGIN
+ sequence public.test_sequence: transactional:1 last_value:4 log_cnt:0 is_called:1
+ COMMIT
+ sequence public.test_sequence: transactional:0 last_value:334 log_cnt:0 is_called:1
+ BEGIN
+ sequence public.test_sequence: transactional:1 last_value:14 log_cnt:32 is_called:1
+ COMMIT
+ BEGIN
+ sequence public.test_sequence: transactional:1 last_value:14 log_cnt:0 is_called:1
+ COMMIT
+ BEGIN
+ sequence public.test_sequence: transactional:1 last_value:4000 log_cnt:0 is_called:0
+ COMMIT
+ sequence public.test_sequence: transactional:0 last_value:4320 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:0 last_value:3500 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:0 last_value:3830 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:0 last_value:3500 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:0 last_value:3830 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:0 last_value:3500 log_cnt:0 is_called:0
+ sequence public.test_sequence: transactional:0 last_value:3820 log_cnt:0 is_called:1
+(24 rows)
+
+DROP SEQUENCE test_sequence;
+-- rollback on sequence creation and update
+BEGIN;
+CREATE SEQUENCE test_sequence;
+CREATE TABLE test_table (a INT);
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 1
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 2
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3
+(1 row)
+
+SELECT setval('test_sequence', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3001
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3002
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3003
+(1 row)
+
+ALTER SEQUENCE test_sequence RESTART WITH 6000;
+INSERT INTO test_table VALUES( (SELECT nextval('test_sequence')) );
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 6001
+(1 row)
+
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '0');
+ data
+------
+(0 rows)
+
+-- rollback on table creation with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '0');
+ data
+------
+(0 rows)
+
+-- rollback on table with serial column
+CREATE TABLE test_table (a SERIAL, b INT);
+BEGIN;
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+-- check table and sequence values after rollback
+SELECT * from test_table_a_seq;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+ 3003 | 30 | t
+(1 row)
+
+SELECT nextval('test_table_a_seq');
+ nextval
+---------
+ 3004
+(1 row)
+
+DROP TABLE test_table;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+ data
+-----------------------------------------------------------------------------------------
+ BEGIN
+ sequence public.test_table_a_seq: transactional:1 last_value:1 log_cnt:0 is_called:0
+ COMMIT
+ sequence public.test_table_a_seq: transactional:0 last_value:33 log_cnt:0 is_called:1
+ sequence public.test_table_a_seq: transactional:0 last_value:3000 log_cnt:0 is_called:1
+ sequence public.test_table_a_seq: transactional:0 last_value:3033 log_cnt:0 is_called:1
+ BEGIN
+ COMMIT
+(8 rows)
+
+-- savepoint test on table with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+SAVEPOINT a;
+INSERT INTO test_table (b) VALUES (300);
+ROLLBACK TO SAVEPOINT a;
+DROP TABLE test_table;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+ data
+---------------------------------------------------------------------------------------
+ BEGIN
+ sequence public.test_table_a_seq: transactional:1 last_value:1 log_cnt:0 is_called:0
+ sequence public.test_table_a_seq: transactional:1 last_value:33 log_cnt:0 is_called:1
+ table public.test_table: INSERT: a[integer]:1 b[integer]:100
+ table public.test_table: INSERT: a[integer]:2 b[integer]:200
+ COMMIT
+(6 rows)
+
+-- savepoint test on sequence
+BEGIN;
+CREATE SEQUENCE test_sequence;
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 1
+(1 row)
+
+SELECT setval('test_sequence', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3001
+(1 row)
+
+SAVEPOINT a;
+ALTER SEQUENCE test_sequence START WITH 7000;
+SELECT setval('test_sequence', 5000);
+ setval
+--------
+ 5000
+(1 row)
+
+ROLLBACK TO SAVEPOINT a;
+SELECT * FROM test_sequence;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+ 3001 | 32 | t
+(1 row)
+
+DROP SEQUENCE test_sequence;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+ data
+--------------------------------------------------------------------------------------
+ BEGIN
+ sequence public.test_sequence: transactional:1 last_value:1 log_cnt:0 is_called:0
+ sequence public.test_sequence: transactional:1 last_value:33 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:1 last_value:3000 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:1 last_value:3033 log_cnt:0 is_called:1
+ COMMIT
+(6 rows)
+
+SELECT pg_drop_replication_slot('regression_slot');
+ pg_drop_replication_slot
+--------------------------
+
+(1 row)
+
diff --git a/contrib/test_decoding/sql/sequence.sql b/contrib/test_decoding/sql/sequence.sql
new file mode 100644
index 00000000000..21c4b792224
--- /dev/null
+++ b/contrib/test_decoding/sql/sequence.sql
@@ -0,0 +1,119 @@
+-- predictability
+SET synchronous_commit = on;
+SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'test_decoding');
+
+CREATE SEQUENCE test_sequence;
+
+-- test the sequence changes by several nextval() calls
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+
+-- test the sequence changes by several ALTER commands
+ALTER SEQUENCE test_sequence INCREMENT BY 10;
+SELECT nextval('test_sequence');
+
+ALTER SEQUENCE test_sequence START WITH 3000;
+ALTER SEQUENCE test_sequence MAXVALUE 10000;
+ALTER SEQUENCE test_sequence RESTART WITH 4000;
+SELECT nextval('test_sequence');
+
+-- test the sequence changes by several setval() calls
+SELECT setval('test_sequence', 3500);
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3500, true);
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3500, false);
+SELECT nextval('test_sequence');
+
+-- show results and drop sequence
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+DROP SEQUENCE test_sequence;
+
+-- rollback on sequence creation and update
+BEGIN;
+CREATE SEQUENCE test_sequence;
+CREATE TABLE test_table (a INT);
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3000);
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+ALTER SEQUENCE test_sequence RESTART WITH 6000;
+INSERT INTO test_table VALUES( (SELECT nextval('test_sequence')) );
+SELECT nextval('test_sequence');
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '0');
+
+-- rollback on table creation with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '0');
+
+-- rollback on table with serial column
+CREATE TABLE test_table (a SERIAL, b INT);
+
+BEGIN;
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+
+-- check table and sequence values after rollback
+SELECT * from test_table_a_seq;
+SELECT nextval('test_table_a_seq');
+
+DROP TABLE test_table;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+
+-- savepoint test on table with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+SAVEPOINT a;
+INSERT INTO test_table (b) VALUES (300);
+ROLLBACK TO SAVEPOINT a;
+DROP TABLE test_table;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+
+-- savepoint test on sequence
+BEGIN;
+CREATE SEQUENCE test_sequence;
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3000);
+SELECT nextval('test_sequence');
+SAVEPOINT a;
+ALTER SEQUENCE test_sequence START WITH 7000;
+SELECT setval('test_sequence', 5000);
+ROLLBACK TO SAVEPOINT a;
+SELECT * FROM test_sequence;
+DROP SEQUENCE test_sequence;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+
+SELECT pg_drop_replication_slot('regression_slot');
diff --git a/contrib/test_decoding/test_decoding.c b/contrib/test_decoding/test_decoding.c
index 42fe91a2f9c..c25a4eb6a89 100644
--- a/contrib/test_decoding/test_decoding.c
+++ b/contrib/test_decoding/test_decoding.c
@@ -35,6 +35,7 @@ typedef struct
bool include_timestamp;
bool skip_empty_xacts;
bool only_local;
+ bool skip_sequences;
} TestDecodingData;
/*
@@ -76,6 +77,10 @@ static void pg_decode_message(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn, XLogRecPtr message_lsn,
bool transactional, const char *prefix,
Size sz, const char *message);
+static void pg_decode_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation rel, bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called);
static bool pg_decode_filter_prepare(LogicalDecodingContext *ctx,
TransactionId xid,
const char *gid);
@@ -116,6 +121,10 @@ static void pg_decode_stream_message(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn, XLogRecPtr message_lsn,
bool transactional, const char *prefix,
Size sz, const char *message);
+static void pg_decode_stream_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation rel, bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called);
static void pg_decode_stream_truncate(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn,
int nrelations, Relation relations[],
@@ -141,6 +150,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->filter_by_origin_cb = pg_decode_filter;
cb->shutdown_cb = pg_decode_shutdown;
cb->message_cb = pg_decode_message;
+ cb->sequence_cb = pg_decode_sequence;
cb->filter_prepare_cb = pg_decode_filter_prepare;
cb->begin_prepare_cb = pg_decode_begin_prepare_txn;
cb->prepare_cb = pg_decode_prepare_txn;
@@ -153,6 +163,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->stream_commit_cb = pg_decode_stream_commit;
cb->stream_change_cb = pg_decode_stream_change;
cb->stream_message_cb = pg_decode_stream_message;
+ cb->stream_sequence_cb = pg_decode_stream_sequence;
cb->stream_truncate_cb = pg_decode_stream_truncate;
}
@@ -175,6 +186,9 @@ pg_decode_startup(LogicalDecodingContext *ctx, OutputPluginOptions *opt,
data->skip_empty_xacts = false;
data->only_local = false;
+ /* skip sequences by default for backwards compatibility */
+ data->skip_sequences = true;
+
ctx->output_plugin_private = data;
opt->output_type = OUTPUT_PLUGIN_TEXTUAL_OUTPUT;
@@ -265,6 +279,17 @@ pg_decode_startup(LogicalDecodingContext *ctx, OutputPluginOptions *opt,
errmsg("could not parse value \"%s\" for parameter \"%s\"",
strVal(elem->arg), elem->defname)));
}
+ else if (strcmp(elem->defname, "skip-sequences") == 0)
+ {
+
+ if (elem->arg == NULL)
+ continue; /* true by default */
+ else if (!parse_bool(strVal(elem->arg), &data->skip_sequences))
+ ereport(ERROR,
+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ errmsg("could not parse value \"%s\" for parameter \"%s\"",
+ strVal(elem->arg), elem->defname)));
+ }
else
{
ereport(ERROR,
@@ -744,6 +769,28 @@ pg_decode_message(LogicalDecodingContext *ctx,
OutputPluginWrite(ctx, true);
}
+static void
+pg_decode_sequence(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ TestDecodingData *data = ctx->output_plugin_private;
+
+ /* do nothing if asked to skip sequences */
+ if (data->skip_sequences)
+ return;
+
+ OutputPluginPrepareWrite(ctx, true);
+ appendStringInfoString(ctx->out, "sequence ");
+ appendStringInfoString(ctx->out,
+ quote_qualified_identifier(get_namespace_name(get_rel_namespace(RelationGetRelid(rel))),
+ RelationGetRelationName(rel)));
+ appendStringInfo(ctx->out, ": transactional:%d last_value:" INT64_FORMAT " log_cnt:" INT64_FORMAT " is_called:%d",
+ transactional, last_value, log_cnt, is_called);
+ OutputPluginWrite(ctx, true);
+}
+
static void
pg_decode_stream_start(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn)
@@ -943,6 +990,28 @@ pg_decode_stream_message(LogicalDecodingContext *ctx,
OutputPluginWrite(ctx, true);
}
+static void
+pg_decode_stream_sequence(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ TestDecodingData *data = ctx->output_plugin_private;
+
+ /* do nothing if asked to skip sequences */
+ if (data->skip_sequences)
+ return;
+
+ OutputPluginPrepareWrite(ctx, true);
+ appendStringInfoString(ctx->out, "streaming sequence ");
+ appendStringInfoString(ctx->out,
+ quote_qualified_identifier(get_namespace_name(get_rel_namespace(RelationGetRelid(rel))),
+ RelationGetRelationName(rel)));
+ appendStringInfo(ctx->out, ": transactional:%d last_value:" INT64_FORMAT " log_cnt:" INT64_FORMAT " is_called:%d",
+ transactional, last_value, log_cnt, is_called);
+ OutputPluginWrite(ctx, true);
+}
+
/*
* In streaming mode, we don't display the detailed information of Truncate.
* See pg_decode_stream_change.
--
2.31.1
Attachment: 0003-Add-support-for-decoding-sequences-to-built-20220126.patch (text/x-patch; charset=UTF-8)
From 22ca9c7585b93767294f9e10bd232e5244690a59 Mon Sep 17 00:00:00 2001
From: Tomas Vondra <tomas.vondra@postgresql.org>
Date: Wed, 26 Jan 2022 02:48:21 +0100
Subject: [PATCH 3/3] Add support for decoding sequences to built-in
replication
---
doc/src/sgml/catalogs.sgml | 71 ++++
doc/src/sgml/ref/alter_publication.sgml | 24 +-
doc/src/sgml/ref/alter_subscription.sgml | 4 +-
src/backend/catalog/pg_publication.c | 149 ++++++++-
src/backend/catalog/system_views.sql | 10 +
src/backend/commands/publicationcmds.c | 350 +++++++++++++++++++-
src/backend/commands/sequence.c | 79 +++++
src/backend/commands/subscriptioncmds.c | 272 +++++++++++++++
src/backend/executor/execReplication.c | 2 +-
src/backend/nodes/copyfuncs.c | 1 +
src/backend/nodes/equalfuncs.c | 1 +
src/backend/parser/gram.y | 32 ++
src/backend/replication/logical/proto.c | 52 +++
src/backend/replication/logical/tablesync.c | 118 ++++++-
src/backend/replication/logical/worker.c | 60 ++++
src/backend/replication/pgoutput/pgoutput.c | 85 ++++-
src/backend/utils/cache/relcache.c | 4 +-
src/bin/psql/tab-complete.c | 6 +-
src/include/catalog/pg_proc.dat | 5 +
src/include/catalog/pg_publication.h | 14 +
src/include/commands/sequence.h | 1 +
src/include/nodes/parsenodes.h | 6 +
src/include/replication/logicalproto.h | 19 ++
src/include/replication/pgoutput.h | 1 +
src/test/regress/expected/rules.out | 8 +
src/test/subscription/t/027_sequences.pl | 196 +++++++++++
26 files changed, 1538 insertions(+), 32 deletions(-)
create mode 100644 src/test/subscription/t/027_sequences.pl
diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 1e65c426b28..d0efcc2c9d8 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -9527,6 +9527,11 @@ SCRAM-SHA-256$<replaceable><iteration count></replaceable>:<replaceable>&l
<entry>prepared transactions</entry>
</row>
+ <row>
+ <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+ <entry>publications and their associated sequences</entry>
+ </row>
+
<row>
<entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
<entry>publications and their associated tables</entry>
@@ -11362,6 +11367,72 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
</sect1>
+ <sect1 id="view-pg-publication-sequences">
+ <title><structname>pg_publication_sequences</structname></title>
+
+ <indexterm zone="view-pg-publication-sequences">
+ <primary>pg_publication_sequences</primary>
+ </indexterm>
+
+ <para>
+ The view <structname>pg_publication_sequences</structname> provides
+ information about the mapping between publications and the sequences they
+ contain. Unlike the underlying catalog
+ <link linkend="catalog-pg-publication-rel"><structname>pg_publication_rel</structname></link>,
+ this view expands
+ publications defined as <literal>FOR ALL SEQUENCES</literal>, so for such
+ publications there will be a row for each eligible sequence.
+ </para>
+
+ <table>
+ <title><structname>pg_publication_sequences</structname> Columns</title>
+ <tgroup cols="1">
+ <thead>
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ Column Type
+ </para>
+ <para>
+ Description
+ </para></entry>
+ </row>
+ </thead>
+
+ <tbody>
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>pubname</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+ </para>
+ <para>
+ Name of publication
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>schemaname</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+ </para>
+ <para>
+ Name of schema containing sequence
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>sequencename</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+ </para>
+ <para>
+ Name of sequence
+ </para></entry>
+ </row>
+ </tbody>
+ </tgroup>
+ </table>
+ </sect1>
+
<sect1 id="view-pg-publication-tables">
<title><structname>pg_publication_tables</structname></title>
diff --git a/doc/src/sgml/ref/alter_publication.sgml b/doc/src/sgml/ref/alter_publication.sgml
index 7c7c27bf7ce..ca1ac3a9960 100644
--- a/doc/src/sgml/ref/alter_publication.sgml
+++ b/doc/src/sgml/ref/alter_publication.sgml
@@ -31,7 +31,9 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
<phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [, ... ]
+ SEQUENCE <replaceable class="parameter">sequence_name</replaceable> [ * ] [, ... ]
ALL TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+ ALL SEQUENCES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
</synopsis>
</refsynopsisdiv>
@@ -56,7 +58,18 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
</para>
<para>
- The fourth variant of this command listed in the synopsis can change
+ The next three variants change which sequences are part of the publication.
+ The <literal>SET SEQUENCE</literal> clause will replace the list of sequences
+ in the publication with the specified one. The <literal>ADD SEQUENCE</literal>
+ and <literal>DROP SEQUENCE</literal> clauses will add and remove one or more
+ sequences from the publication. Note that adding sequences to a publication
+ that is already subscribed to will require an <literal>ALTER SUBSCRIPTION
+ ... REFRESH PUBLICATION</literal> action on the subscribing side in order
+ to become effective.
+ </para>
+
+ <para>
+ The seventh variant of this command listed in the synopsis can change
all of the publication properties specified in
<xref linkend="sql-createpublication"/>. Properties not mentioned in the
command retain their previous settings.
@@ -123,6 +136,15 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><replaceable class="parameter">sequence_name</replaceable></term>
+ <listitem>
+ <para>
+ Name of an existing sequence.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><literal>SET ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
<listitem>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 0b027cc3462..8f28cf03f40 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -147,7 +147,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
<listitem>
<para>
Fetch missing table information from publisher. This will start
- replication of tables that were added to the subscribed-to publications
+ replication of tables and sequences that were added to the subscribed-to publications
since <command>CREATE SUBSCRIPTION</command> or
the last invocation of <command>REFRESH PUBLICATION</command>.
</para>
@@ -164,7 +164,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
Specifies whether to copy pre-existing data in the publications
that are being subscribed to when the replication starts.
The default is <literal>true</literal>. (Previously-subscribed
- tables are not copied.)
+ tables and sequences are not copied.)
</para>
</listitem>
</varlistentry>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index e14ca2f5630..1a9e05ba98b 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -54,7 +54,8 @@ check_publication_add_relation(Relation targetrel)
{
/* Must be a regular or partitioned table */
if (RelationGetForm(targetrel)->relkind != RELKIND_RELATION &&
- RelationGetForm(targetrel)->relkind != RELKIND_PARTITIONED_TABLE)
+ RelationGetForm(targetrel)->relkind != RELKIND_PARTITIONED_TABLE &&
+ RelationGetForm(targetrel)->relkind != RELKIND_SEQUENCE)
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
errmsg("cannot add relation \"%s\" to publication",
@@ -131,7 +132,8 @@ static bool
is_publishable_class(Oid relid, Form_pg_class reltuple)
{
return (reltuple->relkind == RELKIND_RELATION ||
- reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+ reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+ reltuple->relkind == RELKIND_SEQUENCE) &&
!IsCatalogRelationOid(relid) &&
reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
relid >= FirstNormalObjectId;
@@ -503,6 +505,11 @@ GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
Form_pg_publication_rel pubrel;
pubrel = (Form_pg_publication_rel) GETSTRUCT(tup);
+
+ /* skip sequences here */
+ if (get_rel_relkind(pubrel->prrelid) == RELKIND_SEQUENCE)
+ continue;
+
result = GetPubPartitionOptionRelations(result, pub_partopt,
pubrel->prrelid);
}
@@ -517,6 +524,49 @@ GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
return result;
}
+/*
+ * Gets list of relation oids for a publication (sequences only).
+ *
+ * This should only be used for regular publications; FOR ALL SEQUENCES
+ * publications should use GetAllSequencesPublicationRelations().
+ */
+List *
+GetPublicationSequenceRelations(Oid pubid)
+{
+ List *result;
+ Relation pubrelsrel;
+ ScanKeyData scankey;
+ SysScanDesc scan;
+ HeapTuple tup;
+
+ /* Find all sequences associated with the publication. */
+ pubrelsrel = table_open(PublicationRelRelationId, AccessShareLock);
+
+ ScanKeyInit(&scankey,
+ Anum_pg_publication_rel_prpubid,
+ BTEqualStrategyNumber, F_OIDEQ,
+ ObjectIdGetDatum(pubid));
+
+ scan = systable_beginscan(pubrelsrel, PublicationRelPrrelidPrpubidIndexId,
+ true, NULL, 1, &scankey);
+
+ result = NIL;
+ while (HeapTupleIsValid(tup = systable_getnext(scan)))
+ {
+ Form_pg_publication_rel pubrel;
+
+ pubrel = (Form_pg_publication_rel) GETSTRUCT(tup);
+
+ if (get_rel_relkind(pubrel->prrelid) == RELKIND_SEQUENCE)
+ result = lappend_oid(result, pubrel->prrelid);
+ }
+
+ systable_endscan(scan);
+ table_close(pubrelsrel, AccessShareLock);
+
+ return result;
+}
+
/*
* Gets list of publication oids for publications marked as FOR ALL TABLES.
*/
@@ -762,6 +812,46 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
return result;
}
+/*
+ * Gets list of all sequences published by FOR ALL SEQUENCES
+ * publication(s), i.e. all publishable sequences in the database.
+ *
+ * Unlike the table variant, there is no need to worry about partitions,
+ * because sequences cannot be partitioned.
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+ Relation classRel;
+ ScanKeyData key[1];
+ TableScanDesc scan;
+ HeapTuple tuple;
+ List *result = NIL;
+
+ classRel = table_open(RelationRelationId, AccessShareLock);
+
+ ScanKeyInit(&key[0],
+ Anum_pg_class_relkind,
+ BTEqualStrategyNumber, F_CHAREQ,
+ CharGetDatum(RELKIND_SEQUENCE));
+
+ scan = table_beginscan_catalog(classRel, 1, key);
+
+ while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+ {
+ Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+ Oid relid = relForm->oid;
+
+ if (is_publishable_class(relid, relForm))
+ result = lappend_oid(result, relid);
+ }
+
+ table_endscan(scan);
+
+ table_close(classRel, AccessShareLock);
+ return result;
+}
+
/*
* Get publication using oid
*
@@ -784,10 +874,12 @@ GetPublication(Oid pubid)
pub->oid = pubid;
pub->name = pstrdup(NameStr(pubform->pubname));
pub->alltables = pubform->puballtables;
+ pub->allsequences = pubform->puballsequences;
pub->pubactions.pubinsert = pubform->pubinsert;
pub->pubactions.pubupdate = pubform->pubupdate;
pub->pubactions.pubdelete = pubform->pubdelete;
pub->pubactions.pubtruncate = pubform->pubtruncate;
+ pub->pubactions.pubsequence = pubform->pubsequence;
pub->pubviaroot = pubform->pubviaroot;
ReleaseSysCache(tup);
@@ -937,3 +1029,56 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
SRF_RETURN_DONE(funcctx);
}
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+ FuncCallContext *funcctx;
+ char *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+ Publication *publication;
+ List *sequences;
+
+ /* stuff done only on the first call of the function */
+ if (SRF_IS_FIRSTCALL())
+ {
+ MemoryContext oldcontext;
+
+ /* create a function context for cross-call persistence */
+ funcctx = SRF_FIRSTCALL_INIT();
+
+ /* switch to memory context appropriate for multiple function calls */
+ oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+ publication = GetPublicationByName(pubname, false);
+
+ /*
+ * For FOR ALL SEQUENCES publications we fetch all publishable
+ * sequences in the database, otherwise only the sequences explicitly
+ * added to the publication.
+ */
+ if (publication->allsequences)
+ sequences = GetAllSequencesPublicationRelations();
+ else
+ sequences = GetPublicationSequenceRelations(publication->oid);
+
+ funcctx->user_fctx = (void *) sequences;
+
+ MemoryContextSwitchTo(oldcontext);
+ }
+
+ /* stuff done on every call of the function */
+ funcctx = SRF_PERCALL_SETUP();
+ sequences = (List *) funcctx->user_fctx;
+
+ if (funcctx->call_cntr < list_length(sequences))
+ {
+ Oid relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+ SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+ }
+
+ SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 3cb69b1f87b..b5cc33aca34 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -374,6 +374,16 @@ CREATE VIEW pg_publication_tables AS
pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
WHERE C.oid = GPT.relid;
+CREATE VIEW pg_publication_sequences AS
+ SELECT
+ P.pubname AS pubname,
+ N.nspname AS schemaname,
+ C.relname AS sequencename
+ FROM pg_publication P,
+ LATERAL pg_get_publication_sequences(P.pubname) GPT,
+ pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+ WHERE C.oid = GPT.relid;
+
CREATE VIEW pg_locks AS
SELECT * FROM pg_lock_status() AS L;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 0e4bb97fb73..3bc2e8ccb66 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -16,6 +16,7 @@
#include "access/genam.h"
#include "access/htup_details.h"
+#include "access/relation.h"
#include "access/table.h"
#include "access/xact.h"
#include "catalog/catalog.h"
@@ -59,6 +60,12 @@ static void PublicationAddSchemas(Oid pubid, List *schemas, bool if_not_exists,
AlterPublicationStmt *stmt);
static void PublicationDropSchemas(Oid pubid, List *schemas, bool missing_ok);
+static List *OpenSequenceList(List *sequences);
+static void CloseSequenceList(List *rels);
+static void PublicationAddSequences(Oid pubid, List *rels, bool if_not_exists,
+ AlterPublicationStmt *stmt);
+static void PublicationDropSequences(Oid pubid, List *rels, bool missing_ok);
+
static void
parse_publication_options(ParseState *pstate,
List *options,
@@ -77,6 +84,7 @@ parse_publication_options(ParseState *pstate,
pubactions->pubupdate = true;
pubactions->pubdelete = true;
pubactions->pubtruncate = true;
+ pubactions->pubsequence = true;
*publish_via_partition_root = false;
/* Parse options */
@@ -101,6 +109,7 @@ parse_publication_options(ParseState *pstate,
pubactions->pubupdate = false;
pubactions->pubdelete = false;
pubactions->pubtruncate = false;
+ pubactions->pubsequence = false;
*publish_given = true;
publish = defGetString(defel);
@@ -123,6 +132,8 @@ parse_publication_options(ParseState *pstate,
pubactions->pubdelete = true;
else if (strcmp(publish_opt, "truncate") == 0)
pubactions->pubtruncate = true;
+ else if (strcmp(publish_opt, "sequence") == 0)
+ pubactions->pubsequence = true;
else
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
@@ -149,7 +160,9 @@ parse_publication_options(ParseState *pstate,
*/
static void
ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
- List **rels, List **schemas)
+ List **tables, List **sequences,
+ List **tables_schemas, List **sequences_schemas,
+ List **schemas)
{
ListCell *cell;
PublicationObjSpec *pubobj;
@@ -167,12 +180,23 @@ ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
switch (pubobj->pubobjtype)
{
case PUBLICATIONOBJ_TABLE:
- *rels = lappend(*rels, pubobj->pubtable);
+ *tables = lappend(*tables, pubobj->pubtable);
+ break;
+ case PUBLICATIONOBJ_SEQUENCE:
+ *sequences = lappend(*sequences, pubobj->pubtable);
break;
case PUBLICATIONOBJ_TABLES_IN_SCHEMA:
schemaid = get_namespace_oid(pubobj->name, false);
/* Filter out duplicates if user specifies "sch1, sch1" */
+ *tables_schemas = list_append_unique_oid(*tables_schemas, schemaid);
+ *schemas = list_append_unique_oid(*schemas, schemaid);
+ break;
+ case PUBLICATIONOBJ_SEQUENCES_IN_SCHEMA:
+ schemaid = get_namespace_oid(pubobj->name, false);
+
+ /* Filter out duplicates if user specifies "sch1, sch1" */
+ *sequences_schemas = list_append_unique_oid(*sequences_schemas, schemaid);
*schemas = list_append_unique_oid(*schemas, schemaid);
break;
case PUBLICATIONOBJ_TABLES_IN_CUR_SCHEMA:
@@ -186,6 +210,21 @@ ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
list_free(search_path);
/* Filter out duplicates if user specifies "sch1, sch1" */
+ *tables_schemas = list_append_unique_oid(*tables_schemas, schemaid);
+ *schemas = list_append_unique_oid(*schemas, schemaid);
+ break;
+ case PUBLICATIONOBJ_SEQUENCES_IN_CUR_SCHEMA:
+ search_path = fetch_search_path(false);
+ if (search_path == NIL) /* nothing valid in search_path? */
+ ereport(ERROR,
+ errcode(ERRCODE_UNDEFINED_SCHEMA),
+ errmsg("no schema has been selected for CURRENT_SCHEMA"));
+
+ schemaid = linitial_oid(search_path);
+ list_free(search_path);
+
+ /* Filter out duplicates if user specifies "sch1, sch1" */
+ *sequences_schemas = list_append_unique_oid(*sequences_schemas, schemaid);
*schemas = list_append_unique_oid(*schemas, schemaid);
break;
default:
@@ -251,7 +290,10 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
bool publish_via_partition_root_given;
bool publish_via_partition_root;
AclResult aclresult;
- List *relations = NIL;
+ List *tables = NIL;
+ List *sequences = NIL;
+ List *tables_schemaidlist = NIL;
+ List *sequences_schemaidlist = NIL;
List *schemaidlist = NIL;
/* must have CREATE privilege on database */
@@ -306,6 +348,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
BoolGetDatum(pubactions.pubdelete);
values[Anum_pg_publication_pubtruncate - 1] =
BoolGetDatum(pubactions.pubtruncate);
+ values[Anum_pg_publication_pubsequence - 1] =
+ BoolGetDatum(pubactions.pubsequence);
values[Anum_pg_publication_pubviaroot - 1] =
BoolGetDatum(publish_via_partition_root);
@@ -330,26 +374,40 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
}
else
{
- ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
+ ObjectsInPublicationToOids(stmt->pubobjects, pstate,
+ &tables, &sequences,
+ &tables_schemaidlist,
+ &sequences_schemaidlist,
&schemaidlist);
/* FOR ALL TABLES IN SCHEMA requires superuser */
- if (list_length(schemaidlist) > 0 && !superuser())
+ if (list_length(tables_schemaidlist) > 0 && !superuser())
ereport(ERROR,
errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
errmsg("must be superuser to create FOR ALL TABLES IN SCHEMA publication"));
- if (list_length(relations) > 0)
+ if (list_length(tables) > 0)
{
List *rels;
- rels = OpenTableList(relations);
- CheckObjSchemaNotAlreadyInPublication(rels, schemaidlist,
+ rels = OpenTableList(tables);
+ CheckObjSchemaNotAlreadyInPublication(rels, tables_schemaidlist,
PUBLICATIONOBJ_TABLE);
PublicationAddTables(puboid, rels, true, NULL);
CloseTableList(rels);
}
+ if (list_length(sequences) > 0)
+ {
+ List *rels;
+
+ rels = OpenSequenceList(sequences);
+ CheckObjSchemaNotAlreadyInPublication(rels, sequences_schemaidlist,
+ PUBLICATIONOBJ_SEQUENCE);
+ PublicationAddSequences(puboid, rels, true, NULL);
+ CloseSequenceList(rels);
+ }
+
if (list_length(schemaidlist) > 0)
{
/*
@@ -653,12 +711,13 @@ AlterPublicationSchemas(AlterPublicationStmt *stmt,
*/
static void
CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
- List *tables, List *schemaidlist)
+ List *tables, List *tables_schemaidlist,
+ List *sequences, List *sequences_schemaidlist)
{
Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
if ((stmt->action == AP_AddObjects || stmt->action == AP_SetObjects) &&
- schemaidlist && !superuser())
+ (tables_schemaidlist || sequences_schemaidlist) && !superuser())
ereport(ERROR,
(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
errmsg("must be superuser to add or set schemas")));
@@ -667,13 +726,24 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
* Check that user is allowed to manipulate the publication tables in
* schema
*/
- if (schemaidlist && pubform->puballtables)
+ if (tables_schemaidlist && pubform->puballtables)
ereport(ERROR,
(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
errmsg("publication \"%s\" is defined as FOR ALL TABLES",
NameStr(pubform->pubname)),
errdetail("Tables from schema cannot be added to, dropped from, or set on FOR ALL TABLES publications.")));
+ /*
+ * Check that user is allowed to manipulate the publication sequences in
+ * schema
+ */
+ if (sequences_schemaidlist && pubform->puballsequences)
+ ereport(ERROR,
+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("publication \"%s\" is defined as FOR ALL SEQUENCES",
+ NameStr(pubform->pubname)),
+ errdetail("Sequences from schema cannot be added to, dropped from, or set on FOR ALL SEQUENCES publications.")));
+
/* Check that user is allowed to manipulate the publication tables. */
if (tables && pubform->puballtables)
ereport(ERROR,
@@ -681,6 +751,108 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
errmsg("publication \"%s\" is defined as FOR ALL TABLES",
NameStr(pubform->pubname)),
errdetail("Tables cannot be added to or dropped from FOR ALL TABLES publications.")));
+
+ /* Check that user is allowed to manipulate the publication sequences. */
+ if (sequences && pubform->puballsequences)
+ ereport(ERROR,
+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("publication \"%s\" is defined as FOR ALL SEQUENCES",
+ NameStr(pubform->pubname)),
+ errdetail("Sequences cannot be added to or dropped from FOR ALL SEQUENCES publications.")));
+}
+
+/*
+ * Add or remove sequence to/from publication.
+ */
+static void
+AlterPublicationSequences(AlterPublicationStmt *stmt, HeapTuple tup,
+ List *sequences, List *schemaidlist)
+{
+ List *rels = NIL;
+ Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
+ Oid pubid = pubform->oid;
+
+ /*
+ * It is quite possible that for the SET case the user has not specified
+ * any sequences, in which case we need to remove all the existing ones.
+ */
+ if (!sequences && stmt->action != AP_SetObjects)
+ return;
+
+ rels = OpenSequenceList(sequences);
+
+ if (stmt->action == AP_AddObjects)
+ {
+ List *schemas = NIL;
+
+ /*
+ * Check if the relation is member of the existing schema in the
+ * publication or member of the schema list specified.
+ */
+ schemas = list_concat_copy(schemaidlist, GetPublicationSchemas(pubid));
+ CheckObjSchemaNotAlreadyInPublication(rels, schemas,
+ PUBLICATIONOBJ_SEQUENCE);
+ PublicationAddSequences(pubid, rels, false, stmt);
+ }
+ else if (stmt->action == AP_DropObjects)
+ PublicationDropSequences(pubid, rels, false);
+ else /* DEFELEM_SET */
+ {
+ List *oldrelids =
+ GetPublicationSequenceRelations(pubid);
+ List *delrels = NIL;
+ ListCell *oldlc;
+
+ CheckObjSchemaNotAlreadyInPublication(rels, schemaidlist,
+ PUBLICATIONOBJ_SEQUENCE);
+
+ /* Calculate which relations to drop. */
+ foreach(oldlc, oldrelids)
+ {
+ Oid oldrelid = lfirst_oid(oldlc);
+ ListCell *newlc;
+ bool found = false;
+
+ foreach(newlc, rels)
+ {
+ PublicationRelInfo *newpubrel;
+
+ newpubrel = (PublicationRelInfo *) lfirst(newlc);
+ if (RelationGetRelid(newpubrel->relation) == oldrelid)
+ {
+ found = true;
+ break;
+ }
+ }
+ /* Not yet in the list, open it and add to the list */
+ if (!found)
+ {
+ Relation oldrel;
+ PublicationRelInfo *pubrel;
+
+ /* Wrap relation into PublicationRelInfo */
+ oldrel = table_open(oldrelid, ShareUpdateExclusiveLock);
+
+ pubrel = palloc(sizeof(PublicationRelInfo));
+ pubrel->relation = oldrel;
+
+ delrels = lappend(delrels, pubrel);
+ }
+ }
+
+ /* And drop them. */
+ PublicationDropSequences(pubid, delrels, true);
+
+ /*
+ * Don't bother calculating the difference for adding, we'll catch and
+ * skip existing ones when doing catalog update.
+ */
+ PublicationAddSequences(pubid, rels, true, stmt);
+
+ CloseSequenceList(delrels);
+ }
+
+ CloseSequenceList(rels);
}
/*
@@ -718,13 +890,21 @@ AlterPublication(ParseState *pstate, AlterPublicationStmt *stmt)
AlterPublicationOptions(pstate, stmt, rel, tup);
else
{
- List *relations = NIL;
+ List *tables = NIL;
+ List *sequences = NIL;
+ List *tables_schemaidlist = NIL;
+ List *sequences_schemaidlist = NIL;
List *schemaidlist = NIL;
- ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
+ ObjectsInPublicationToOids(stmt->pubobjects, pstate,
+ &tables, &sequences,
+ &tables_schemaidlist,
+ &sequences_schemaidlist,
&schemaidlist);
- CheckAlterPublication(stmt, tup, relations, schemaidlist);
+ CheckAlterPublication(stmt, tup,
+ tables, tables_schemaidlist,
+ sequences, sequences_schemaidlist);
/*
* Lock the publication so nobody else can do anything with it. This
@@ -749,7 +929,9 @@ AlterPublication(ParseState *pstate, AlterPublicationStmt *stmt)
errmsg("publication \"%s\" does not exist",
stmt->pubname));
- AlterPublicationTables(stmt, tup, relations, schemaidlist);
+ AlterPublicationTables(stmt, tup, tables, tables_schemaidlist);
+ AlterPublicationSequences(stmt, tup, sequences, sequences_schemaidlist);
+
AlterPublicationSchemas(stmt, tup, schemaidlist);
}
@@ -1157,6 +1339,144 @@ PublicationDropSchemas(Oid pubid, List *schemas, bool missing_ok)
}
}
+/*
+ * Open sequences specified by a PublicationTable list.
+ * In the returned list of PublicationRelInfo, sequences are locked
+ * in ShareUpdateExclusiveLock mode in order to add them to a publication.
+ */
+static List *
+OpenSequenceList(List *sequences)
+{
+ List *relids = NIL;
+ List *rels = NIL;
+ ListCell *lc;
+
+ /*
+ * Open, share-lock, and check all the explicitly-specified sequences
+ */
+ foreach(lc, sequences)
+ {
+ PublicationTable *s = lfirst_node(PublicationTable, lc);
+ Relation rel;
+ Oid myrelid;
+ PublicationRelInfo *pub_rel;
+
+ /* Allow query cancel in case this takes a long time */
+ CHECK_FOR_INTERRUPTS();
+
+ rel = table_openrv(s->relation, ShareUpdateExclusiveLock);
+ myrelid = RelationGetRelid(rel);
+
+ /*
+ * Filter out duplicates if user specifies "foo, foo".
+ *
+ * Note that this algorithm is known to not be very efficient (O(N^2))
+ * but given that it only works on list of tables given to us by user
+ * it's deemed acceptable.
+ */
+ if (list_member_oid(relids, myrelid))
+ {
+ table_close(rel, ShareUpdateExclusiveLock);
+ continue;
+ }
+
+ pub_rel = palloc(sizeof(PublicationRelInfo));
+ pub_rel->relation = rel;
+ rels = lappend(rels, pub_rel);
+ relids = lappend_oid(relids, myrelid);
+ }
+
+ list_free(relids);
+
+ return rels;
+}
+
+/*
+ * Close all relations in the list.
+ */
+static void
+CloseSequenceList(List *rels)
+{
+ ListCell *lc;
+
+ foreach(lc, rels)
+ {
+ PublicationRelInfo *pub_rel;
+
+ pub_rel = (PublicationRelInfo *) lfirst(lc);
+ table_close(pub_rel->relation, NoLock);
+ }
+}
+
+/*
+ * Add listed sequences to the publication.
+ */
+static void
+PublicationAddSequences(Oid pubid, List *rels, bool if_not_exists,
+ AlterPublicationStmt *stmt)
+{
+ ListCell *lc;
+
+ Assert(!stmt || !stmt->for_all_sequences);
+
+ foreach(lc, rels)
+ {
+ PublicationRelInfo *pub_rel = (PublicationRelInfo *) lfirst(lc);
+ Relation rel = pub_rel->relation;
+ ObjectAddress obj;
+
+ /* Must be owner of the sequence or superuser. */
+ if (!pg_class_ownercheck(RelationGetRelid(rel), GetUserId()))
+ aclcheck_error(ACLCHECK_NOT_OWNER, get_relkind_objtype(rel->rd_rel->relkind),
+ RelationGetRelationName(rel));
+
+ obj = publication_add_relation(pubid, pub_rel, if_not_exists);
+ if (stmt)
+ {
+ EventTriggerCollectSimpleCommand(obj, InvalidObjectAddress,
+ (Node *) stmt);
+
+ InvokeObjectPostCreateHook(PublicationRelRelationId,
+ obj.objectId, 0);
+ }
+ }
+}
+
+/*
+ * Remove listed sequences from the publication.
+ */
+static void
+PublicationDropSequences(Oid pubid, List *rels, bool missing_ok)
+{
+ ObjectAddress obj;
+ ListCell *lc;
+ Oid prid;
+
+ foreach(lc, rels)
+ {
+ Relation rel = ((PublicationRelInfo *) lfirst(lc))->relation;
+ Oid relid = RelationGetRelid(rel);
+
+ prid = GetSysCacheOid2(PUBLICATIONRELMAP, Anum_pg_publication_rel_oid,
+ ObjectIdGetDatum(relid),
+ ObjectIdGetDatum(pubid));
+ if (!OidIsValid(prid))
+ {
+ if (missing_ok)
+ continue;
+
+ ereport(ERROR,
+ (errcode(ERRCODE_UNDEFINED_OBJECT),
+ errmsg("relation \"%s\" is not part of the publication",
+ RelationGetRelationName(rel))));
+ }
+
+ ObjectAddressSet(obj, PublicationRelRelationId, prid);
+ performDeletion(&obj, DROP_CASCADE, 0);
+ }
+}
+
+
/*
* Internal workhorse for changing a publication owner
*/
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 358fb28b5bd..ef155e24db4 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -336,6 +336,85 @@ ResetSequence(Oid seq_relid)
relation_close(seq_rel, NoLock);
}
+/*
+ * Set a sequence to the specified state (last_value, log_cnt, is_called).
+ *
+ * The change is made transactionally, so that on failure of the current
+ * transaction, the sequence will be restored to its previous state.
+ * We do that by creating a whole new relfilenode for the sequence; so this
+ * works much like the rewriting forms of ALTER TABLE.
+ *
+ * Caller is assumed to have acquired AccessExclusiveLock on the sequence,
+ * which must not be released until end of transaction. Caller is also
+ * responsible for permissions checking.
+ */
+void
+ResetSequence2(Oid seq_relid, int64 last_value, int64 log_cnt, bool is_called)
+{
+ Relation seq_rel;
+ SeqTable elm;
+ Form_pg_sequence_data seq;
+ Buffer buf;
+ HeapTupleData seqdatatuple;
+ HeapTuple tuple;
+
+ /*
+ * Read the old sequence. This does a bit more work than really
+ * necessary, but it's simple, and we do want to double-check that it's
+ * indeed a sequence.
+ */
+ init_sequence(seq_relid, &elm, &seq_rel);
+ (void) read_seq_tuple(seq_rel, &buf, &seqdatatuple);
+
+ /*
+ * Copy the existing sequence tuple.
+ */
+ tuple = heap_copytuple(&seqdatatuple);
+
+ /* Now we're done with the old page */
+ UnlockReleaseBuffer(buf);
+
+ /*
+ * Modify the copied tuple to execute the restart (compare the RESTART
+ * action in AlterSequence)
+ */
+ seq = (Form_pg_sequence_data) GETSTRUCT(tuple);
+ seq->last_value = last_value;
+ seq->is_called = is_called;
+ seq->log_cnt = log_cnt;
+
+ /*
+ * Create a new storage file for the sequence.
+ */
+ RelationSetNewRelfilenode(seq_rel, seq_rel->rd_rel->relpersistence);
+
+ /*
+ * Ensure sequence's relfrozenxid is at 0, since it won't contain any
+ * unfrozen XIDs. Same with relminmxid, since a sequence will never
+ * contain multixacts.
+ */
+ Assert(seq_rel->rd_rel->relfrozenxid == InvalidTransactionId);
+ Assert(seq_rel->rd_rel->relminmxid == InvalidMultiXactId);
+
+ /*
+ * Insert the modified tuple into the new storage file.
+ *
+ * XXX Maybe this should also use created=true, just like the other places
+ * calling fill_seq_with_data. That's probably needed for correct cascading
+ * replication.
+ *
+ * XXX That'd mean all fill_seq_with_data callers use created=true, making
+ * the parameter unnecessary.
+ */
+ fill_seq_with_data(seq_rel, tuple);
+
+ /* Clear local cache so that we don't think we have cached numbers */
+ /* Note that we do not change the currval() state */
+ elm->cached = elm->last;
+
+ relation_close(seq_rel, NoLock);
+}
+
/*
* Initialize a sequence's relation with the specified tuple as content
*/
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 3ef6607d246..5beb67e7652 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -85,6 +85,7 @@ typedef struct SubOpts
} SubOpts;
static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
static void check_duplicates_in_publist(List *publist, Datum *datums);
static List *merge_publications(List *oldpublist, List *newpublist, bool addpub, const char *subname);
static void ReportSlotConnectionError(List *rstates, Oid subid, char *slotname, char *err);
@@ -496,6 +497,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
char *err;
WalReceiverConn *wrconn;
List *tables;
+ List *sequences;
ListCell *lc;
char table_state;
@@ -534,6 +536,26 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
InvalidXLogRecPtr);
}
+ /*
+ * Get the sequence list from publisher and build local sequence
+ * status info.
+ */
+ sequences = fetch_sequence_list(wrconn, publications);
+ foreach(lc, sequences)
+ {
+ RangeVar *rv = (RangeVar *) lfirst(lc);
+ Oid relid;
+
+ relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+ /* Check for supported relkind. */
+ CheckSubscriptionRelkind(get_rel_relkind(relid),
+ rv->schemaname, rv->relname);
+
+ AddSubscriptionRelState(subid, relid, table_state,
+ InvalidXLogRecPtr);
+ }
+
/*
* If requested, create permanent slot for the subscription. We
* won't use the initial snapshot for anything, so no need to
@@ -706,6 +728,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data)
{
Oid relid = subrel_local_oids[off];
+ /* XXX ignore sequences - maybe do this in GetSubscriptionRelations? */
+ if (get_rel_relkind(relid) == RELKIND_SEQUENCE)
+ continue;
+
if (!bsearch(&relid, pubrel_local_oids,
list_length(pubrel_names), sizeof(Oid), oid_cmp))
{
@@ -797,6 +823,183 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data)
ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
}
}
+
+ /*
+ * XXX now do the same thing for sequences, maybe before the preceding
+ * block, or earlier?
+ */
+
+ /* Get the sequence list from the publisher. */
+ pubrel_names = fetch_sequence_list(wrconn, sub->publications);
+
+ /* Get the local relation list. */
+ subrel_states = GetSubscriptionRelations(sub->oid);
+
+ /*
+ * Build qsorted array of local relation oids for faster lookup. This
+ * can potentially contain all relations in the database so speed of
+ * lookup is important.
+ */
+ subrel_local_oids = palloc(list_length(subrel_states) * sizeof(Oid));
+ off = 0;
+ foreach(lc, subrel_states)
+ {
+ SubscriptionRelState *relstate = (SubscriptionRelState *) lfirst(lc);
+
+ subrel_local_oids[off++] = relstate->relid;
+ }
+ qsort(subrel_local_oids, list_length(subrel_states),
+ sizeof(Oid), oid_cmp);
+
+ /*
+ * Rels that we want to remove from subscription and drop any slots
+ * and origins corresponding to them.
+ */
+ sub_remove_rels = palloc(list_length(subrel_states) * sizeof(SubRemoveRels));
+
+ /*
+ * Walk over the remote sequences and try to match them to locally
+ * known sequences. If the sequence is not known locally, create a new
+ * state for it.
+ *
+ * Also builds array of local oids of remote sequences for the next step.
+ */
+ off = 0;
+ pubrel_local_oids = palloc(list_length(pubrel_names) * sizeof(Oid));
+
+ foreach(lc, pubrel_names)
+ {
+ RangeVar *rv = (RangeVar *) lfirst(lc);
+ Oid relid;
+
+ relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+ /* Check for supported relkind. */
+ CheckSubscriptionRelkind(get_rel_relkind(relid),
+ rv->schemaname, rv->relname);
+
+ pubrel_local_oids[off++] = relid;
+
+ if (!bsearch(&relid, subrel_local_oids,
+ list_length(subrel_states), sizeof(Oid), oid_cmp))
+ {
+ AddSubscriptionRelState(sub->oid, relid,
+ copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
+ InvalidXLogRecPtr);
+ ereport(DEBUG1,
+ (errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
+ rv->schemaname, rv->relname, sub->name)));
+ }
+ }
+
+ /*
+ * Next remove state for sequences we should not care about anymore using
+ * the data we collected above
+ */
+ qsort(pubrel_local_oids, list_length(pubrel_names),
+ sizeof(Oid), oid_cmp);
+
+ remove_rel_len = 0;
+ for (off = 0; off < list_length(subrel_states); off++)
+ {
+ Oid relid = subrel_local_oids[off];
+
+ /* XXX ignore non-sequences - maybe do this in GetSubscriptionRelations? */
+ if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+ continue;
+
+ if (!bsearch(&relid, pubrel_local_oids,
+ list_length(pubrel_names), sizeof(Oid), oid_cmp))
+ {
+ char state;
+ XLogRecPtr statelsn;
+
+ /*
+ * Lock pg_subscription_rel with AccessExclusiveLock to
+ * prevent any race conditions with the apply worker
+ * re-launching workers at the same time this code is trying
+ * to remove those tables.
+ *
+ * Even if new worker for this particular rel is restarted it
+ * won't be able to make any progress as we hold exclusive
+ * lock on subscription_rel till the transaction end. It will
+ * simply exit as there is no corresponding rel entry.
+ *
+ * This locking also ensures that the state of rels won't
+ * change till we are done with this refresh operation.
+ */
+ if (!rel)
+ rel = table_open(SubscriptionRelRelationId, AccessExclusiveLock);
+
+ /* Last known rel state. */
+ state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
+
+ sub_remove_rels[remove_rel_len].relid = relid;
+ sub_remove_rels[remove_rel_len++].state = state;
+
+ RemoveSubscriptionRel(sub->oid, relid);
+
+ logicalrep_worker_stop(sub->oid, relid);
+
+ /*
+ * For READY state, we would have already dropped the
+ * tablesync origin.
+ */
+ if (state != SUBREL_STATE_READY)
+ {
+ char originname[NAMEDATALEN];
+
+ /*
+ * Drop the tablesync's origin tracking if exists.
+ *
+ * It is possible that the origin is not yet created for
+ * tablesync worker, this can happen for the states before
+ * SUBREL_STATE_FINISHEDCOPY. The apply worker can also
+ * concurrently try to drop the origin and by this time
+ * the origin might be already removed. For these reasons,
+ * passing missing_ok = true.
+ */
+ ReplicationOriginNameForTablesync(sub->oid, relid, originname,
+ sizeof(originname));
+ replorigin_drop_by_name(originname, true, false);
+ }
+
+ ereport(DEBUG1,
+ (errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
+ get_namespace_name(get_rel_namespace(relid)),
+ get_rel_name(relid),
+ sub->name)));
+ }
+ }
+
+ /*
+ * Drop the tablesync slots associated with removed sequences. This has
+ * to be at the end because otherwise if there is an error while doing
+ * the database operations we won't be able to rollback dropped slots.
+ */
+ for (off = 0; off < remove_rel_len; off++)
+ {
+ if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
+ sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+ {
+ char syncslotname[NAMEDATALEN] = {0};
+
+ /*
+ * For READY/SYNCDONE states we know the tablesync slot has
+ * already been dropped by the tablesync worker.
+ *
+ * For other states, there is no certainty, maybe the slot
+ * does not exist yet. Also, if we fail after removing some of
+ * the slots, next time, it will again try to drop already
+ * dropped slots and fail. For these reasons, we allow
+ * missing_ok = true for the drop.
+ */
+ ReplicationSlotNameForTablesync(sub->oid, sub_remove_rels[off].relid,
+ syncslotname, sizeof(syncslotname));
+ ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
+ }
+ }
+
}
PG_FINALLY();
{
@@ -1616,6 +1819,75 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
return tablelist;
}
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+ WalRcvExecResult *res;
+ StringInfoData cmd;
+ TupleTableSlot *slot;
+ Oid tableRow[2] = {TEXTOID, TEXTOID};
+ ListCell *lc;
+ bool first;
+ List *tablelist = NIL;
+
+ Assert(list_length(publications) > 0);
+
+ initStringInfo(&cmd);
+ appendStringInfoString(&cmd, "SELECT DISTINCT s.schemaname, s.sequencename\n"
+ " FROM pg_catalog.pg_publication_sequences s\n"
+ " WHERE s.pubname IN (");
+ first = true;
+ foreach(lc, publications)
+ {
+ char *pubname = strVal(lfirst(lc));
+
+ if (first)
+ first = false;
+ else
+ appendStringInfoString(&cmd, ", ");
+
+ appendStringInfoString(&cmd, quote_literal_cstr(pubname));
+ }
+ appendStringInfoChar(&cmd, ')');
+
+ res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+ pfree(cmd.data);
+
+ if (res->status != WALRCV_OK_TUPLES)
+ ereport(ERROR,
+ (errmsg("could not receive list of replicated tables from the publisher: %s",
+ res->err)));
+
+ /* Process sequences. */
+ slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+ while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+ {
+ char *nspname;
+ char *relname;
+ bool isnull;
+ RangeVar *rv;
+
+ nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+ Assert(!isnull);
+ relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+ Assert(!isnull);
+
+ rv = makeRangeVar(nspname, relname, -1);
+ tablelist = lappend(tablelist, rv);
+
+ ExecClearTuple(slot);
+ }
+ ExecDropSingleTupleTableSlot(slot);
+
+ walrcv_clear_result(res);
+
+ return tablelist;
+}
+
/*
* This is to report the connection failure while dropping replication slots.
* Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index 313c87398b2..78f14119d98 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -608,7 +608,7 @@ void
CheckSubscriptionRelkind(char relkind, const char *nspname,
const char *relname)
{
- if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+ if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE && relkind != RELKIND_SEQUENCE)
ereport(ERROR,
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 90b5da51c95..64d64d338d3 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -4850,6 +4850,7 @@ _copyCreatePublicationStmt(const CreatePublicationStmt *from)
COPY_NODE_FIELD(options);
COPY_NODE_FIELD(pubobjects);
COPY_SCALAR_FIELD(for_all_tables);
+ COPY_SCALAR_FIELD(for_all_sequences);
return newnode;
}
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index 06345da3ba8..0ad40ebe4a8 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -2336,6 +2336,7 @@ _equalAlterPublicationStmt(const AlterPublicationStmt *a,
COMPARE_NODE_FIELD(options);
COMPARE_NODE_FIELD(pubobjects);
COMPARE_SCALAR_FIELD(for_all_tables);
+ COMPARE_SCALAR_FIELD(for_all_sequences);
COMPARE_SCALAR_FIELD(action);
return true;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index b5966712ce1..83841e1eb9c 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -9760,6 +9760,26 @@ PublicationObjSpec:
$$->pubobjtype = PUBLICATIONOBJ_TABLES_IN_CUR_SCHEMA;
$$->location = @5;
}
+ | SEQUENCE relation_expr
+ {
+ $$ = makeNode(PublicationObjSpec);
+ $$->pubobjtype = PUBLICATIONOBJ_SEQUENCE;
+ $$->pubtable = makeNode(PublicationTable);
+ $$->pubtable->relation = $2;
+ }
+ | ALL SEQUENCES IN_P SCHEMA ColId
+ {
+ $$ = makeNode(PublicationObjSpec);
+ $$->pubobjtype = PUBLICATIONOBJ_SEQUENCES_IN_SCHEMA;
+ $$->name = $5;
+ $$->location = @5;
+ }
+ | ALL SEQUENCES IN_P SCHEMA CURRENT_SCHEMA
+ {
+ $$ = makeNode(PublicationObjSpec);
+ $$->pubobjtype = PUBLICATIONOBJ_SEQUENCES_IN_CUR_SCHEMA;
+ $$->location = @5;
+ }
| ColId
{
$$ = makeNode(PublicationObjSpec);
@@ -10095,6 +10115,12 @@ UnlistenStmt:
}
;
+/*
+ * FIXME
+ *
+ * opt_publication_for_sequences and publication_for_sequences should be
+ * copies for sequences
+ */
/*****************************************************************************
*
@@ -10103,6 +10129,12 @@ UnlistenStmt:
* BEGIN / COMMIT / ROLLBACK
* (also older versions END / ABORT)
*
+ * ALTER PUBLICATION name ADD SEQUENCE sequence [, sequence2]
+ *
+ * ALTER PUBLICATION name DROP SEQUENCE sequence [, sequence2]
+ *
+ * ALTER PUBLICATION name SET SEQUENCE sequence [, sequence2]
+ *
*****************************************************************************/
TransactionStmt:
diff --git a/src/backend/replication/logical/proto.c b/src/backend/replication/logical/proto.c
index 953942692ce..e8ead1387ae 100644
--- a/src/backend/replication/logical/proto.c
+++ b/src/backend/replication/logical/proto.c
@@ -647,6 +647,56 @@ logicalrep_write_message(StringInfo out, TransactionId xid, XLogRecPtr lsn,
pq_sendbytes(out, message, sz);
}
+/*
+ * Write SEQUENCE to stream
+ */
+void
+logicalrep_write_sequence(StringInfo out, Relation rel, TransactionId xid,
+ XLogRecPtr lsn, bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ uint8 flags = 0;
+ char *relname;
+
+ pq_sendbyte(out, LOGICAL_REP_MSG_SEQUENCE);
+
+ /* transaction ID (if not valid, we're not streaming) */
+ if (TransactionIdIsValid(xid))
+ pq_sendint32(out, xid);
+
+ pq_sendint8(out, flags);
+ pq_sendint64(out, lsn);
+
+ logicalrep_write_namespace(out, RelationGetNamespace(rel));
+ relname = RelationGetRelationName(rel);
+ pq_sendstring(out, relname);
+
+ pq_sendint8(out, transactional);
+ pq_sendint64(out, last_value);
+ pq_sendint64(out, log_cnt);
+ pq_sendint8(out, is_called);
+}
+
+/*
+ * Read SEQUENCE from the stream.
+ */
+void
+logicalrep_read_sequence(StringInfo in, LogicalRepSequence *seqdata)
+{
+ /* XXX skipping flags and lsn */
+ pq_getmsgint(in, 1);
+ pq_getmsgint64(in);
+
+ /* Read relation name from stream */
+ seqdata->nspname = pstrdup(logicalrep_read_namespace(in));
+ seqdata->seqname = pstrdup(pq_getmsgstring(in));
+
+ seqdata->transactional = pq_getmsgint(in, 1);
+ seqdata->last_value = pq_getmsgint64(in);
+ seqdata->log_cnt = pq_getmsgint64(in);
+ seqdata->is_called = pq_getmsgint(in, 1);
+}
+
/*
* Write relation description to the output stream.
*/
@@ -1203,6 +1253,8 @@ logicalrep_message_type(LogicalRepMsgType action)
return "STREAM ABORT";
case LOGICAL_REP_MSG_STREAM_PREPARE:
return "STREAM PREPARE";
+ case LOGICAL_REP_MSG_SEQUENCE:
+ return "SEQUENCE";
}
elog(ERROR, "invalid logical replication message type \"%c\"", action);
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index e596b69d466..c85867ee46b 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -100,6 +100,7 @@
#include "catalog/pg_subscription_rel.h"
#include "catalog/pg_type.h"
#include "commands/copy.h"
+#include "commands/sequence.h"
#include "miscadmin.h"
#include "parser/parse_relation.h"
#include "pgstat.h"
@@ -359,6 +360,12 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
*
* If the synchronization position is reached (SYNCDONE), then the table can
* be marked as READY and is no longer tracked.
+ *
+ * XXX This needs to handle sequences too - after AlterSubscription_refresh
+ * starts caring about sequences, GetSubscriptionNotReadyRelations won't
+ * return just tables, and we'll have to sync them here. Not sure it's worth
+ * creating a new "sync" worker per sequence, maybe we should just sync them
+ * in the current process (it's pretty light-weight).
*/
static void
process_syncing_tables_for_apply(XLogRecPtr current_lsn)
@@ -873,6 +880,99 @@ copy_table(Relation rel)
logicalrep_rel_close(relmapentry, NoLock);
}
+
+
+/*
+ * Fetch the current state (last_value, log_cnt, is_called) of a sequence
+ * from the publisher.
+ */
+static void
+fetch_sequence_data(char *nspname, char *relname,
+ int64 *last_value, int64 *log_cnt, bool *is_called)
+{
+ WalRcvExecResult *res;
+ StringInfoData cmd;
+ TupleTableSlot *slot;
+ Oid tableRow[3] = {INT8OID, INT8OID, BOOLOID};
+
+ initStringInfo(&cmd);
+ appendStringInfo(&cmd, "SELECT last_value, log_cnt, is_called\n"
+ " FROM %s", quote_qualified_identifier(nspname, relname));
+
+ res = walrcv_exec(LogRepWorkerWalRcvConn, cmd.data, 3, tableRow);
+ pfree(cmd.data);
+
+ if (res->status != WALRCV_OK_TUPLES)
+ ereport(ERROR,
+ (errmsg("could not receive list of replicated tables from the publisher: %s",
+ res->err)));
+
+ /* Process the sequence. */
+ slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+ while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+ {
+ bool isnull;
+
+ *last_value = DatumGetInt64(slot_getattr(slot, 1, &isnull));
+ Assert(!isnull);
+
+ *log_cnt = DatumGetInt64(slot_getattr(slot, 2, &isnull));
+ Assert(!isnull);
+
+ *is_called = DatumGetBool(slot_getattr(slot, 3, &isnull));
+ Assert(!isnull);
+
+ ExecClearTuple(slot);
+ }
+ ExecDropSingleTupleTableSlot(slot);
+
+ walrcv_clear_result(res);
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Caller is responsible for locking the local relation.
+ */
+static void
+copy_sequence(Relation rel)
+{
+ LogicalRepRelMapEntry *relmapentry;
+ LogicalRepRelation lrel;
+ int64 last_value = 0,
+ log_cnt = 0;
+ bool is_called = false;
+
+ /* Get the publisher relation info. */
+ fetch_remote_table_info(get_namespace_name(RelationGetNamespace(rel)),
+ RelationGetRelationName(rel), &lrel);
+
+ /* Put the relation into relmap. */
+ logicalrep_relmap_update(&lrel);
+
+ /* Map the publisher relation to local one. */
+ relmapentry = logicalrep_rel_open(lrel.remoteid, NoLock);
+ Assert(rel == relmapentry->localrel);
+
+ Assert(lrel.relkind == RELKIND_SEQUENCE);
+
+ /* Fetch the current sequence state from the publisher. */
+ fetch_sequence_data(lrel.nspname, lrel.relname, &last_value, &log_cnt, &is_called);
+
+ elog(WARNING, "sequence %s info last_value %ld log_cnt %ld is_called %d",
+ quote_qualified_identifier(lrel.nspname, lrel.relname),
+ last_value, log_cnt, is_called);
+
+ ResetSequence2(RelationGetRelid(rel), last_value, log_cnt, is_called);
+
+ logicalrep_rel_close(relmapentry, NoLock);
+}
+
+
+
+
/*
* Determine the tablesync slot name.
*
@@ -1134,10 +1234,20 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
originname)));
}
- /* Now do the initial data copy */
- PushActiveSnapshot(GetTransactionSnapshot());
- copy_table(rel);
- PopActiveSnapshot();
+ if (get_rel_relkind(RelationGetRelid(rel)) == RELKIND_SEQUENCE)
+ {
+ /* Now do the initial sequence copy */
+ PushActiveSnapshot(GetTransactionSnapshot());
+ copy_sequence(rel);
+ PopActiveSnapshot();
+ }
+ else
+ {
+ /* Now do the initial data copy */
+ PushActiveSnapshot(GetTransactionSnapshot());
+ copy_table(rel);
+ PopActiveSnapshot();
+ }
res = walrcv_exec(LogRepWorkerWalRcvConn, "COMMIT", 0, NULL);
if (res->status != WALRCV_OK_COMMAND)
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index d77bb32bb9e..68708d3907f 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -144,6 +144,7 @@
#include "catalog/pg_tablespace.h"
#include "commands/tablecmds.h"
#include "commands/tablespace.h"
+#include "commands/sequence.h"
#include "commands/trigger.h"
#include "executor/executor.h"
#include "executor/execPartition.h"
@@ -1093,6 +1094,61 @@ apply_handle_origin(StringInfo s)
errmsg_internal("ORIGIN message sent out of order")));
}
+/*
+ * Handle SEQUENCE message.
+ */
+static void
+apply_handle_sequence(StringInfo s)
+{
+ LogicalRepSequence seq;
+ Oid relid;
+
+ if (handle_streamed_transaction(LOGICAL_REP_MSG_SEQUENCE, s))
+ return;
+
+ logicalrep_read_sequence(s, &seq);
+
+ /*
+ * Transactional sequence updates must arrive as part of a remote
+ * transaction; non-transactional ones must not, and there must be no
+ * transaction in progress on the apply side either.
+ */
+ Assert((!seq.transactional) || in_remote_transaction);
+ Assert(!(!seq.transactional && in_remote_transaction));
+ Assert(!(!seq.transactional && IsTransactionState()));
+
+ /*
+ * Make sure we're in a transaction (needed by ResetSequence2). For
+ * non-transactional updates we're guaranteed to start a new one,
+ * and we'll commit it at the end.
+ */
+ if (!IsTransactionState())
+ {
+ StartTransactionCommand();
+ maybe_reread_subscription();
+ }
+
+ relid = RangeVarGetRelid(makeRangeVar(seq.nspname,
+ seq.seqname, -1),
+ RowExclusiveLock, false);
+
+ /* lock the sequence in AccessExclusiveLock, as expected by ResetSequence2 */
+ elog(WARNING, "locking sequence %d in exclusive mode", relid);
+ LockRelationOid(relid, AccessExclusiveLock);
+
+ elog(WARNING, "applying sequence %s.%s transactional %d last_value %ld log_cnt %ld is_called %d",
+ seq.nspname, seq.seqname, seq.transactional, seq.last_value, seq.log_cnt, seq.is_called);
+
+ /* apply the sequence change */
+ ResetSequence2(relid, seq.last_value, seq.log_cnt, seq.is_called);
+
+ /*
+ * Commit the transaction we started above (we only do this when not in a
+ * remote transaction, i.e. for non-transactional sequence updates).
+ */
+ if (!in_remote_transaction)
+ CommitTransactionCommand();
+}
+
/*
* Handle STREAM START message.
*/
@@ -2421,6 +2477,10 @@ apply_dispatch(StringInfo s)
*/
break;
+ case LOGICAL_REP_MSG_SEQUENCE:
+ apply_handle_sequence(s);
+ return;
+
case LOGICAL_REP_MSG_STREAM_START:
apply_handle_stream_start(s);
break;
diff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c
index af8d51aee99..03a41d7ce07 100644
--- a/src/backend/replication/pgoutput/pgoutput.c
+++ b/src/backend/replication/pgoutput/pgoutput.c
@@ -49,6 +49,10 @@ static void pgoutput_message(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn, XLogRecPtr message_lsn,
bool transactional, const char *prefix,
Size sz, const char *message);
+static void pgoutput_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation rel, bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called);
static bool pgoutput_origin_filter(LogicalDecodingContext *ctx,
RepOriginId origin_id);
static void pgoutput_begin_prepare_txn(LogicalDecodingContext *ctx,
@@ -159,6 +163,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->change_cb = pgoutput_change;
cb->truncate_cb = pgoutput_truncate;
cb->message_cb = pgoutput_message;
+ cb->sequence_cb = pgoutput_sequence;
cb->commit_cb = pgoutput_commit_txn;
cb->begin_prepare_cb = pgoutput_begin_prepare_txn;
@@ -175,6 +180,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->stream_commit_cb = pgoutput_stream_commit;
cb->stream_change_cb = pgoutput_change;
cb->stream_message_cb = pgoutput_message;
+ cb->stream_sequence_cb = pgoutput_sequence;
cb->stream_truncate_cb = pgoutput_truncate;
/* transaction streaming - two-phase commit */
cb->stream_prepare_cb = pgoutput_stream_prepare_txn;
@@ -188,6 +194,7 @@ parse_output_parameters(List *options, PGOutputData *data)
bool publication_names_given = false;
bool binary_option_given = false;
bool messages_option_given = false;
+ bool sequences_option_given = false;
bool streaming_given = false;
bool two_phase_option_given = false;
@@ -195,6 +202,7 @@ parse_output_parameters(List *options, PGOutputData *data)
data->streaming = false;
data->messages = false;
data->two_phase = false;
+ data->sequences = true;
foreach(lc, options)
{
@@ -260,6 +268,16 @@ parse_output_parameters(List *options, PGOutputData *data)
data->messages = defGetBoolean(defel);
}
+ else if (strcmp(defel->defname, "sequences") == 0)
+ {
+ if (sequences_option_given)
+ ereport(ERROR,
+ (errcode(ERRCODE_SYNTAX_ERROR),
+ errmsg("conflicting or redundant options")));
+ sequences_option_given = true;
+
+ data->sequences = defGetBoolean(defel);
+ }
else if (strcmp(defel->defname, "streaming") == 0)
{
if (streaming_given)
@@ -856,6 +874,51 @@ pgoutput_message(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
OutputPluginWrite(ctx, true);
}
+static void
+pgoutput_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation rel, bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ PGOutputData *data = (PGOutputData *) ctx->output_plugin_private;
+ TransactionId xid = InvalidTransactionId;
+ RelationSyncEntry *relentry;
+
+ if (!data->sequences)
+ return;
+
+ if (!is_publishable_relation(rel))
+ return;
+
+ /*
+ * Remember the xid for the sequence change in streaming mode. See
+ * pgoutput_change.
+ */
+ if (in_streaming)
+ xid = txn->xid;
+
+ relentry = get_rel_sync_entry(data, RelationGetRelid(rel));
+
+ /*
+ * First check the sequence filter.
+ *
+ * We handle just REORDER_BUFFER_CHANGE_SEQUENCE here.
+ */
+ if (!relentry->pubactions.pubsequence)
+ return;
+
+ OutputPluginPrepareWrite(ctx, true);
+ logicalrep_write_sequence(ctx->out,
+ rel,
+ xid,
+ sequence_lsn,
+ transactional,
+ last_value,
+ log_cnt,
+ is_called);
+ OutputPluginWrite(ctx, true);
+}
+
/*
* Currently we always forward.
*/
@@ -1138,7 +1201,8 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
entry->streamed_txns = NIL;
entry->replicate_valid = false;
entry->pubactions.pubinsert = entry->pubactions.pubupdate =
- entry->pubactions.pubdelete = entry->pubactions.pubtruncate = false;
+ entry->pubactions.pubdelete = entry->pubactions.pubtruncate =
+ entry->pubactions.pubsequence = false;
entry->publish_as_relid = InvalidOid;
entry->map = NULL; /* will be set by maybe_send_schema() if
* needed */
@@ -1160,6 +1224,7 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
Oid publish_as_relid = relid;
bool am_partition = get_rel_relispartition(relid);
char relkind = get_rel_relkind(relid);
+ bool is_sequence = (relkind == RELKIND_SEQUENCE);
/* Reload publications if needed before use. */
if (!publications_valid)
@@ -1183,12 +1248,23 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
Publication *pub = lfirst(lc);
bool publish = false;
- if (pub->alltables)
+ if (pub->alltables && (!is_sequence))
{
publish = true;
if (pub->pubviaroot && am_partition)
publish_as_relid = llast_oid(get_partition_ancestors(relid));
}
+ else if (pub->allsequences && is_sequence)
+ {
+ publish = true;
+ }
+
+ /* if a sequence, just cross-check the list of publications */
+ if (!publish && is_sequence)
+ {
+ if (list_member_oid(pubids, pub->oid))
+ publish = true;
+ }
if (!publish)
{
@@ -1243,10 +1319,12 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
entry->pubactions.pubupdate |= pub->pubactions.pubupdate;
entry->pubactions.pubdelete |= pub->pubactions.pubdelete;
entry->pubactions.pubtruncate |= pub->pubactions.pubtruncate;
+ entry->pubactions.pubsequence |= pub->pubactions.pubsequence;
}
if (entry->pubactions.pubinsert && entry->pubactions.pubupdate &&
- entry->pubactions.pubdelete && entry->pubactions.pubtruncate)
+ entry->pubactions.pubdelete && entry->pubactions.pubtruncate &&
+ entry->pubactions.pubsequence)
break;
}
@@ -1391,6 +1469,7 @@ rel_sync_cache_publication_cb(Datum arg, int cacheid, uint32 hashvalue)
entry->pubactions.pubupdate = false;
entry->pubactions.pubdelete = false;
entry->pubactions.pubtruncate = false;
+ entry->pubactions.pubsequence = false;
}
}
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index 2e760e8a3bd..28fb78eb690 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -5585,6 +5585,7 @@ GetRelationPublicationActions(Relation relation)
pubactions->pubupdate |= pubform->pubupdate;
pubactions->pubdelete |= pubform->pubdelete;
pubactions->pubtruncate |= pubform->pubtruncate;
+ pubactions->pubsequence |= pubform->pubsequence;
ReleaseSysCache(tup);
@@ -5593,7 +5594,8 @@ GetRelationPublicationActions(Relation relation)
* other publications.
*/
if (pubactions->pubinsert && pubactions->pubupdate &&
- pubactions->pubdelete && pubactions->pubtruncate)
+ pubactions->pubdelete && pubactions->pubtruncate &&
+ pubactions->pubsequence)
break;
}
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 502b5c57515..888aeca20b6 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1691,13 +1691,13 @@ psql_completion(const char *text, int start, int end)
COMPLETE_WITH("ADD", "DROP", "OWNER TO", "RENAME TO", "SET");
/* ALTER PUBLICATION <name> ADD */
else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD"))
- COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE");
+ COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE|SEQUENCE");
/* ALTER PUBLICATION <name> DROP */
else if (Matches("ALTER", "PUBLICATION", MatchAny, "DROP"))
- COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE");
+ COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE|SEQUENCE");
/* ALTER PUBLICATION <name> SET */
else if (Matches("ALTER", "PUBLICATION", MatchAny, "SET"))
- COMPLETE_WITH("(", "ALL TABLES IN SCHEMA", "TABLE");
+ COMPLETE_WITH("(", "ALL TABLES IN SCHEMA", "TABLE|SEQUENCE");
else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD|DROP|SET", "ALL", "TABLES", "IN", "SCHEMA"))
COMPLETE_WITH_QUERY(Query_for_list_of_schemas
" AND nspname != 'pg_catalog' "
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 0859dc81cac..ea86c6e3933 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -11529,6 +11529,11 @@
provolatile => 's', prorettype => 'oid', proargtypes => 'text',
proallargtypes => '{text,oid}', proargmodes => '{i,o}',
proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_tables' },
+{ oid => '8000', descr => 'get OIDs of sequences in a publication',
+ proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+ provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+ proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+ proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
{ oid => '6121',
descr => 'returns whether a relation can be part of a publication',
proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 841b9b6c253..e56286772f4 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
*/
bool puballtables;
+ /*
+ * indicates that this is special publication which should encompass all
+ * sequences in the database (except for the unlogged and temp ones)
+ */
+ bool puballsequences;
+
/* true if inserts are published */
bool pubinsert;
@@ -52,6 +58,9 @@ CATALOG(pg_publication,6104,PublicationRelationId)
/* true if truncates are published */
bool pubtruncate;
+ /* true if sequences are published */
+ bool pubsequence;
+
/* true if partition changes are published using root schema */
bool pubviaroot;
} FormData_pg_publication;
@@ -72,6 +81,7 @@ typedef struct PublicationActions
bool pubupdate;
bool pubdelete;
bool pubtruncate;
+ bool pubsequence;
} PublicationActions;
typedef struct Publication
@@ -79,6 +89,7 @@ typedef struct Publication
Oid oid;
char *name;
bool alltables;
+ bool allsequences;
bool pubviaroot;
PublicationActions pubactions;
} Publication;
@@ -121,6 +132,9 @@ extern List *GetPubPartitionOptionRelations(List *result,
PublicationPartOpt pub_partopt,
Oid relid);
+extern List *GetAllSequencesPublicationRelations(void);
+extern List *GetPublicationSequenceRelations(Oid pubid);
+
extern bool is_publishable_relation(Relation rel);
extern bool is_schema_publication(Oid pubid);
extern ObjectAddress publication_add_relation(Oid pubid, PublicationRelInfo *targetrel,
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index b9015a6f710..b7ea1dae9c4 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -60,6 +60,7 @@ extern ObjectAddress DefineSequence(ParseState *pstate, CreateSeqStmt *stmt);
extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
extern void DeleteSequenceTuple(Oid relid);
extern void ResetSequence(Oid seq_relid);
+extern void ResetSequence2(Oid seq_relid, int64 last_value, int64 log_cnt, bool is_called);
extern void ResetSequenceCaches(void);
extern void seq_redo(XLogReaderState *rptr);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 3e9bdc781f9..6dcb01b036a 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -3654,6 +3654,10 @@ typedef enum PublicationObjSpecType
PUBLICATIONOBJ_TABLES_IN_SCHEMA, /* All tables in schema */
PUBLICATIONOBJ_TABLES_IN_CUR_SCHEMA, /* All tables in first element of
* search_path */
+ PUBLICATIONOBJ_SEQUENCE, /* A sequence */
+ PUBLICATIONOBJ_SEQUENCES_IN_SCHEMA, /* All sequences in schema */
+ PUBLICATIONOBJ_SEQUENCES_IN_CUR_SCHEMA, /* All sequences in first element
+ * of search_path */
PUBLICATIONOBJ_CONTINUATION /* Continuation of previous type */
} PublicationObjSpecType;
@@ -3673,6 +3677,7 @@ typedef struct CreatePublicationStmt
List *options; /* List of DefElem nodes */
List *pubobjects; /* Optional list of publication objects */
bool for_all_tables; /* Special publication for all tables in db */
+ bool for_all_sequences; /* Special publication for all sequences in db */
} CreatePublicationStmt;
typedef enum AlterPublicationAction
@@ -3696,6 +3701,7 @@ typedef struct AlterPublicationStmt
*/
List *pubobjects; /* Optional list of publication objects */
bool for_all_tables; /* Special publication for all tables in db */
+ bool for_all_sequences; /* Special publication for all sequences in db */
AlterPublicationAction action; /* What action to perform with the given
* objects */
} AlterPublicationStmt;
diff --git a/src/include/replication/logicalproto.h b/src/include/replication/logicalproto.h
index 22fffaca62d..f4bcfbdd92f 100644
--- a/src/include/replication/logicalproto.h
+++ b/src/include/replication/logicalproto.h
@@ -60,6 +60,7 @@ typedef enum LogicalRepMsgType
LOGICAL_REP_MSG_RELATION = 'R',
LOGICAL_REP_MSG_TYPE = 'Y',
LOGICAL_REP_MSG_MESSAGE = 'M',
+ LOGICAL_REP_MSG_SEQUENCE = 'X', /* FIXME change */
LOGICAL_REP_MSG_BEGIN_PREPARE = 'b',
LOGICAL_REP_MSG_PREPARE = 'P',
LOGICAL_REP_MSG_COMMIT_PREPARED = 'K',
@@ -117,6 +118,18 @@ typedef struct LogicalRepTyp
char *typname; /* name of the remote type */
} LogicalRepTyp;
+/* Sequence info */
+typedef struct LogicalRepSequence
+{
+ Oid remoteid; /* unique id of the remote sequence */
+ char *nspname; /* schema name of remote sequence */
+ char *seqname; /* name of the remote sequence */
+ bool transactional;
+ int64 last_value;
+ int64 log_cnt;
+ bool is_called;
+} LogicalRepSequence;
+
/* Transaction info */
typedef struct LogicalRepBeginData
{
@@ -227,6 +240,12 @@ extern List *logicalrep_read_truncate(StringInfo in,
bool *cascade, bool *restart_seqs);
extern void logicalrep_write_message(StringInfo out, TransactionId xid, XLogRecPtr lsn,
bool transactional, const char *prefix, Size sz, const char *message);
+extern void logicalrep_write_sequence(StringInfo out, Relation rel,
+ TransactionId xid, XLogRecPtr lsn,
+ bool transactional,
+ int64 last_value, int64 log_cnt,
+ bool is_called);
+extern void logicalrep_read_sequence(StringInfo in, LogicalRepSequence *seqdata);
extern void logicalrep_write_rel(StringInfo out, TransactionId xid,
Relation rel);
extern LogicalRepRelation *logicalrep_read_rel(StringInfo in);
diff --git a/src/include/replication/pgoutput.h b/src/include/replication/pgoutput.h
index 78aa9151ef5..a6f6843ada6 100644
--- a/src/include/replication/pgoutput.h
+++ b/src/include/replication/pgoutput.h
@@ -28,6 +28,7 @@ typedef struct PGOutputData
bool streaming;
bool messages;
bool two_phase;
+ bool sequences;
} PGOutputData;
#endif /* PGOUTPUT_H */
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index d652f7b5fb4..dba7fadd038 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1451,6 +1451,14 @@ pg_prepared_xacts| SELECT p.transaction,
FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+ n.nspname AS schemaname,
+ c.relname AS sequencename
+ FROM pg_publication p,
+ LATERAL pg_get_publication_sequences((p.pubname)::text) gpt(relid),
+ (pg_class c
+ JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+ WHERE (c.oid = gpt.relid);
pg_publication_tables| SELECT p.pubname,
n.nspname AS schemaname,
c.relname AS tablename
diff --git a/src/test/subscription/t/027_sequences.pl b/src/test/subscription/t/027_sequences.pl
new file mode 100644
index 00000000000..fd941619071
--- /dev/null
+++ b/src/test/subscription/t/027_sequences.pl
@@ -0,0 +1,196 @@
+
+# Copyright (c) 2021, PostgreSQL Global Development Group
+
+# This tests that sequences are replicated correctly by logical replication
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More tests => 6;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->start;
+
+# Create subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Create some preexisting content on publisher
+my $ddl = qq(
+ CREATE SEQUENCE s;
+);
+
+# Setup structure on the publisher
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Create the same structure on the subscriber, plus an extra sequence that
+# we'll create on the publisher later
+$ddl = qq(
+ CREATE SEQUENCE s;
+ CREATE SEQUENCE s2;
+);
+
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Setup logical replication
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+ "CREATE PUBLICATION seq_pub");
+
+$node_publisher->safe_psql('postgres',
+ "ALTER PUBLICATION seq_pub ADD SEQUENCE s");
+
+$node_subscriber->safe_psql('postgres',
+ "CREATE SUBSCRIPTION seq_sub CONNECTION '$publisher_connstr' PUBLICATION seq_pub WITH (slot_name = seq_sub_slot)"
+);
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Wait for initial sync to finish as well
+my $synced_query =
+ "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('s', 'r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+ or die "Timed out while waiting for subscriber to synchronize data";
+
+# Insert initial test data
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ -- generate a number of values using the sequence
+ SELECT nextval('s') FROM generate_series(1,100);
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+my $result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s;
+));
+
+is( $result, '132|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+# advance the sequence in a rolled-back transaction - should not be replicated
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ SELECT nextval('s') FROM generate_series(1,100);
+ ROLLBACK;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s;
+));
+
+is( $result, '231|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+# create a new sequence and roll it back - should not be replicated, due to
+# the transactional behavior
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ CREATE SEQUENCE s2;
+ ALTER PUBLICATION seq_pub ADD SEQUENCE s2;
+ SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '1|0|f',
+ 'check replicated sequence values on subscriber');
+
+
+# create a new sequence, advance it in a rolled-back transaction, but commit
+# the create - the advance should be replicated nevertheless
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ CREATE SEQUENCE s2;
+ ALTER PUBLICATION seq_pub ADD SEQUENCE s2;
+ SAVEPOINT sp1;
+ SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK TO sp1;
+ COMMIT;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Wait for sync of the second sequence we just added to finish
+$synced_query =
+ "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('s', 'r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+ or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '132|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+# advance the new sequence in a transaction, and roll it back - in this case
+# it should not be replicated at commit
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '132|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+# advance the sequence in a subtransaction that gets rolled back, then commit
+# the main transaction - the changes should still be replicated
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ SAVEPOINT s1;
+ SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK TO s1;
+ COMMIT;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '330|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+$node_subscriber->stop('fast');
+$node_publisher->stop('fast');
--
2.31.1
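
For reference, a rough SQL sketch of how the publication-side pieces added
above fit together (object names and the connection string are made up for
illustration; the actual behaviour is whatever the grammar and catalog
changes in the patch end up supporting):

    -- on the publisher
    CREATE SEQUENCE s;
    CREATE PUBLICATION seq_pub;
    ALTER PUBLICATION seq_pub ADD SEQUENCE s;
    SELECT * FROM pg_publication_sequences WHERE pubname = 'seq_pub';

    -- on the subscriber (the sequence has to exist there already)
    CREATE SEQUENCE s;
    CREATE SUBSCRIPTION seq_sub CONNECTION 'dbname=postgres' PUBLICATION seq_pub;
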
On 26.01.22 03:16, Tomas Vondra wrote:
> Here's a rebased version of the patch series. I decided to go back to
> the version from 2021/12/14, which does not include the changes to WAL
> logging.
I notice that test_decoding still has skip-sequences enabled by default,
"for backward compatibility". I think we had concluded in previous
discussions that we don't need that. I would remove the option altogether.
I would not remove it altogether; there are plenty of consumers of this extension's output in the wild (even if I think it's unfortunate) that might not be interested in sequences, but changing it to off by default certainly makes sense.
--
Petr Jelinek
On 1/26/22 14:01, Petr Jelinek wrote:
> I would not remove it altogether; there are plenty of consumers of
> this extension's output in the wild (even if I think it's
> unfortunate) that might not be interested in sequences, but changing
> it to off by default certainly makes sense.
Indeed. Attached is an updated patch series, with 0003 switching it to
false by default (and fixing the fallout). For commit I'll merge that
into 0002, of course.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
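
A sketch of how a consumer not interested in sequences could keep the old
behaviour once the default flips (the "skip-sequences" spelling follows the
discussion above and may differ from what 0003 finally uses; 'test_slot'
stands for an existing test_decoding slot):

    SELECT data FROM pg_logical_slot_get_changes('test_slot', NULL, NULL,
                                                 'skip-sequences', '1');
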
Attachments:
0001-Logical-decoding-of-sequences-20220127.patch
From 32ced840dd45c45c570cbb004378ab09edbf929d Mon Sep 17 00:00:00 2001
From: Tomas Vondra <tomas.vondra@postgresql.org>
Date: Wed, 26 Jan 2022 02:07:15 +0100
Subject: [PATCH 1/4] Logical decoding of sequences
This adds the infrastructure for logical decoding of sequences. Support
for built-in logical replication and test_decoding is added separately.
When decoding sequences, we differentiate between sequences created in
a (running) transaction and sequences created in other (already
committed) transactions. Changes for sequences created in the same
top-level transaction are treated as "transactional", i.e. just like any
other change from that transaction (and discarded in case of a
rollback). Changes for sequences created earlier are treated as
non-transactional and are processed immediately, as if performed outside
any transaction (and thus not rolled back).
This mixed behavior is necessary - sequences are non-transactional (e.g.
ROLLBACK does not undo the sequence increments). But for new sequences,
we need to handle them in a transactional way, because if we ever get
some DDL support, the sequence won't exist until the transaction gets
applied. So we need to ensure the increments don't happen until the
sequence gets created.
To differentiate which sequences are "old" and which were created in a
still-running transaction, we track sequences created in running
transactions in a hash table. Each sequence is identified by its
relfilenode, and at transaction commit we discard all sequences created
in that particular transaction. For each sequence we track the XID of
the (sub)transaction that created it, and we cleanup the sequences for
each subtransaction when it completes (commit/rollback).
We don't use the XID to check if it's the same top-level transaction.
It's enough to know it was created in an in-progress transaction, and we
know it must be the current one because otherwise it wouldn't see the
sequence object.
The main changes in this patch are:
1) ensure WAL-logging of all necessary info for sequence advances
We need to be able to associate the advance with a XID, but until now
sequence advance might have XID 0 if it was the first thing the
transaction did. So we ensure the transaction has XID.
Note: This is needed because of subxacts. A XID 0 might still have the
sequence created in a different subxact of the same top-level xact.
Then there's a "created" flag added to the xl_seq_rec, which is
necessary to differentiate WAL for created/reset sequences.
2) decoding / queueing of sequences into ReorderBuffer
This is mostly copy-paste of existing code for other decoded events.
3) tracking sequences created in in-progress transactions
We use a simple hash table, indexed by relfilenode.
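
As a rough usage sketch (assuming the test_decoding support added separately
in this series; slot and sequence names are made up), the decoded sequence
increments can be observed along these lines:

    SELECT pg_create_logical_replication_slot('seq_slot', 'test_decoding');
    CREATE SEQUENCE s;
    SELECT nextval('s');
    SELECT data FROM pg_logical_slot_get_changes('seq_slot', NULL, NULL);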
---
doc/src/sgml/logicaldecoding.sgml | 62 ++-
src/backend/commands/sequence.c | 32 ++
src/backend/replication/logical/decode.c | 124 ++++++
src/backend/replication/logical/logical.c | 88 ++++
.../replication/logical/reorderbuffer.c | 400 ++++++++++++++++++
src/include/access/rmgrlist.h | 2 +-
src/include/commands/sequence.h | 1 +
src/include/replication/decode.h | 1 +
src/include/replication/output_plugin.h | 27 ++
src/include/replication/reorderbuffer.h | 43 +-
10 files changed, 775 insertions(+), 5 deletions(-)
diff --git a/doc/src/sgml/logicaldecoding.sgml b/doc/src/sgml/logicaldecoding.sgml
index b6353c7a125..60a2b7d1ccb 100644
--- a/doc/src/sgml/logicaldecoding.sgml
+++ b/doc/src/sgml/logicaldecoding.sgml
@@ -458,6 +458,7 @@ typedef struct OutputPluginCallbacks
LogicalDecodeTruncateCB truncate_cb;
LogicalDecodeCommitCB commit_cb;
LogicalDecodeMessageCB message_cb;
+ LogicalDecodeSequenceCB sequence_cb;
LogicalDecodeFilterByOriginCB filter_by_origin_cb;
LogicalDecodeShutdownCB shutdown_cb;
LogicalDecodeFilterPrepareCB filter_prepare_cb;
@@ -472,6 +473,7 @@ typedef struct OutputPluginCallbacks
LogicalDecodeStreamCommitCB stream_commit_cb;
LogicalDecodeStreamChangeCB stream_change_cb;
LogicalDecodeStreamMessageCB stream_message_cb;
+ LogicalDecodeStreamSequenceCB stream_sequence_cb;
LogicalDecodeStreamTruncateCB stream_truncate_cb;
} OutputPluginCallbacks;
@@ -481,9 +483,12 @@ typedef void (*LogicalOutputPluginInit) (struct OutputPluginCallbacks *cb);
and <function>commit_cb</function> callbacks are required,
while <function>startup_cb</function>,
<function>filter_by_origin_cb</function>, <function>truncate_cb</function>,
+ <function>sequence_cb</function>,
and <function>shutdown_cb</function> are optional.
If <function>truncate_cb</function> is not set but a
<command>TRUNCATE</command> is to be decoded, the action will be ignored.
+ Similarly, if <function>sequence_cb</function> is not set and a sequence
+ change is to be decoded, the action will be ignored.
</para>
<para>
@@ -492,7 +497,8 @@ typedef void (*LogicalOutputPluginInit) (struct OutputPluginCallbacks *cb);
<function>stream_stop_cb</function>, <function>stream_abort_cb</function>,
<function>stream_commit_cb</function>, <function>stream_change_cb</function>,
and <function>stream_prepare_cb</function>
- are required, while <function>stream_message_cb</function> and
+ are required, while <function>stream_message_cb</function>,
+ <function>stream_sequence_cb</function>, and
<function>stream_truncate_cb</function> are optional.
</para>
@@ -808,6 +814,35 @@ typedef void (*LogicalDecodeMessageCB) (struct LogicalDecodingContext *ctx,
</para>
</sect3>
+ <sect3 id="logicaldecoding-output-plugin-sequence">
+ <title>Sequence Callback</title>
+
+ <para>
+ The optional <function>sequence_cb</function> callback is called for
+ actions that update a sequence value.
+<programlisting>
+typedef void (*LogicalDecodeSequenceCB) (struct LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional,
+ int64 last_value,
+ int64 log_cnt,
+ bool is_called);
+</programlisting>
+ The <parameter>txn</parameter> parameter contains meta information about
+ the transaction the sequence change is part of. Note however that it can
+ be NULL when the sequence change is non-transactional and the XID was not
+ assigned yet in the transaction which updated the sequence value.
+ The <parameter>sequence_lsn</parameter> has the WAL location of the sequence
+ update. The <parameter>transactional</parameter> says whether the sequence
+ change has to be replayed as part of the transaction or directly.
+
+ The <parameter>last_value</parameter>, <parameter>log_cnt</parameter> and
+ <parameter>is_called</parameter> parameters describe the sequence change.
+ </para>
+ </sect3>
+
<sect3 id="logicaldecoding-output-plugin-filter-prepare">
<title>Prepare Filter Callback</title>
@@ -1017,6 +1052,26 @@ typedef void (*LogicalDecodeStreamMessageCB) (struct LogicalDecodingContext *ctx
</para>
</sect3>
+ <sect3 id="logicaldecoding-output-plugin-stream-sequence">
+ <title>Stream Sequence Callback</title>
+ <para>
+ The optional <function>stream_sequence_cb</function> callback is called
+ for actions that change a sequence in a block of streamed changes
+ (demarcated by <function>stream_start_cb</function> and
+ <function>stream_stop_cb</function> calls).
+<programlisting>
+typedef void (*LogicalDecodeStreamSequenceCB) (struct LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional,
+ int64 last_value,
+ int64 log_cnt,
+ bool is_called);
+</programlisting>
+ </para>
+ </sect3>
+
<sect3 id="logicaldecoding-output-plugin-stream-truncate">
<title>Stream Truncate Callback</title>
<para>
@@ -1197,8 +1252,9 @@ OutputPluginWrite(ctx, true);
in-progress transactions. There are multiple required streaming callbacks
(<function>stream_start_cb</function>, <function>stream_stop_cb</function>,
<function>stream_abort_cb</function>, <function>stream_commit_cb</function>
- and <function>stream_change_cb</function>) and two optional callbacks
- (<function>stream_message_cb</function> and <function>stream_truncate_cb</function>).
+ and <function>stream_change_cb</function>) and multiple optional callbacks
+ (<function>stream_message_cb</function>, <function>stream_sequence_cb</function>,
+ and <function>stream_truncate_cb</function>).
Also, if streaming of two-phase commands is to be supported, then additional
callbacks must be provided. (See <xref linkend="logicaldecoding-two-phase-commits"/>
for details).
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 27cb6307581..358fb28b5bd 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -378,8 +378,13 @@ fill_seq_with_data(Relation rel, HeapTuple tuple)
/* check the comment above nextval_internal()'s equivalent call. */
if (RelationNeedsWAL(rel))
+ {
GetTopTransactionId();
+ if (XLogLogicalInfoActive())
+ GetCurrentTransactionId();
+ }
+
START_CRIT_SECTION();
MarkBufferDirty(buf);
@@ -399,6 +404,7 @@ fill_seq_with_data(Relation rel, HeapTuple tuple)
XLogRegisterBuffer(0, buf, REGBUF_WILL_INIT);
xlrec.node = rel->rd_node;
+ xlrec.created = true;
XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
XLogRegisterData((char *) tuple->t_data, tuple->t_len);
@@ -764,10 +770,28 @@ nextval_internal(Oid relid, bool check_permissions)
* It's sufficient to ensure the toplevel transaction has an xid, no need
* to assign xids subxacts, that'll already trigger an appropriate wait.
* (Have to do that here, so we're outside the critical section)
+ *
+ * We have to ensure we have a proper XID, which will be included in
+ * the XLOG record by XLogRecordAssemble. Otherwise the first nextval()
+ * in a subxact (without any preceding changes) would get XID 0, and it
+ * would then be impossible to decide which top xact it belongs to.
+ * It'd also trigger an assert in the sequence decoding code. We only do that with
+ * wal_level=logical, though.
+ *
+ * XXX This might seem unnecessary, because if there's no XID the xact
+ * couldn't have done anything important yet, e.g. it could not have
+ * created a sequence. But that's incorrect, as it ignores subxacts. The
+ * current subtransaction might not have done anything yet (thus no XID),
+ * but an earlier one might have created the sequence.
*/
if (logit && RelationNeedsWAL(seqrel))
+ {
GetTopTransactionId();
+ if (XLogLogicalInfoActive())
+ GetCurrentTransactionId();
+ }
+
/* ready to change the on-disk (or really, in-buffer) tuple */
START_CRIT_SECTION();
@@ -803,6 +827,7 @@ nextval_internal(Oid relid, bool check_permissions)
seq->log_cnt = 0;
xlrec.node = seqrel->rd_node;
+ xlrec.created = false;
XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
XLogRegisterData((char *) seqdatatuple.t_data, seqdatatuple.t_len);
@@ -977,8 +1002,13 @@ do_setval(Oid relid, int64 next, bool iscalled)
/* check the comment above nextval_internal()'s equivalent call. */
if (RelationNeedsWAL(seqrel))
+ {
GetTopTransactionId();
+ if (XLogLogicalInfoActive())
+ GetCurrentTransactionId();
+ }
+
/* ready to change the on-disk (or really, in-buffer) tuple */
START_CRIT_SECTION();
@@ -999,6 +1029,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
XLogRegisterBuffer(0, buf, REGBUF_WILL_INIT);
xlrec.node = seqrel->rd_node;
+ xlrec.created = false;
+
XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
XLogRegisterData((char *) seqdatatuple.t_data, seqdatatuple.t_len);
diff --git a/src/backend/replication/logical/decode.c b/src/backend/replication/logical/decode.c
index 3fb5a92a1a1..f0a00b3ab25 100644
--- a/src/backend/replication/logical/decode.c
+++ b/src/backend/replication/logical/decode.c
@@ -42,6 +42,7 @@
#include "replication/reorderbuffer.h"
#include "replication/snapbuild.h"
#include "storage/standby.h"
+#include "commands/sequence.h"
/* individual record(group)'s handlers */
static void DecodeInsert(LogicalDecodingContext *ctx, XLogRecordBuffer *buf);
@@ -63,6 +64,7 @@ static void DecodePrepare(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
/* common function to decode tuples */
static void DecodeXLogTuple(char *data, Size len, ReorderBufferTupleBuf *tup);
+static void DecodeSeqTuple(char *data, Size len, ReorderBufferTupleBuf *tuple);
/* helper functions for decoding transactions */
static inline bool FilterPrepare(LogicalDecodingContext *ctx,
@@ -1250,3 +1252,125 @@ DecodeTXNNeedSkip(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
(txn_dbid != InvalidOid && txn_dbid != ctx->slot->data.database) ||
ctx->fast_forward || FilterByOrigin(ctx, origin_id));
}
+
+/*
+ * Decode Sequence Tuple
+ */
+static void
+DecodeSeqTuple(char *data, Size len, ReorderBufferTupleBuf *tuple)
+{
+ int datalen = len - sizeof(xl_seq_rec) - SizeofHeapTupleHeader;
+
+ Assert(datalen >= 0);
+
+ tuple->tuple.t_len = datalen + SizeofHeapTupleHeader;
+
+ ItemPointerSetInvalid(&tuple->tuple.t_self);
+
+ tuple->tuple.t_tableOid = InvalidOid;
+
+ memcpy(((char *) tuple->tuple.t_data),
+ data + sizeof(xl_seq_rec),
+ SizeofHeapTupleHeader);
+
+ memcpy(((char *) tuple->tuple.t_data) + SizeofHeapTupleHeader,
+ data + sizeof(xl_seq_rec) + SizeofHeapTupleHeader,
+ datalen);
+}
+
+/*
+ * Handle sequence decode
+ *
+ * Decoding sequences is a bit tricky, because while most sequence actions
+ * are non-transactional (not subject to rollback), some need to be handled
+ * as transactional.
+ *
+ * By default, a sequence increment is non-transactional - we must not queue
+ * it in a transaction like other changes, because the transaction might get
+ * rolled back and we'd discard the increment. The downstream would not be
+ * notified about the increment, which is wrong.
+ *
+ * On the other hand, the sequence may be created in a transaction. In this
+ * case we *should* queue the change as other changes in the transaction,
+ * because we don't want to send the increments for an unknown sequence to the
+ * plugin - it might get confused about which sequence they relate to, etc.
+ */
+void
+sequence_decode(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
+{
+ SnapBuild *builder = ctx->snapshot_builder;
+ ReorderBufferTupleBuf *tuplebuf;
+ RelFileNode target_node;
+ XLogReaderState *r = buf->record;
+ char *tupledata = NULL;
+ Size tuplelen;
+ Size datalen = 0;
+ TransactionId xid = XLogRecGetXid(r);
+ uint8 info = XLogRecGetInfo(buf->record) & ~XLR_INFO_MASK;
+ xl_seq_rec *xlrec;
+ Snapshot snapshot;
+ RepOriginId origin_id = XLogRecGetOrigin(r);
+ bool transactional;
+
+ /* only decode changes flagged with XLOG_SEQ_LOG */
+ if (info != XLOG_SEQ_LOG)
+ elog(ERROR, "unexpected RM_SEQ_ID record type: %u", info);
+
+ /*
+ * If we don't have a snapshot or we are just fast-forwarding, there is no
+ * point in decoding sequence changes.
+ */
+ if (SnapBuildCurrentState(builder) < SNAPBUILD_FULL_SNAPSHOT ||
+ ctx->fast_forward)
+ return;
+
+ /* only interested in our database */
+ XLogRecGetBlockTag(r, 0, &target_node, NULL, NULL);
+ if (target_node.dbNode != ctx->slot->data.database)
+ return;
+
+ /* output plugin doesn't look for this origin, no need to queue */
+ if (FilterByOrigin(ctx, XLogRecGetOrigin(r)))
+ return;
+
+ tupledata = XLogRecGetData(r);
+ datalen = XLogRecGetDataLen(r);
+ tuplelen = datalen - SizeOfHeapHeader - sizeof(xl_seq_rec);
+
+ /* extract the WAL record, with "created" flag */
+ xlrec = (xl_seq_rec *) XLogRecGetData(r);
+
+ /* XXX how could we have sequence change without data? */
+ if (!datalen || !tupledata)
+ return;
+
+ tuplebuf = ReorderBufferGetTupleBuf(ctx->reorder, tuplelen);
+ DecodeSeqTuple(tupledata, datalen, tuplebuf);
+
+ /*
+ * Should we handle the sequence increment as transactional or not?
+ *
+ * If the sequence was created in a still-running transaction, treat
+ * it as transactional and queue the increments. Otherwise it needs
+ * to be treated as non-transactional, in which case we send it to
+ * the plugin right away.
+ */
+ transactional = ReorderBufferSequenceIsTransactional(ctx->reorder,
+ target_node,
+ xlrec->created);
+
+ /* Skip the change if already processed (per the snapshot). */
+ if (transactional &&
+ !SnapBuildProcessChange(builder, xid, buf->origptr))
+ return;
+ else if (!transactional &&
+ (SnapBuildCurrentState(builder) != SNAPBUILD_CONSISTENT ||
+ SnapBuildXactNeedsSkip(builder, buf->origptr)))
+ return;
+
+ /* Queue the increment (or send immediately if not transactional). */
+ snapshot = SnapBuildGetOrBuildSnapshot(builder, xid);
+ ReorderBufferQueueSequence(ctx->reorder, xid, snapshot, buf->endptr,
+ origin_id, target_node, transactional,
+ xlrec->created, tuplebuf);
+}
diff --git a/src/backend/replication/logical/logical.c b/src/backend/replication/logical/logical.c
index 9bc3a2d8deb..934aa13f2d3 100644
--- a/src/backend/replication/logical/logical.c
+++ b/src/backend/replication/logical/logical.c
@@ -73,6 +73,10 @@ static void truncate_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
static void message_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
XLogRecPtr message_lsn, bool transactional,
const char *prefix, Size message_size, const char *message);
+static void sequence_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called);
/* streaming callbacks */
static void stream_start_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
@@ -90,6 +94,10 @@ static void stream_change_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn
static void stream_message_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
XLogRecPtr message_lsn, bool transactional,
const char *prefix, Size message_size, const char *message);
+static void stream_sequence_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called);
static void stream_truncate_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
int nrelations, Relation relations[], ReorderBufferChange *change);
@@ -218,6 +226,7 @@ StartupDecodingContext(List *output_plugin_options,
ctx->reorder->apply_truncate = truncate_cb_wrapper;
ctx->reorder->commit = commit_cb_wrapper;
ctx->reorder->message = message_cb_wrapper;
+ ctx->reorder->sequence = sequence_cb_wrapper;
/*
* To support streaming, we require start/stop/abort/commit/change
@@ -234,6 +243,7 @@ StartupDecodingContext(List *output_plugin_options,
(ctx->callbacks.stream_commit_cb != NULL) ||
(ctx->callbacks.stream_change_cb != NULL) ||
(ctx->callbacks.stream_message_cb != NULL) ||
+ (ctx->callbacks.stream_sequence_cb != NULL) ||
(ctx->callbacks.stream_truncate_cb != NULL);
/*
@@ -251,6 +261,7 @@ StartupDecodingContext(List *output_plugin_options,
ctx->reorder->stream_commit = stream_commit_cb_wrapper;
ctx->reorder->stream_change = stream_change_cb_wrapper;
ctx->reorder->stream_message = stream_message_cb_wrapper;
+ ctx->reorder->stream_sequence = stream_sequence_cb_wrapper;
ctx->reorder->stream_truncate = stream_truncate_cb_wrapper;
@@ -1203,6 +1214,42 @@ message_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
error_context_stack = errcallback.previous;
}
+static void
+sequence_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel, bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ LogicalDecodingContext *ctx = cache->private_data;
+ LogicalErrorCallbackState state;
+ ErrorContextCallback errcallback;
+
+ Assert(!ctx->fast_forward);
+
+ if (ctx->callbacks.sequence_cb == NULL)
+ return;
+
+ /* Push callback + info on the error context stack */
+ state.ctx = ctx;
+ state.callback_name = "sequence";
+ state.report_location = sequence_lsn;
+ errcallback.callback = output_plugin_error_callback;
+ errcallback.arg = (void *) &state;
+ errcallback.previous = error_context_stack;
+ error_context_stack = &errcallback;
+
+ /* set output state */
+ ctx->accept_writes = true;
+ ctx->write_xid = txn != NULL ? txn->xid : InvalidTransactionId;
+ ctx->write_location = sequence_lsn;
+
+ /* do the actual work: call callback */
+ ctx->callbacks.sequence_cb(ctx, txn, sequence_lsn, rel, transactional,
+ last_value, log_cnt, is_called);
+
+ /* Pop the error context stack */
+ error_context_stack = errcallback.previous;
+}
+
static void
stream_start_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
XLogRecPtr first_lsn)
@@ -1508,6 +1555,47 @@ stream_message_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
error_context_stack = errcallback.previous;
}
+static void
+stream_sequence_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ LogicalDecodingContext *ctx = cache->private_data;
+ LogicalErrorCallbackState state;
+ ErrorContextCallback errcallback;
+
+ Assert(!ctx->fast_forward);
+
+ /* We're only supposed to call this when streaming is supported. */
+ Assert(ctx->streaming);
+
+ /* this callback is optional */
+ if (ctx->callbacks.stream_sequence_cb == NULL)
+ return;
+
+ /* Push callback + info on the error context stack */
+ state.ctx = ctx;
+ state.callback_name = "stream_sequence";
+ state.report_location = sequence_lsn;
+ errcallback.callback = output_plugin_error_callback;
+ errcallback.arg = (void *) &state;
+ errcallback.previous = error_context_stack;
+ error_context_stack = &errcallback;
+
+ /* set output state */
+ ctx->accept_writes = true;
+ ctx->write_xid = txn != NULL ? txn->xid : InvalidTransactionId;
+ ctx->write_location = sequence_lsn;
+
+ /* do the actual work: call callback */
+ ctx->callbacks.stream_sequence_cb(ctx, txn, sequence_lsn, rel, transactional,
+ last_value, log_cnt, is_called);
+
+ /* Pop the error context stack */
+ error_context_stack = errcallback.previous;
+}
+
static void
stream_truncate_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
int nrelations, Relation relations[],
diff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c
index 19b2ba2000c..50fd9dc901d 100644
--- a/src/backend/replication/logical/reorderbuffer.c
+++ b/src/backend/replication/logical/reorderbuffer.c
@@ -77,6 +77,35 @@
* a bit more memory to the oldest subtransactions, because it's likely
* they are the source for the next sequence of changes.
*
+ * When decoding sequences, we differentiate between sequences created
+ * in a (running) transaction and sequences created in other (already
+ * committed) transactions. Changes for sequences created in the same
+ * top-level transaction are treated as "transactional", i.e. just like
+ * any other change from that transaction (and discarded in case of a
+ * rollback). Changes for sequences created earlier are treated as
+ * non-transactional and are processed immediately, as if performed
+ * outside any transaction (and thus not rolled back).
+ *
+ * This mixed behavior is necessary - sequences are non-transactional
+ * (e.g. ROLLBACK does not undo the sequence increments). But for new
+ * sequences, we need to handle them in a transactional way, because if
+ * we ever get some DDL support, the sequence won't exist until the
+ * transaction gets applied. So we need to ensure the increments don't
+ * happen until the sequence gets created.
+ *
+ * To differentiate which sequences are "old" and which were created
+ * in a still-running transaction, we track sequences created in running
+ * transactions in a hash table. Each sequence is identified by its
+ * relfilenode, and at transaction commit we discard all sequences
+ * created in that particular transaction. For each sequence we track
+ * the XID of the (sub)transaction that created it, and we cleanup the
+ * sequences for each subtransaction when it completes (commit/rollback).
+ *
+ * We don't use the XID to check if it's the same top-level transaction.
+ * It's enough to know it was created in an in-progress transaction,
+ * and we know it must be the current one because otherwise it wouldn't
+ * see the sequence object.
+ *
* -------------------------------------------------------------------------
*/
#include "postgres.h"
@@ -91,6 +120,7 @@
#include "access/xact.h"
#include "access/xlog_internal.h"
#include "catalog/catalog.h"
+#include "commands/sequence.h"
#include "lib/binaryheap.h"
#include "miscadmin.h"
#include "pgstat.h"
@@ -116,6 +146,13 @@ typedef struct ReorderBufferTXNByIdEnt
ReorderBufferTXN *txn;
} ReorderBufferTXNByIdEnt;
+/* entry for hash table we use to track sequences created in running xacts */
+typedef struct ReorderBufferSequenceEnt
+{
+ RelFileNode rnode;
+ TransactionId xid;
+} ReorderBufferSequenceEnt;
+
/* data structures for (relfilenode, ctid) => (cmin, cmax) mapping */
typedef struct ReorderBufferTupleCidKey
{
@@ -339,6 +376,14 @@ ReorderBufferAllocate(void)
buffer->by_txn = hash_create("ReorderBufferByXid", 1000, &hash_ctl,
HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+ /* hash table of sequences, mapping relfilenode to XID of transaction */
+ hash_ctl.keysize = sizeof(RelFileNode);
+ hash_ctl.entrysize = sizeof(ReorderBufferSequenceEnt);
+ hash_ctl.hcxt = buffer->context;
+
+ buffer->sequences = hash_create("ReorderBufferSequenceHash", 1000, &hash_ctl,
+ HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+
buffer->by_txn_last_xid = InvalidTransactionId;
buffer->by_txn_last_txn = NULL;
@@ -525,6 +570,13 @@ ReorderBufferReturnChange(ReorderBuffer *rb, ReorderBufferChange *change,
change->data.truncate.relids = NULL;
}
break;
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ if (change->data.sequence.tuple)
+ {
+ ReorderBufferReturnTupleBuf(rb, change->data.sequence.tuple);
+ change->data.sequence.tuple = NULL;
+ }
+ break;
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT:
case REORDER_BUFFER_CHANGE_INTERNAL_COMMAND_ID:
@@ -859,6 +911,230 @@ ReorderBufferQueueMessage(ReorderBuffer *rb, TransactionId xid,
}
}
+/*
+ * Treat the sequence increment as transactional?
+ *
+ * The hash table tracks all sequences created in in-progress transactions,
+ * so we simply do a lookup (the sequence is identified by relfilenode). If
+ * we find a match, the increment should be handled as transactional.
+ */
+bool
+ReorderBufferSequenceIsTransactional(ReorderBuffer *rb,
+ RelFileNode rnode, bool created)
+{
+ bool found = false;
+
+ if (created)
+ return true;
+
+ hash_search(rb->sequences,
+ (void *) &rnode,
+ HASH_FIND,
+ &found);
+
+ return found;
+}
+
+/*
+ * Cleanup sequences created in in-progress transactions.
+ *
+ * There's no way to search by XID, so we simply do a seqscan of all
+ * the entries in the hash table. Hopefully there are only a couple
+ * entries in most cases - people generally don't create many new
+ * sequences over and over.
+ */
+static void
+ReorderBufferSequenceCleanup(ReorderBuffer *rb, TransactionId xid)
+{
+ HASH_SEQ_STATUS scan_status;
+ ReorderBufferSequenceEnt *ent;
+
+ hash_seq_init(&scan_status, rb->sequences);
+ while ((ent = (ReorderBufferSequenceEnt *) hash_seq_search(&scan_status)) != NULL)
+ {
+ /* skip sequences not from this transaction */
+ if (ent->xid != xid)
+ continue;
+
+ (void) hash_search(rb->sequences,
+ (void *) &(ent->rnode),
+ HASH_REMOVE, NULL);
+ }
+}
+
+/*
+ * A transactional sequence increment is queued to be processed upon commit
+ * and a non-transactional increment gets processed immediately.
+ *
+ * A sequence update is either transactional or non-transactional. When
+ * the sequence was created in a still-running transaction, the update is
+ * treated as transactional and queued in that transaction. Otherwise it
+ * is treated as non-transactional, so that we don't lose the increment
+ * in case of a rollback.
+ */
+void
+ReorderBufferQueueSequence(ReorderBuffer *rb, TransactionId xid,
+ Snapshot snapshot, XLogRecPtr lsn, RepOriginId origin_id,
+ RelFileNode rnode, bool transactional, bool created,
+ ReorderBufferTupleBuf *tuplebuf)
+{
+ /*
+ * Change needs to be handled as transactional, because the sequence was
+ * created in a transaction that is still running. In that case all the
+ * changes need to be queued in that transaction; we must not send them
+ * downstream until the transaction commits.
+ *
+ * There's a bit of trouble with subtransactions - we can't queue it
+ * into the subxact, because it might be rolled back and we'd lose the
+ * increment. We need to queue it into the same (sub)xact that created
+ * the sequence, which is why we track the XID in the hash table.
+ */
+ if (transactional)
+ {
+ MemoryContext oldcontext;
+ ReorderBufferChange *change;
+
+ /* lookup sequence by relfilenode */
+ ReorderBufferSequenceEnt *ent;
+ bool found;
+
+ /* transactional changes require a transaction */
+ Assert(xid != InvalidTransactionId);
+
+ /* search the lookup table (enter a new entry if the sequence was just created) */
+ ent = hash_search(rb->sequences,
+ (void *) &rnode,
+ created ? HASH_ENTER : HASH_FIND,
+ &found);
+
+ /*
+ * If this is the "create" increment, we must not have found any
+ * pre-existing entry in the hash table (i.e. there must not be
+ * any conflicting sequence).
+ */
+ Assert(!(created && found));
+
+ /* But we must have either created or found an existing entry. */
+ Assert(created || found);
+
+ /*
+ * When creating the sequence, remember the XID of the transaction
+ * that created it.
+ */
+ if (created)
+ ent->xid = xid;
+
+ /* XXX Maybe check that we're still in the same top-level xact? */
+
+ /* OK, allocate and queue the change */
+ oldcontext = MemoryContextSwitchTo(rb->context);
+
+ change = ReorderBufferGetChange(rb);
+
+ change->action = REORDER_BUFFER_CHANGE_SEQUENCE;
+ change->origin_id = origin_id;
+
+ memcpy(&change->data.sequence.relnode, &rnode, sizeof(RelFileNode));
+
+ change->data.sequence.tuple = tuplebuf;
+
+ /* add it to the same subxact that created the sequence */
+ ReorderBufferQueueChange(rb, ent->xid, lsn, change, false);
+
+ MemoryContextSwitchTo(oldcontext);
+ }
+ else
+ {
+ /*
+ * This increment is for a sequence that was not created in any
+ * running transaction, so we treat it as non-transactional and
+ * just send it to the output plugin directly.
+ */
+ ReorderBufferTXN *txn = NULL;
+ volatile Snapshot snapshot_now = snapshot;
+ bool using_subtxn;
+
+#ifdef USE_ASSERT_CHECKING
+ /* All "creates" have to be handled as transactional. */
+ Assert(!created);
+
+ /* Make sure the sequence is not in the hash table. */
+ {
+ bool found;
+ hash_search(rb->sequences,
+ (void *) &rnode,
+ HASH_FIND, &found);
+ Assert(!found);
+ }
+#endif
+
+ if (xid != InvalidTransactionId)
+ txn = ReorderBufferTXNByXid(rb, xid, true, NULL, lsn, true);
+
+ /* setup snapshot to allow catalog access */
+ SetupHistoricSnapshot(snapshot_now, NULL);
+
+ /*
+ * Decoding needs access to syscaches et al., which in turn use
+ * heavyweight locks and such. Thus we need to have enough state around to
+ * keep track of those. The easiest way is to simply use a transaction
+ * internally. That also allows us to easily enforce that nothing writes
+ * to the database by checking for xid assignments.
+ *
+ * When we're called via the SQL SRF there's already a transaction
+ * started, so start an explicit subtransaction there.
+ */
+ using_subtxn = IsTransactionOrTransactionBlock();
+
+ PG_TRY();
+ {
+ Relation relation;
+ HeapTuple tuple;
+ Form_pg_sequence_data seq;
+ Oid reloid;
+
+ if (using_subtxn)
+ BeginInternalSubTransaction("sequence");
+ else
+ StartTransactionCommand();
+
+ reloid = RelidByRelfilenode(rnode.spcNode, rnode.relNode);
+
+ if (reloid == InvalidOid)
+ elog(ERROR, "could not map filenode \"%s\" to relation OID",
+ relpathperm(rnode,
+ MAIN_FORKNUM));
+
+ relation = RelationIdGetRelation(reloid);
+ tuple = &tuplebuf->tuple;
+ seq = (Form_pg_sequence_data) GETSTRUCT(tuple);
+
+ rb->sequence(rb, txn, lsn, relation, transactional,
+ seq->last_value, seq->log_cnt, seq->is_called);
+
+ RelationClose(relation);
+
+ TeardownHistoricSnapshot(false);
+
+ AbortCurrentTransaction();
+
+ if (using_subtxn)
+ RollbackAndReleaseCurrentSubTransaction();
+ }
+ PG_CATCH();
+ {
+ TeardownHistoricSnapshot(true);
+
+ AbortCurrentTransaction();
+
+ if (using_subtxn)
+ RollbackAndReleaseCurrentSubTransaction();
+
+ PG_RE_THROW();
+ }
+ PG_END_TRY();
+ }
+}
+
/*
* AssertTXNLsnOrder
* Verify LSN ordering of transaction lists in the reorderbuffer
@@ -1535,6 +1811,9 @@ ReorderBufferCleanupTXN(ReorderBuffer *rb, ReorderBufferTXN *txn)
&found);
Assert(found);
+ /* Remove sequences created in this transaction (if any). */
+ ReorderBufferSequenceCleanup(rb, txn->xid);
+
/* remove entries spilled to disk */
if (rbtxn_is_serialized(txn))
ReorderBufferRestoreCleanup(rb, txn);
@@ -1950,6 +2229,29 @@ ReorderBufferApplyMessage(ReorderBuffer *rb, ReorderBufferTXN *txn,
change->data.msg.message);
}
+/*
+ * Helper function for ReorderBufferProcessTXN for applying sequences.
+ */
+static inline void
+ReorderBufferApplySequence(ReorderBuffer *rb, ReorderBufferTXN *txn,
+ Relation relation, ReorderBufferChange *change,
+ bool streaming)
+{
+ HeapTuple tuple;
+ Form_pg_sequence_data seq;
+
+ tuple = &change->data.sequence.tuple->tuple;
+ seq = (Form_pg_sequence_data) GETSTRUCT(tuple);
+
+ /* Only ever called from ReorderBufferProcessTXN, so transactional. */
+ if (streaming)
+ rb->stream_sequence(rb, txn, change->lsn, relation, true,
+ seq->last_value, seq->log_cnt, seq->is_called);
+ else
+ rb->sequence(rb, txn, change->lsn, relation, true,
+ seq->last_value, seq->log_cnt, seq->is_called);
+}
+
/*
* Function to store the command id and snapshot at the end of the current
* stream so that we can reuse the same while sending the next stream.
@@ -2392,6 +2694,31 @@ ReorderBufferProcessTXN(ReorderBuffer *rb, ReorderBufferTXN *txn,
case REORDER_BUFFER_CHANGE_INTERNAL_TUPLECID:
elog(ERROR, "tuplecid value in changequeue");
break;
+
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ Assert(snapshot_now);
+
+ reloid = RelidByRelfilenode(change->data.sequence.relnode.spcNode,
+ change->data.sequence.relnode.relNode);
+
+ if (reloid == InvalidOid)
+ elog(ERROR, "could not map filenode \"%s\" to relation OID",
+ relpathperm(change->data.sequence.relnode,
+ MAIN_FORKNUM));
+
+ relation = RelationIdGetRelation(reloid);
+
+ if (!RelationIsValid(relation))
+ elog(ERROR, "could not open relation with OID %u (for filenode \"%s\")",
+ reloid,
+ relpathperm(change->data.sequence.relnode,
+ MAIN_FORKNUM));
+
+ if (RelationIsLogicallyLogged(relation))
+ ReorderBufferApplySequence(rb, txn, relation, change, streaming);
+
+ RelationClose(relation);
+ break;
}
}
@@ -3776,6 +4103,39 @@ ReorderBufferSerializeChange(ReorderBuffer *rb, ReorderBufferTXN *txn,
memcpy(data, change->data.truncate.relids, size);
data += size;
+ break;
+ }
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ {
+ char *data;
+ ReorderBufferTupleBuf *tup;
+ Size len = 0;
+
+ tup = change->data.sequence.tuple;
+
+ if (tup)
+ {
+ sz += sizeof(HeapTupleData);
+ len = tup->tuple.t_len;
+ sz += len;
+ }
+
+ /* make sure we have enough space */
+ ReorderBufferSerializeReserve(rb, sz);
+
+ data = ((char *) rb->outbuf) + sizeof(ReorderBufferDiskChange);
+ /* might have been reallocated above */
+ ondisk = (ReorderBufferDiskChange *) rb->outbuf;
+
+ if (len)
+ {
+ memcpy(data, &tup->tuple, sizeof(HeapTupleData));
+ data += sizeof(HeapTupleData);
+
+ memcpy(data, tup->tuple.t_data, len);
+ data += len;
+ }
+
break;
}
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
@@ -4040,6 +4400,22 @@ ReorderBufferChangeSize(ReorderBufferChange *change)
{
sz += sizeof(Oid) * change->data.truncate.nrelids;
+ break;
+ }
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ {
+ ReorderBufferTupleBuf *tup;
+ Size len = 0;
+
+ tup = change->data.sequence.tuple;
+
+ if (tup)
+ {
+ sz += sizeof(HeapTupleData);
+ len = tup->tuple.t_len;
+ sz += len;
+ }
+
break;
}
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
@@ -4341,6 +4717,30 @@ ReorderBufferRestoreChange(ReorderBuffer *rb, ReorderBufferTXN *txn,
break;
}
+
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ if (change->data.sequence.tuple)
+ {
+ uint32 tuplelen = ((HeapTuple) data)->t_len;
+
+ change->data.sequence.tuple =
+ ReorderBufferGetTupleBuf(rb, tuplelen - SizeofHeapTupleHeader);
+
+ /* restore ->tuple */
+ memcpy(&change->data.sequence.tuple->tuple, data,
+ sizeof(HeapTupleData));
+ data += sizeof(HeapTupleData);
+
+ /* reset t_data pointer into the new tuplebuf */
+ change->data.sequence.tuple->tuple.t_data =
+ ReorderBufferTupleBufData(change->data.sequence.tuple);
+
+ /* restore tuple data itself */
+ memcpy(change->data.sequence.tuple->tuple.t_data, data, tuplelen);
+ data += tuplelen;
+ }
+ break;
+
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT:
case REORDER_BUFFER_CHANGE_INTERNAL_COMMAND_ID:
diff --git a/src/include/access/rmgrlist.h b/src/include/access/rmgrlist.h
index 9a74721c97c..cf8b6d48193 100644
--- a/src/include/access/rmgrlist.h
+++ b/src/include/access/rmgrlist.h
@@ -40,7 +40,7 @@ PG_RMGR(RM_BTREE_ID, "Btree", btree_redo, btree_desc, btree_identify, btree_xlog
PG_RMGR(RM_HASH_ID, "Hash", hash_redo, hash_desc, hash_identify, NULL, NULL, hash_mask, NULL)
PG_RMGR(RM_GIN_ID, "Gin", gin_redo, gin_desc, gin_identify, gin_xlog_startup, gin_xlog_cleanup, gin_mask, NULL)
PG_RMGR(RM_GIST_ID, "Gist", gist_redo, gist_desc, gist_identify, gist_xlog_startup, gist_xlog_cleanup, gist_mask, NULL)
-PG_RMGR(RM_SEQ_ID, "Sequence", seq_redo, seq_desc, seq_identify, NULL, NULL, seq_mask, NULL)
+PG_RMGR(RM_SEQ_ID, "Sequence", seq_redo, seq_desc, seq_identify, NULL, NULL, seq_mask, sequence_decode)
PG_RMGR(RM_SPGIST_ID, "SPGist", spg_redo, spg_desc, spg_identify, spg_xlog_startup, spg_xlog_cleanup, spg_mask, NULL)
PG_RMGR(RM_BRIN_ID, "BRIN", brin_redo, brin_desc, brin_identify, NULL, NULL, brin_mask, NULL)
PG_RMGR(RM_COMMIT_TS_ID, "CommitTs", commit_ts_redo, commit_ts_desc, commit_ts_identify, NULL, NULL, NULL, NULL)
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index f7542f2bccb..b9015a6f710 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -48,6 +48,7 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
typedef struct xl_seq_rec
{
RelFileNode node;
+ bool created; /* is this a CREATE SEQUENCE */
/* SEQUENCE TUPLE DATA FOLLOWS AT THE END */
} xl_seq_rec;
diff --git a/src/include/replication/decode.h b/src/include/replication/decode.h
index a33c2a718a7..8e07bb7409a 100644
--- a/src/include/replication/decode.h
+++ b/src/include/replication/decode.h
@@ -27,6 +27,7 @@ extern void heap2_decode(LogicalDecodingContext *ctx, XLogRecordBuffer *buf);
extern void xact_decode(LogicalDecodingContext *ctx, XLogRecordBuffer *buf);
extern void standby_decode(LogicalDecodingContext *ctx, XLogRecordBuffer *buf);
extern void logicalmsg_decode(LogicalDecodingContext *ctx, XLogRecordBuffer *buf);
+extern void sequence_decode(LogicalDecodingContext *ctx, XLogRecordBuffer *buf);
extern void LogicalDecodingProcessRecord(LogicalDecodingContext *ctx,
XLogReaderState *record);
diff --git a/src/include/replication/output_plugin.h b/src/include/replication/output_plugin.h
index 41157fda7cc..a16bebf76ca 100644
--- a/src/include/replication/output_plugin.h
+++ b/src/include/replication/output_plugin.h
@@ -88,6 +88,18 @@ typedef void (*LogicalDecodeMessageCB) (struct LogicalDecodingContext *ctx,
Size message_size,
const char *message);
+/*
+ * Called for logical decoding of sequence changes.
+ */
+typedef void (*LogicalDecodeSequenceCB) (struct LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional,
+ int64 last_value,
+ int64 log_cnt,
+ bool is_called);
+
/*
* Filter changes by origin.
*/
@@ -199,6 +211,19 @@ typedef void (*LogicalDecodeStreamMessageCB) (struct LogicalDecodingContext *ctx
Size message_size,
const char *message);
+/*
+ * Called for streaming sequence changes from in-progress transactions.
+ */
+typedef void (*LogicalDecodeStreamSequenceCB) (struct LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional,
+ int64 last_value,
+ int64 log_cnt,
+ bool is_called);
+
/*
* Callback for streaming truncates from in-progress transactions.
*/
@@ -219,6 +244,7 @@ typedef struct OutputPluginCallbacks
LogicalDecodeTruncateCB truncate_cb;
LogicalDecodeCommitCB commit_cb;
LogicalDecodeMessageCB message_cb;
+ LogicalDecodeSequenceCB sequence_cb;
LogicalDecodeFilterByOriginCB filter_by_origin_cb;
LogicalDecodeShutdownCB shutdown_cb;
@@ -237,6 +263,7 @@ typedef struct OutputPluginCallbacks
LogicalDecodeStreamCommitCB stream_commit_cb;
LogicalDecodeStreamChangeCB stream_change_cb;
LogicalDecodeStreamMessageCB stream_message_cb;
+ LogicalDecodeStreamSequenceCB stream_sequence_cb;
LogicalDecodeStreamTruncateCB stream_truncate_cb;
} OutputPluginCallbacks;
diff --git a/src/include/replication/reorderbuffer.h b/src/include/replication/reorderbuffer.h
index aa0a73382f6..859424bbd9b 100644
--- a/src/include/replication/reorderbuffer.h
+++ b/src/include/replication/reorderbuffer.h
@@ -64,7 +64,8 @@ enum ReorderBufferChangeType
REORDER_BUFFER_CHANGE_INTERNAL_SPEC_INSERT,
REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM,
REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT,
- REORDER_BUFFER_CHANGE_TRUNCATE
+ REORDER_BUFFER_CHANGE_TRUNCATE,
+ REORDER_BUFFER_CHANGE_SEQUENCE
};
/* forward declaration */
@@ -158,6 +159,13 @@ typedef struct ReorderBufferChange
uint32 ninvalidations; /* Number of messages */
SharedInvalidationMessage *invalidations; /* invalidation message */
} inval;
+
+ /* Context data for Sequence changes */
+ struct
+ {
+ RelFileNode relnode;
+ ReorderBufferTupleBuf *tuple;
+ } sequence;
} data;
/*
@@ -430,6 +438,15 @@ typedef void (*ReorderBufferMessageCB) (ReorderBuffer *rb,
const char *prefix, Size sz,
const char *message);
+/* sequence callback signature */
+typedef void (*ReorderBufferSequenceCB) (ReorderBuffer *rb,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional,
+ int64 last_value, int64 log_cnt,
+ bool is_called);
+
/* begin prepare callback signature */
typedef void (*ReorderBufferBeginPrepareCB) (ReorderBuffer *rb,
ReorderBufferTXN *txn);
@@ -496,6 +513,15 @@ typedef void (*ReorderBufferStreamMessageCB) (
const char *prefix, Size sz,
const char *message);
+/* stream sequence callback signature */
+typedef void (*ReorderBufferStreamSequenceCB) (ReorderBuffer *rb,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional,
+ int64 last_value, int64 log_cnt,
+ bool is_called);
+
/* stream truncate callback signature */
typedef void (*ReorderBufferStreamTruncateCB) (
ReorderBuffer *rb,
@@ -511,6 +537,12 @@ struct ReorderBuffer
*/
HTAB *by_txn;
+ /*
+ * relfilenode => XID lookup table for sequences created in a transaction
+ * (this also covers altered sequences, because ALTER SEQUENCE assigns a
+ * new relfilenode)
+ */
+ HTAB *sequences;
+
/*
* Transactions that could be a toplevel xact, ordered by LSN of the first
* record bearing that xid.
@@ -541,6 +573,7 @@ struct ReorderBuffer
ReorderBufferApplyTruncateCB apply_truncate;
ReorderBufferCommitCB commit;
ReorderBufferMessageCB message;
+ ReorderBufferSequenceCB sequence;
/*
* Callbacks to be called when streaming a transaction at prepare time.
@@ -560,6 +593,7 @@ struct ReorderBuffer
ReorderBufferStreamCommitCB stream_commit;
ReorderBufferStreamChangeCB stream_change;
ReorderBufferStreamMessageCB stream_message;
+ ReorderBufferStreamSequenceCB stream_sequence;
ReorderBufferStreamTruncateCB stream_truncate;
/*
@@ -635,6 +669,10 @@ void ReorderBufferQueueChange(ReorderBuffer *, TransactionId,
void ReorderBufferQueueMessage(ReorderBuffer *, TransactionId, Snapshot snapshot, XLogRecPtr lsn,
bool transactional, const char *prefix,
Size message_size, const char *message);
+void ReorderBufferQueueSequence(ReorderBuffer *rb, TransactionId xid,
+ Snapshot snapshot, XLogRecPtr lsn, RepOriginId origin_id,
+ RelFileNode rnode, bool transactional, bool created,
+ ReorderBufferTupleBuf *tuplebuf);
void ReorderBufferCommit(ReorderBuffer *, TransactionId,
XLogRecPtr commit_lsn, XLogRecPtr end_lsn,
TimestampTz commit_time, RepOriginId origin_id, XLogRecPtr origin_lsn);
@@ -682,4 +720,7 @@ void ReorderBufferSetRestartPoint(ReorderBuffer *, XLogRecPtr ptr);
void StartupReorderBuffer(void);
+bool ReorderBufferSequenceIsTransactional(ReorderBuffer *rb,
+ RelFileNode rnode, bool created);
+
#endif
--
2.31.1
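
To make the consumer side a bit more concrete, here is a minimal sketch of
an output plugin hooking into the new sequence_cb callback, following the
LogicalDecodeSequenceCB signature added to output_plugin.h above. This is
just an illustration - the plugin name, the output format and the omission
of the mandatory begin/change/commit callbacks are mine, not part of the
patch; the real implementation for test_decoding is in the next patch.

#include "postgres.h"

#include "fmgr.h"
#include "lib/stringinfo.h"
#include "replication/logical.h"
#include "replication/output_plugin.h"
#include "utils/rel.h"

PG_MODULE_MAGIC;

/* emit one line per decoded sequence increment */
static void
my_sequence_cb(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
			   XLogRecPtr sequence_lsn, Relation rel, bool transactional,
			   int64 last_value, int64 log_cnt, bool is_called)
{
	OutputPluginPrepareWrite(ctx, true);
	appendStringInfo(ctx->out, "sequence %s: transactional:%d last_value:" INT64_FORMAT,
					 RelationGetRelationName(rel), transactional, last_value);
	OutputPluginWrite(ctx, true);
}

void
_PG_output_plugin_init(OutputPluginCallbacks *cb)
{
	/* begin_cb, change_cb, commit_cb etc. omitted for brevity */
	cb->sequence_cb = my_sequence_cb;

	/* the stream callback has the same signature, so it can reuse the function */
	cb->stream_sequence_cb = my_sequence_cb;
}

Non-transactional increments arrive through this callback on their own,
outside any begin/commit pair, while transactional ones are delivered
inside the decoded transaction that created the sequence (or through
stream_sequence_cb when streaming) - which is what the expected
test_decoding output below shows.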
Attachment: 0002-Add-support-for-decoding-sequences-to-test_-20220127.patch (text/x-patch)
From dacaaefb0392dfea7f5423e9354bc3660cdc4aaf Mon Sep 17 00:00:00 2001
From: Tomas Vondra <tomas.vondra@postgresql.org>
Date: Fri, 24 Sep 2021 00:42:04 +0200
Subject: [PATCH 2/4] Add support for decoding sequences to test_decoding
---
contrib/test_decoding/Makefile | 3 +-
contrib/test_decoding/expected/sequence.out | 327 ++++++++++++++++++++
contrib/test_decoding/sql/sequence.sql | 119 +++++++
contrib/test_decoding/test_decoding.c | 69 +++++
4 files changed, 517 insertions(+), 1 deletion(-)
create mode 100644 contrib/test_decoding/expected/sequence.out
create mode 100644 contrib/test_decoding/sql/sequence.sql
diff --git a/contrib/test_decoding/Makefile b/contrib/test_decoding/Makefile
index 9a31e0b8795..56ddc3abaeb 100644
--- a/contrib/test_decoding/Makefile
+++ b/contrib/test_decoding/Makefile
@@ -5,7 +5,8 @@ PGFILEDESC = "test_decoding - example of a logical decoding output plugin"
REGRESS = ddl xact rewrite toast permissions decoding_in_xact \
decoding_into_rel binary prepared replorigin time messages \
- spill slot truncate stream stats twophase twophase_stream
+ spill slot truncate stream stats twophase twophase_stream \
+ sequence
ISOLATION = mxact delayed_startup ondisk_startup concurrent_ddl_dml \
oldest_xmin snapshot_transfer subxact_without_top concurrent_stream \
twophase_snapshot
diff --git a/contrib/test_decoding/expected/sequence.out b/contrib/test_decoding/expected/sequence.out
new file mode 100644
index 00000000000..c3c2a18a35b
--- /dev/null
+++ b/contrib/test_decoding/expected/sequence.out
@@ -0,0 +1,327 @@
+-- predictability
+SET synchronous_commit = on;
+SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'test_decoding');
+ ?column?
+----------
+ init
+(1 row)
+
+CREATE SEQUENCE test_sequence;
+-- test the sequence changes by several nextval() calls
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 1
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 2
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 4
+(1 row)
+
+-- test the sequence changes by several ALTER commands
+ALTER SEQUENCE test_sequence INCREMENT BY 10;
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 14
+(1 row)
+
+ALTER SEQUENCE test_sequence START WITH 3000;
+ALTER SEQUENCE test_sequence MAXVALUE 10000;
+ALTER SEQUENCE test_sequence RESTART WITH 4000;
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 4000
+(1 row)
+
+-- test the sequence changes by several setval() calls
+SELECT setval('test_sequence', 3500);
+ setval
+--------
+ 3500
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3510
+(1 row)
+
+SELECT setval('test_sequence', 3500, true);
+ setval
+--------
+ 3500
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3510
+(1 row)
+
+SELECT setval('test_sequence', 3500, false);
+ setval
+--------
+ 3500
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3500
+(1 row)
+
+-- show results and drop sequence
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+ data
+--------------------------------------------------------------------------------------
+ BEGIN
+ sequence public.test_sequence: transactional:1 last_value:1 log_cnt:0 is_called:0
+ COMMIT
+ sequence public.test_sequence: transactional:0 last_value:33 log_cnt:0 is_called:1
+ BEGIN
+ sequence public.test_sequence: transactional:1 last_value:4 log_cnt:0 is_called:1
+ COMMIT
+ sequence public.test_sequence: transactional:0 last_value:334 log_cnt:0 is_called:1
+ BEGIN
+ sequence public.test_sequence: transactional:1 last_value:14 log_cnt:32 is_called:1
+ COMMIT
+ BEGIN
+ sequence public.test_sequence: transactional:1 last_value:14 log_cnt:0 is_called:1
+ COMMIT
+ BEGIN
+ sequence public.test_sequence: transactional:1 last_value:4000 log_cnt:0 is_called:0
+ COMMIT
+ sequence public.test_sequence: transactional:0 last_value:4320 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:0 last_value:3500 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:0 last_value:3830 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:0 last_value:3500 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:0 last_value:3830 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:0 last_value:3500 log_cnt:0 is_called:0
+ sequence public.test_sequence: transactional:0 last_value:3820 log_cnt:0 is_called:1
+(24 rows)
+
+DROP SEQUENCE test_sequence;
+-- rollback on sequence creation and update
+BEGIN;
+CREATE SEQUENCE test_sequence;
+CREATE TABLE test_table (a INT);
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 1
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 2
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3
+(1 row)
+
+SELECT setval('test_sequence', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3001
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3002
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3003
+(1 row)
+
+ALTER SEQUENCE test_sequence RESTART WITH 6000;
+INSERT INTO test_table VALUES( (SELECT nextval('test_sequence')) );
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 6001
+(1 row)
+
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '0');
+ data
+------
+(0 rows)
+
+-- rollback on table creation with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '0');
+ data
+------
+(0 rows)
+
+-- rollback on table with serial column
+CREATE TABLE test_table (a SERIAL, b INT);
+BEGIN;
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+-- check table and sequence values after rollback
+SELECT * from test_table_a_seq;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+ 3003 | 30 | t
+(1 row)
+
+SELECT nextval('test_table_a_seq');
+ nextval
+---------
+ 3004
+(1 row)
+
+DROP TABLE test_table;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+ data
+-----------------------------------------------------------------------------------------
+ BEGIN
+ sequence public.test_table_a_seq: transactional:1 last_value:1 log_cnt:0 is_called:0
+ COMMIT
+ sequence public.test_table_a_seq: transactional:0 last_value:33 log_cnt:0 is_called:1
+ sequence public.test_table_a_seq: transactional:0 last_value:3000 log_cnt:0 is_called:1
+ sequence public.test_table_a_seq: transactional:0 last_value:3033 log_cnt:0 is_called:1
+ BEGIN
+ COMMIT
+(8 rows)
+
+-- savepoint test on table with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+SAVEPOINT a;
+INSERT INTO test_table (b) VALUES (300);
+ROLLBACK TO SAVEPOINT a;
+DROP TABLE test_table;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+ data
+---------------------------------------------------------------------------------------
+ BEGIN
+ sequence public.test_table_a_seq: transactional:1 last_value:1 log_cnt:0 is_called:0
+ sequence public.test_table_a_seq: transactional:1 last_value:33 log_cnt:0 is_called:1
+ table public.test_table: INSERT: a[integer]:1 b[integer]:100
+ table public.test_table: INSERT: a[integer]:2 b[integer]:200
+ COMMIT
+(6 rows)
+
+-- savepoint test on sequence
+BEGIN;
+CREATE SEQUENCE test_sequence;
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 1
+(1 row)
+
+SELECT setval('test_sequence', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3001
+(1 row)
+
+SAVEPOINT a;
+ALTER SEQUENCE test_sequence START WITH 7000;
+SELECT setval('test_sequence', 5000);
+ setval
+--------
+ 5000
+(1 row)
+
+ROLLBACK TO SAVEPOINT a;
+SELECT * FROM test_sequence;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+ 3001 | 32 | t
+(1 row)
+
+DROP SEQUENCE test_sequence;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+ data
+--------------------------------------------------------------------------------------
+ BEGIN
+ sequence public.test_sequence: transactional:1 last_value:1 log_cnt:0 is_called:0
+ sequence public.test_sequence: transactional:1 last_value:33 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:1 last_value:3000 log_cnt:0 is_called:1
+ sequence public.test_sequence: transactional:1 last_value:3033 log_cnt:0 is_called:1
+ COMMIT
+(6 rows)
+
+SELECT pg_drop_replication_slot('regression_slot');
+ pg_drop_replication_slot
+--------------------------
+
+(1 row)
+
diff --git a/contrib/test_decoding/sql/sequence.sql b/contrib/test_decoding/sql/sequence.sql
new file mode 100644
index 00000000000..21c4b792224
--- /dev/null
+++ b/contrib/test_decoding/sql/sequence.sql
@@ -0,0 +1,119 @@
+-- predictability
+SET synchronous_commit = on;
+SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'test_decoding');
+
+CREATE SEQUENCE test_sequence;
+
+-- test the sequence changes by several nextval() calls
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+
+-- test the sequence changes by several ALTER commands
+ALTER SEQUENCE test_sequence INCREMENT BY 10;
+SELECT nextval('test_sequence');
+
+ALTER SEQUENCE test_sequence START WITH 3000;
+ALTER SEQUENCE test_sequence MAXVALUE 10000;
+ALTER SEQUENCE test_sequence RESTART WITH 4000;
+SELECT nextval('test_sequence');
+
+-- test the sequence changes by several setval() calls
+SELECT setval('test_sequence', 3500);
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3500, true);
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3500, false);
+SELECT nextval('test_sequence');
+
+-- show results and drop sequence
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+DROP SEQUENCE test_sequence;
+
+-- rollback on sequence creation and update
+BEGIN;
+CREATE SEQUENCE test_sequence;
+CREATE TABLE test_table (a INT);
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3000);
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+ALTER SEQUENCE test_sequence RESTART WITH 6000;
+INSERT INTO test_table VALUES( (SELECT nextval('test_sequence')) );
+SELECT nextval('test_sequence');
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '0');
+
+-- rollback on table creation with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '0');
+
+-- rollback on table with serial column
+CREATE TABLE test_table (a SERIAL, b INT);
+
+BEGIN;
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+
+-- check table and sequence values after rollback
+SELECT * from test_table_a_seq;
+SELECT nextval('test_table_a_seq');
+
+DROP TABLE test_table;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+
+-- savepoint test on table with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+SAVEPOINT a;
+INSERT INTO test_table (b) VALUES (300);
+ROLLBACK TO SAVEPOINT a;
+DROP TABLE test_table;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+
+-- savepoint test on sequence
+BEGIN;
+CREATE SEQUENCE test_sequence;
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3000);
+SELECT nextval('test_sequence');
+SAVEPOINT a;
+ALTER SEQUENCE test_sequence START WITH 7000;
+SELECT setval('test_sequence', 5000);
+ROLLBACK TO SAVEPOINT a;
+SELECT * FROM test_sequence;
+DROP SEQUENCE test_sequence;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+
+SELECT pg_drop_replication_slot('regression_slot');
diff --git a/contrib/test_decoding/test_decoding.c b/contrib/test_decoding/test_decoding.c
index 42fe91a2f9c..c25a4eb6a89 100644
--- a/contrib/test_decoding/test_decoding.c
+++ b/contrib/test_decoding/test_decoding.c
@@ -35,6 +35,7 @@ typedef struct
bool include_timestamp;
bool skip_empty_xacts;
bool only_local;
+ bool skip_sequences;
} TestDecodingData;
/*
@@ -76,6 +77,10 @@ static void pg_decode_message(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn, XLogRecPtr message_lsn,
bool transactional, const char *prefix,
Size sz, const char *message);
+static void pg_decode_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation rel, bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called);
static bool pg_decode_filter_prepare(LogicalDecodingContext *ctx,
TransactionId xid,
const char *gid);
@@ -116,6 +121,10 @@ static void pg_decode_stream_message(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn, XLogRecPtr message_lsn,
bool transactional, const char *prefix,
Size sz, const char *message);
+static void pg_decode_stream_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation rel, bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called);
static void pg_decode_stream_truncate(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn,
int nrelations, Relation relations[],
@@ -141,6 +150,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->filter_by_origin_cb = pg_decode_filter;
cb->shutdown_cb = pg_decode_shutdown;
cb->message_cb = pg_decode_message;
+ cb->sequence_cb = pg_decode_sequence;
cb->filter_prepare_cb = pg_decode_filter_prepare;
cb->begin_prepare_cb = pg_decode_begin_prepare_txn;
cb->prepare_cb = pg_decode_prepare_txn;
@@ -153,6 +163,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->stream_commit_cb = pg_decode_stream_commit;
cb->stream_change_cb = pg_decode_stream_change;
cb->stream_message_cb = pg_decode_stream_message;
+ cb->stream_sequence_cb = pg_decode_stream_sequence;
cb->stream_truncate_cb = pg_decode_stream_truncate;
}
@@ -175,6 +186,9 @@ pg_decode_startup(LogicalDecodingContext *ctx, OutputPluginOptions *opt,
data->skip_empty_xacts = false;
data->only_local = false;
+ /* skip sequences by default for backwards compatibility */
+ data->skip_sequences = true;
+
ctx->output_plugin_private = data;
opt->output_type = OUTPUT_PLUGIN_TEXTUAL_OUTPUT;
@@ -265,6 +279,17 @@ pg_decode_startup(LogicalDecodingContext *ctx, OutputPluginOptions *opt,
errmsg("could not parse value \"%s\" for parameter \"%s\"",
strVal(elem->arg), elem->defname)));
}
+ else if (strcmp(elem->defname, "skip-sequences") == 0)
+ {
+
+ if (elem->arg == NULL)
+ continue; /* true by default */
+ else if (!parse_bool(strVal(elem->arg), &data->skip_sequences))
+ ereport(ERROR,
+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ errmsg("could not parse value \"%s\" for parameter \"%s\"",
+ strVal(elem->arg), elem->defname)));
+ }
else
{
ereport(ERROR,
@@ -744,6 +769,28 @@ pg_decode_message(LogicalDecodingContext *ctx,
OutputPluginWrite(ctx, true);
}
+static void
+pg_decode_sequence(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ TestDecodingData *data = ctx->output_plugin_private;
+
+ /* return if requested to skip_sequences */
+ if (data->skip_sequences)
+ return;
+
+ OutputPluginPrepareWrite(ctx, true);
+ appendStringInfoString(ctx->out, "sequence ");
+ appendStringInfoString(ctx->out,
+ quote_qualified_identifier(get_namespace_name(get_rel_namespace(RelationGetRelid(rel))),
+ RelationGetRelationName(rel)));
+ appendStringInfo(ctx->out, ": transactional:%d last_value:" INT64_FORMAT " log_cnt:" INT64_FORMAT " is_called:%d",
+ transactional, last_value, log_cnt, is_called);
+ OutputPluginWrite(ctx, true);
+}
+
static void
pg_decode_stream_start(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn)
@@ -943,6 +990,28 @@ pg_decode_stream_message(LogicalDecodingContext *ctx,
OutputPluginWrite(ctx, true);
}
+static void
+pg_decode_stream_sequence(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ TestDecodingData *data = ctx->output_plugin_private;
+
+ /* return if requested to skip_sequences */
+ if (data->skip_sequences)
+ return;
+
+ OutputPluginPrepareWrite(ctx, true);
+ appendStringInfoString(ctx->out, "streaming sequence ");
+ appendStringInfoString(ctx->out,
+ quote_qualified_identifier(get_namespace_name(get_rel_namespace(RelationGetRelid(rel))),
+ RelationGetRelationName(rel)));
+ appendStringInfo(ctx->out, ": transactional:%d last_value:" INT64_FORMAT " log_cnt:" INT64_FORMAT " is_called:%d",
+ transactional, last_value, log_cnt, is_called);
+ OutputPluginWrite(ctx, true);
+}
+
/*
* In streaming mode, we don't display the detailed information of Truncate.
* See pg_decode_stream_change.
--
2.31.1
Attachment: 0003-Change-skip-sequences-to-false-by-default-20220127.patch (text/x-patch)
From c8c20b177eeee9481ea2ff3628dcb628dc26c852 Mon Sep 17 00:00:00 2001
From: Tomas Vondra <tomas.vondra@postgresql.org>
Date: Thu, 27 Jan 2022 00:23:13 +0100
Subject: [PATCH 3/4] Change skip-sequences to false by default
---
contrib/test_decoding/expected/ddl.out | 12 ++++++------
contrib/test_decoding/expected/decoding_in_xact.out | 2 +-
contrib/test_decoding/expected/decoding_into_rel.out | 10 +++++-----
contrib/test_decoding/expected/replorigin.out | 4 ++--
contrib/test_decoding/expected/rewrite.out | 4 ++--
contrib/test_decoding/expected/sequence.out | 12 ++++++------
contrib/test_decoding/expected/slot.out | 2 +-
contrib/test_decoding/expected/toast.out | 10 +++++-----
contrib/test_decoding/expected/truncate.out | 2 +-
contrib/test_decoding/sql/ddl.sql | 12 ++++++------
contrib/test_decoding/sql/decoding_in_xact.sql | 2 +-
contrib/test_decoding/sql/decoding_into_rel.sql | 10 +++++-----
contrib/test_decoding/sql/replorigin.sql | 4 ++--
contrib/test_decoding/sql/rewrite.sql | 4 ++--
contrib/test_decoding/sql/sequence.sql | 12 ++++++------
contrib/test_decoding/sql/slot.sql | 2 +-
contrib/test_decoding/sql/toast.sql | 10 +++++-----
contrib/test_decoding/sql/truncate.sql | 2 +-
contrib/test_decoding/test_decoding.c | 6 ++----
19 files changed, 60 insertions(+), 62 deletions(-)
diff --git a/contrib/test_decoding/expected/ddl.out b/contrib/test_decoding/expected/ddl.out
index 4ff0044c787..ebc4b53f48c 100644
--- a/contrib/test_decoding/expected/ddl.out
+++ b/contrib/test_decoding/expected/ddl.out
@@ -89,7 +89,7 @@ COMMIT;
ALTER TABLE replication_example RENAME COLUMN text TO somenum;
INSERT INTO replication_example(somedata, somenum) VALUES (4, 1);
-- collect all changes
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '1');
data
---------------------------------------------------------------------------------------------------------------------------
BEGIN
@@ -196,7 +196,7 @@ CREATE TABLE tr_unique(id2 serial unique NOT NULL, data int);
INSERT INTO tr_unique(data) VALUES(10);
ALTER TABLE tr_unique RENAME TO tr_pkey;
ALTER TABLE tr_pkey ADD COLUMN id serial primary key;
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-rewrites', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-rewrites', '1', 'skip-sequences', '1');
data
-----------------------------------------------------------------------------
BEGIN
@@ -242,7 +242,7 @@ INSERT INTO tr_oddlength VALUES('ab', 'foo');
COMMIT;
/* display results, but hide most of the output */
SELECT count(*), min(data), max(data)
-FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1')
+FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '1')
GROUP BY substring(data, 1, 24)
ORDER BY 1,2;
count | min | max
@@ -309,7 +309,7 @@ RELEASE SAVEPOINT c;
INSERT INTO tr_sub(path) VALUES ('1-top-2-#1');
RELEASE SAVEPOINT b;
COMMIT;
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '1');
data
----------------------------------------------------------------------
BEGIN
@@ -496,7 +496,7 @@ Options: user_catalog_table=false
INSERT INTO replication_metadata(relation, options)
VALUES ('zaphod', NULL);
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '1');
data
------------------------------------------------------------------------------------------------------------------------------------
BEGIN
@@ -613,7 +613,7 @@ INSERT INTO toasttable(toasted_col2) SELECT repeat(string_agg(to_char(g.i, 'FM00
UPDATE toasttable
SET toasted_col1 = (SELECT string_agg(g.i::text, '') FROM generate_series(1, 2000) g(i))
WHERE id = 1;
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '1');
data
------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
BEGIN
diff --git a/contrib/test_decoding/expected/decoding_in_xact.out b/contrib/test_decoding/expected/decoding_in_xact.out
index b65253f4630..8c7b134b26c 100644
--- a/contrib/test_decoding/expected/decoding_in_xact.out
+++ b/contrib/test_decoding/expected/decoding_in_xact.out
@@ -58,7 +58,7 @@ SELECT pg_current_xact_id() = '0';
-- don't show yet, haven't committed
INSERT INTO nobarf(data) VALUES('2');
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '1');
data
-----------------------------------------------------------
BEGIN
diff --git a/contrib/test_decoding/expected/decoding_into_rel.out b/contrib/test_decoding/expected/decoding_into_rel.out
index 8fd3390066d..b1cc71ac2b6 100644
--- a/contrib/test_decoding/expected/decoding_into_rel.out
+++ b/contrib/test_decoding/expected/decoding_into_rel.out
@@ -19,7 +19,7 @@ SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'inc
CREATE TABLE somechange(id serial primary key);
INSERT INTO somechange DEFAULT VALUES;
CREATE TABLE changeresult AS
- SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+ SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '1');
SELECT * FROM changeresult;
data
------------------------------------------------
@@ -29,9 +29,9 @@ SELECT * FROM changeresult;
(3 rows)
INSERT INTO changeresult
- SELECT data FROM pg_logical_slot_peek_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+ SELECT data FROM pg_logical_slot_peek_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '1');
INSERT INTO changeresult
- SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+ SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '1');
SELECT * FROM changeresult;
data
--------------------------------------------------------------------------------------------------------------------------------------------------
@@ -63,7 +63,7 @@ DROP TABLE somechange;
CREATE FUNCTION slot_changes_wrapper(slot_name name) RETURNS SETOF TEXT AS $$
BEGIN
RETURN QUERY
- SELECT data FROM pg_logical_slot_peek_changes(slot_name, NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+ SELECT data FROM pg_logical_slot_peek_changes(slot_name, NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '1');
END$$ LANGUAGE plpgsql;
SELECT * FROM slot_changes_wrapper('regression_slot');
slot_changes_wrapper
@@ -84,7 +84,7 @@ SELECT * FROM slot_changes_wrapper('regression_slot');
COMMIT
(14 rows)
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '1');
data
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
BEGIN
diff --git a/contrib/test_decoding/expected/replorigin.out b/contrib/test_decoding/expected/replorigin.out
index 2e9ef7c823b..3265e25e423 100644
--- a/contrib/test_decoding/expected/replorigin.out
+++ b/contrib/test_decoding/expected/replorigin.out
@@ -72,9 +72,9 @@ SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'test_d
-- origin tx
INSERT INTO origin_tbl(data) VALUES ('will be replicated and decoded and decoded again');
INSERT INTO target_tbl(data)
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '1');
-- as is normal, the insert into target_tbl shows up
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '1');
data
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
BEGIN
diff --git a/contrib/test_decoding/expected/rewrite.out b/contrib/test_decoding/expected/rewrite.out
index b30999c436b..0947c2c689c 100644
--- a/contrib/test_decoding/expected/rewrite.out
+++ b/contrib/test_decoding/expected/rewrite.out
@@ -64,7 +64,7 @@ SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'test_d
CREATE TABLE replication_example(id SERIAL PRIMARY KEY, somedata int, text varchar(120));
INSERT INTO replication_example(somedata) VALUES (1);
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '1');
data
----------------------------------------------------------------------------------------------------------
BEGIN
@@ -141,7 +141,7 @@ VACUUM FULL pg_proc; VACUUM FULL pg_description; VACUUM FULL pg_shdescription; V
INSERT INTO replication_example(somedata, testcolumn1, testcolumn3) VALUES (8, 6, 1);
VACUUM FULL pg_proc; VACUUM FULL pg_description; VACUUM FULL pg_shdescription; VACUUM FULL iamalargetable;
INSERT INTO replication_example(somedata, testcolumn1, testcolumn3) VALUES (9, 7, 1);
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '1');
data
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
BEGIN
diff --git a/contrib/test_decoding/expected/sequence.out b/contrib/test_decoding/expected/sequence.out
index c3c2a18a35b..3d165bf51d3 100644
--- a/contrib/test_decoding/expected/sequence.out
+++ b/contrib/test_decoding/expected/sequence.out
@@ -87,7 +87,7 @@ SELECT nextval('test_sequence');
(1 row)
-- show results and drop sequence
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0');
data
--------------------------------------------------------------------------------------
BEGIN
@@ -172,7 +172,7 @@ SELECT nextval('test_sequence');
(1 row)
ROLLBACK;
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '0');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
data
------
(0 rows)
@@ -197,7 +197,7 @@ INSERT INTO test_table (b) VALUES (700);
INSERT INTO test_table (b) VALUES (800);
INSERT INTO test_table (b) VALUES (900);
ROLLBACK;
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '0');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
data
------
(0 rows)
@@ -236,7 +236,7 @@ SELECT nextval('test_table_a_seq');
(1 row)
DROP TABLE test_table;
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0');
data
-----------------------------------------------------------------------------------------
BEGIN
@@ -259,7 +259,7 @@ INSERT INTO test_table (b) VALUES (300);
ROLLBACK TO SAVEPOINT a;
DROP TABLE test_table;
COMMIT;
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0');
data
---------------------------------------------------------------------------------------
BEGIN
@@ -308,7 +308,7 @@ SELECT * FROM test_sequence;
DROP SEQUENCE test_sequence;
COMMIT;
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0');
data
--------------------------------------------------------------------------------------
BEGIN
diff --git a/contrib/test_decoding/expected/slot.out b/contrib/test_decoding/expected/slot.out
index 63a9940f73a..c192c36af02 100644
--- a/contrib/test_decoding/expected/slot.out
+++ b/contrib/test_decoding/expected/slot.out
@@ -95,7 +95,7 @@ SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot2', 'test_
(1 row)
INSERT INTO replication_example(somedata, text) VALUES (1, 3);
-SELECT data FROM pg_logical_slot_get_changes('regression_slot1', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot1', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '1');
data
---------------------------------------------------------------------------------------------------------
BEGIN
diff --git a/contrib/test_decoding/expected/toast.out b/contrib/test_decoding/expected/toast.out
index cd03e9d50a1..70b7e58a9cb 100644
--- a/contrib/test_decoding/expected/toast.out
+++ b/contrib/test_decoding/expected/toast.out
@@ -52,7 +52,7 @@ CREATE TABLE toasted_copy (
);
ALTER TABLE toasted_copy ALTER COLUMN data SET STORAGE EXTERNAL;
\copy toasted_copy FROM STDIN
-SELECT substr(data, 1, 200) FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT substr(data, 1, 200) FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '1');
substr
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
BEGIN
@@ -316,7 +316,7 @@ SELECT pg_column_size(toasted_key) > 2^16 FROM toasted_several;
t
(1 row)
-SELECT regexp_replace(data, '^(.{100}).*(.{100})$', '\1..\2') FROM pg_logical_slot_peek_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT regexp_replace(data, '^(.{100}).*(.{100})$', '\1..\2') FROM pg_logical_slot_peek_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '1');
regexp_replace
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
BEGIN
@@ -327,7 +327,7 @@ SELECT regexp_replace(data, '^(.{100}).*(.{100})$', '\1..\2') FROM pg_logical_sl
-- test update of a toasted key without changing it
UPDATE toasted_several SET toasted_col1 = toasted_key;
UPDATE toasted_several SET toasted_col2 = toasted_col1;
-SELECT regexp_replace(data, '^(.{100}).*(.{100})$', '\1..\2') FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT regexp_replace(data, '^(.{100}).*(.{100})$', '\1..\2') FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '1');
regexp_replace
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
BEGIN
@@ -350,7 +350,7 @@ UPDATE toasted_several SET toasted_col1 = toasted_col2 WHERE id = 1;
DELETE FROM toasted_several WHERE id = 1;
COMMIT;
DROP TABLE toasted_several;
-SELECT regexp_replace(data, '^(.{100}).*(.{100})$', '\1..\2') FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1')
+SELECT regexp_replace(data, '^(.{100}).*(.{100})$', '\1..\2') FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '1')
WHERE data NOT LIKE '%INSERT: %';
regexp_replace
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
@@ -373,7 +373,7 @@ INSERT INTO tbl1 VALUES(1, repeat('a', 4000)) ;
ALTER TABLE tbl1 ADD COLUMN id serial primary key;
INSERT INTO tbl2 VALUES(1);
commit;
-SELECT substr(data, 1, 200) FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT substr(data, 1, 200) FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '1');
substr
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
BEGIN
diff --git a/contrib/test_decoding/expected/truncate.out b/contrib/test_decoding/expected/truncate.out
index e64d377214a..3d89e8a4d39 100644
--- a/contrib/test_decoding/expected/truncate.out
+++ b/contrib/test_decoding/expected/truncate.out
@@ -11,7 +11,7 @@ CREATE TABLE tab2 (a int primary key, b int);
TRUNCATE tab1;
TRUNCATE tab1, tab1 RESTART IDENTITY CASCADE;
TRUNCATE tab1, tab2;
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '1');
data
------------------------------------------------------
BEGIN
diff --git a/contrib/test_decoding/sql/ddl.sql b/contrib/test_decoding/sql/ddl.sql
index 1b3866d0153..bdb0f4e7f20 100644
--- a/contrib/test_decoding/sql/ddl.sql
+++ b/contrib/test_decoding/sql/ddl.sql
@@ -64,7 +64,7 @@ ALTER TABLE replication_example RENAME COLUMN text TO somenum;
INSERT INTO replication_example(somedata, somenum) VALUES (4, 1);
-- collect all changes
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '1');
ALTER TABLE replication_example ALTER COLUMN somenum TYPE int4 USING (somenum::int4);
-- check that this doesn't produce any changes from the heap rewrite
@@ -97,7 +97,7 @@ CREATE TABLE tr_unique(id2 serial unique NOT NULL, data int);
INSERT INTO tr_unique(data) VALUES(10);
ALTER TABLE tr_unique RENAME TO tr_pkey;
ALTER TABLE tr_pkey ADD COLUMN id serial primary key;
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-rewrites', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-rewrites', '1', 'skip-sequences', '1');
INSERT INTO tr_pkey(data) VALUES(1);
--show deletion with primary key
@@ -121,7 +121,7 @@ COMMIT;
/* display results, but hide most of the output */
SELECT count(*), min(data), max(data)
-FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1')
+FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '1')
GROUP BY substring(data, 1, 24)
ORDER BY 1,2;
@@ -173,7 +173,7 @@ INSERT INTO tr_sub(path) VALUES ('1-top-2-#1');
RELEASE SAVEPOINT b;
COMMIT;
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '1');
-- check that we handle xlog assignments correctly
BEGIN;
@@ -287,7 +287,7 @@ ALTER TABLE replication_metadata SET (user_catalog_table = false);
INSERT INTO replication_metadata(relation, options)
VALUES ('zaphod', NULL);
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '1');
/*
* check whether we handle updates/deletes correct with & without a pkey
@@ -398,7 +398,7 @@ UPDATE toasttable
SET toasted_col1 = (SELECT string_agg(g.i::text, '') FROM generate_series(1, 2000) g(i))
WHERE id = 1;
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '1');
INSERT INTO toasttable(toasted_col1) SELECT string_agg(g.i::text, '') FROM generate_series(1, 2000) g(i);
diff --git a/contrib/test_decoding/sql/decoding_in_xact.sql b/contrib/test_decoding/sql/decoding_in_xact.sql
index 108782dc2e9..738e4d26ecc 100644
--- a/contrib/test_decoding/sql/decoding_in_xact.sql
+++ b/contrib/test_decoding/sql/decoding_in_xact.sql
@@ -32,7 +32,7 @@ BEGIN;
SELECT pg_current_xact_id() = '0';
-- don't show yet, haven't committed
INSERT INTO nobarf(data) VALUES('2');
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '1');
COMMIT;
INSERT INTO nobarf(data) VALUES('3');
diff --git a/contrib/test_decoding/sql/decoding_into_rel.sql b/contrib/test_decoding/sql/decoding_into_rel.sql
index 1068cec5888..b402fbc185c 100644
--- a/contrib/test_decoding/sql/decoding_into_rel.sql
+++ b/contrib/test_decoding/sql/decoding_into_rel.sql
@@ -15,14 +15,14 @@ CREATE TABLE somechange(id serial primary key);
INSERT INTO somechange DEFAULT VALUES;
CREATE TABLE changeresult AS
- SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+ SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '1');
SELECT * FROM changeresult;
INSERT INTO changeresult
- SELECT data FROM pg_logical_slot_peek_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+ SELECT data FROM pg_logical_slot_peek_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '1');
INSERT INTO changeresult
- SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+ SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '1');
SELECT * FROM changeresult;
DROP TABLE changeresult;
@@ -32,11 +32,11 @@ DROP TABLE somechange;
CREATE FUNCTION slot_changes_wrapper(slot_name name) RETURNS SETOF TEXT AS $$
BEGIN
RETURN QUERY
- SELECT data FROM pg_logical_slot_peek_changes(slot_name, NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+ SELECT data FROM pg_logical_slot_peek_changes(slot_name, NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '1');
END$$ LANGUAGE plpgsql;
SELECT * FROM slot_changes_wrapper('regression_slot');
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '1');
SELECT 'stop' FROM pg_drop_replication_slot('regression_slot');
diff --git a/contrib/test_decoding/sql/replorigin.sql b/contrib/test_decoding/sql/replorigin.sql
index 2e28a487773..be491874c4d 100644
--- a/contrib/test_decoding/sql/replorigin.sql
+++ b/contrib/test_decoding/sql/replorigin.sql
@@ -41,10 +41,10 @@ SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'test_d
-- origin tx
INSERT INTO origin_tbl(data) VALUES ('will be replicated and decoded and decoded again');
INSERT INTO target_tbl(data)
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '1');
-- as is normal, the insert into target_tbl shows up
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '1');
INSERT INTO origin_tbl(data) VALUES ('will be replicated, but not decoded again');
diff --git a/contrib/test_decoding/sql/rewrite.sql b/contrib/test_decoding/sql/rewrite.sql
index 62dead3a9b1..07e27bf49da 100644
--- a/contrib/test_decoding/sql/rewrite.sql
+++ b/contrib/test_decoding/sql/rewrite.sql
@@ -35,7 +35,7 @@ SELECT pg_relation_size((SELECT reltoastrelid FROM pg_class WHERE oid = 'pg_shde
SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'test_decoding');
CREATE TABLE replication_example(id SERIAL PRIMARY KEY, somedata int, text varchar(120));
INSERT INTO replication_example(somedata) VALUES (1);
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '1');
BEGIN;
INSERT INTO replication_example(somedata) VALUES (2);
@@ -98,7 +98,7 @@ VACUUM FULL pg_proc; VACUUM FULL pg_description; VACUUM FULL pg_shdescription; V
INSERT INTO replication_example(somedata, testcolumn1, testcolumn3) VALUES (8, 6, 1);
VACUUM FULL pg_proc; VACUUM FULL pg_description; VACUUM FULL pg_shdescription; VACUUM FULL iamalargetable;
INSERT INTO replication_example(somedata, testcolumn1, testcolumn3) VALUES (9, 7, 1);
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '1');
SELECT pg_drop_replication_slot('regression_slot');
DROP TABLE IF EXISTS replication_example;
diff --git a/contrib/test_decoding/sql/sequence.sql b/contrib/test_decoding/sql/sequence.sql
index 21c4b792224..42ad9113bf4 100644
--- a/contrib/test_decoding/sql/sequence.sql
+++ b/contrib/test_decoding/sql/sequence.sql
@@ -28,7 +28,7 @@ SELECT setval('test_sequence', 3500, false);
SELECT nextval('test_sequence');
-- show results and drop sequence
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0');
DROP SEQUENCE test_sequence;
-- rollback on sequence creation and update
@@ -46,7 +46,7 @@ ALTER SEQUENCE test_sequence RESTART WITH 6000;
INSERT INTO test_table VALUES( (SELECT nextval('test_sequence')) );
SELECT nextval('test_sequence');
ROLLBACK;
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '0');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
-- rollback on table creation with serial column
BEGIN;
@@ -63,7 +63,7 @@ INSERT INTO test_table (b) VALUES (700);
INSERT INTO test_table (b) VALUES (800);
INSERT INTO test_table (b) VALUES (900);
ROLLBACK;
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '0');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
-- rollback on table with serial column
CREATE TABLE test_table (a SERIAL, b INT);
@@ -87,7 +87,7 @@ SELECT * from test_table_a_seq;
SELECT nextval('test_table_a_seq');
DROP TABLE test_table;
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0');
-- savepoint test on table with serial column
BEGIN;
@@ -99,7 +99,7 @@ INSERT INTO test_table (b) VALUES (300);
ROLLBACK TO SAVEPOINT a;
DROP TABLE test_table;
COMMIT;
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0');
-- savepoint test on table with serial column
BEGIN;
@@ -114,6 +114,6 @@ ROLLBACK TO SAVEPOINT a;
SELECT * FROM test_sequence;
DROP SEQUENCE test_sequence;
COMMIT;
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0', 'skip-sequences', '0');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0');
SELECT pg_drop_replication_slot('regression_slot');
diff --git a/contrib/test_decoding/sql/slot.sql b/contrib/test_decoding/sql/slot.sql
index 1aa27c56674..6b79912f32e 100644
--- a/contrib/test_decoding/sql/slot.sql
+++ b/contrib/test_decoding/sql/slot.sql
@@ -50,7 +50,7 @@ SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot2', 'test_
INSERT INTO replication_example(somedata, text) VALUES (1, 3);
-SELECT data FROM pg_logical_slot_get_changes('regression_slot1', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot1', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '1');
SELECT data FROM pg_logical_slot_get_changes('regression_slot2', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
INSERT INTO replication_example(somedata, text) VALUES (1, 4);
diff --git a/contrib/test_decoding/sql/toast.sql b/contrib/test_decoding/sql/toast.sql
index d1c560a174d..1522cd4fdc8 100644
--- a/contrib/test_decoding/sql/toast.sql
+++ b/contrib/test_decoding/sql/toast.sql
@@ -264,7 +264,7 @@ ALTER TABLE toasted_copy ALTER COLUMN data SET STORAGE EXTERNAL;
202 untoasted199
203 untoasted200
\.
-SELECT substr(data, 1, 200) FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT substr(data, 1, 200) FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '1');
-- test we can decode "old" tuples bigger than the max heap tuple size correctly
DROP TABLE IF EXISTS toasted_several;
@@ -287,13 +287,13 @@ UPDATE pg_attribute SET attstorage = 'x' WHERE attrelid = 'toasted_several_pkey'
INSERT INTO toasted_several(toasted_key) VALUES(repeat('9876543210', 10000));
SELECT pg_column_size(toasted_key) > 2^16 FROM toasted_several;
-SELECT regexp_replace(data, '^(.{100}).*(.{100})$', '\1..\2') FROM pg_logical_slot_peek_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT regexp_replace(data, '^(.{100}).*(.{100})$', '\1..\2') FROM pg_logical_slot_peek_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '1');
-- test update of a toasted key without changing it
UPDATE toasted_several SET toasted_col1 = toasted_key;
UPDATE toasted_several SET toasted_col2 = toasted_col1;
-SELECT regexp_replace(data, '^(.{100}).*(.{100})$', '\1..\2') FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT regexp_replace(data, '^(.{100}).*(.{100})$', '\1..\2') FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '1');
/*
* update with large tuplebuf, in a transaction large enough to force to spool to disk
@@ -306,7 +306,7 @@ COMMIT;
DROP TABLE toasted_several;
-SELECT regexp_replace(data, '^(.{100}).*(.{100})$', '\1..\2') FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1')
+SELECT regexp_replace(data, '^(.{100}).*(.{100})$', '\1..\2') FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '1')
WHERE data NOT LIKE '%INSERT: %';
/*
@@ -322,6 +322,6 @@ INSERT INTO tbl1 VALUES(1, repeat('a', 4000)) ;
ALTER TABLE tbl1 ADD COLUMN id serial primary key;
INSERT INTO tbl2 VALUES(1);
commit;
-SELECT substr(data, 1, 200) FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT substr(data, 1, 200) FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '1');
SELECT pg_drop_replication_slot('regression_slot');
diff --git a/contrib/test_decoding/sql/truncate.sql b/contrib/test_decoding/sql/truncate.sql
index 5633854e0df..dbb7f315602 100644
--- a/contrib/test_decoding/sql/truncate.sql
+++ b/contrib/test_decoding/sql/truncate.sql
@@ -10,5 +10,5 @@ TRUNCATE tab1;
TRUNCATE tab1, tab1 RESTART IDENTITY CASCADE;
TRUNCATE tab1, tab2;
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'skip-sequences', '1');
SELECT pg_drop_replication_slot('regression_slot');
diff --git a/contrib/test_decoding/test_decoding.c b/contrib/test_decoding/test_decoding.c
index c25a4eb6a89..28b714be01f 100644
--- a/contrib/test_decoding/test_decoding.c
+++ b/contrib/test_decoding/test_decoding.c
@@ -184,11 +184,9 @@ pg_decode_startup(LogicalDecodingContext *ctx, OutputPluginOptions *opt,
data->include_xids = true;
data->include_timestamp = false;
data->skip_empty_xacts = false;
+ data->skip_sequences = false;
data->only_local = false;
- /* skip sequences by default for backwards compatibility */
- data->skip_sequences = true;
-
ctx->output_plugin_private = data;
opt->output_type = OUTPUT_PLUGIN_TEXTUAL_OUTPUT;
@@ -283,7 +281,7 @@ pg_decode_startup(LogicalDecodingContext *ctx, OutputPluginOptions *opt,
{
if (elem->arg == NULL)
- continue; /* true by default */
+ data->skip_sequences = true;
else if (!parse_bool(strVal(elem->arg), &data->skip_sequences))
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
--
2.31.1
Attachment: 0004-Add-support-for-decoding-sequences-to-built-20220127.patch (text/x-patch)
From b009385c8f4258f07896677c7aa32f258c7790e8 Mon Sep 17 00:00:00 2001
From: Tomas Vondra <tomas.vondra@postgresql.org>
Date: Wed, 26 Jan 2022 02:48:21 +0100
Subject: [PATCH 4/4] Add support for decoding sequences to built-in
replication
---
doc/src/sgml/catalogs.sgml | 71 ++++
doc/src/sgml/ref/alter_publication.sgml | 24 +-
doc/src/sgml/ref/alter_subscription.sgml | 4 +-
src/backend/catalog/pg_publication.c | 149 ++++++++-
src/backend/catalog/system_views.sql | 10 +
src/backend/commands/publicationcmds.c | 350 +++++++++++++++++++-
src/backend/commands/sequence.c | 79 +++++
src/backend/commands/subscriptioncmds.c | 272 +++++++++++++++
src/backend/executor/execReplication.c | 2 +-
src/backend/nodes/copyfuncs.c | 1 +
src/backend/nodes/equalfuncs.c | 1 +
src/backend/parser/gram.y | 32 ++
src/backend/replication/logical/proto.c | 52 +++
src/backend/replication/logical/tablesync.c | 118 ++++++-
src/backend/replication/logical/worker.c | 60 ++++
src/backend/replication/pgoutput/pgoutput.c | 85 ++++-
src/backend/utils/cache/relcache.c | 4 +-
src/bin/psql/tab-complete.c | 6 +-
src/include/catalog/pg_proc.dat | 5 +
src/include/catalog/pg_publication.h | 14 +
src/include/commands/sequence.h | 1 +
src/include/nodes/parsenodes.h | 6 +
src/include/replication/logicalproto.h | 19 ++
src/include/replication/pgoutput.h | 1 +
src/test/regress/expected/rules.out | 8 +
src/test/subscription/t/027_sequences.pl | 196 +++++++++++
26 files changed, 1538 insertions(+), 32 deletions(-)
create mode 100644 src/test/subscription/t/027_sequences.pl
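
To make the intent easier to see, here is a minimal end-to-end sketch using
made-up object names and connection string (the exact syntax may still change
while the patch is WIP):

-- on the publisher
CREATE SEQUENCE s;
CREATE PUBLICATION seq_pub FOR SEQUENCE s;

-- on the subscriber (the sequence has to exist there as well)
CREATE SEQUENCE s;
CREATE SUBSCRIPTION seq_sub
       CONNECTION 'host=publisher dbname=test'
       PUBLICATION seq_pub;
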
diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 1e65c426b28..d0efcc2c9d8 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -9527,6 +9527,11 @@ SCRAM-SHA-256$<replaceable><iteration count></replaceable>:<replaceable>&l
<entry>prepared transactions</entry>
</row>
+ <row>
+ <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+ <entry>publications and their associated sequences</entry>
+ </row>
+
<row>
<entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
<entry>publications and their associated tables</entry>
@@ -11362,6 +11367,72 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
</sect1>
+ <sect1 id="view-pg-publication-sequences">
+ <title><structname>pg_publication_sequences</structname></title>
+
+ <indexterm zone="view-pg-publication-sequences">
+ <primary>pg_publication_sequences</primary>
+ </indexterm>
+
+ <para>
+ The view <structname>pg_publication_sequences</structname> provides
+ information about the mapping between publications and the sequences they
+ contain. Unlike the underlying catalog
+ <link linkend="catalog-pg-publication-rel"><structname>pg_publication_rel</structname></link>,
+ this view expands
+ publications defined as <literal>FOR ALL SEQUENCES</literal>, so for such
+ publications there will be a row for each eligible sequence.
+ </para>
+
+ <table>
+ <title><structname>pg_publication_sequences</structname> Columns</title>
+ <tgroup cols="1">
+ <thead>
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ Column Type
+ </para>
+ <para>
+ Description
+ </para></entry>
+ </row>
+ </thead>
+
+ <tbody>
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>pubname</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+ </para>
+ <para>
+ Name of publication
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>schemaname</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+ </para>
+ <para>
+ Name of schema containing sequence
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>sequencename</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+ </para>
+ <para>
+ Name of sequence
+ </para></entry>
+ </row>
+ </tbody>
+ </tgroup>
+ </table>
+ </sect1>
+
<sect1 id="view-pg-publication-tables">
<title><structname>pg_publication_tables</structname></title>
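
As a quick illustration of the new view (reusing the made-up publication from
the sketch above), the sequences exposed by a publication can be listed on the
publisher with:

SELECT * FROM pg_publication_sequences WHERE pubname = 'seq_pub';
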
diff --git a/doc/src/sgml/ref/alter_publication.sgml b/doc/src/sgml/ref/alter_publication.sgml
index 7c7c27bf7ce..ca1ac3a9960 100644
--- a/doc/src/sgml/ref/alter_publication.sgml
+++ b/doc/src/sgml/ref/alter_publication.sgml
@@ -31,7 +31,9 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
<phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [, ... ]
+ SEQUENCE <replaceable class="parameter">sequence_name</replaceable> [ * ] [, ... ]
ALL TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+ ALL SEQUENCES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
</synopsis>
</refsynopsisdiv>
@@ -56,7 +58,18 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
</para>
<para>
- The fourth variant of this command listed in the synopsis can change
+ The next three variants change which sequences are part of the publication.
+ The <literal>SET SEQUENCE</literal> clause will replace the list of sequences
+ in the publication with the specified list. The <literal>ADD SEQUENCE</literal>
+ and <literal>DROP SEQUENCE</literal> clauses will add or remove one or more
+ sequences from the publication. Note that adding sequences to a publication
+ that is already subscribed to will require an <literal>ALTER SUBSCRIPTION
+ ... REFRESH PUBLICATION</literal> action on the subscribing side in order
+ to become effective.
+ </para>
+
+ <para>
+ The seventh variant of this command listed in the synopsis can change
all of the publication properties specified in
<xref linkend="sql-createpublication"/>. Properties not mentioned in the
command retain their previous settings.
@@ -123,6 +136,15 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><replaceable class="parameter">sequence_name</replaceable></term>
+ <listitem>
+ <para>
+ Name of an existing sequence.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><literal>SET ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
<listitem>
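
For illustration, adding another sequence to an existing publication and
making it effective on a subscriber could look like this (names reused from
the earlier made-up example):

-- on the publisher
CREATE SEQUENCE s2;
ALTER PUBLICATION seq_pub ADD SEQUENCE s2;

-- on the subscriber, pick up the newly added sequence
ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION;
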
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 0b027cc3462..8f28cf03f40 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -147,7 +147,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
<listitem>
<para>
Fetch missing table information from publisher. This will start
- replication of tables that were added to the subscribed-to publications
+ replication of tables and sequences that were added to the subscribed-to publications
since <command>CREATE SUBSCRIPTION</command> or
the last invocation of <command>REFRESH PUBLICATION</command>.
</para>
@@ -164,7 +164,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
Specifies whether to copy pre-existing data in the publications
that are being subscribed to when the replication starts.
The default is <literal>true</literal>. (Previously-subscribed
- tables are not copied.)
+ tables and sequences are not copied.)
</para>
</listitem>
</varlistentry>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index e14ca2f5630..1a9e05ba98b 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -54,7 +54,8 @@ check_publication_add_relation(Relation targetrel)
{
/* Must be a regular or partitioned table */
if (RelationGetForm(targetrel)->relkind != RELKIND_RELATION &&
- RelationGetForm(targetrel)->relkind != RELKIND_PARTITIONED_TABLE)
+ RelationGetForm(targetrel)->relkind != RELKIND_PARTITIONED_TABLE &&
+ RelationGetForm(targetrel)->relkind != RELKIND_SEQUENCE)
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
errmsg("cannot add relation \"%s\" to publication",
@@ -131,7 +132,8 @@ static bool
is_publishable_class(Oid relid, Form_pg_class reltuple)
{
return (reltuple->relkind == RELKIND_RELATION ||
- reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+ reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+ reltuple->relkind == RELKIND_SEQUENCE) &&
!IsCatalogRelationOid(relid) &&
reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
relid >= FirstNormalObjectId;
@@ -503,6 +505,11 @@ GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
Form_pg_publication_rel pubrel;
pubrel = (Form_pg_publication_rel) GETSTRUCT(tup);
+
+ /* skip sequences here */
+ if (get_rel_relkind(pubrel->prrelid) == RELKIND_SEQUENCE)
+ continue;
+
result = GetPubPartitionOptionRelations(result, pub_partopt,
pubrel->prrelid);
}
@@ -517,6 +524,49 @@ GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
return result;
}
+/*
+ * Gets list of relation oids for a publication (sequences only).
+ *
+ * This should only be used for normal publications; FOR ALL SEQUENCES
+ * publications should use GetAllSequencesPublicationRelations().
+ */
+List *
+GetPublicationSequenceRelations(Oid pubid)
+{
+ List *result;
+ Relation pubrelsrel;
+ ScanKeyData scankey;
+ SysScanDesc scan;
+ HeapTuple tup;
+
+ /* Find all sequences associated with the publication. */
+ pubrelsrel = table_open(PublicationRelRelationId, AccessShareLock);
+
+ ScanKeyInit(&scankey,
+ Anum_pg_publication_rel_prpubid,
+ BTEqualStrategyNumber, F_OIDEQ,
+ ObjectIdGetDatum(pubid));
+
+ scan = systable_beginscan(pubrelsrel, PublicationRelPrrelidPrpubidIndexId,
+ true, NULL, 1, &scankey);
+
+ result = NIL;
+ while (HeapTupleIsValid(tup = systable_getnext(scan)))
+ {
+ Form_pg_publication_rel pubrel;
+
+ pubrel = (Form_pg_publication_rel) GETSTRUCT(tup);
+
+ if (get_rel_relkind(pubrel->prrelid) == RELKIND_SEQUENCE)
+ result = lappend_oid(result, pubrel->prrelid);
+ }
+
+ systable_endscan(scan);
+ table_close(pubrelsrel, AccessShareLock);
+
+ return result;
+}
+
/*
* Gets list of publication oids for publications marked as FOR ALL TABLES.
*/
@@ -762,6 +812,46 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
return result;
}
+/*
+ * Gets list of all sequences that can be published by a FOR ALL SEQUENCES
+ * publication, i.e. all publishable sequences in the database.
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+ Relation classRel;
+ ScanKeyData key[1];
+ TableScanDesc scan;
+ HeapTuple tuple;
+ List *result = NIL;
+
+ classRel = table_open(RelationRelationId, AccessShareLock);
+
+ ScanKeyInit(&key[0],
+ Anum_pg_class_relkind,
+ BTEqualStrategyNumber, F_CHAREQ,
+ CharGetDatum(RELKIND_SEQUENCE));
+
+ scan = table_beginscan_catalog(classRel, 1, key);
+
+ while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+ {
+ Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+ Oid relid = relForm->oid;
+
+ if (is_publishable_class(relid, relForm))
+ result = lappend_oid(result, relid);
+ }
+
+ table_endscan(scan);
+
+ table_close(classRel, AccessShareLock);
+ return result;
+}
+
/*
* Get publication using oid
*
@@ -784,10 +874,12 @@ GetPublication(Oid pubid)
pub->oid = pubid;
pub->name = pstrdup(NameStr(pubform->pubname));
pub->alltables = pubform->puballtables;
+ pub->allsequences = pubform->puballsequences;
pub->pubactions.pubinsert = pubform->pubinsert;
pub->pubactions.pubupdate = pubform->pubupdate;
pub->pubactions.pubdelete = pubform->pubdelete;
pub->pubactions.pubtruncate = pubform->pubtruncate;
+ pub->pubactions.pubsequence = pubform->pubsequence;
pub->pubviaroot = pubform->pubviaroot;
ReleaseSysCache(tup);
@@ -937,3 +1029,56 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
SRF_RETURN_DONE(funcctx);
}
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+ FuncCallContext *funcctx;
+ char *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+ Publication *publication;
+ List *sequences;
+
+ /* stuff done only on the first call of the function */
+ if (SRF_IS_FIRSTCALL())
+ {
+ MemoryContext oldcontext;
+
+ /* create a function context for cross-call persistence */
+ funcctx = SRF_FIRSTCALL_INIT();
+
+ /* switch to memory context appropriate for multiple function calls */
+ oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+ publication = GetPublicationByName(pubname, false);
+
+ /*
+ * For a FOR ALL SEQUENCES publication we return all publishable
+ * sequences in the database, otherwise just the sequences explicitly
+ * added to the publication.
+ */
+ if (publication->allsequences)
+ sequences = GetAllSequencesPublicationRelations();
+ else
+ sequences = GetPublicationSequenceRelations(publication->oid);
+
+ funcctx->user_fctx = (void *) sequences;
+
+ MemoryContextSwitchTo(oldcontext);
+ }
+
+ /* stuff done on every call of the function */
+ funcctx = SRF_PERCALL_SETUP();
+ sequences = (List *) funcctx->user_fctx;
+
+ if (funcctx->call_cntr < list_length(sequences))
+ {
+ Oid relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+ SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+ }
+
+ SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 3cb69b1f87b..b5cc33aca34 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -374,6 +374,16 @@ CREATE VIEW pg_publication_tables AS
pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
WHERE C.oid = GPT.relid;
+CREATE VIEW pg_publication_sequences AS
+ SELECT
+ P.pubname AS pubname,
+ N.nspname AS schemaname,
+ C.relname AS sequencename
+ FROM pg_publication P,
+ LATERAL pg_get_publication_sequences(P.pubname) GPT,
+ pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+ WHERE C.oid = GPT.relid;
+
CREATE VIEW pg_locks AS
SELECT * FROM pg_lock_status() AS L;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 0e4bb97fb73..3bc2e8ccb66 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -16,6 +16,7 @@
#include "access/genam.h"
#include "access/htup_details.h"
+#include "access/relation.h"
#include "access/table.h"
#include "access/xact.h"
#include "catalog/catalog.h"
@@ -59,6 +60,12 @@ static void PublicationAddSchemas(Oid pubid, List *schemas, bool if_not_exists,
AlterPublicationStmt *stmt);
static void PublicationDropSchemas(Oid pubid, List *schemas, bool missing_ok);
+static List *OpenSequenceList(List *sequences);
+static void CloseSequenceList(List *rels);
+static void PublicationAddSequences(Oid pubid, List *rels, bool if_not_exists,
+ AlterPublicationStmt *stmt);
+static void PublicationDropSequences(Oid pubid, List *rels, bool missing_ok);
+
static void
parse_publication_options(ParseState *pstate,
List *options,
@@ -77,6 +84,7 @@ parse_publication_options(ParseState *pstate,
pubactions->pubupdate = true;
pubactions->pubdelete = true;
pubactions->pubtruncate = true;
+ pubactions->pubsequence = true;
*publish_via_partition_root = false;
/* Parse options */
@@ -101,6 +109,7 @@ parse_publication_options(ParseState *pstate,
pubactions->pubupdate = false;
pubactions->pubdelete = false;
pubactions->pubtruncate = false;
+ pubactions->pubsequence = false;
*publish_given = true;
publish = defGetString(defel);
@@ -123,6 +132,8 @@ parse_publication_options(ParseState *pstate,
pubactions->pubdelete = true;
else if (strcmp(publish_opt, "truncate") == 0)
pubactions->pubtruncate = true;
+ else if (strcmp(publish_opt, "sequence") == 0)
+ pubactions->pubsequence = true;
else
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
@@ -149,7 +160,9 @@ parse_publication_options(ParseState *pstate,
*/
static void
ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
- List **rels, List **schemas)
+ List **tables, List **sequences,
+ List **tables_schemas, List **sequences_schemas,
+ List **schemas)
{
ListCell *cell;
PublicationObjSpec *pubobj;
@@ -167,12 +180,23 @@ ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
switch (pubobj->pubobjtype)
{
case PUBLICATIONOBJ_TABLE:
- *rels = lappend(*rels, pubobj->pubtable);
+ *tables = lappend(*tables, pubobj->pubtable);
+ break;
+ case PUBLICATIONOBJ_SEQUENCE:
+ *sequences = lappend(*sequences, pubobj->pubtable);
break;
case PUBLICATIONOBJ_TABLES_IN_SCHEMA:
schemaid = get_namespace_oid(pubobj->name, false);
/* Filter out duplicates if user specifies "sch1, sch1" */
+ *tables_schemas = list_append_unique_oid(*tables_schemas, schemaid);
+ *schemas = list_append_unique_oid(*schemas, schemaid);
+ break;
+ case PUBLICATIONOBJ_SEQUENCES_IN_SCHEMA:
+ schemaid = get_namespace_oid(pubobj->name, false);
+
+ /* Filter out duplicates if user specifies "sch1, sch1" */
+ *sequences_schemas = list_append_unique_oid(*sequences_schemas, schemaid);
*schemas = list_append_unique_oid(*schemas, schemaid);
break;
case PUBLICATIONOBJ_TABLES_IN_CUR_SCHEMA:
@@ -186,6 +210,21 @@ ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
list_free(search_path);
/* Filter out duplicates if user specifies "sch1, sch1" */
+ *tables_schemas = list_append_unique_oid(*tables_schemas, schemaid);
+ *schemas = list_append_unique_oid(*schemas, schemaid);
+ break;
+ case PUBLICATIONOBJ_SEQUENCES_IN_CUR_SCHEMA:
+ search_path = fetch_search_path(false);
+ if (search_path == NIL) /* nothing valid in search_path? */
+ ereport(ERROR,
+ errcode(ERRCODE_UNDEFINED_SCHEMA),
+ errmsg("no schema has been selected for CURRENT_SCHEMA"));
+
+ schemaid = linitial_oid(search_path);
+ list_free(search_path);
+
+ /* Filter out duplicates if user specifies "sch1, sch1" */
+ *sequences_schemas = list_append_unique_oid(*sequences_schemas, schemaid);
*schemas = list_append_unique_oid(*schemas, schemaid);
break;
default:
@@ -251,7 +290,10 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
bool publish_via_partition_root_given;
bool publish_via_partition_root;
AclResult aclresult;
- List *relations = NIL;
+ List *tables = NIL;
+ List *sequences = NIL;
+ List *tables_schemaidlist = NIL;
+ List *sequences_schemaidlist = NIL;
List *schemaidlist = NIL;
/* must have CREATE privilege on database */
@@ -306,6 +348,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
BoolGetDatum(pubactions.pubdelete);
values[Anum_pg_publication_pubtruncate - 1] =
BoolGetDatum(pubactions.pubtruncate);
+ values[Anum_pg_publication_pubsequence - 1] =
+ BoolGetDatum(pubactions.pubsequence);
values[Anum_pg_publication_pubviaroot - 1] =
BoolGetDatum(publish_via_partition_root);
@@ -330,26 +374,40 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
}
else
{
- ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
+ ObjectsInPublicationToOids(stmt->pubobjects, pstate,
+ &tables, &sequences,
+ &tables_schemaidlist,
+ &sequences_schemaidlist,
&schemaidlist);
/* FOR ALL TABLES IN SCHEMA requires superuser */
- if (list_length(schemaidlist) > 0 && !superuser())
+ if (list_length(tables_schemaidlist) > 0 && !superuser())
ereport(ERROR,
errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
errmsg("must be superuser to create FOR ALL TABLES IN SCHEMA publication"));
- if (list_length(relations) > 0)
+ if (list_length(tables) > 0)
{
List *rels;
- rels = OpenTableList(relations);
- CheckObjSchemaNotAlreadyInPublication(rels, schemaidlist,
+ rels = OpenTableList(tables);
+ CheckObjSchemaNotAlreadyInPublication(rels, tables_schemaidlist,
PUBLICATIONOBJ_TABLE);
PublicationAddTables(puboid, rels, true, NULL);
CloseTableList(rels);
}
+ if (list_length(sequences) > 0)
+ {
+ List *rels;
+
+ rels = OpenSequenceList(sequences);
+ CheckObjSchemaNotAlreadyInPublication(rels, sequences_schemaidlist,
+ PUBLICATIONOBJ_SEQUENCE);
+ PublicationAddTables(puboid, rels, true, NULL);
+ CloseSequenceList(rels);
+ }
+
if (list_length(schemaidlist) > 0)
{
/*
@@ -653,12 +711,13 @@ AlterPublicationSchemas(AlterPublicationStmt *stmt,
*/
static void
CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
- List *tables, List *schemaidlist)
+ List *tables, List *tables_schemaidlist,
+ List *sequences, List *sequences_schemaidlist)
{
Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
if ((stmt->action == AP_AddObjects || stmt->action == AP_SetObjects) &&
- schemaidlist && !superuser())
+ (tables_schemaidlist || sequences_schemaidlist) && !superuser())
ereport(ERROR,
(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
errmsg("must be superuser to add or set schemas")));
@@ -667,13 +726,24 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
* Check that user is allowed to manipulate the publication tables in
* schema
*/
- if (schemaidlist && pubform->puballtables)
+ if (tables_schemaidlist && pubform->puballtables)
ereport(ERROR,
(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
errmsg("publication \"%s\" is defined as FOR ALL TABLES",
NameStr(pubform->pubname)),
errdetail("Tables from schema cannot be added to, dropped from, or set on FOR ALL TABLES publications.")));
+ /*
+ * Check that user is allowed to manipulate the publication sequences in
+ * schema
+ */
+ if (sequences_schemaidlist && pubform->puballsequences)
+ ereport(ERROR,
+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("publication \"%s\" is defined as FOR ALL SEQUENCES",
+ NameStr(pubform->pubname)),
+ errdetail("Sequences from schema cannot be added to, dropped from, or set on FOR ALL SEQUENCES publications.")));
+
/* Check that user is allowed to manipulate the publication tables. */
if (tables && pubform->puballtables)
ereport(ERROR,
@@ -681,6 +751,108 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
errmsg("publication \"%s\" is defined as FOR ALL TABLES",
NameStr(pubform->pubname)),
errdetail("Tables cannot be added to or dropped from FOR ALL TABLES publications.")));
+
+ /* Check that user is allowed to manipulate the publication sequences. */
+ if (sequences && pubform->puballsequences)
+ ereport(ERROR,
+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("publication \"%s\" is defined as FOR ALL SEQUENCES",
+ NameStr(pubform->pubname)),
+ errdetail("Sequences cannot be added to or dropped from FOR ALL SEQUENCES publications.")));
+}
+
+/*
+ * Add or remove sequence to/from publication.
+ */
+static void
+AlterPublicationSequences(AlterPublicationStmt *stmt, HeapTuple tup,
+ List *sequences, List *schemaidlist)
+{
+ List *rels = NIL;
+ Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
+ Oid pubid = pubform->oid;
+
+ /*
+ * It is quite possible that for the SET case the user has not specified
+ * any sequences, in which case we need to remove all the existing sequences.
+ */
+ if (!sequences && stmt->action != AP_SetObjects)
+ return;
+
+ rels = OpenSequenceList(sequences);
+
+ if (stmt->action == AP_AddObjects)
+ {
+ List *schemas = NIL;
+
+ /*
+ * Check if the relation is member of the existing schema in the
+ * publication or member of the schema list specified.
+ */
+ schemas = list_concat_copy(schemaidlist, GetPublicationSchemas(pubid));
+ CheckObjSchemaNotAlreadyInPublication(rels, schemas,
+ PUBLICATIONOBJ_SEQUENCE);
+ PublicationAddSequences(pubid, rels, false, stmt);
+ }
+ else if (stmt->action == AP_DropObjects)
+ PublicationDropSequences(pubid, rels, false);
+ else /* DEFELEM_SET */
+ {
+ List *oldrelids = GetPublicationSequenceRelations(pubid);
+ List *delrels = NIL;
+ ListCell *oldlc;
+
+ CheckObjSchemaNotAlreadyInPublication(rels, schemaidlist,
+ PUBLICATIONOBJ_SEQUENCE);
+
+ /* Calculate which relations to drop. */
+ foreach(oldlc, oldrelids)
+ {
+ Oid oldrelid = lfirst_oid(oldlc);
+ ListCell *newlc;
+ bool found = false;
+
+ foreach(newlc, rels)
+ {
+ PublicationRelInfo *newpubrel;
+
+ newpubrel = (PublicationRelInfo *) lfirst(newlc);
+ if (RelationGetRelid(newpubrel->relation) == oldrelid)
+ {
+ found = true;
+ break;
+ }
+ }
+ /* Not yet in the list, open it and add to the list */
+ if (!found)
+ {
+ Relation oldrel;
+ PublicationRelInfo *pubrel;
+
+ /* Wrap relation into PublicationRelInfo */
+ oldrel = table_open(oldrelid, ShareUpdateExclusiveLock);
+
+ pubrel = palloc(sizeof(PublicationRelInfo));
+ pubrel->relation = oldrel;
+
+ delrels = lappend(delrels, pubrel);
+ }
+ }
+
+ /* And drop them. */
+ PublicationDropSequences(pubid, delrels, true);
+
+ /*
+ * Don't bother calculating the difference for adding, we'll catch and
+ * skip existing ones when doing catalog update.
+ */
+ PublicationAddSequences(pubid, rels, true, stmt);
+
+ CloseSequenceList(delrels);
+ }
+
+ CloseSequenceList(rels);
}
/*
@@ -718,13 +890,21 @@ AlterPublication(ParseState *pstate, AlterPublicationStmt *stmt)
AlterPublicationOptions(pstate, stmt, rel, tup);
else
{
- List *relations = NIL;
+ List *tables = NIL;
+ List *sequences = NIL;
+ List *tables_schemaidlist = NIL;
+ List *sequences_schemaidlist = NIL;
List *schemaidlist = NIL;
- ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
+ ObjectsInPublicationToOids(stmt->pubobjects, pstate,
+ &tables, &sequences,
+ &tables_schemaidlist,
+ &sequences_schemaidlist,
&schemaidlist);
- CheckAlterPublication(stmt, tup, relations, schemaidlist);
+ CheckAlterPublication(stmt, tup,
+ tables, tables_schemaidlist,
+ sequences, sequences_schemaidlist);
/*
* Lock the publication so nobody else can do anything with it. This
@@ -749,7 +929,9 @@ AlterPublication(ParseState *pstate, AlterPublicationStmt *stmt)
errmsg("publication \"%s\" does not exist",
stmt->pubname));
- AlterPublicationTables(stmt, tup, relations, schemaidlist);
+ AlterPublicationTables(stmt, tup, tables, tables_schemaidlist);
+ AlterPublicationSequences(stmt, tup, sequences, sequences_schemaidlist);
+
AlterPublicationSchemas(stmt, tup, schemaidlist);
}
@@ -1157,6 +1339,144 @@ PublicationDropSchemas(Oid pubid, List *schemas, bool missing_ok)
}
}
+/*
+ * Open sequences specified by a PublicationTable list.
+ * In the returned list of PublicationRelInfo, sequences are locked
+ * in ShareUpdateExclusiveLock mode in order to add them to a publication.
+ */
+static List *
+OpenSequenceList(List *sequences)
+{
+ List *relids = NIL;
+ List *rels = NIL;
+ ListCell *lc;
+
+ /*
+ * Open, share-lock, and check all the explicitly-specified relations
+ */
+ foreach(lc, sequences)
+ {
+ PublicationTable *s = lfirst_node(PublicationTable, lc);
+ Relation rel;
+ Oid myrelid;
+ PublicationRelInfo *pub_rel;
+
+ /* Allow query cancel in case this takes a long time */
+ CHECK_FOR_INTERRUPTS();
+
+ rel = table_openrv(s->relation, ShareUpdateExclusiveLock);
+ myrelid = RelationGetRelid(rel);
+
+ /*
+ * Filter out duplicates if user specifies "foo, foo".
+ *
+ * Note that this algorithm is known to not be very efficient (O(N^2))
+ * but given that it only works on the list of sequences given to us
+ * by the user it's deemed acceptable.
+ */
+ if (list_member_oid(relids, myrelid))
+ {
+ table_close(rel, ShareUpdateExclusiveLock);
+ continue;
+ }
+
+ pub_rel = palloc(sizeof(PublicationRelInfo));
+ pub_rel->relation = rel;
+ rels = lappend(rels, pub_rel);
+ relids = lappend_oid(relids, myrelid);
+ }
+
+ list_free(relids);
+
+ return rels;
+}
+
+/*
+ * Close all relations in the list.
+ */
+static void
+CloseSequenceList(List *rels)
+{
+ ListCell *lc;
+
+ foreach(lc, rels)
+ {
+ PublicationRelInfo *pub_rel;
+
+ pub_rel = (PublicationRelInfo *) lfirst(lc);
+ table_close(pub_rel->relation, NoLock);
+ }
+}
+
+/*
+ * Add listed sequences to the publication.
+ */
+static void
+PublicationAddSequences(Oid pubid, List *rels, bool if_not_exists,
+ AlterPublicationStmt *stmt)
+{
+ ListCell *lc;
+
+ Assert(!stmt || !stmt->for_all_sequences);
+
+ foreach(lc, rels)
+ {
+ PublicationRelInfo *pub_rel = (PublicationRelInfo *) lfirst(lc);
+ Relation rel = pub_rel->relation;
+ ObjectAddress obj;
+
+ /* Must be owner of the sequence or superuser. */
+ if (!pg_class_ownercheck(RelationGetRelid(rel), GetUserId()))
+ aclcheck_error(ACLCHECK_NOT_OWNER, get_relkind_objtype(rel->rd_rel->relkind),
+ RelationGetRelationName(rel));
+
+ obj = publication_add_relation(pubid, pub_rel, if_not_exists);
+ if (stmt)
+ {
+ EventTriggerCollectSimpleCommand(obj, InvalidObjectAddress,
+ (Node *) stmt);
+
+ InvokeObjectPostCreateHook(PublicationRelRelationId,
+ obj.objectId, 0);
+ }
+ }
+}
+
+/*
+ * Remove listed sequences from the publication.
+ */
+static void
+PublicationDropSequences(Oid pubid, List *rels, bool missing_ok)
+{
+ ObjectAddress obj;
+ ListCell *lc;
+ Oid prid;
+
+ foreach(lc, rels)
+ {
+ PublicationRelInfo *pubrel = (PublicationRelInfo *) lfirst(lc);
+ Relation rel = pubrel->relation;
+ Oid relid = RelationGetRelid(rel);
+
+ prid = GetSysCacheOid2(PUBLICATIONRELMAP, Anum_pg_publication_rel_oid,
+ ObjectIdGetDatum(relid),
+ ObjectIdGetDatum(pubid));
+ if (!OidIsValid(prid))
+ {
+ if (missing_ok)
+ continue;
+
+ ereport(ERROR,
+ (errcode(ERRCODE_UNDEFINED_OBJECT),
+ errmsg("relation \"%s\" is not part of the publication",
+ RelationGetRelationName(rel))));
+ }
+
+ ObjectAddressSet(obj, PublicationRelRelationId, prid);
+ performDeletion(&obj, DROP_CASCADE, 0);
+ }
+}
+
+
/*
* Internal workhorse for changing a publication owner
*/
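
The SET branch in AlterPublicationSequences() replaces the whole sequence
list, dropping anything not re-listed. Continuing the made-up example, the
following should leave only s2 in the publication:

ALTER PUBLICATION seq_pub SET SEQUENCE s2;
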
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 358fb28b5bd..ef155e24db4 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -336,6 +336,85 @@ ResetSequence(Oid seq_relid)
relation_close(seq_rel, NoLock);
}
+/*
+ * Set a sequence to the specified state (last_value, log_cnt, is_called).
+ *
+ * The change is made transactionally, so that on failure of the current
+ * transaction, the sequence will be restored to its previous state.
+ * We do that by creating a whole new relfilenode for the sequence; so this
+ * works much like the rewriting forms of ALTER TABLE.
+ *
+ * Caller is assumed to have acquired AccessExclusiveLock on the sequence,
+ * which must not be released until end of transaction. Caller is also
+ * responsible for permissions checking.
+ */
+void
+ResetSequence2(Oid seq_relid, int64 last_value, int64 log_cnt, bool is_called)
+{
+ Relation seq_rel;
+ SeqTable elm;
+ Form_pg_sequence_data seq;
+ Buffer buf;
+ HeapTupleData seqdatatuple;
+ HeapTuple tuple;
+
+ /*
+ * Read the old sequence. This does a bit more work than really
+ * necessary, but it's simple, and we do want to double-check that it's
+ * indeed a sequence.
+ */
+ init_sequence(seq_relid, &elm, &seq_rel);
+ (void) read_seq_tuple(seq_rel, &buf, &seqdatatuple);
+
+ /*
+ * Copy the existing sequence tuple.
+ */
+ tuple = heap_copytuple(&seqdatatuple);
+
+ /* Now we're done with the old page */
+ UnlockReleaseBuffer(buf);
+
+ /*
+ * Modify the copied tuple to execute the restart (compare the RESTART
+ * action in AlterSequence)
+ */
+ seq = (Form_pg_sequence_data) GETSTRUCT(tuple);
+ seq->last_value = last_value;
+ seq->is_called = is_called;
+ seq->log_cnt = log_cnt;
+
+ /*
+ * Create a new storage file for the sequence.
+ */
+ RelationSetNewRelfilenode(seq_rel, seq_rel->rd_rel->relpersistence);
+
+ /*
+ * Ensure sequence's relfrozenxid is at 0, since it won't contain any
+ * unfrozen XIDs. Same with relminmxid, since a sequence will never
+ * contain multixacts.
+ */
+ Assert(seq_rel->rd_rel->relfrozenxid == InvalidTransactionId);
+ Assert(seq_rel->rd_rel->relminmxid == InvalidMultiXactId);
+
+ /*
+ * Insert the modified tuple into the new storage file.
+ *
+ * XXX Maybe this should also use created=true, just like the other places
+ * calling fill_seq_with_data. That's probably needed for correct cascading
+ * replication.
+ *
+ * XXX That'd mean all fill_seq_with_data callers use created=true, making
+ * the parameter unnecessary.
+ */
+ fill_seq_with_data(seq_rel, tuple);
+
+ /* Clear local cache so that we don't think we have cached numbers */
+ /* Note that we do not change the currval() state */
+ elm->cached = elm->last;
+
+ relation_close(seq_rel, NoLock);
+}
+
/*
* Initialize a sequence's relation with the specified tuple as content
*/
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 3ef6607d246..5beb67e7652 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -85,6 +85,7 @@ typedef struct SubOpts
} SubOpts;
static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
static void check_duplicates_in_publist(List *publist, Datum *datums);
static List *merge_publications(List *oldpublist, List *newpublist, bool addpub, const char *subname);
static void ReportSlotConnectionError(List *rstates, Oid subid, char *slotname, char *err);
@@ -496,6 +497,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
char *err;
WalReceiverConn *wrconn;
List *tables;
+ List *sequences;
ListCell *lc;
char table_state;
@@ -534,6 +536,26 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
InvalidXLogRecPtr);
}
+ /*
+ * Get the sequence list from publisher and build local sequence
+ * status info.
+ */
+ sequences = fetch_sequence_list(wrconn, publications);
+ foreach(lc, sequences)
+ {
+ RangeVar *rv = (RangeVar *) lfirst(lc);
+ Oid relid;
+
+ relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+ /* Check for supported relkind. */
+ CheckSubscriptionRelkind(get_rel_relkind(relid),
+ rv->schemaname, rv->relname);
+
+ AddSubscriptionRelState(subid, relid, table_state,
+ InvalidXLogRecPtr);
+ }
+
/*
* If requested, create permanent slot for the subscription. We
* won't use the initial snapshot for anything, so no need to
@@ -706,6 +728,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data)
{
Oid relid = subrel_local_oids[off];
+ /* XXX ignore sequences - maybe do this in GetSubscriptionRelations? */
+ if (get_rel_relkind(relid) == RELKIND_SEQUENCE)
+ continue;
+
if (!bsearch(&relid, pubrel_local_oids,
list_length(pubrel_names), sizeof(Oid), oid_cmp))
{
@@ -797,6 +823,183 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data)
ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
}
}
+
+ /*
+ * XXX now do the same thing for sequences, maybe before the preceding
+ * block, or earlier?
+ */
+
+ /* Get the sequence list from publisher. */
+ pubrel_names = fetch_sequence_list(wrconn, sub->publications);
+
+ /* Get local table list. */
+ subrel_states = GetSubscriptionRelations(sub->oid);
+
+ /*
+ * Build qsorted array of local table oids for faster lookup. This can
+ * potentially contain all tables in the database so speed of lookup
+ * is important.
+ */
+ subrel_local_oids = palloc(list_length(subrel_states) * sizeof(Oid));
+ off = 0;
+ foreach(lc, subrel_states)
+ {
+ SubscriptionRelState *relstate = (SubscriptionRelState *) lfirst(lc);
+
+ subrel_local_oids[off++] = relstate->relid;
+ }
+ qsort(subrel_local_oids, list_length(subrel_states),
+ sizeof(Oid), oid_cmp);
+
+ /*
+ * Rels that we want to remove from subscription and drop any slots
+ * and origins corresponding to them.
+ */
+ sub_remove_rels = palloc(list_length(subrel_states) * sizeof(SubRemoveRels));
+
+ /*
+ * Walk over the remote sequences and try to match them to locally
+ * known sequences. If the sequence is not known locally, create a new
+ * state for it.
+ *
+ * Also builds array of local oids of remote sequences for the next step.
+ */
+ off = 0;
+ pubrel_local_oids = palloc(list_length(pubrel_names) * sizeof(Oid));
+
+ foreach(lc, pubrel_names)
+ {
+ RangeVar *rv = (RangeVar *) lfirst(lc);
+ Oid relid;
+
+ relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+ /* Check for supported relkind. */
+ CheckSubscriptionRelkind(get_rel_relkind(relid),
+ rv->schemaname, rv->relname);
+
+ pubrel_local_oids[off++] = relid;
+
+ if (!bsearch(&relid, subrel_local_oids,
+ list_length(subrel_states), sizeof(Oid), oid_cmp))
+ {
+ AddSubscriptionRelState(sub->oid, relid,
+ copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
+ InvalidXLogRecPtr);
+ ereport(DEBUG1,
+ (errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
+ rv->schemaname, rv->relname, sub->name)));
+ }
+ }
+
+ /*
+ * Next remove state for sequences we should not care about anymore using
+ * the data we collected above
+ */
+ qsort(pubrel_local_oids, list_length(pubrel_names),
+ sizeof(Oid), oid_cmp);
+
+ remove_rel_len = 0;
+ for (off = 0; off < list_length(subrel_states); off++)
+ {
+ Oid relid = subrel_local_oids[off];
+
+ /* XXX ignore non-sequences - maybe do this in GetSubscriptionRelations? */
+ if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+ continue;
+
+ if (!bsearch(&relid, pubrel_local_oids,
+ list_length(pubrel_names), sizeof(Oid), oid_cmp))
+ {
+ char state;
+ XLogRecPtr statelsn;
+
+ /*
+ * Lock pg_subscription_rel with AccessExclusiveLock to
+ * prevent any race conditions with the apply worker
+ * re-launching workers at the same time this code is trying
+ * to remove those tables.
+ *
+ * Even if new worker for this particular rel is restarted it
+ * won't be able to make any progress as we hold exclusive
+ * lock on subscription_rel till the transaction end. It will
+ * simply exit as there is no corresponding rel entry.
+ *
+ * This locking also ensures that the state of rels won't
+ * change till we are done with this refresh operation.
+ */
+ if (!rel)
+ rel = table_open(SubscriptionRelRelationId, AccessExclusiveLock);
+
+ /* Last known rel state. */
+ state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
+
+ sub_remove_rels[remove_rel_len].relid = relid;
+ sub_remove_rels[remove_rel_len++].state = state;
+
+ RemoveSubscriptionRel(sub->oid, relid);
+
+ logicalrep_worker_stop(sub->oid, relid);
+
+ /*
+ * For READY state, we would have already dropped the
+ * tablesync origin.
+ */
+ if (state != SUBREL_STATE_READY)
+ {
+ char originname[NAMEDATALEN];
+
+ /*
+ * Drop the tablesync's origin tracking if exists.
+ *
+ * It is possible that the origin is not yet created for
+ * tablesync worker, this can happen for the states before
+ * SUBREL_STATE_FINISHEDCOPY. The apply worker can also
+ * concurrently try to drop the origin and by this time
+ * the origin might be already removed. For these reasons,
+ * passing missing_ok = true.
+ */
+ ReplicationOriginNameForTablesync(sub->oid, relid, originname,
+ sizeof(originname));
+ replorigin_drop_by_name(originname, true, false);
+ }
+
+ ereport(DEBUG1,
+ (errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
+ get_namespace_name(get_rel_namespace(relid)),
+ get_rel_name(relid),
+ sub->name)));
+ }
+ }
+
+ /*
+ * Drop the tablesync slots associated with removed tables. This has
+ * to be at the end because otherwise if there is an error while doing
+ * the database operations we won't be able to rollback dropped slots.
+ */
+ for (off = 0; off < remove_rel_len; off++)
+ {
+ if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
+ sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+ {
+ char syncslotname[NAMEDATALEN] = {0};
+
+ /*
+ * For READY/SYNCDONE states we know the tablesync slot has
+ * already been dropped by the tablesync worker.
+ *
+ * For other states, there is no certainty, maybe the slot
+ * does not exist yet. Also, if we fail after removing some of
+ * the slots, next time, it will again try to drop already
+ * dropped slots and fail. For these reasons, we allow
+ * missing_ok = true for the drop.
+ */
+ ReplicationSlotNameForTablesync(sub->oid, sub_remove_rels[off].relid,
+ syncslotname, sizeof(syncslotname));
+ ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
+ }
+ }
+
}
PG_FINALLY();
{
@@ -1616,6 +1819,75 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
return tablelist;
}
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+ WalRcvExecResult *res;
+ StringInfoData cmd;
+ TupleTableSlot *slot;
+ Oid tableRow[2] = {TEXTOID, TEXTOID};
+ ListCell *lc;
+ bool first;
+ List *tablelist = NIL;
+
+ Assert(list_length(publications) > 0);
+
+ initStringInfo(&cmd);
+ appendStringInfoString(&cmd, "SELECT DISTINCT s.schemaname, s.sequencename\n"
+ " FROM pg_catalog.pg_publication_sequences s\n"
+ " WHERE s.pubname IN (");
+ first = true;
+ foreach(lc, publications)
+ {
+ char *pubname = strVal(lfirst(lc));
+
+ if (first)
+ first = false;
+ else
+ appendStringInfoString(&cmd, ", ");
+
+ appendStringInfoString(&cmd, quote_literal_cstr(pubname));
+ }
+ appendStringInfoChar(&cmd, ')');
+
+ res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+ pfree(cmd.data);
+
+ if (res->status != WALRCV_OK_TUPLES)
+ ereport(ERROR,
+ (errmsg("could not receive list of replicated tables from the publisher: %s",
+ res->err)));
+
+ /* Process sequences. */
+ slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+ while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+ {
+ char *nspname;
+ char *relname;
+ bool isnull;
+ RangeVar *rv;
+
+ nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+ Assert(!isnull);
+ relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+ Assert(!isnull);
+
+ rv = makeRangeVar(nspname, relname, -1);
+ tablelist = lappend(tablelist, rv);
+
+ ExecClearTuple(slot);
+ }
+ ExecDropSingleTupleTableSlot(slot);
+
+ walrcv_clear_result(res);
+
+ return tablelist;
+}
+
/*
* This is to report the connection failure while dropping replication slots.
* Here, we report the WARNING for all tablesync slots so that user can drop
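
The query built by fetch_sequence_list() boils down to something like the
following, with the subscription's publication names substituted into the
IN list (the pubname here is from the made-up example):

SELECT DISTINCT s.schemaname, s.sequencename
  FROM pg_catalog.pg_publication_sequences s
 WHERE s.pubname IN ('seq_pub');
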
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index 313c87398b2..78f14119d98 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -608,7 +608,7 @@ void
CheckSubscriptionRelkind(char relkind, const char *nspname,
const char *relname)
{
- if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+ if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE && relkind != RELKIND_SEQUENCE)
ereport(ERROR,
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 90b5da51c95..64d64d338d3 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -4850,6 +4850,7 @@ _copyCreatePublicationStmt(const CreatePublicationStmt *from)
COPY_NODE_FIELD(options);
COPY_NODE_FIELD(pubobjects);
COPY_SCALAR_FIELD(for_all_tables);
+ COPY_SCALAR_FIELD(for_all_sequences);
return newnode;
}
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index 06345da3ba8..0ad40ebe4a8 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -2336,6 +2336,7 @@ _equalAlterPublicationStmt(const AlterPublicationStmt *a,
COMPARE_NODE_FIELD(options);
COMPARE_NODE_FIELD(pubobjects);
COMPARE_SCALAR_FIELD(for_all_tables);
+ COMPARE_SCALAR_FIELD(for_all_sequences);
COMPARE_SCALAR_FIELD(action);
return true;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index b5966712ce1..83841e1eb9c 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -9760,6 +9760,26 @@ PublicationObjSpec:
$$->pubobjtype = PUBLICATIONOBJ_TABLES_IN_CUR_SCHEMA;
$$->location = @5;
}
+ | SEQUENCE relation_expr
+ {
+ $$ = makeNode(PublicationObjSpec);
+ $$->pubobjtype = PUBLICATIONOBJ_SEQUENCE;
+ $$->pubtable = makeNode(PublicationTable);
+ $$->pubtable->relation = $2;
+ }
+ | ALL SEQUENCES IN_P SCHEMA ColId
+ {
+ $$ = makeNode(PublicationObjSpec);
+ $$->pubobjtype = PUBLICATIONOBJ_SEQUENCES_IN_SCHEMA;
+ $$->name = $5;
+ $$->location = @5;
+ }
+ | ALL SEQUENCES IN_P SCHEMA CURRENT_SCHEMA
+ {
+ $$ = makeNode(PublicationObjSpec);
+ $$->pubobjtype = PUBLICATIONOBJ_SEQUENCES_IN_CUR_SCHEMA;
+ $$->location = @5;
+ }
| ColId
{
$$ = makeNode(PublicationObjSpec);
@@ -10095,6 +10115,12 @@ UnlistenStmt:
}
;
+/*
+ * FIXME
+ *
+ * opt_publication_for_sequences and publication_for_sequences should be
+ * added as sequence counterparts of the existing table productions
+ */
/*****************************************************************************
*
@@ -10103,6 +10129,12 @@ UnlistenStmt:
* BEGIN / COMMIT / ROLLBACK
* (also older versions END / ABORT)
*
+ * ALTER PUBLICATION name ADD SEQUENCE sequence [, sequence2]
+ *
+ * ALTER PUBLICATION name DROP SEQUENCE sequence [, sequence2]
+ *
+ * ALTER PUBLICATION name SET SEQUENCE sequence [, sequence2]
+ *
*****************************************************************************/
TransactionStmt:
diff --git a/src/backend/replication/logical/proto.c b/src/backend/replication/logical/proto.c
index 953942692ce..e8ead1387ae 100644
--- a/src/backend/replication/logical/proto.c
+++ b/src/backend/replication/logical/proto.c
@@ -647,6 +647,56 @@ logicalrep_write_message(StringInfo out, TransactionId xid, XLogRecPtr lsn,
pq_sendbytes(out, message, sz);
}
+/*
+ * Write SEQUENCE to stream
+ */
+void
+logicalrep_write_sequence(StringInfo out, Relation rel, TransactionId xid,
+ XLogRecPtr lsn, bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ uint8 flags = 0;
+ char *relname;
+
+ pq_sendbyte(out, LOGICAL_REP_MSG_SEQUENCE);
+
+ /* transaction ID (if not valid, we're not streaming) */
+ if (TransactionIdIsValid(xid))
+ pq_sendint32(out, xid);
+
+ pq_sendint8(out, flags);
+ pq_sendint64(out, lsn);
+
+ logicalrep_write_namespace(out, RelationGetNamespace(rel));
+ relname = RelationGetRelationName(rel);
+ pq_sendstring(out, relname);
+
+ pq_sendint8(out, transactional);
+ pq_sendint64(out, last_value);
+ pq_sendint64(out, log_cnt);
+ pq_sendint8(out, is_called);
+}
+
+/*
+ * Read SEQUENCE from the stream.
+ */
+void
+logicalrep_read_sequence(StringInfo in, LogicalRepSequence *seqdata)
+{
+ /* XXX skipping flags and lsn */
+ pq_getmsgint(in, 1);
+ pq_getmsgint64(in);
+
+ /* Read relation name from stream */
+ seqdata->nspname = pstrdup(logicalrep_read_namespace(in));
+ seqdata->seqname = pstrdup(pq_getmsgstring(in));
+
+ seqdata->transactional = pq_getmsgint(in, 1);
+ seqdata->last_value = pq_getmsgint64(in);
+ seqdata->log_cnt = pq_getmsgint64(in);
+ seqdata->is_called = pq_getmsgint(in, 1);
+}
+
/*
* Write relation description to the output stream.
*/
@@ -1203,6 +1253,8 @@ logicalrep_message_type(LogicalRepMsgType action)
return "STREAM ABORT";
case LOGICAL_REP_MSG_STREAM_PREPARE:
return "STREAM PREPARE";
+ case LOGICAL_REP_MSG_SEQUENCE:
+ return "SEQUENCE";
}
elog(ERROR, "invalid logical replication message type \"%c\"", action);
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index e596b69d466..c85867ee46b 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -100,6 +100,7 @@
#include "catalog/pg_subscription_rel.h"
#include "catalog/pg_type.h"
#include "commands/copy.h"
+#include "commands/sequence.h"
#include "miscadmin.h"
#include "parser/parse_relation.h"
#include "pgstat.h"
@@ -359,6 +360,12 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
*
* If the synchronization position is reached (SYNCDONE), then the table can
* be marked as READY and is no longer tracked.
+ *
+ * XXX This needs to handle sequences too - after AlterSubscription_refresh
+ * starts caring about sequences, GetSubscriptionNotReadyRelations won't
+ * return just tables, and we'll have to sync them here. Not sure it's worth
+ * creating a new "sync" worker per sequence, maybe we should just sync them
+ * in the current process (it's pretty light-weight).
*/
static void
process_syncing_tables_for_apply(XLogRecPtr current_lsn)
@@ -873,6 +880,99 @@ copy_table(Relation rel)
logicalrep_rel_close(relmapentry, NoLock);
}
+
+
+/*
+ * Fetch last_value, log_cnt and is_called for a sequence from the publisher.
+ */
+static void
+fetch_sequence_data(char *nspname, char *relname,
+ int64 *last_value, int64 *log_cnt, bool *is_called)
+{
+ WalRcvExecResult *res;
+ StringInfoData cmd;
+ TupleTableSlot *slot;
+ Oid tableRow[3] = {INT8OID, INT8OID, BOOLOID};
+
+ initStringInfo(&cmd);
+ appendStringInfo(&cmd, "SELECT last_value, log_cnt, is_called\n"
+ " FROM %s", quote_qualified_identifier(nspname, relname));
+
+ res = walrcv_exec(LogRepWorkerWalRcvConn, cmd.data, 3, tableRow);
+ pfree(cmd.data);
+
+ if (res->status != WALRCV_OK_TUPLES)
+ ereport(ERROR,
+ (errmsg("could not receive list of replicated tables from the publisher: %s",
+ res->err)));
+
+ /* Process the sequence. */
+ slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+ while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+ {
+ bool isnull;
+
+ *last_value = DatumGetInt64(slot_getattr(slot, 1, &isnull));
+ Assert(!isnull);
+
+ *log_cnt = DatumGetInt64(slot_getattr(slot, 2, &isnull));
+ Assert(!isnull);
+
+ *is_called = DatumGetBool(slot_getattr(slot, 3, &isnull));
+ Assert(!isnull);
+
+ ExecClearTuple(slot);
+ }
+ ExecDropSingleTupleTableSlot(slot);
+
+ walrcv_clear_result(res);
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Caller is responsible for locking the local relation.
+ */
+static void
+copy_sequence(Relation rel)
+{
+ LogicalRepRelMapEntry *relmapentry;
+ LogicalRepRelation lrel;
+ StringInfoData cmd;
+ int64 last_value = 0,
+ log_cnt = 0;
+ bool is_called = false;
+
+ /* Get the publisher relation info. */
+ fetch_remote_table_info(get_namespace_name(RelationGetNamespace(rel)),
+ RelationGetRelationName(rel), &lrel);
+
+ /* Put the relation into relmap. */
+ logicalrep_relmap_update(&lrel);
+
+ /* Map the publisher relation to local one. */
+ relmapentry = logicalrep_rel_open(lrel.remoteid, NoLock);
+ Assert(rel == relmapentry->localrel);
+
+ /* Start copy on the publisher. */
+ initStringInfo(&cmd);
+
+ Assert(lrel.relkind == RELKIND_SEQUENCE);
+
+ fetch_sequence_data(lrel.nspname, lrel.relname, &last_value, &log_cnt, &is_called);
+
+ elog(WARNING, "sequence %s info last_value %ld log_cnt %ld is_called %d",
+ quote_qualified_identifier(lrel.nspname, lrel.relname),
+ last_value, log_cnt, is_called);
+
+ ResetSequence2(RelationGetRelid(rel), last_value, log_cnt, is_called);
+
+ logicalrep_rel_close(relmapentry, NoLock);
+}
+
+
+
+
/*
* Determine the tablesync slot name.
*
@@ -1134,10 +1234,20 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
originname)));
}
- /* Now do the initial data copy */
- PushActiveSnapshot(GetTransactionSnapshot());
- copy_table(rel);
- PopActiveSnapshot();
+ if (get_rel_relkind(RelationGetRelid(rel)) == RELKIND_SEQUENCE)
+ {
+ /* Now do the initial sequence copy */
+ PushActiveSnapshot(GetTransactionSnapshot());
+ copy_sequence(rel);
+ PopActiveSnapshot();
+ }
+ else
+ {
+ /* Now do the initial data copy */
+ PushActiveSnapshot(GetTransactionSnapshot());
+ copy_table(rel);
+ PopActiveSnapshot();
+ }
res = walrcv_exec(LogRepWorkerWalRcvConn, "COMMIT", 0, NULL);
if (res->status != WALRCV_OK_COMMAND)
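
The initial sync in copy_sequence() / fetch_sequence_data() reads the remote
state with a plain query, so after the sync the two nodes can be compared by
hand (sequence name from the made-up example):

-- run on both publisher and subscriber; the values should match,
-- modulo any increments that happened after the sync
SELECT last_value, log_cnt, is_called FROM public.s;
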
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index d77bb32bb9e..68708d3907f 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -144,6 +144,7 @@
#include "catalog/pg_tablespace.h"
#include "commands/tablecmds.h"
#include "commands/tablespace.h"
+#include "commands/sequence.h"
#include "commands/trigger.h"
#include "executor/executor.h"
#include "executor/execPartition.h"
@@ -1093,6 +1094,61 @@ apply_handle_origin(StringInfo s)
errmsg_internal("ORIGIN message sent out of order")));
}
+/*
+ * Handle SEQUENCE message.
+ */
+static void
+apply_handle_sequence(StringInfo s)
+{
+ LogicalRepSequence seq;
+ Oid relid;
+
+ if (handle_streamed_transaction(LOGICAL_REP_MSG_SEQUENCE, s))
+ return;
+
+ logicalrep_read_sequence(s, &seq);
+
+ /*
+ * Transactional sequence updates must arrive as part of a remote
+ * transaction, while non-transactional ones must not (and there should
+ * be no local transaction in progress for them either).
+ */
+ Assert((!seq.transactional) || in_remote_transaction);
+ Assert(!(!seq.transactional && in_remote_transaction));
+ Assert(!(!seq.transactional && IsTransactionState()));
+
+ /*
+ * Make sure we're in a transaction (needed by ResetSequence2). For
+ * non-transactional updates we're guaranteed to start a new one,
+ * and we'll commit it at the end.
+ */
+ if (!IsTransactionState())
+ {
+ StartTransactionCommand();
+ maybe_reread_subscription();
+ }
+
+ relid = RangeVarGetRelid(makeRangeVar(seq.nspname,
+ seq.seqname, -1),
+ RowExclusiveLock, false);
+
+ /* lock the sequence in AccessExclusiveLock, as expected by ResetSequence2 */
+ elog(WARNING, "locking sequence %d in exclusive mode", relid);
+ LockRelationOid(relid, AccessExclusiveLock);
+
+ elog(WARNING, "applying sequence %s.%s transactional %d last_value %ld log_cnt %ld is_called %d",
+ seq.nspname, seq.seqname, seq.transactional, seq.last_value, seq.log_cnt, seq.is_called);
+
+ /* apply the sequence change */
+ ResetSequence2(relid, seq.last_value, seq.log_cnt, seq.is_called);
+
+ /*
+ * Commit the per-stream transaction (we only do this when not in
+ * remote transaction, i.e. for non-transactional sequence updates).
+ */
+ if (!in_remote_transaction)
+ CommitTransactionCommand();
+}
+
/*
* Handle STREAM START message.
*/
@@ -2421,6 +2477,10 @@ apply_dispatch(StringInfo s)
*/
break;
+ case LOGICAL_REP_MSG_SEQUENCE:
+ apply_handle_sequence(s);
+ return;
+
case LOGICAL_REP_MSG_STREAM_START:
apply_handle_stream_start(s);
break;
diff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c
index af8d51aee99..03a41d7ce07 100644
--- a/src/backend/replication/pgoutput/pgoutput.c
+++ b/src/backend/replication/pgoutput/pgoutput.c
@@ -49,6 +49,10 @@ static void pgoutput_message(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn, XLogRecPtr message_lsn,
bool transactional, const char *prefix,
Size sz, const char *message);
+static void pgoutput_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation rel, bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called);
static bool pgoutput_origin_filter(LogicalDecodingContext *ctx,
RepOriginId origin_id);
static void pgoutput_begin_prepare_txn(LogicalDecodingContext *ctx,
@@ -159,6 +163,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->change_cb = pgoutput_change;
cb->truncate_cb = pgoutput_truncate;
cb->message_cb = pgoutput_message;
+ cb->sequence_cb = pgoutput_sequence;
cb->commit_cb = pgoutput_commit_txn;
cb->begin_prepare_cb = pgoutput_begin_prepare_txn;
@@ -175,6 +180,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->stream_commit_cb = pgoutput_stream_commit;
cb->stream_change_cb = pgoutput_change;
cb->stream_message_cb = pgoutput_message;
+ cb->stream_sequence_cb = pgoutput_sequence;
cb->stream_truncate_cb = pgoutput_truncate;
/* transaction streaming - two-phase commit */
cb->stream_prepare_cb = pgoutput_stream_prepare_txn;
@@ -188,6 +194,7 @@ parse_output_parameters(List *options, PGOutputData *data)
bool publication_names_given = false;
bool binary_option_given = false;
bool messages_option_given = false;
+ bool sequences_option_given = false;
bool streaming_given = false;
bool two_phase_option_given = false;
@@ -195,6 +202,7 @@ parse_output_parameters(List *options, PGOutputData *data)
data->streaming = false;
data->messages = false;
data->two_phase = false;
+ data->sequences = true;
foreach(lc, options)
{
@@ -260,6 +268,16 @@ parse_output_parameters(List *options, PGOutputData *data)
data->messages = defGetBoolean(defel);
}
+ else if (strcmp(defel->defname, "sequences") == 0)
+ {
+ if (sequences_option_given)
+ ereport(ERROR,
+ (errcode(ERRCODE_SYNTAX_ERROR),
+ errmsg("conflicting or redundant options")));
+ sequences_option_given = true;
+
+ data->sequences = defGetBoolean(defel);
+ }
else if (strcmp(defel->defname, "streaming") == 0)
{
if (streaming_given)
@@ -856,6 +874,51 @@ pgoutput_message(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
OutputPluginWrite(ctx, true);
}
+static void
+pgoutput_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation rel, bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ PGOutputData *data = (PGOutputData *) ctx->output_plugin_private;
+ TransactionId xid = InvalidTransactionId;
+ RelationSyncEntry *relentry;
+
+ if (!data->sequences)
+ return;
+
+ if (!is_publishable_relation(rel))
+ return;
+
+ /*
+ * Remember the xid for the sequence change in streaming mode. See
+ * pgoutput_change.
+ */
+ if (in_streaming)
+ xid = txn->xid;
+
+ relentry = get_rel_sync_entry(data, RelationGetRelid(rel));
+
+ /*
+ * First check the sequence filter.
+ *
+ * We handle just REORDER_BUFFER_CHANGE_SEQUENCE here.
+ */
+ if (!relentry->pubactions.pubsequence)
+ return;
+
+ OutputPluginPrepareWrite(ctx, true);
+ logicalrep_write_sequence(ctx->out,
+ rel,
+ xid,
+ sequence_lsn,
+ transactional,
+ last_value,
+ log_cnt,
+ is_called);
+ OutputPluginWrite(ctx, true);
+}
+
/*
* Currently we always forward.
*/
@@ -1138,7 +1201,8 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
entry->streamed_txns = NIL;
entry->replicate_valid = false;
entry->pubactions.pubinsert = entry->pubactions.pubupdate =
- entry->pubactions.pubdelete = entry->pubactions.pubtruncate = false;
+ entry->pubactions.pubdelete = entry->pubactions.pubtruncate =
+ entry->pubactions.pubsequence = false;
entry->publish_as_relid = InvalidOid;
entry->map = NULL; /* will be set by maybe_send_schema() if
* needed */
@@ -1160,6 +1224,7 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
Oid publish_as_relid = relid;
bool am_partition = get_rel_relispartition(relid);
char relkind = get_rel_relkind(relid);
+ bool is_sequence = (relkind == RELKIND_SEQUENCE);
/* Reload publications if needed before use. */
if (!publications_valid)
@@ -1183,12 +1248,23 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
Publication *pub = lfirst(lc);
bool publish = false;
- if (pub->alltables)
+ if (pub->alltables && (!is_sequence))
{
publish = true;
if (pub->pubviaroot && am_partition)
publish_as_relid = llast_oid(get_partition_ancestors(relid));
}
+ else if (pub->allsequences && is_sequence)
+ {
+ publish = true;
+ }
+
+ /* if a sequence, just cross-check the list of publications */
+ if (!publish && is_sequence)
+ {
+ if (list_member_oid(pubids, pub->oid))
+ publish = true;
+ }
if (!publish)
{
@@ -1243,10 +1319,12 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
entry->pubactions.pubupdate |= pub->pubactions.pubupdate;
entry->pubactions.pubdelete |= pub->pubactions.pubdelete;
entry->pubactions.pubtruncate |= pub->pubactions.pubtruncate;
+ entry->pubactions.pubsequence |= pub->pubactions.pubsequence;
}
if (entry->pubactions.pubinsert && entry->pubactions.pubupdate &&
- entry->pubactions.pubdelete && entry->pubactions.pubtruncate)
+ entry->pubactions.pubdelete && entry->pubactions.pubtruncate &&
+ entry->pubactions.pubsequence)
break;
}
@@ -1391,6 +1469,7 @@ rel_sync_cache_publication_cb(Datum arg, int cacheid, uint32 hashvalue)
entry->pubactions.pubupdate = false;
entry->pubactions.pubdelete = false;
entry->pubactions.pubtruncate = false;
+ entry->pubactions.pubsequence = false;
}
}
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index 2e760e8a3bd..28fb78eb690 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -5585,6 +5585,7 @@ GetRelationPublicationActions(Relation relation)
pubactions->pubupdate |= pubform->pubupdate;
pubactions->pubdelete |= pubform->pubdelete;
pubactions->pubtruncate |= pubform->pubtruncate;
+ pubactions->pubsequence |= pubform->pubsequence;
ReleaseSysCache(tup);
@@ -5593,7 +5594,8 @@ GetRelationPublicationActions(Relation relation)
* other publications.
*/
if (pubactions->pubinsert && pubactions->pubupdate &&
- pubactions->pubdelete && pubactions->pubtruncate)
+ pubactions->pubdelete && pubactions->pubtruncate &&
+ pubactions->pubsequence)
break;
}
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 502b5c57515..888aeca20b6 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1691,13 +1691,13 @@ psql_completion(const char *text, int start, int end)
COMPLETE_WITH("ADD", "DROP", "OWNER TO", "RENAME TO", "SET");
/* ALTER PUBLICATION <name> ADD */
else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD"))
- COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE");
+ COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE|SEQUENCE");
/* ALTER PUBLICATION <name> DROP */
else if (Matches("ALTER", "PUBLICATION", MatchAny, "DROP"))
- COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE");
+ COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE|SEQUENCE");
/* ALTER PUBLICATION <name> SET */
else if (Matches("ALTER", "PUBLICATION", MatchAny, "SET"))
- COMPLETE_WITH("(", "ALL TABLES IN SCHEMA", "TABLE");
+ COMPLETE_WITH("(", "ALL TABLES IN SCHEMA", "TABLE|SEQUENCE");
else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD|DROP|SET", "ALL", "TABLES", "IN", "SCHEMA"))
COMPLETE_WITH_QUERY(Query_for_list_of_schemas
" AND nspname != 'pg_catalog' "
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 0859dc81cac..ea86c6e3933 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -11529,6 +11529,11 @@
provolatile => 's', prorettype => 'oid', proargtypes => 'text',
proallargtypes => '{text,oid}', proargmodes => '{i,o}',
proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_tables' },
+{ oid => '8000', descr => 'get OIDs of sequences in a publication',
+ proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+ provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+ proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+ proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
{ oid => '6121',
descr => 'returns whether a relation can be part of a publication',
proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 841b9b6c253..e56286772f4 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
*/
bool puballtables;
+ /*
+ * indicates that this is a special publication which should encompass all
+ * sequences in the database (except for the unlogged and temp ones)
+ */
+ bool puballsequences;
+
/* true if inserts are published */
bool pubinsert;
@@ -52,6 +58,9 @@ CATALOG(pg_publication,6104,PublicationRelationId)
/* true if truncates are published */
bool pubtruncate;
+ /* true if sequences are published */
+ bool pubsequence;
+
/* true if partition changes are published using root schema */
bool pubviaroot;
} FormData_pg_publication;
@@ -72,6 +81,7 @@ typedef struct PublicationActions
bool pubupdate;
bool pubdelete;
bool pubtruncate;
+ bool pubsequence;
} PublicationActions;
typedef struct Publication
@@ -79,6 +89,7 @@ typedef struct Publication
Oid oid;
char *name;
bool alltables;
+ bool allsequences;
bool pubviaroot;
PublicationActions pubactions;
} Publication;
@@ -121,6 +132,9 @@ extern List *GetPubPartitionOptionRelations(List *result,
PublicationPartOpt pub_partopt,
Oid relid);
+extern List *GetAllSequencesPublicationRelations(void);
+extern List *GetPublicationSequenceRelations(Oid pubid);
+
extern bool is_publishable_relation(Relation rel);
extern bool is_schema_publication(Oid pubid);
extern ObjectAddress publication_add_relation(Oid pubid, PublicationRelInfo *targetrel,
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index b9015a6f710..b7ea1dae9c4 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -60,6 +60,7 @@ extern ObjectAddress DefineSequence(ParseState *pstate, CreateSeqStmt *stmt);
extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
extern void DeleteSequenceTuple(Oid relid);
extern void ResetSequence(Oid seq_relid);
+extern void ResetSequence2(Oid seq_relid, int64 last_value, int64 log_cnt, bool is_called);
extern void ResetSequenceCaches(void);
extern void seq_redo(XLogReaderState *rptr);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 3e9bdc781f9..6dcb01b036a 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -3654,6 +3654,10 @@ typedef enum PublicationObjSpecType
PUBLICATIONOBJ_TABLES_IN_SCHEMA, /* All tables in schema */
PUBLICATIONOBJ_TABLES_IN_CUR_SCHEMA, /* All tables in first element of
* search_path */
+ PUBLICATIONOBJ_SEQUENCE, /* Sequence type */
+ PUBLICATIONOBJ_SEQUENCES_IN_SCHEMA, /* Sequences in schema type */
+ PUBLICATIONOBJ_SEQUENCES_IN_CUR_SCHEMA, /* Get the first element of
+ * search_path */
PUBLICATIONOBJ_CONTINUATION /* Continuation of previous type */
} PublicationObjSpecType;
@@ -3673,6 +3677,7 @@ typedef struct CreatePublicationStmt
List *options; /* List of DefElem nodes */
List *pubobjects; /* Optional list of publication objects */
bool for_all_tables; /* Special publication for all tables in db */
+ bool for_all_sequences; /* Special publication for all sequences in db */
} CreatePublicationStmt;
typedef enum AlterPublicationAction
@@ -3696,6 +3701,7 @@ typedef struct AlterPublicationStmt
*/
List *pubobjects; /* Optional list of publication objects */
bool for_all_tables; /* Special publication for all tables in db */
+ bool for_all_sequences; /* Special publication for all sequences in db */
AlterPublicationAction action; /* What action to perform with the given
* objects */
} AlterPublicationStmt;
diff --git a/src/include/replication/logicalproto.h b/src/include/replication/logicalproto.h
index 22fffaca62d..8f8c325522d 100644
--- a/src/include/replication/logicalproto.h
+++ b/src/include/replication/logicalproto.h
@@ -60,6 +60,7 @@ typedef enum LogicalRepMsgType
LOGICAL_REP_MSG_RELATION = 'R',
LOGICAL_REP_MSG_TYPE = 'Y',
LOGICAL_REP_MSG_MESSAGE = 'M',
+ LOGICAL_REP_MSG_SEQUENCE = 'X', /* FIXME change */
LOGICAL_REP_MSG_BEGIN_PREPARE = 'b',
LOGICAL_REP_MSG_PREPARE = 'P',
LOGICAL_REP_MSG_COMMIT_PREPARED = 'K',
@@ -117,6 +118,18 @@ typedef struct LogicalRepTyp
char *typname; /* name of the remote type */
} LogicalRepTyp;
+/* Sequence info */
+typedef struct LogicalRepSequence
+{
+ Oid remoteid; /* unique id of the remote sequence */
+ char *nspname; /* schema name of remote sequence */
+ char *seqname; /* name of the remote sequence */
+ bool transactional;
+ int64 last_value;
+ int64 log_cnt;
+ bool is_called;
+} LogicalRepSequence;
+
/* Transaction info */
typedef struct LogicalRepBeginData
{
@@ -227,6 +240,12 @@ extern List *logicalrep_read_truncate(StringInfo in,
bool *cascade, bool *restart_seqs);
extern void logicalrep_write_message(StringInfo out, TransactionId xid, XLogRecPtr lsn,
bool transactional, const char *prefix, Size sz, const char *message);
+extern void logicalrep_write_sequence(StringInfo out, Relation rel,
+ TransactionId xid, XLogRecPtr lsn,
+ bool transactional,
+ int64 last_value, int64 log_cnt,
+ bool is_called);
+extern void logicalrep_read_sequence(StringInfo in, LogicalRepSequence *seqdata);
extern void logicalrep_write_rel(StringInfo out, TransactionId xid,
Relation rel);
extern LogicalRepRelation *logicalrep_read_rel(StringInfo in);
diff --git a/src/include/replication/pgoutput.h b/src/include/replication/pgoutput.h
index 78aa9151ef5..a6f6843ada6 100644
--- a/src/include/replication/pgoutput.h
+++ b/src/include/replication/pgoutput.h
@@ -28,6 +28,7 @@ typedef struct PGOutputData
bool streaming;
bool messages;
bool two_phase;
+ bool sequences;
} PGOutputData;
#endif /* PGOUTPUT_H */
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index d652f7b5fb4..dba7fadd038 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1451,6 +1451,14 @@ pg_prepared_xacts| SELECT p.transaction,
FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+ n.nspname AS schemaname,
+ c.relname AS sequencename
+ FROM pg_publication p,
+ LATERAL pg_get_publication_sequences((p.pubname)::text) gpt(relid),
+ (pg_class c
+ JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+ WHERE (c.oid = gpt.relid);
pg_publication_tables| SELECT p.pubname,
n.nspname AS schemaname,
c.relname AS tablename
diff --git a/src/test/subscription/t/027_sequences.pl b/src/test/subscription/t/027_sequences.pl
new file mode 100644
index 00000000000..fd941619071
--- /dev/null
+++ b/src/test/subscription/t/027_sequences.pl
@@ -0,0 +1,196 @@
+
+# Copyright (c) 2021, PostgreSQL Global Development Group
+
+# This tests that sequences are replicated correctly by logical replication
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More tests => 6;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->start;
+
+# Create subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Create some preexisting content on publisher
+my $ddl = qq(
+ CREATE SEQUENCE s;
+);
+
+# Setup structure on the publisher
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Create the same structure on the subscriber, plus an extra sequence that
+# we'll create on the publisher later
+$ddl = qq(
+ CREATE SEQUENCE s;
+ CREATE SEQUENCE s2;
+);
+
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Setup logical replication
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+ "CREATE PUBLICATION seq_pub");
+
+$node_publisher->safe_psql('postgres',
+ "ALTER PUBLICATION seq_pub ADD SEQUENCE s");
+
+$node_subscriber->safe_psql('postgres',
+ "CREATE SUBSCRIPTION seq_sub CONNECTION '$publisher_connstr' PUBLICATION seq_pub WITH (slot_name = seq_sub_slot)"
+);
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Wait for initial sync to finish as well
+my $synced_query =
+ "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('s', 'r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+ or die "Timed out while waiting for subscriber to synchronize data";
+
+# Insert initial test data
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ -- generate a number of values using the sequence
+ SELECT nextval('s') FROM generate_series(1,100);
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+my $result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s;
+));
+
+is( $result, '132|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+# advance the sequence in a rolled-back transaction - the increments are non-transactional, so they get replicated anyway
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ SELECT nextval('s') FROM generate_series(1,100);
+ ROLLBACK;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s;
+));
+
+is( $result, '231|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+# create a new sequence and roll it back - should not be replicated, due to
+# the transactional behavior
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ CREATE SEQUENCE s2;
+ ALTER PUBLICATION seq_pub ADD SEQUENCE s2;
+ SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '1|0|f',
+ 'check replicated sequence values on subscriber');
+
+
+# create a new sequence, advance it in a rolled-back transaction, but commit
+# the create - the advance should be replicated nevertheless
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ CREATE SEQUENCE s2;
+ ALTER PUBLICATION seq_pub ADD SEQUENCE s2;
+ SAVEPOINT sp1;
+ SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK TO sp1;
+ COMMIT;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Wait for sync of the second sequence we just added to finish
+$synced_query =
+ "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('s', 'r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+ or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '132|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+# advance the new sequence in a transaction, and roll it back - in this case
+# it should not be replicated at commit
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '132|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+# advance the sequence in a subtransaction - the subtransaction gets rolled
+# back, but commit the main one - the changes should still be replicated
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ SAVEPOINT s1;
+ SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK TO s1;
+ COMMIT;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '330|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+$node_subscriber->stop('fast');
+$node_publisher->stop('fast');
--
2.31.1
On 27.01.22 00:32, Tomas Vondra wrote:
On 1/26/22 14:01, Petr Jelinek wrote:
I would not remove it altogether, there are plenty of consumers of this
extension's output in the wild (even if I think it's
unfortunate) that might not be interested in sequences, but changing
it to off by default certainly makes sense.
Indeed. Attached is an updated patch series, with 0003 switching it to
false by default (and fixing the fallout). For commit I'll merge that
into 0002, of course.
(could be done in separate patches too IMO)
test_decoding.c uses %zu several times for int64 values, which is not
correct. This should use INT64_FORMAT or %lld with a cast to (long long
int).
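As a rough illustration (not the actual test_decoding code - the stringinfo
and variable names here are only placeholders), the output could be built
along these lines:

    appendStringInfo(ctx->out, " last_value: " INT64_FORMAT " log_cnt: " INT64_FORMAT,
                     last_value, log_cnt);

    /* or, with an explicit cast, since int64 may be long or long long */
    appendStringInfo(ctx->out, " last_value: %lld log_cnt: %lld",
                     (long long) last_value, (long long) log_cnt);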
I don't know, maybe it's worth commenting somewhere how the expected
values in contrib/test_decoding/expected/sequence.out come about.
Otherwise, it's quite a puzzle to determine where the 3830 comes from,
for example.
I think the skip-sequences option is better turned around into a
positive name like include-sequences. There is a mix of "skip" and
"include" options in test_decoding, but there are more "include" ones
right now.
In 0003, a few files were apparently missed, so the tests don't fully
pass. See the attached patch.
I haven't looked fully through the 0004 patch, but I notice that there
was a confusing mix of FOR ALL SEQUENCE and FOR ALL SEQUENCES. I have
corrected that in the other attached patch.
Other than the mentioned cosmetic issues, I think 0001-0003 are ready to go.
Attachments:
0001-fixup-Change-skip-sequences-to-false-by-default.patch
From d01e45ec0bba2afaeab95bceff0022fb32558ae0 Mon Sep 17 00:00:00 2001
From: Peter Eisentraut <peter@eisentraut.org>
Date: Thu, 27 Jan 2022 16:31:57 +0100
Subject: [PATCH 1/3] fixup! Change skip-sequences to false by default
---
contrib/test_decoding/expected/mxact.out | 8 ++++----
contrib/test_decoding/expected/ondisk_startup.out | 4 ++--
contrib/test_decoding/specs/mxact.spec | 2 +-
contrib/test_decoding/specs/ondisk_startup.spec | 2 +-
4 files changed, 8 insertions(+), 8 deletions(-)
diff --git a/contrib/test_decoding/expected/mxact.out b/contrib/test_decoding/expected/mxact.out
index 03ad3df099..89d4623e74 100644
--- a/contrib/test_decoding/expected/mxact.out
+++ b/contrib/test_decoding/expected/mxact.out
@@ -7,7 +7,7 @@ step s0init: SELECT 'init' FROM pg_create_logical_replication_slot('isolation_sl
init
(1 row)
-step s0start: SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false');
+step s0start: SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false', 'skip-sequences', 'true');
data
----
(0 rows)
@@ -27,7 +27,7 @@ t
(1 row)
step s0w: INSERT INTO do_write DEFAULT VALUES;
-step s0start: SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false');
+step s0start: SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false', 'skip-sequences', 'true');
data
--------------------------------------------
BEGIN
@@ -50,7 +50,7 @@ step s0init: SELECT 'init' FROM pg_create_logical_replication_slot('isolation_sl
init
(1 row)
-step s0start: SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false');
+step s0start: SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false', 'skip-sequences', 'true');
data
----
(0 rows)
@@ -71,7 +71,7 @@ t
step s0alter: ALTER TABLE do_write ADD column ts timestamptz;
step s0w: INSERT INTO do_write DEFAULT VALUES;
-step s0start: SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false');
+step s0start: SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false', 'skip-sequences', 'true');
data
------------------------------------------------------------------------------
BEGIN
diff --git a/contrib/test_decoding/expected/ondisk_startup.out b/contrib/test_decoding/expected/ondisk_startup.out
index bc7ff07164..e8c47f0205 100644
--- a/contrib/test_decoding/expected/ondisk_startup.out
+++ b/contrib/test_decoding/expected/ondisk_startup.out
@@ -35,7 +35,7 @@ init
step s2c: COMMIT;
step s1insert: INSERT INTO do_write DEFAULT VALUES;
step s1checkpoint: CHECKPOINT;
-step s1start: SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false');
+step s1start: SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false', 'skip-sequences', 'true');
data
--------------------------------------------------------------------
BEGIN
@@ -46,7 +46,7 @@ COMMIT
step s1insert: INSERT INTO do_write DEFAULT VALUES;
step s1alter: ALTER TABLE do_write ADD COLUMN addedbys1 int;
step s1insert: INSERT INTO do_write DEFAULT VALUES;
-step s1start: SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false');
+step s1start: SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false', 'skip-sequences', 'true');
data
--------------------------------------------------------------------------------------------
BEGIN
diff --git a/contrib/test_decoding/specs/mxact.spec b/contrib/test_decoding/specs/mxact.spec
index ea5b1aa2d6..c7ade959a5 100644
--- a/contrib/test_decoding/specs/mxact.spec
+++ b/contrib/test_decoding/specs/mxact.spec
@@ -13,7 +13,7 @@ teardown
session "s0"
setup { SET synchronous_commit=on; }
step "s0init" {SELECT 'init' FROM pg_create_logical_replication_slot('isolation_slot', 'test_decoding');}
-step "s0start" {SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false');}
+step "s0start" {SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false', 'skip-sequences', 'true');}
step "s0alter" {ALTER TABLE do_write ADD column ts timestamptz; }
step "s0w" { INSERT INTO do_write DEFAULT VALUES; }
diff --git a/contrib/test_decoding/specs/ondisk_startup.spec b/contrib/test_decoding/specs/ondisk_startup.spec
index 96ce87f1a4..e91f5868d4 100644
--- a/contrib/test_decoding/specs/ondisk_startup.spec
+++ b/contrib/test_decoding/specs/ondisk_startup.spec
@@ -16,7 +16,7 @@ session "s1"
setup { SET synchronous_commit=on; }
step "s1init" {SELECT 'init' FROM pg_create_logical_replication_slot('isolation_slot', 'test_decoding');}
-step "s1start" {SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false');}
+step "s1start" {SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false', 'skip-sequences', 'true');}
step "s1insert" { INSERT INTO do_write DEFAULT VALUES; }
step "s1checkpoint" { CHECKPOINT; }
step "s1alter" { ALTER TABLE do_write ADD COLUMN addedbys1 int; }
--
2.34.1
0003-fixup-Add-support-for-decoding-sequences-to-built-in.patch
From 698603ee4d4b9ca953ad50935ce625fd9cba340f Mon Sep 17 00:00:00 2001
From: Peter Eisentraut <peter@eisentraut.org>
Date: Thu, 27 Jan 2022 16:35:18 +0100
Subject: [PATCH 3/3] fixup! Add support for decoding sequences to built-in
replication
---
doc/src/sgml/ref/alter_publication.sgml | 2 +-
src/backend/parser/gram.y | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/doc/src/sgml/ref/alter_publication.sgml b/doc/src/sgml/ref/alter_publication.sgml
index ca1ac3a996..9da8274ae2 100644
--- a/doc/src/sgml/ref/alter_publication.sgml
+++ b/doc/src/sgml/ref/alter_publication.sgml
@@ -33,7 +33,7 @@
TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [, ... ]
SEQUENCE <replaceable class="parameter">sequence_name</replaceable> [ * ] [, ... ]
ALL TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
- ALL SEQUENCE IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+ ALL SEQUENCES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
</synopsis>
</refsynopsisdiv>
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 83841e1eb9..d26528bd6e 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -9767,7 +9767,7 @@ PublicationObjSpec:
$$->pubtable = makeNode(PublicationTable);
$$->pubtable->relation = $2;
}
- | ALL SEQUENCE IN_P SCHEMA ColId
+ | ALL SEQUENCES IN_P SCHEMA ColId
{
$$ = makeNode(PublicationObjSpec);
$$->pubobjtype = PUBLICATIONOBJ_SEQUENCES_IN_SCHEMA;
--
2.34.1
On 1/27/22 17:05, Peter Eisentraut wrote:
On 27.01.22 00:32, Tomas Vondra wrote:
On 1/26/22 14:01, Petr Jelinek wrote:
I would not remove it altogether, there are plenty of consumers of
this extension's output in the wild (even if I think it's
unfortunate) that might not be interested in sequences, but changing
it to off by default certainly makes sense.
Indeed. Attached is an updated patch series, with 0003 switching it to
false by default (and fixing the fallout). For commit I'll merge that
into 0002, of course.
(could be done in separate patches too IMO)
test_decoding.c uses %zu several times for int64 values, which is not
correct. This should use INT64_FORMAT or %lld with a cast to (long long
int).
Good point - INT64_FORMAT seems better. Also, the formatting was not
quite right (missing space after the colon), so I fixed that too.
I don't know, maybe it's worth commenting somewhere how the expected
values in contrib/test_decoding/expected/sequence.out come about.
Otherwise, it's quite a puzzle to determine where the 3830 comes from,
for example.
Yeah, that's probably a good idea - I had to think about the expected
output repeatedly, so an explanation would help. I'll do that in the
next version of the patch.
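As a rough sketch of that arithmetic, with round numbers rather than the
actual sequence.out values: nextval() WAL-logs 32 values ahead of what it
hands out, so the decoded last_value runs ahead of the on-disk one. For
example:

    CREATE SEQUENCE s;
    SELECT nextval('s') FROM generate_series(1,100);
    -- on-disk last_value is 100, but the last WAL record already covers
    -- 32 values ahead, so the decoded increment reports last_value 132,
    -- which is also what the TAP test in this series expects

The larger numbers in sequence.out come about the same way, just accumulated
over more nextval()/setval() calls.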
I think the skip-sequences option is better turned around into a
positive name like include-sequences. There is a mix of "skip" and
"include" options in test_decoding, but there are more "include" ones
right now.
Hmmm. I don't see much difference between skip-sequences and
include-sequences, but I don't feel very strongly about it either so I
switched that to include-sequences (which defaults to true).
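For example, with the option as discussed here, a consumer that does not
care about sequences would do something like this (just a sketch - the slot
name is made up):

    SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL,
        'include-xids', 'false', 'include-sequences', 'false');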
In 0003, a few files were apparently missed, so the tests don't fully
pass. See the attached patch.
D'oh! I'd swear I've fixed those too.
I haven't looked fully through the 0004 patch, but I notice that there
was a confusing mix of FOR ALL SEQUENCE and FOR ALL SEQUENCES. I have
corrected that in the other attached patch.
Other than the mentioned cosmetic issues, I think 0001-0003 are ready to go.
Thanks. I think we'll have time to look at 0004 more closely once the
initial parts get committed.
Attached is a rebased/squashed version of the patches, with all the
fixes discussed here.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Attachments:
0001-Logical-decoding-of-sequences-20220128.patch
From fab6b7678a63bd53922f241dec0169ab145659c5 Mon Sep 17 00:00:00 2001
From: Tomas Vondra <tomas.vondra@postgresql.org>
Date: Wed, 26 Jan 2022 02:07:15 +0100
Subject: [PATCH 1/3] Logical decoding of sequences
This adds the infrastructure for logical decoding of sequences. Support
for built-in logical replication and test_decoding is added separately.
When decoding sequences, we differentiate between sequences created in
a (running) transaction and sequences created in other (already
committed) transactions. Changes for sequences created in the same
top-level transaction are treated as "transactional", i.e. just like any
other change from that transaction (and discarded in case of a
rollback). Changes for sequences created earlier are treated as
non-transactional - they are processed immediately, as if performed
outside any transaction (and thus not rolled back).
This mixed behavior is necessary - sequences are non-transactional (e.g.
ROLLBACK does not undo the sequence increments). But for new sequences,
we need to handle them in a transactional way, because if we ever get
some DDL support, the sequence won't exist until the transaction gets
applied. So we need to ensure the increments don't happen until the
sequence gets created.
To differentiate which sequences are "old" and which were created in a
still-running transaction, we track sequences created in running
transactions in a hash table. Each sequence is identified by its
relfilenode, and at transaction commit we discard all sequences created
in that particular transaction. For each sequence we track the XID of
the (sub)transaction that created it, and we clean up the sequences for
each subtransaction when it completes (commit/rollback).
We don't use the XID to check if it's the same top-level transaction.
It's enough to know it was created in an in-progress transaction, and we
know it must be the current one because otherwise it wouldn't see the
sequence object.
The main changes in this patch are:
1) ensure WAL-logging of all necessary info for sequence advances
We need to be able to associate the advance with an XID, but until now
a sequence advance might have XID 0 if it was the first thing the
transaction did. So we ensure the transaction has an XID.
Note: This is needed because of subxacts. Even if the advance itself has
XID 0, the sequence might have been created in a different subxact of the
same top-level xact.
Then there's a "created" flag added to the xl_seq_rec, which is
necessary to differentiate WAL for created/reset sequences.
2) decoding / queueing of sequences into ReorderBuffer
This is mostly copy-paste of existing code for other decoded events.
3) tracking sequences created in in-progress transactions
We use a simple hash table, indexed by relfilenode.
---
doc/src/sgml/logicaldecoding.sgml | 62 ++-
src/backend/commands/sequence.c | 32 ++
src/backend/replication/logical/decode.c | 124 ++++++
src/backend/replication/logical/logical.c | 88 ++++
.../replication/logical/reorderbuffer.c | 400 ++++++++++++++++++
src/include/access/rmgrlist.h | 2 +-
src/include/commands/sequence.h | 1 +
src/include/replication/decode.h | 1 +
src/include/replication/output_plugin.h | 27 ++
src/include/replication/reorderbuffer.h | 43 +-
10 files changed, 775 insertions(+), 5 deletions(-)
diff --git a/doc/src/sgml/logicaldecoding.sgml b/doc/src/sgml/logicaldecoding.sgml
index b6353c7a125..60a2b7d1ccb 100644
--- a/doc/src/sgml/logicaldecoding.sgml
+++ b/doc/src/sgml/logicaldecoding.sgml
@@ -458,6 +458,7 @@ typedef struct OutputPluginCallbacks
LogicalDecodeTruncateCB truncate_cb;
LogicalDecodeCommitCB commit_cb;
LogicalDecodeMessageCB message_cb;
+ LogicalDecodeSequenceCB sequence_cb;
LogicalDecodeFilterByOriginCB filter_by_origin_cb;
LogicalDecodeShutdownCB shutdown_cb;
LogicalDecodeFilterPrepareCB filter_prepare_cb;
@@ -472,6 +473,7 @@ typedef struct OutputPluginCallbacks
LogicalDecodeStreamCommitCB stream_commit_cb;
LogicalDecodeStreamChangeCB stream_change_cb;
LogicalDecodeStreamMessageCB stream_message_cb;
+ LogicalDecodeStreamSequenceCB stream_sequence_cb;
LogicalDecodeStreamTruncateCB stream_truncate_cb;
} OutputPluginCallbacks;
@@ -481,9 +483,12 @@ typedef void (*LogicalOutputPluginInit) (struct OutputPluginCallbacks *cb);
and <function>commit_cb</function> callbacks are required,
while <function>startup_cb</function>,
<function>filter_by_origin_cb</function>, <function>truncate_cb</function>,
+ <function>sequence_cb</function>,
and <function>shutdown_cb</function> are optional.
If <function>truncate_cb</function> is not set but a
<command>TRUNCATE</command> is to be decoded, the action will be ignored.
+ Similarly, if <function>sequence_cb</function> is not set and a sequence
+ change is to be decoded, the action will be ignored.
</para>
<para>
@@ -492,7 +497,8 @@ typedef void (*LogicalOutputPluginInit) (struct OutputPluginCallbacks *cb);
<function>stream_stop_cb</function>, <function>stream_abort_cb</function>,
<function>stream_commit_cb</function>, <function>stream_change_cb</function>,
and <function>stream_prepare_cb</function>
- are required, while <function>stream_message_cb</function> and
+ are required, while <function>stream_message_cb</function>,
+ <function>stream_sequence_cb</function>, and
<function>stream_truncate_cb</function> are optional.
</para>
@@ -808,6 +814,35 @@ typedef void (*LogicalDecodeMessageCB) (struct LogicalDecodingContext *ctx,
</para>
</sect3>
+ <sect3 id="logicaldecoding-output-plugin-sequence">
+ <title>Sequence Callback</title>
+
+ <para>
+ The optional <function>sequence_cb</function> callback is called for
+ actions that update a sequence value.
+<programlisting>
+typedef void (*LogicalDecodeSequenceCB) (struct LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional,
+ int64 last_value,
+ int64 log_cnt,
+ bool is_called);
+</programlisting>
+ The <parameter>txn</parameter> parameter contains meta information about
+ the transaction the sequence change is part of. Note however that it can
+ be NULL when the sequence change is non-transactional and the XID was not
+ assigned yet in the transaction which updated the sequence value.
+ The <parameter>sequence_lsn</parameter> has the WAL location of the sequence
+ update. The <parameter>transactional</parameter> says whether the sequence
+ change has to be replayed as part of the transaction or directly.
+
+ The <parameter>last_value</parameter>, <parameter>log_cnt</parameter> and
+ <parameter>is_called</parameter> parameters describe the sequence change.
+ </para>
+ </sect3>
+
<sect3 id="logicaldecoding-output-plugin-filter-prepare">
<title>Prepare Filter Callback</title>
@@ -1017,6 +1052,26 @@ typedef void (*LogicalDecodeStreamMessageCB) (struct LogicalDecodingContext *ctx
</para>
</sect3>
+ <sect3 id="logicaldecoding-output-plugin-stream-sequence">
+ <title>Stream Sequence Callback</title>
+ <para>
+ The optional <function>stream_sequence_cb</function> callback is called
+ for actions that change a sequence in a block of streamed changes
+ (demarcated by <function>stream_start_cb</function> and
+ <function>stream_stop_cb</function> calls).
+<programlisting>
+typedef void (*LogicalDecodeStreamSequenceCB) (struct LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional,
+ int64 last_value,
+ int64 log_cnt,
+ bool is_called);
+</programlisting>
+ </para>
+ </sect3>
+
<sect3 id="logicaldecoding-output-plugin-stream-truncate">
<title>Stream Truncate Callback</title>
<para>
@@ -1197,8 +1252,9 @@ OutputPluginWrite(ctx, true);
in-progress transactions. There are multiple required streaming callbacks
(<function>stream_start_cb</function>, <function>stream_stop_cb</function>,
<function>stream_abort_cb</function>, <function>stream_commit_cb</function>
- and <function>stream_change_cb</function>) and two optional callbacks
- (<function>stream_message_cb</function> and <function>stream_truncate_cb</function>).
+ and <function>stream_change_cb</function>) and multiple optional callbacks
+ (<function>stream_message_cb</function>, <function>stream_sequence_cb</function>,
+ and <function>stream_truncate_cb</function>).
Also, if streaming of two-phase commands is to be supported, then additional
callbacks must be provided. (See <xref linkend="logicaldecoding-two-phase-commits"/>
for details).
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 27cb6307581..358fb28b5bd 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -378,8 +378,13 @@ fill_seq_with_data(Relation rel, HeapTuple tuple)
/* check the comment above nextval_internal()'s equivalent call. */
if (RelationNeedsWAL(rel))
+ {
GetTopTransactionId();
+ if (XLogLogicalInfoActive())
+ GetCurrentTransactionId();
+ }
+
START_CRIT_SECTION();
MarkBufferDirty(buf);
@@ -399,6 +404,7 @@ fill_seq_with_data(Relation rel, HeapTuple tuple)
XLogRegisterBuffer(0, buf, REGBUF_WILL_INIT);
xlrec.node = rel->rd_node;
+ xlrec.created = true;
XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
XLogRegisterData((char *) tuple->t_data, tuple->t_len);
@@ -764,10 +770,28 @@ nextval_internal(Oid relid, bool check_permissions)
* It's sufficient to ensure the toplevel transaction has an xid, no need
* to assign xids subxacts, that'll already trigger an appropriate wait.
* (Have to do that here, so we're outside the critical section)
+ *
+ * We have to ensure we have a proper XID, which will be included in
+ * the XLOG record by XLogRecordAssemble. Otherwise the first nextval()
+ * in a subxact (without any preceding changes) would get XID 0, and it
+ * would then be impossible to decide which top xact it belongs to.
+ * It'd also trigger assert in DecodeSequence. We only do that with
+ * wal_level=logical, though.
+ *
+ * XXX This might seem unnecessary, because if there's no XID the xact
+ * couldn't have done anything important yet, e.g. it could not have
+ * created a sequence. But that's incorrect, as it ignores subxacts. The
+ * current subtransaction might not have done anything yet (thus no XID),
+ * but an earlier one might have created the sequence.
*/
if (logit && RelationNeedsWAL(seqrel))
+ {
GetTopTransactionId();
+ if (XLogLogicalInfoActive())
+ GetCurrentTransactionId();
+ }
+
/* ready to change the on-disk (or really, in-buffer) tuple */
START_CRIT_SECTION();
@@ -803,6 +827,7 @@ nextval_internal(Oid relid, bool check_permissions)
seq->log_cnt = 0;
xlrec.node = seqrel->rd_node;
+ xlrec.created = false;
XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
XLogRegisterData((char *) seqdatatuple.t_data, seqdatatuple.t_len);
@@ -977,8 +1002,13 @@ do_setval(Oid relid, int64 next, bool iscalled)
/* check the comment above nextval_internal()'s equivalent call. */
if (RelationNeedsWAL(seqrel))
+ {
GetTopTransactionId();
+ if (XLogLogicalInfoActive())
+ GetCurrentTransactionId();
+ }
+
/* ready to change the on-disk (or really, in-buffer) tuple */
START_CRIT_SECTION();
@@ -999,6 +1029,8 @@ do_setval(Oid relid, int64 next, bool iscalled)
XLogRegisterBuffer(0, buf, REGBUF_WILL_INIT);
xlrec.node = seqrel->rd_node;
+ xlrec.created = false;
+
XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
XLogRegisterData((char *) seqdatatuple.t_data, seqdatatuple.t_len);
diff --git a/src/backend/replication/logical/decode.c b/src/backend/replication/logical/decode.c
index 3fb5a92a1a1..f0a00b3ab25 100644
--- a/src/backend/replication/logical/decode.c
+++ b/src/backend/replication/logical/decode.c
@@ -42,6 +42,7 @@
#include "replication/reorderbuffer.h"
#include "replication/snapbuild.h"
#include "storage/standby.h"
+#include "commands/sequence.h"
/* individual record(group)'s handlers */
static void DecodeInsert(LogicalDecodingContext *ctx, XLogRecordBuffer *buf);
@@ -63,6 +64,7 @@ static void DecodePrepare(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
/* common function to decode tuples */
static void DecodeXLogTuple(char *data, Size len, ReorderBufferTupleBuf *tup);
+static void DecodeSeqTuple(char *data, Size len, ReorderBufferTupleBuf *tuple);
/* helper functions for decoding transactions */
static inline bool FilterPrepare(LogicalDecodingContext *ctx,
@@ -1250,3 +1252,125 @@ DecodeTXNNeedSkip(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
(txn_dbid != InvalidOid && txn_dbid != ctx->slot->data.database) ||
ctx->fast_forward || FilterByOrigin(ctx, origin_id));
}
+
+/*
+ * Decode Sequence Tuple
+ */
+static void
+DecodeSeqTuple(char *data, Size len, ReorderBufferTupleBuf *tuple)
+{
+ int datalen = len - sizeof(xl_seq_rec) - SizeofHeapTupleHeader;
+
+ Assert(datalen >= 0);
+
+ tuple->tuple.t_len = datalen + SizeofHeapTupleHeader;
+
+ ItemPointerSetInvalid(&tuple->tuple.t_self);
+
+ tuple->tuple.t_tableOid = InvalidOid;
+
+ memcpy(((char *) tuple->tuple.t_data),
+ data + sizeof(xl_seq_rec),
+ SizeofHeapTupleHeader);
+
+ memcpy(((char *) tuple->tuple.t_data) + SizeofHeapTupleHeader,
+ data + sizeof(xl_seq_rec) + SizeofHeapTupleHeader,
+ datalen);
+}
+
+/*
+ * Handle sequence decode
+ *
+ * Decoding sequences is a bit tricky, because while most sequence actions
+ * are non-transactional (not subject to rollback), some need to be handled
+ * as transactional.
+ *
+ * By default, a sequence increment is non-transactional - we must not queue
+ * it in a transaction as other changes, because the transaction might get
+ * rolled back and we'd discard the increment. The downstream would not be
+ * notified about the increment, which is wrong.
+ *
+ * On the other hand, the sequence may be created in a transaction. In this
+ * case we *should* queue the change like other changes in the transaction,
+ * because we don't want to send increments for an unknown sequence to the
+ * plugin - it might get confused about which sequence they relate to etc.
+ */
+void
+sequence_decode(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
+{
+ SnapBuild *builder = ctx->snapshot_builder;
+ ReorderBufferTupleBuf *tuplebuf;
+ RelFileNode target_node;
+ XLogReaderState *r = buf->record;
+ char *tupledata = NULL;
+ Size tuplelen;
+ Size datalen = 0;
+ TransactionId xid = XLogRecGetXid(r);
+ uint8 info = XLogRecGetInfo(buf->record) & ~XLR_INFO_MASK;
+ xl_seq_rec *xlrec;
+ Snapshot snapshot;
+ RepOriginId origin_id = XLogRecGetOrigin(r);
+ bool transactional;
+
+ /* only decode changes flagged with XLOG_SEQ_LOG */
+ if (info != XLOG_SEQ_LOG)
+ elog(ERROR, "unexpected RM_SEQ_ID record type: %u", info);
+
+ /*
+ * If we don't have snapshot or we are just fast-forwarding, there is no
+ * point in decoding messages.
+ */
+ if (SnapBuildCurrentState(builder) < SNAPBUILD_FULL_SNAPSHOT ||
+ ctx->fast_forward)
+ return;
+
+ /* only interested in our database */
+ XLogRecGetBlockTag(r, 0, &target_node, NULL, NULL);
+ if (target_node.dbNode != ctx->slot->data.database)
+ return;
+
+ /* output plugin doesn't look for this origin, no need to queue */
+ if (FilterByOrigin(ctx, XLogRecGetOrigin(r)))
+ return;
+
+ tupledata = XLogRecGetData(r);
+ datalen = XLogRecGetDataLen(r);
+ tuplelen = datalen - SizeOfHeapHeader - sizeof(xl_seq_rec);
+
+ /* extract the WAL record, with "created" flag */
+ xlrec = (xl_seq_rec *) XLogRecGetData(r);
+
+ /* XXX how could we have sequence change without data? */
+ if (!datalen || !tupledata)
+ return;
+
+ tuplebuf = ReorderBufferGetTupleBuf(ctx->reorder, tuplelen);
+ DecodeSeqTuple(tupledata, datalen, tuplebuf);
+
+ /*
+ * Should we handle the sequence increment as transactional or not?
+ *
+ * If the sequence was created in a still-running transaction, treat
+ * it as transactional and queue the increments. Otherwise it needs
+ * to be treated as non-transactional, in which case we send it to
+ * the plugin right away.
+ */
+ transactional = ReorderBufferSequenceIsTransactional(ctx->reorder,
+ target_node,
+ xlrec->created);
+
+ /* Skip the change if already processed (per the snapshot). */
+ if (transactional &&
+ !SnapBuildProcessChange(builder, xid, buf->origptr))
+ return;
+ else if (!transactional &&
+ (SnapBuildCurrentState(builder) != SNAPBUILD_CONSISTENT ||
+ SnapBuildXactNeedsSkip(builder, buf->origptr)))
+ return;
+
+ /* Queue the increment (or send immediately if not transactional). */
+ snapshot = SnapBuildGetOrBuildSnapshot(builder, xid);
+ ReorderBufferQueueSequence(ctx->reorder, xid, snapshot, buf->endptr,
+ origin_id, target_node, transactional,
+ xlrec->created, tuplebuf);
+}
diff --git a/src/backend/replication/logical/logical.c b/src/backend/replication/logical/logical.c
index 9bc3a2d8deb..934aa13f2d3 100644
--- a/src/backend/replication/logical/logical.c
+++ b/src/backend/replication/logical/logical.c
@@ -73,6 +73,10 @@ static void truncate_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
static void message_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
XLogRecPtr message_lsn, bool transactional,
const char *prefix, Size message_size, const char *message);
+static void sequence_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called);
/* streaming callbacks */
static void stream_start_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
@@ -90,6 +94,10 @@ static void stream_change_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn
static void stream_message_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
XLogRecPtr message_lsn, bool transactional,
const char *prefix, Size message_size, const char *message);
+static void stream_sequence_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called);
static void stream_truncate_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
int nrelations, Relation relations[], ReorderBufferChange *change);
@@ -218,6 +226,7 @@ StartupDecodingContext(List *output_plugin_options,
ctx->reorder->apply_truncate = truncate_cb_wrapper;
ctx->reorder->commit = commit_cb_wrapper;
ctx->reorder->message = message_cb_wrapper;
+ ctx->reorder->sequence = sequence_cb_wrapper;
/*
* To support streaming, we require start/stop/abort/commit/change
@@ -234,6 +243,7 @@ StartupDecodingContext(List *output_plugin_options,
(ctx->callbacks.stream_commit_cb != NULL) ||
(ctx->callbacks.stream_change_cb != NULL) ||
(ctx->callbacks.stream_message_cb != NULL) ||
+ (ctx->callbacks.stream_sequence_cb != NULL) ||
(ctx->callbacks.stream_truncate_cb != NULL);
/*
@@ -251,6 +261,7 @@ StartupDecodingContext(List *output_plugin_options,
ctx->reorder->stream_commit = stream_commit_cb_wrapper;
ctx->reorder->stream_change = stream_change_cb_wrapper;
ctx->reorder->stream_message = stream_message_cb_wrapper;
+ ctx->reorder->stream_sequence = stream_sequence_cb_wrapper;
ctx->reorder->stream_truncate = stream_truncate_cb_wrapper;
@@ -1203,6 +1214,42 @@ message_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
error_context_stack = errcallback.previous;
}
+static void
+sequence_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel, bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ LogicalDecodingContext *ctx = cache->private_data;
+ LogicalErrorCallbackState state;
+ ErrorContextCallback errcallback;
+
+ Assert(!ctx->fast_forward);
+
+ if (ctx->callbacks.sequence_cb == NULL)
+ return;
+
+ /* Push callback + info on the error context stack */
+ state.ctx = ctx;
+ state.callback_name = "sequence";
+ state.report_location = sequence_lsn;
+ errcallback.callback = output_plugin_error_callback;
+ errcallback.arg = (void *) &state;
+ errcallback.previous = error_context_stack;
+ error_context_stack = &errcallback;
+
+ /* set output state */
+ ctx->accept_writes = true;
+ ctx->write_xid = txn != NULL ? txn->xid : InvalidTransactionId;
+ ctx->write_location = sequence_lsn;
+
+ /* do the actual work: call callback */
+ ctx->callbacks.sequence_cb(ctx, txn, sequence_lsn, rel, transactional,
+ last_value, log_cnt, is_called);
+
+ /* Pop the error context stack */
+ error_context_stack = errcallback.previous;
+}
+
static void
stream_start_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
XLogRecPtr first_lsn)
@@ -1508,6 +1555,47 @@ stream_message_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
error_context_stack = errcallback.previous;
}
+static void
+stream_sequence_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ LogicalDecodingContext *ctx = cache->private_data;
+ LogicalErrorCallbackState state;
+ ErrorContextCallback errcallback;
+
+ Assert(!ctx->fast_forward);
+
+ /* We're only supposed to call this when streaming is supported. */
+ Assert(ctx->streaming);
+
+ /* this callback is optional */
+ if (ctx->callbacks.stream_sequence_cb == NULL)
+ return;
+
+ /* Push callback + info on the error context stack */
+ state.ctx = ctx;
+ state.callback_name = "stream_sequence";
+ state.report_location = sequence_lsn;
+ errcallback.callback = output_plugin_error_callback;
+ errcallback.arg = (void *) &state;
+ errcallback.previous = error_context_stack;
+ error_context_stack = &errcallback;
+
+ /* set output state */
+ ctx->accept_writes = true;
+ ctx->write_xid = txn != NULL ? txn->xid : InvalidTransactionId;
+ ctx->write_location = sequence_lsn;
+
+ /* do the actual work: call callback */
+ ctx->callbacks.stream_sequence_cb(ctx, txn, sequence_lsn, rel, transactional,
+ last_value, log_cnt, is_called);
+
+ /* Pop the error context stack */
+ error_context_stack = errcallback.previous;
+}
+
static void
stream_truncate_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
int nrelations, Relation relations[],
diff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c
index 19b2ba2000c..50fd9dc901d 100644
--- a/src/backend/replication/logical/reorderbuffer.c
+++ b/src/backend/replication/logical/reorderbuffer.c
@@ -77,6 +77,35 @@
* a bit more memory to the oldest subtransactions, because it's likely
* they are the source for the next sequence of changes.
*
+ * When decoding sequences, we differentiate between sequences created
+ * in a (running) transaction and sequences created in other (already
+ * committed) transactions. Changes for sequences created in the same
+ * top-level transaction are treated as "transactional", i.e. just like
+ * any other change from that transaction (and discarded in case of a
+ * rollback). Changes for sequences created earlier are treated as
+ * non-transactional - they are processed immediately, as if performed
+ * outside any transaction (and thus not rolled back).
+ *
+ * This mixed behavior is necessary - sequences are non-transactional
+ * (e.g. ROLLBACK does not undo the sequence increments). But for new
+ * sequences, we need to handle them in a transactional way, because if
+ * we ever get some DDL support, the sequence won't exist until the
+ * transaction gets applied. So we need to ensure the increments don't
+ * happen until the sequence gets created.
+ *
+ * To differentiate which sequences are "old" and which were created
+ * in a still-running transaction, we track sequences created in running
+ * transactions in a hash table. Each sequence is identified by its
+ * relfilenode, and at transaction commit we discard all sequences
+ * created in that particular transaction. For each sequence we track
+ * the XID of the (sub)transaction that created it, and we clean up the
+ * sequences for each subtransaction when it completes (commit/rollback).
+ *
+ * We don't use the XID to check if it's the same top-level transaction.
+ * It's enough to know it was created in an in-progress transaction,
+ * and we know it must be the current one because otherwise it wouldn't
+ * see the sequence object.
+ *
* -------------------------------------------------------------------------
*/
#include "postgres.h"
@@ -91,6 +120,7 @@
#include "access/xact.h"
#include "access/xlog_internal.h"
#include "catalog/catalog.h"
+#include "commands/sequence.h"
#include "lib/binaryheap.h"
#include "miscadmin.h"
#include "pgstat.h"
@@ -116,6 +146,13 @@ typedef struct ReorderBufferTXNByIdEnt
ReorderBufferTXN *txn;
} ReorderBufferTXNByIdEnt;
+/* entry for hash table we use to track sequences created in running xacts */
+typedef struct ReorderBufferSequenceEnt
+{
+ RelFileNode rnode;
+ TransactionId xid;
+} ReorderBufferSequenceEnt;
+
/* data structures for (relfilenode, ctid) => (cmin, cmax) mapping */
typedef struct ReorderBufferTupleCidKey
{
@@ -339,6 +376,14 @@ ReorderBufferAllocate(void)
buffer->by_txn = hash_create("ReorderBufferByXid", 1000, &hash_ctl,
HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+ /* hash table of sequences, mapping relfilenode to XID of transaction */
+ hash_ctl.keysize = sizeof(RelFileNode);
+ hash_ctl.entrysize = sizeof(ReorderBufferSequenceEnt);
+ hash_ctl.hcxt = buffer->context;
+
+ buffer->sequences = hash_create("ReorderBufferSequenceHash", 1000, &hash_ctl,
+ HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+
buffer->by_txn_last_xid = InvalidTransactionId;
buffer->by_txn_last_txn = NULL;
@@ -525,6 +570,13 @@ ReorderBufferReturnChange(ReorderBuffer *rb, ReorderBufferChange *change,
change->data.truncate.relids = NULL;
}
break;
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ if (change->data.sequence.tuple)
+ {
+ ReorderBufferReturnTupleBuf(rb, change->data.sequence.tuple);
+ change->data.sequence.tuple = NULL;
+ }
+ break;
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT:
case REORDER_BUFFER_CHANGE_INTERNAL_COMMAND_ID:
@@ -859,6 +911,230 @@ ReorderBufferQueueMessage(ReorderBuffer *rb, TransactionId xid,
}
}
+/*
+ * Treat the sequence increment as transactional?
+ *
+ * The hash table tracks all sequences created in in-progress transactions,
+ * so we simply do a lookup (the sequence is identified by relfilenode). If
+ * we find a match, the increment should be handled as transactional.
+ */
+bool
+ReorderBufferSequenceIsTransactional(ReorderBuffer *rb,
+ RelFileNode rnode, bool created)
+{
+ bool found = false;
+
+ if (created)
+ return true;
+
+ hash_search(rb->sequences,
+ (void *) &rnode,
+ HASH_FIND,
+ &found);
+
+ return found;
+}
+
+/*
+ * Clean up sequences created in in-progress transactions.
+ *
+ * There's no way to search by XID, so we simply do a seqscan of all
+ * the entries in the hash table. Hopefully there are only a couple
+ * entries in most cases - people generally don't create many new
+ * sequences over and over.
+ */
+static void
+ReorderBufferSequenceCleanup(ReorderBuffer *rb, TransactionId xid)
+{
+ HASH_SEQ_STATUS scan_status;
+ ReorderBufferSequenceEnt *ent;
+
+ hash_seq_init(&scan_status, rb->sequences);
+ while ((ent = (ReorderBufferSequenceEnt *) hash_seq_search(&scan_status)) != NULL)
+ {
+ /* skip sequences not from this transaction */
+ if (ent->xid != xid)
+ continue;
+
+ (void) hash_search(rb->sequences,
+ (void *) &(ent->rnode),
+ HASH_REMOVE, NULL);
+ }
+}
+
+/*
+ * A transactional sequence increment is queued to be processed upon commit
+ * and a non-transactional increment gets processed immediately.
+ *
+ * A sequence update may be either transactional or non-transactional. If
+ * the sequence was created in a still-running transaction, treat the update
+ * as transactional and queue the change in that transaction. Otherwise treat
+ * it as non-transactional, so that we don't lose the increment on rollback.
+ */
+void
+ReorderBufferQueueSequence(ReorderBuffer *rb, TransactionId xid,
+ Snapshot snapshot, XLogRecPtr lsn, RepOriginId origin_id,
+ RelFileNode rnode, bool transactional, bool created,
+ ReorderBufferTupleBuf *tuplebuf)
+{
+ /*
+ * Change needs to be handled as transactional, because the sequence was
+ * created in a transaction that is still running. In that case all the
+ * changes need to be queued in that transaction; we must not send them
+ * downstream until the transaction commits.
+ *
+ * There's a bit of trouble with subtransactions - we can't queue the
+ * change into the current subxact, because it might be rolled back and
+ * we'd lose the increment. We need to queue it into the (sub)xact that
+ * created the sequence, which is why we track the XID in the hash table.
+ */
+ if (transactional)
+ {
+ MemoryContext oldcontext;
+ ReorderBufferChange *change;
+
+ /* lookup sequence by relfilenode */
+ ReorderBufferSequenceEnt *ent;
+ bool found;
+
+ /* transactional changes require a transaction */
+ Assert(xid != InvalidTransactionId);
+
+ /* search the lookup table (insert a new entry if this is the "create" increment) */
+ ent = hash_search(rb->sequences,
+ (void *) &rnode,
+ created ? HASH_ENTER : HASH_FIND,
+ &found);
+
+ /*
+ * If this is the "create" increment, we must not have found any
+ * pre-existing entry in the hash table (i.e. there must not be
+ * any conflicting sequence).
+ */
+ Assert(!(created && found));
+
+ /* But we must have either created or found an existing entry. */
+ Assert(created || found);
+
+ /*
+ * When creating the sequence, remember the XID of the transaction
+ * that created it.
+ */
+ if (created)
+ ent->xid = xid;
+
+ /* XXX Maybe check that we're still in the same top-level xact? */
+
+ /* OK, allocate and queue the change */
+ oldcontext = MemoryContextSwitchTo(rb->context);
+
+ change = ReorderBufferGetChange(rb);
+
+ change->action = REORDER_BUFFER_CHANGE_SEQUENCE;
+ change->origin_id = origin_id;
+
+ memcpy(&change->data.sequence.relnode, &rnode, sizeof(RelFileNode));
+
+ change->data.sequence.tuple = tuplebuf;
+
+ /* add it to the same subxact that created the sequence */
+ ReorderBufferQueueChange(rb, ent->xid, lsn, change, false);
+
+ MemoryContextSwitchTo(oldcontext);
+ }
+ else
+ {
+ /*
+ * This increment is for a sequence that was not created in any
+ * running transaction, so we treat it as non-transactional and
+ * just send it to the output plugin directly.
+ */
+ ReorderBufferTXN *txn = NULL;
+ volatile Snapshot snapshot_now = snapshot;
+ bool using_subtxn;
+
+#ifdef USE_ASSERT_CHECKING
+ /* All "creates" have to be handled as transactional. */
+ Assert(!created);
+
+ /* Make sure the sequence is not in the hash table. */
+ {
+ bool found;
+ hash_search(rb->sequences,
+ (void *) &rnode,
+ HASH_FIND, &found);
+ Assert(!found);
+ }
+#endif
+
+ if (xid != InvalidTransactionId)
+ txn = ReorderBufferTXNByXid(rb, xid, true, NULL, lsn, true);
+
+ /* setup snapshot to allow catalog access */
+ SetupHistoricSnapshot(snapshot_now, NULL);
+
+ /*
+ * Decoding needs access to syscaches et al., which in turn use
+ * heavyweight locks and such. Thus we need to have enough state around to
+ * keep track of those. The easiest way is to simply use a transaction
+ * internally. That also allows us to easily enforce that nothing writes
+ * to the database by checking for xid assignments.
+ *
+ * When we're called via the SQL SRF there's already a transaction
+ * started, so start an explicit subtransaction there.
+ */
+ using_subtxn = IsTransactionOrTransactionBlock();
+
+ PG_TRY();
+ {
+ Relation relation;
+ HeapTuple tuple;
+ Form_pg_sequence_data seq;
+ Oid reloid;
+
+ if (using_subtxn)
+ BeginInternalSubTransaction("sequence");
+ else
+ StartTransactionCommand();
+
+ reloid = RelidByRelfilenode(rnode.spcNode, rnode.relNode);
+
+ if (reloid == InvalidOid)
+ elog(ERROR, "could not map filenode \"%s\" to relation OID",
+ relpathperm(rnode,
+ MAIN_FORKNUM));
+
+ relation = RelationIdGetRelation(reloid);
+ tuple = &tuplebuf->tuple;
+ seq = (Form_pg_sequence_data) GETSTRUCT(tuple);
+
+ rb->sequence(rb, txn, lsn, relation, transactional,
+ seq->last_value, seq->log_cnt, seq->is_called);
+
+ RelationClose(relation);
+
+ TeardownHistoricSnapshot(false);
+
+ AbortCurrentTransaction();
+
+ if (using_subtxn)
+ RollbackAndReleaseCurrentSubTransaction();
+ }
+ PG_CATCH();
+ {
+ TeardownHistoricSnapshot(true);
+
+ AbortCurrentTransaction();
+
+ if (using_subtxn)
+ RollbackAndReleaseCurrentSubTransaction();
+
+ PG_RE_THROW();
+ }
+ PG_END_TRY();
+ }
+}
+
/*
* AssertTXNLsnOrder
* Verify LSN ordering of transaction lists in the reorderbuffer
@@ -1535,6 +1811,9 @@ ReorderBufferCleanupTXN(ReorderBuffer *rb, ReorderBufferTXN *txn)
&found);
Assert(found);
+ /* Remove sequences created in this transaction (if any). */
+ ReorderBufferSequenceCleanup(rb, txn->xid);
+
/* remove entries spilled to disk */
if (rbtxn_is_serialized(txn))
ReorderBufferRestoreCleanup(rb, txn);
@@ -1950,6 +2229,29 @@ ReorderBufferApplyMessage(ReorderBuffer *rb, ReorderBufferTXN *txn,
change->data.msg.message);
}
+/*
+ * Helper function for ReorderBufferProcessTXN for applying sequences.
+ */
+static inline void
+ReorderBufferApplySequence(ReorderBuffer *rb, ReorderBufferTXN *txn,
+ Relation relation, ReorderBufferChange *change,
+ bool streaming)
+{
+ HeapTuple tuple;
+ Form_pg_sequence_data seq;
+
+ tuple = &change->data.sequence.tuple->tuple;
+ seq = (Form_pg_sequence_data) GETSTRUCT(tuple);
+
+ /* Only ever called from ReorderBufferProcessTXN, so transactional. */
+ if (streaming)
+ rb->stream_sequence(rb, txn, change->lsn, relation, true,
+ seq->last_value, seq->log_cnt, seq->is_called);
+ else
+ rb->sequence(rb, txn, change->lsn, relation, true,
+ seq->last_value, seq->log_cnt, seq->is_called);
+}
+
/*
* Function to store the command id and snapshot at the end of the current
* stream so that we can reuse the same while sending the next stream.
@@ -2392,6 +2694,31 @@ ReorderBufferProcessTXN(ReorderBuffer *rb, ReorderBufferTXN *txn,
case REORDER_BUFFER_CHANGE_INTERNAL_TUPLECID:
elog(ERROR, "tuplecid value in changequeue");
break;
+
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ Assert(snapshot_now);
+
+ reloid = RelidByRelfilenode(change->data.sequence.relnode.spcNode,
+ change->data.sequence.relnode.relNode);
+
+ if (reloid == InvalidOid)
+ elog(ERROR, "could not map filenode \"%s\" to relation OID",
+ relpathperm(change->data.sequence.relnode,
+ MAIN_FORKNUM));
+
+ relation = RelationIdGetRelation(reloid);
+
+ if (!RelationIsValid(relation))
+ elog(ERROR, "could not open relation with OID %u (for filenode \"%s\")",
+ reloid,
+ relpathperm(change->data.sequence.relnode,
+ MAIN_FORKNUM));
+
+ if (RelationIsLogicallyLogged(relation))
+ ReorderBufferApplySequence(rb, txn, relation, change, streaming);
+
+ RelationClose(relation);
+ break;
}
}
@@ -3776,6 +4103,39 @@ ReorderBufferSerializeChange(ReorderBuffer *rb, ReorderBufferTXN *txn,
memcpy(data, change->data.truncate.relids, size);
data += size;
+ break;
+ }
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ {
+ char *data;
+ ReorderBufferTupleBuf *tup;
+ Size len = 0;
+
+ tup = change->data.sequence.tuple;
+
+ if (tup)
+ {
+ sz += sizeof(HeapTupleData);
+ len = tup->tuple.t_len;
+ sz += len;
+ }
+
+ /* make sure we have enough space */
+ ReorderBufferSerializeReserve(rb, sz);
+
+ data = ((char *) rb->outbuf) + sizeof(ReorderBufferDiskChange);
+ /* might have been reallocated above */
+ ondisk = (ReorderBufferDiskChange *) rb->outbuf;
+
+ if (len)
+ {
+ memcpy(data, &tup->tuple, sizeof(HeapTupleData));
+ data += sizeof(HeapTupleData);
+
+ memcpy(data, tup->tuple.t_data, len);
+ data += len;
+ }
+
break;
}
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
@@ -4040,6 +4400,22 @@ ReorderBufferChangeSize(ReorderBufferChange *change)
{
sz += sizeof(Oid) * change->data.truncate.nrelids;
+ break;
+ }
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ {
+ ReorderBufferTupleBuf *tup;
+ Size len = 0;
+
+ tup = change->data.sequence.tuple;
+
+ if (tup)
+ {
+ sz += sizeof(HeapTupleData);
+ len = tup->tuple.t_len;
+ sz += len;
+ }
+
break;
}
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
@@ -4341,6 +4717,30 @@ ReorderBufferRestoreChange(ReorderBuffer *rb, ReorderBufferTXN *txn,
break;
}
+
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ if (change->data.sequence.tuple)
+ {
+ uint32 tuplelen = ((HeapTuple) data)->t_len;
+
+ change->data.sequence.tuple =
+ ReorderBufferGetTupleBuf(rb, tuplelen - SizeofHeapTupleHeader);
+
+ /* restore ->tuple */
+ memcpy(&change->data.sequence.tuple->tuple, data,
+ sizeof(HeapTupleData));
+ data += sizeof(HeapTupleData);
+
+ /* reset t_data pointer into the new tuplebuf */
+ change->data.sequence.tuple->tuple.t_data =
+ ReorderBufferTupleBufData(change->data.sequence.tuple);
+
+ /* restore tuple data itself */
+ memcpy(change->data.sequence.tuple->tuple.t_data, data, tuplelen);
+ data += tuplelen;
+ }
+ break;
+
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT:
case REORDER_BUFFER_CHANGE_INTERNAL_COMMAND_ID:
diff --git a/src/include/access/rmgrlist.h b/src/include/access/rmgrlist.h
index 9a74721c97c..cf8b6d48193 100644
--- a/src/include/access/rmgrlist.h
+++ b/src/include/access/rmgrlist.h
@@ -40,7 +40,7 @@ PG_RMGR(RM_BTREE_ID, "Btree", btree_redo, btree_desc, btree_identify, btree_xlog
PG_RMGR(RM_HASH_ID, "Hash", hash_redo, hash_desc, hash_identify, NULL, NULL, hash_mask, NULL)
PG_RMGR(RM_GIN_ID, "Gin", gin_redo, gin_desc, gin_identify, gin_xlog_startup, gin_xlog_cleanup, gin_mask, NULL)
PG_RMGR(RM_GIST_ID, "Gist", gist_redo, gist_desc, gist_identify, gist_xlog_startup, gist_xlog_cleanup, gist_mask, NULL)
-PG_RMGR(RM_SEQ_ID, "Sequence", seq_redo, seq_desc, seq_identify, NULL, NULL, seq_mask, NULL)
+PG_RMGR(RM_SEQ_ID, "Sequence", seq_redo, seq_desc, seq_identify, NULL, NULL, seq_mask, sequence_decode)
PG_RMGR(RM_SPGIST_ID, "SPGist", spg_redo, spg_desc, spg_identify, spg_xlog_startup, spg_xlog_cleanup, spg_mask, NULL)
PG_RMGR(RM_BRIN_ID, "BRIN", brin_redo, brin_desc, brin_identify, NULL, NULL, brin_mask, NULL)
PG_RMGR(RM_COMMIT_TS_ID, "CommitTs", commit_ts_redo, commit_ts_desc, commit_ts_identify, NULL, NULL, NULL, NULL)
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index f7542f2bccb..b9015a6f710 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -48,6 +48,7 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
typedef struct xl_seq_rec
{
RelFileNode node;
+ bool created; /* is this a CREATE SEQUENCE */
/* SEQUENCE TUPLE DATA FOLLOWS AT THE END */
} xl_seq_rec;
diff --git a/src/include/replication/decode.h b/src/include/replication/decode.h
index a33c2a718a7..8e07bb7409a 100644
--- a/src/include/replication/decode.h
+++ b/src/include/replication/decode.h
@@ -27,6 +27,7 @@ extern void heap2_decode(LogicalDecodingContext *ctx, XLogRecordBuffer *buf);
extern void xact_decode(LogicalDecodingContext *ctx, XLogRecordBuffer *buf);
extern void standby_decode(LogicalDecodingContext *ctx, XLogRecordBuffer *buf);
extern void logicalmsg_decode(LogicalDecodingContext *ctx, XLogRecordBuffer *buf);
+extern void sequence_decode(LogicalDecodingContext *ctx, XLogRecordBuffer *buf);
extern void LogicalDecodingProcessRecord(LogicalDecodingContext *ctx,
XLogReaderState *record);
diff --git a/src/include/replication/output_plugin.h b/src/include/replication/output_plugin.h
index 41157fda7cc..a16bebf76ca 100644
--- a/src/include/replication/output_plugin.h
+++ b/src/include/replication/output_plugin.h
@@ -88,6 +88,18 @@ typedef void (*LogicalDecodeMessageCB) (struct LogicalDecodingContext *ctx,
Size message_size,
const char *message);
+/*
+ * Called for logical decoding of sequence changes.
+ */
+typedef void (*LogicalDecodeSequenceCB) (struct LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional,
+ int64 last_value,
+ int64 log_cnt,
+ bool is_called);
+
/*
* Filter changes by origin.
*/
@@ -199,6 +211,19 @@ typedef void (*LogicalDecodeStreamMessageCB) (struct LogicalDecodingContext *ctx
Size message_size,
const char *message);
+/*
+ * Callback for streaming sequence changes from in-progress
+ * transactions.
+ */
+typedef void (*LogicalDecodeStreamSequenceCB) (struct LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional,
+ int64 last_value,
+ int64 log_cnt,
+ bool is_called);
+
/*
* Callback for streaming truncates from in-progress transactions.
*/
@@ -219,6 +244,7 @@ typedef struct OutputPluginCallbacks
LogicalDecodeTruncateCB truncate_cb;
LogicalDecodeCommitCB commit_cb;
LogicalDecodeMessageCB message_cb;
+ LogicalDecodeSequenceCB sequence_cb;
LogicalDecodeFilterByOriginCB filter_by_origin_cb;
LogicalDecodeShutdownCB shutdown_cb;
@@ -237,6 +263,7 @@ typedef struct OutputPluginCallbacks
LogicalDecodeStreamCommitCB stream_commit_cb;
LogicalDecodeStreamChangeCB stream_change_cb;
LogicalDecodeStreamMessageCB stream_message_cb;
+ LogicalDecodeStreamSequenceCB stream_sequence_cb;
LogicalDecodeStreamTruncateCB stream_truncate_cb;
} OutputPluginCallbacks;
diff --git a/src/include/replication/reorderbuffer.h b/src/include/replication/reorderbuffer.h
index aa0a73382f6..859424bbd9b 100644
--- a/src/include/replication/reorderbuffer.h
+++ b/src/include/replication/reorderbuffer.h
@@ -64,7 +64,8 @@ enum ReorderBufferChangeType
REORDER_BUFFER_CHANGE_INTERNAL_SPEC_INSERT,
REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM,
REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT,
- REORDER_BUFFER_CHANGE_TRUNCATE
+ REORDER_BUFFER_CHANGE_TRUNCATE,
+ REORDER_BUFFER_CHANGE_SEQUENCE
};
/* forward declaration */
@@ -158,6 +159,13 @@ typedef struct ReorderBufferChange
uint32 ninvalidations; /* Number of messages */
SharedInvalidationMessage *invalidations; /* invalidation message */
} inval;
+
+ /* Context data for Sequence changes */
+ struct
+ {
+ RelFileNode relnode;
+ ReorderBufferTupleBuf *tuple;
+ } sequence;
} data;
/*
@@ -430,6 +438,15 @@ typedef void (*ReorderBufferMessageCB) (ReorderBuffer *rb,
const char *prefix, Size sz,
const char *message);
+/* sequence callback signature */
+typedef void (*ReorderBufferSequenceCB) (ReorderBuffer *rb,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional,
+ int64 last_value, int64 log_cnt,
+ bool is_called);
+
/* begin prepare callback signature */
typedef void (*ReorderBufferBeginPrepareCB) (ReorderBuffer *rb,
ReorderBufferTXN *txn);
@@ -496,6 +513,15 @@ typedef void (*ReorderBufferStreamMessageCB) (
const char *prefix, Size sz,
const char *message);
+/* stream sequence callback signature */
+typedef void (*ReorderBufferStreamSequenceCB) (ReorderBuffer *rb,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional,
+ int64 last_value, int64 log_cnt,
+ bool is_called);
+
/* stream truncate callback signature */
typedef void (*ReorderBufferStreamTruncateCB) (
ReorderBuffer *rb,
@@ -511,6 +537,12 @@ struct ReorderBuffer
*/
HTAB *by_txn;
+ /*
+ * relfilenode => XID lookup table for sequences created in a transaction
+ * (also includes altered sequences, which get assigned a new relfilenode)
+ */
+ HTAB *sequences;
+
/*
* Transactions that could be a toplevel xact, ordered by LSN of the first
* record bearing that xid.
@@ -541,6 +573,7 @@ struct ReorderBuffer
ReorderBufferApplyTruncateCB apply_truncate;
ReorderBufferCommitCB commit;
ReorderBufferMessageCB message;
+ ReorderBufferSequenceCB sequence;
/*
* Callbacks to be called when streaming a transaction at prepare time.
@@ -560,6 +593,7 @@ struct ReorderBuffer
ReorderBufferStreamCommitCB stream_commit;
ReorderBufferStreamChangeCB stream_change;
ReorderBufferStreamMessageCB stream_message;
+ ReorderBufferStreamSequenceCB stream_sequence;
ReorderBufferStreamTruncateCB stream_truncate;
/*
@@ -635,6 +669,10 @@ void ReorderBufferQueueChange(ReorderBuffer *, TransactionId,
void ReorderBufferQueueMessage(ReorderBuffer *, TransactionId, Snapshot snapshot, XLogRecPtr lsn,
bool transactional, const char *prefix,
Size message_size, const char *message);
+void ReorderBufferQueueSequence(ReorderBuffer *rb, TransactionId xid,
+ Snapshot snapshot, XLogRecPtr lsn, RepOriginId origin_id,
+ RelFileNode rnode, bool transactional, bool created,
+ ReorderBufferTupleBuf *tuplebuf);
void ReorderBufferCommit(ReorderBuffer *, TransactionId,
XLogRecPtr commit_lsn, XLogRecPtr end_lsn,
TimestampTz commit_time, RepOriginId origin_id, XLogRecPtr origin_lsn);
@@ -682,4 +720,7 @@ void ReorderBufferSetRestartPoint(ReorderBuffer *, XLogRecPtr ptr);
void StartupReorderBuffer(void);
+bool ReorderBufferSequenceIsTransactional(ReorderBuffer *rb,
+ RelFileNode rnode, bool created);
+
#endif
--
2.31.1
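
To illustrate what the new callback looks like from an output plugin's point
of view, here is a minimal sketch (illustrative only - the function name and
output format below are made up for this example; the actual test_decoding
implementation is in the 0002 patch that follows):

#include "postgres.h"

#include "lib/stringinfo.h"
#include "replication/logical.h"
#include "replication/output_plugin.h"
#include "utils/builtins.h"
#include "utils/lsyscache.h"
#include "utils/rel.h"

/*
 * Print one decoded sequence state. Works for both transactional and
 * non-transactional increments; txn may be NULL for the latter.
 */
static void
pg_decode_sequence_example(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
                           XLogRecPtr sequence_lsn, Relation rel,
                           bool transactional,
                           int64 last_value, int64 log_cnt, bool is_called)
{
    OutputPluginPrepareWrite(ctx, true);
    appendStringInfo(ctx->out,
                     "sequence %s: transactional:%d last_value: " INT64_FORMAT
                     " log_cnt: " INT64_FORMAT " is_called:%d",
                     quote_qualified_identifier(get_namespace_name(RelationGetNamespace(rel)),
                                                RelationGetRelationName(rel)),
                     transactional, last_value, log_cnt, is_called);
    OutputPluginWrite(ctx, true);
}

void
_PG_output_plugin_init(OutputPluginCallbacks *cb)
{
    /* begin_cb, change_cb, commit_cb etc. omitted for brevity */
    cb->sequence_cb = pg_decode_sequence_example;
    cb->stream_sequence_cb = pg_decode_sequence_example;    /* same signature */
}

Note that the stream_sequence_cb wrapper in the 0001 patch treats the callback
as optional and simply returns when it's not set, so plugins that don't care
about sequences don't have to implement it.
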
Attachment: 0002-Add-support-for-decoding-sequences-to-test_-20220128.patch (text/x-patch)
From 4a510686464d8f4b3fa54576ff6a1ddbccd3bef8 Mon Sep 17 00:00:00 2001
From: Tomas Vondra <tomas.vondra@postgresql.org>
Date: Fri, 24 Sep 2021 00:42:04 +0200
Subject: [PATCH 2/3] Add support for decoding sequences to test_decoding
---
contrib/test_decoding/Makefile | 3 +-
contrib/test_decoding/expected/ddl.out | 12 +-
.../expected/decoding_in_xact.out | 2 +-
.../expected/decoding_into_rel.out | 10 +-
contrib/test_decoding/expected/mxact.out | 8 +-
.../test_decoding/expected/ondisk_startup.out | 4 +-
contrib/test_decoding/expected/replorigin.out | 4 +-
contrib/test_decoding/expected/rewrite.out | 4 +-
contrib/test_decoding/expected/sequence.out | 327 ++++++++++++++++++
contrib/test_decoding/expected/slot.out | 2 +-
contrib/test_decoding/expected/toast.out | 10 +-
contrib/test_decoding/expected/truncate.out | 2 +-
contrib/test_decoding/specs/mxact.spec | 2 +-
.../test_decoding/specs/ondisk_startup.spec | 2 +-
contrib/test_decoding/sql/ddl.sql | 12 +-
.../test_decoding/sql/decoding_in_xact.sql | 2 +-
.../test_decoding/sql/decoding_into_rel.sql | 10 +-
contrib/test_decoding/sql/replorigin.sql | 4 +-
contrib/test_decoding/sql/rewrite.sql | 4 +-
contrib/test_decoding/sql/sequence.sql | 119 +++++++
contrib/test_decoding/sql/slot.sql | 2 +-
contrib/test_decoding/sql/toast.sql | 10 +-
contrib/test_decoding/sql/truncate.sql | 2 +-
contrib/test_decoding/test_decoding.c | 67 ++++
24 files changed, 569 insertions(+), 55 deletions(-)
create mode 100644 contrib/test_decoding/expected/sequence.out
create mode 100644 contrib/test_decoding/sql/sequence.sql
diff --git a/contrib/test_decoding/Makefile b/contrib/test_decoding/Makefile
index 9a31e0b8795..56ddc3abaeb 100644
--- a/contrib/test_decoding/Makefile
+++ b/contrib/test_decoding/Makefile
@@ -5,7 +5,8 @@ PGFILEDESC = "test_decoding - example of a logical decoding output plugin"
REGRESS = ddl xact rewrite toast permissions decoding_in_xact \
decoding_into_rel binary prepared replorigin time messages \
- spill slot truncate stream stats twophase twophase_stream
+ spill slot truncate stream stats twophase twophase_stream \
+ sequence
ISOLATION = mxact delayed_startup ondisk_startup concurrent_ddl_dml \
oldest_xmin snapshot_transfer subxact_without_top concurrent_stream \
twophase_snapshot
diff --git a/contrib/test_decoding/expected/ddl.out b/contrib/test_decoding/expected/ddl.out
index 4ff0044c787..82898201ca0 100644
--- a/contrib/test_decoding/expected/ddl.out
+++ b/contrib/test_decoding/expected/ddl.out
@@ -89,7 +89,7 @@ COMMIT;
ALTER TABLE replication_example RENAME COLUMN text TO somenum;
INSERT INTO replication_example(somedata, somenum) VALUES (4, 1);
-- collect all changes
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
data
---------------------------------------------------------------------------------------------------------------------------
BEGIN
@@ -196,7 +196,7 @@ CREATE TABLE tr_unique(id2 serial unique NOT NULL, data int);
INSERT INTO tr_unique(data) VALUES(10);
ALTER TABLE tr_unique RENAME TO tr_pkey;
ALTER TABLE tr_pkey ADD COLUMN id serial primary key;
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-rewrites', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-rewrites', '1', 'include-sequences', '0');
data
-----------------------------------------------------------------------------
BEGIN
@@ -242,7 +242,7 @@ INSERT INTO tr_oddlength VALUES('ab', 'foo');
COMMIT;
/* display results, but hide most of the output */
SELECT count(*), min(data), max(data)
-FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1')
+FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0')
GROUP BY substring(data, 1, 24)
ORDER BY 1,2;
count | min | max
@@ -309,7 +309,7 @@ RELEASE SAVEPOINT c;
INSERT INTO tr_sub(path) VALUES ('1-top-2-#1');
RELEASE SAVEPOINT b;
COMMIT;
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
data
----------------------------------------------------------------------
BEGIN
@@ -496,7 +496,7 @@ Options: user_catalog_table=false
INSERT INTO replication_metadata(relation, options)
VALUES ('zaphod', NULL);
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
data
------------------------------------------------------------------------------------------------------------------------------------
BEGIN
@@ -613,7 +613,7 @@ INSERT INTO toasttable(toasted_col2) SELECT repeat(string_agg(to_char(g.i, 'FM00
UPDATE toasttable
SET toasted_col1 = (SELECT string_agg(g.i::text, '') FROM generate_series(1, 2000) g(i))
WHERE id = 1;
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
data
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
BEGIN
diff --git a/contrib/test_decoding/expected/decoding_in_xact.out b/contrib/test_decoding/expected/decoding_in_xact.out
index b65253f4630..6e97b6e34bc 100644
--- a/contrib/test_decoding/expected/decoding_in_xact.out
+++ b/contrib/test_decoding/expected/decoding_in_xact.out
@@ -58,7 +58,7 @@ SELECT pg_current_xact_id() = '0';
-- don't show yet, haven't committed
INSERT INTO nobarf(data) VALUES('2');
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
data
-----------------------------------------------------------
BEGIN
diff --git a/contrib/test_decoding/expected/decoding_into_rel.out b/contrib/test_decoding/expected/decoding_into_rel.out
index 8fd3390066d..03966b8b1ca 100644
--- a/contrib/test_decoding/expected/decoding_into_rel.out
+++ b/contrib/test_decoding/expected/decoding_into_rel.out
@@ -19,7 +19,7 @@ SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'inc
CREATE TABLE somechange(id serial primary key);
INSERT INTO somechange DEFAULT VALUES;
CREATE TABLE changeresult AS
- SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+ SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
SELECT * FROM changeresult;
data
------------------------------------------------
@@ -29,9 +29,9 @@ SELECT * FROM changeresult;
(3 rows)
INSERT INTO changeresult
- SELECT data FROM pg_logical_slot_peek_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+ SELECT data FROM pg_logical_slot_peek_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
INSERT INTO changeresult
- SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+ SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
SELECT * FROM changeresult;
data
--------------------------------------------------------------------------------------------------------------------------------------------------
@@ -63,7 +63,7 @@ DROP TABLE somechange;
CREATE FUNCTION slot_changes_wrapper(slot_name name) RETURNS SETOF TEXT AS $$
BEGIN
RETURN QUERY
- SELECT data FROM pg_logical_slot_peek_changes(slot_name, NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+ SELECT data FROM pg_logical_slot_peek_changes(slot_name, NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
END$$ LANGUAGE plpgsql;
SELECT * FROM slot_changes_wrapper('regression_slot');
slot_changes_wrapper
@@ -84,7 +84,7 @@ SELECT * FROM slot_changes_wrapper('regression_slot');
COMMIT
(14 rows)
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
data
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
BEGIN
diff --git a/contrib/test_decoding/expected/mxact.out b/contrib/test_decoding/expected/mxact.out
index 03ad3df0999..c5bc26c8044 100644
--- a/contrib/test_decoding/expected/mxact.out
+++ b/contrib/test_decoding/expected/mxact.out
@@ -7,7 +7,7 @@ step s0init: SELECT 'init' FROM pg_create_logical_replication_slot('isolation_sl
init
(1 row)
-step s0start: SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false');
+step s0start: SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false', 'include-sequences', 'false');
data
----
(0 rows)
@@ -27,7 +27,7 @@ t
(1 row)
step s0w: INSERT INTO do_write DEFAULT VALUES;
-step s0start: SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false');
+step s0start: SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false', 'include-sequences', 'false');
data
--------------------------------------------
BEGIN
@@ -50,7 +50,7 @@ step s0init: SELECT 'init' FROM pg_create_logical_replication_slot('isolation_sl
init
(1 row)
-step s0start: SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false');
+step s0start: SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false', 'include-sequences', 'false');
data
----
(0 rows)
@@ -71,7 +71,7 @@ t
step s0alter: ALTER TABLE do_write ADD column ts timestamptz;
step s0w: INSERT INTO do_write DEFAULT VALUES;
-step s0start: SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false');
+step s0start: SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false', 'include-sequences', 'false');
data
------------------------------------------------------------------------------
BEGIN
diff --git a/contrib/test_decoding/expected/ondisk_startup.out b/contrib/test_decoding/expected/ondisk_startup.out
index bc7ff071648..3d2fa4a92e0 100644
--- a/contrib/test_decoding/expected/ondisk_startup.out
+++ b/contrib/test_decoding/expected/ondisk_startup.out
@@ -35,7 +35,7 @@ init
step s2c: COMMIT;
step s1insert: INSERT INTO do_write DEFAULT VALUES;
step s1checkpoint: CHECKPOINT;
-step s1start: SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false');
+step s1start: SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false', 'include-sequences', 'false');
data
--------------------------------------------------------------------
BEGIN
@@ -46,7 +46,7 @@ COMMIT
step s1insert: INSERT INTO do_write DEFAULT VALUES;
step s1alter: ALTER TABLE do_write ADD COLUMN addedbys1 int;
step s1insert: INSERT INTO do_write DEFAULT VALUES;
-step s1start: SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false');
+step s1start: SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false', 'include-sequences', 'false');
data
--------------------------------------------------------------------------------------------
BEGIN
diff --git a/contrib/test_decoding/expected/replorigin.out b/contrib/test_decoding/expected/replorigin.out
index 2e9ef7c823b..fb96aa31727 100644
--- a/contrib/test_decoding/expected/replorigin.out
+++ b/contrib/test_decoding/expected/replorigin.out
@@ -72,9 +72,9 @@ SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'test_d
-- origin tx
INSERT INTO origin_tbl(data) VALUES ('will be replicated and decoded and decoded again');
INSERT INTO target_tbl(data)
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
-- as is normal, the insert into target_tbl shows up
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
data
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
BEGIN
diff --git a/contrib/test_decoding/expected/rewrite.out b/contrib/test_decoding/expected/rewrite.out
index b30999c436b..0b5eade41fa 100644
--- a/contrib/test_decoding/expected/rewrite.out
+++ b/contrib/test_decoding/expected/rewrite.out
@@ -64,7 +64,7 @@ SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'test_d
CREATE TABLE replication_example(id SERIAL PRIMARY KEY, somedata int, text varchar(120));
INSERT INTO replication_example(somedata) VALUES (1);
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
data
----------------------------------------------------------------------------------------------------------
BEGIN
@@ -141,7 +141,7 @@ VACUUM FULL pg_proc; VACUUM FULL pg_description; VACUUM FULL pg_shdescription; V
INSERT INTO replication_example(somedata, testcolumn1, testcolumn3) VALUES (8, 6, 1);
VACUUM FULL pg_proc; VACUUM FULL pg_description; VACUUM FULL pg_shdescription; VACUUM FULL iamalargetable;
INSERT INTO replication_example(somedata, testcolumn1, testcolumn3) VALUES (9, 7, 1);
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
data
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
BEGIN
diff --git a/contrib/test_decoding/expected/sequence.out b/contrib/test_decoding/expected/sequence.out
new file mode 100644
index 00000000000..cf21f3a1f4f
--- /dev/null
+++ b/contrib/test_decoding/expected/sequence.out
@@ -0,0 +1,327 @@
+-- predictability
+SET synchronous_commit = on;
+SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'test_decoding');
+ ?column?
+----------
+ init
+(1 row)
+
+CREATE SEQUENCE test_sequence;
+-- test the sequence changes by several nextval() calls
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 1
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 2
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 4
+(1 row)
+
+-- test the sequence changes by several ALTER commands
+ALTER SEQUENCE test_sequence INCREMENT BY 10;
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 14
+(1 row)
+
+ALTER SEQUENCE test_sequence START WITH 3000;
+ALTER SEQUENCE test_sequence MAXVALUE 10000;
+ALTER SEQUENCE test_sequence RESTART WITH 4000;
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 4000
+(1 row)
+
+-- test the sequence changes by several setval() calls
+SELECT setval('test_sequence', 3500);
+ setval
+--------
+ 3500
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3510
+(1 row)
+
+SELECT setval('test_sequence', 3500, true);
+ setval
+--------
+ 3500
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3510
+(1 row)
+
+SELECT setval('test_sequence', 3500, false);
+ setval
+--------
+ 3500
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3500
+(1 row)
+
+-- show results and drop sequence
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0');
+ data
+----------------------------------------------------------------------------------------
+ BEGIN
+ sequence public.test_sequence: transactional:1 last_value: 1 log_cnt: 0 is_called:0
+ COMMIT
+ sequence public.test_sequence: transactional:0 last_value: 33 log_cnt: 0 is_called:1
+ BEGIN
+ sequence public.test_sequence: transactional:1 last_value: 4 log_cnt: 0 is_called:1
+ COMMIT
+ sequence public.test_sequence: transactional:0 last_value: 334 log_cnt: 0 is_called:1
+ BEGIN
+ sequence public.test_sequence: transactional:1 last_value: 14 log_cnt: 32 is_called:1
+ COMMIT
+ BEGIN
+ sequence public.test_sequence: transactional:1 last_value: 14 log_cnt: 0 is_called:1
+ COMMIT
+ BEGIN
+ sequence public.test_sequence: transactional:1 last_value: 4000 log_cnt: 0 is_called:0
+ COMMIT
+ sequence public.test_sequence: transactional:0 last_value: 4320 log_cnt: 0 is_called:1
+ sequence public.test_sequence: transactional:0 last_value: 3500 log_cnt: 0 is_called:1
+ sequence public.test_sequence: transactional:0 last_value: 3830 log_cnt: 0 is_called:1
+ sequence public.test_sequence: transactional:0 last_value: 3500 log_cnt: 0 is_called:1
+ sequence public.test_sequence: transactional:0 last_value: 3830 log_cnt: 0 is_called:1
+ sequence public.test_sequence: transactional:0 last_value: 3500 log_cnt: 0 is_called:0
+ sequence public.test_sequence: transactional:0 last_value: 3820 log_cnt: 0 is_called:1
+(24 rows)
+
+DROP SEQUENCE test_sequence;
+-- rollback on sequence creation and update
+BEGIN;
+CREATE SEQUENCE test_sequence;
+CREATE TABLE test_table (a INT);
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 1
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 2
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3
+(1 row)
+
+SELECT setval('test_sequence', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3001
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3002
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3003
+(1 row)
+
+ALTER SEQUENCE test_sequence RESTART WITH 6000;
+INSERT INTO test_table VALUES( (SELECT nextval('test_sequence')) );
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 6001
+(1 row)
+
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+ data
+------
+(0 rows)
+
+-- rollback on table creation with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+ data
+------
+(0 rows)
+
+-- rollback on table with serial column
+CREATE TABLE test_table (a SERIAL, b INT);
+BEGIN;
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+-- check table and sequence values after rollback
+SELECT * from test_table_a_seq;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+ 3003 | 30 | t
+(1 row)
+
+SELECT nextval('test_table_a_seq');
+ nextval
+---------
+ 3004
+(1 row)
+
+DROP TABLE test_table;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0');
+ data
+-------------------------------------------------------------------------------------------
+ BEGIN
+ sequence public.test_table_a_seq: transactional:1 last_value: 1 log_cnt: 0 is_called:0
+ COMMIT
+ sequence public.test_table_a_seq: transactional:0 last_value: 33 log_cnt: 0 is_called:1
+ sequence public.test_table_a_seq: transactional:0 last_value: 3000 log_cnt: 0 is_called:1
+ sequence public.test_table_a_seq: transactional:0 last_value: 3033 log_cnt: 0 is_called:1
+ BEGIN
+ COMMIT
+(8 rows)
+
+-- savepoint test on table with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+SAVEPOINT a;
+INSERT INTO test_table (b) VALUES (300);
+ROLLBACK TO SAVEPOINT a;
+DROP TABLE test_table;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0');
+ data
+-----------------------------------------------------------------------------------------
+ BEGIN
+ sequence public.test_table_a_seq: transactional:1 last_value: 1 log_cnt: 0 is_called:0
+ sequence public.test_table_a_seq: transactional:1 last_value: 33 log_cnt: 0 is_called:1
+ table public.test_table: INSERT: a[integer]:1 b[integer]:100
+ table public.test_table: INSERT: a[integer]:2 b[integer]:200
+ COMMIT
+(6 rows)
+
+-- savepoint test on sequence
+BEGIN;
+CREATE SEQUENCE test_sequence;
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 1
+(1 row)
+
+SELECT setval('test_sequence', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3001
+(1 row)
+
+SAVEPOINT a;
+ALTER SEQUENCE test_sequence START WITH 7000;
+SELECT setval('test_sequence', 5000);
+ setval
+--------
+ 5000
+(1 row)
+
+ROLLBACK TO SAVEPOINT a;
+SELECT * FROM test_sequence;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+ 3001 | 32 | t
+(1 row)
+
+DROP SEQUENCE test_sequence;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0');
+ data
+----------------------------------------------------------------------------------------
+ BEGIN
+ sequence public.test_sequence: transactional:1 last_value: 1 log_cnt: 0 is_called:0
+ sequence public.test_sequence: transactional:1 last_value: 33 log_cnt: 0 is_called:1
+ sequence public.test_sequence: transactional:1 last_value: 3000 log_cnt: 0 is_called:1
+ sequence public.test_sequence: transactional:1 last_value: 3033 log_cnt: 0 is_called:1
+ COMMIT
+(6 rows)
+
+SELECT pg_drop_replication_slot('regression_slot');
+ pg_drop_replication_slot
+--------------------------
+
+(1 row)
+
diff --git a/contrib/test_decoding/expected/slot.out b/contrib/test_decoding/expected/slot.out
index 63a9940f73a..93d7b95d47a 100644
--- a/contrib/test_decoding/expected/slot.out
+++ b/contrib/test_decoding/expected/slot.out
@@ -95,7 +95,7 @@ SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot2', 'test_
(1 row)
INSERT INTO replication_example(somedata, text) VALUES (1, 3);
-SELECT data FROM pg_logical_slot_get_changes('regression_slot1', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot1', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
data
---------------------------------------------------------------------------------------------------------
BEGIN
diff --git a/contrib/test_decoding/expected/toast.out b/contrib/test_decoding/expected/toast.out
index cd03e9d50a1..a060e3b5e52 100644
--- a/contrib/test_decoding/expected/toast.out
+++ b/contrib/test_decoding/expected/toast.out
@@ -52,7 +52,7 @@ CREATE TABLE toasted_copy (
);
ALTER TABLE toasted_copy ALTER COLUMN data SET STORAGE EXTERNAL;
\copy toasted_copy FROM STDIN
-SELECT substr(data, 1, 200) FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT substr(data, 1, 200) FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
substr
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
BEGIN
@@ -316,7 +316,7 @@ SELECT pg_column_size(toasted_key) > 2^16 FROM toasted_several;
t
(1 row)
-SELECT regexp_replace(data, '^(.{100}).*(.{100})$', '\1..\2') FROM pg_logical_slot_peek_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT regexp_replace(data, '^(.{100}).*(.{100})$', '\1..\2') FROM pg_logical_slot_peek_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
regexp_replace
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
BEGIN
@@ -327,7 +327,7 @@ SELECT regexp_replace(data, '^(.{100}).*(.{100})$', '\1..\2') FROM pg_logical_sl
-- test update of a toasted key without changing it
UPDATE toasted_several SET toasted_col1 = toasted_key;
UPDATE toasted_several SET toasted_col2 = toasted_col1;
-SELECT regexp_replace(data, '^(.{100}).*(.{100})$', '\1..\2') FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT regexp_replace(data, '^(.{100}).*(.{100})$', '\1..\2') FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
regexp_replace
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
BEGIN
@@ -350,7 +350,7 @@ UPDATE toasted_several SET toasted_col1 = toasted_col2 WHERE id = 1;
DELETE FROM toasted_several WHERE id = 1;
COMMIT;
DROP TABLE toasted_several;
-SELECT regexp_replace(data, '^(.{100}).*(.{100})$', '\1..\2') FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1')
+SELECT regexp_replace(data, '^(.{100}).*(.{100})$', '\1..\2') FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0')
WHERE data NOT LIKE '%INSERT: %';
regexp_replace
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
@@ -373,7 +373,7 @@ INSERT INTO tbl1 VALUES(1, repeat('a', 4000)) ;
ALTER TABLE tbl1 ADD COLUMN id serial primary key;
INSERT INTO tbl2 VALUES(1);
commit;
-SELECT substr(data, 1, 200) FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT substr(data, 1, 200) FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
substr
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
BEGIN
diff --git a/contrib/test_decoding/expected/truncate.out b/contrib/test_decoding/expected/truncate.out
index e64d377214a..961b0ebdc89 100644
--- a/contrib/test_decoding/expected/truncate.out
+++ b/contrib/test_decoding/expected/truncate.out
@@ -11,7 +11,7 @@ CREATE TABLE tab2 (a int primary key, b int);
TRUNCATE tab1;
TRUNCATE tab1, tab1 RESTART IDENTITY CASCADE;
TRUNCATE tab1, tab2;
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
data
------------------------------------------------------
BEGIN
diff --git a/contrib/test_decoding/specs/mxact.spec b/contrib/test_decoding/specs/mxact.spec
index ea5b1aa2d67..19f3af14b30 100644
--- a/contrib/test_decoding/specs/mxact.spec
+++ b/contrib/test_decoding/specs/mxact.spec
@@ -13,7 +13,7 @@ teardown
session "s0"
setup { SET synchronous_commit=on; }
step "s0init" {SELECT 'init' FROM pg_create_logical_replication_slot('isolation_slot', 'test_decoding');}
-step "s0start" {SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false');}
+step "s0start" {SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false', 'include-sequences', 'false');}
step "s0alter" {ALTER TABLE do_write ADD column ts timestamptz; }
step "s0w" { INSERT INTO do_write DEFAULT VALUES; }
diff --git a/contrib/test_decoding/specs/ondisk_startup.spec b/contrib/test_decoding/specs/ondisk_startup.spec
index 96ce87f1a45..fb43cbc5e15 100644
--- a/contrib/test_decoding/specs/ondisk_startup.spec
+++ b/contrib/test_decoding/specs/ondisk_startup.spec
@@ -16,7 +16,7 @@ session "s1"
setup { SET synchronous_commit=on; }
step "s1init" {SELECT 'init' FROM pg_create_logical_replication_slot('isolation_slot', 'test_decoding');}
-step "s1start" {SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false');}
+step "s1start" {SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false', 'include-sequences', 'false');}
step "s1insert" { INSERT INTO do_write DEFAULT VALUES; }
step "s1checkpoint" { CHECKPOINT; }
step "s1alter" { ALTER TABLE do_write ADD COLUMN addedbys1 int; }
diff --git a/contrib/test_decoding/sql/ddl.sql b/contrib/test_decoding/sql/ddl.sql
index 1b3866d0153..f677460d349 100644
--- a/contrib/test_decoding/sql/ddl.sql
+++ b/contrib/test_decoding/sql/ddl.sql
@@ -64,7 +64,7 @@ ALTER TABLE replication_example RENAME COLUMN text TO somenum;
INSERT INTO replication_example(somedata, somenum) VALUES (4, 1);
-- collect all changes
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
ALTER TABLE replication_example ALTER COLUMN somenum TYPE int4 USING (somenum::int4);
-- check that this doesn't produce any changes from the heap rewrite
@@ -97,7 +97,7 @@ CREATE TABLE tr_unique(id2 serial unique NOT NULL, data int);
INSERT INTO tr_unique(data) VALUES(10);
ALTER TABLE tr_unique RENAME TO tr_pkey;
ALTER TABLE tr_pkey ADD COLUMN id serial primary key;
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-rewrites', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-rewrites', '1', 'include-sequences', '0');
INSERT INTO tr_pkey(data) VALUES(1);
--show deletion with primary key
@@ -121,7 +121,7 @@ COMMIT;
/* display results, but hide most of the output */
SELECT count(*), min(data), max(data)
-FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1')
+FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0')
GROUP BY substring(data, 1, 24)
ORDER BY 1,2;
@@ -173,7 +173,7 @@ INSERT INTO tr_sub(path) VALUES ('1-top-2-#1');
RELEASE SAVEPOINT b;
COMMIT;
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
-- check that we handle xlog assignments correctly
BEGIN;
@@ -287,7 +287,7 @@ ALTER TABLE replication_metadata SET (user_catalog_table = false);
INSERT INTO replication_metadata(relation, options)
VALUES ('zaphod', NULL);
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
/*
* check whether we handle updates/deletes correct with & without a pkey
@@ -398,7 +398,7 @@ UPDATE toasttable
SET toasted_col1 = (SELECT string_agg(g.i::text, '') FROM generate_series(1, 2000) g(i))
WHERE id = 1;
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
INSERT INTO toasttable(toasted_col1) SELECT string_agg(g.i::text, '') FROM generate_series(1, 2000) g(i);
diff --git a/contrib/test_decoding/sql/decoding_in_xact.sql b/contrib/test_decoding/sql/decoding_in_xact.sql
index 108782dc2e9..33a9c4a6c77 100644
--- a/contrib/test_decoding/sql/decoding_in_xact.sql
+++ b/contrib/test_decoding/sql/decoding_in_xact.sql
@@ -32,7 +32,7 @@ BEGIN;
SELECT pg_current_xact_id() = '0';
-- don't show yet, haven't committed
INSERT INTO nobarf(data) VALUES('2');
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
COMMIT;
INSERT INTO nobarf(data) VALUES('3');
diff --git a/contrib/test_decoding/sql/decoding_into_rel.sql b/contrib/test_decoding/sql/decoding_into_rel.sql
index 1068cec5888..27d5d08879e 100644
--- a/contrib/test_decoding/sql/decoding_into_rel.sql
+++ b/contrib/test_decoding/sql/decoding_into_rel.sql
@@ -15,14 +15,14 @@ CREATE TABLE somechange(id serial primary key);
INSERT INTO somechange DEFAULT VALUES;
CREATE TABLE changeresult AS
- SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+ SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
SELECT * FROM changeresult;
INSERT INTO changeresult
- SELECT data FROM pg_logical_slot_peek_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+ SELECT data FROM pg_logical_slot_peek_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
INSERT INTO changeresult
- SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+ SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
SELECT * FROM changeresult;
DROP TABLE changeresult;
@@ -32,11 +32,11 @@ DROP TABLE somechange;
CREATE FUNCTION slot_changes_wrapper(slot_name name) RETURNS SETOF TEXT AS $$
BEGIN
RETURN QUERY
- SELECT data FROM pg_logical_slot_peek_changes(slot_name, NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+ SELECT data FROM pg_logical_slot_peek_changes(slot_name, NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
END$$ LANGUAGE plpgsql;
SELECT * FROM slot_changes_wrapper('regression_slot');
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
SELECT 'stop' FROM pg_drop_replication_slot('regression_slot');
diff --git a/contrib/test_decoding/sql/replorigin.sql b/contrib/test_decoding/sql/replorigin.sql
index 2e28a487773..f0f4dd49642 100644
--- a/contrib/test_decoding/sql/replorigin.sql
+++ b/contrib/test_decoding/sql/replorigin.sql
@@ -41,10 +41,10 @@ SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'test_d
-- origin tx
INSERT INTO origin_tbl(data) VALUES ('will be replicated and decoded and decoded again');
INSERT INTO target_tbl(data)
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
-- as is normal, the insert into target_tbl shows up
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
INSERT INTO origin_tbl(data) VALUES ('will be replicated, but not decoded again');
diff --git a/contrib/test_decoding/sql/rewrite.sql b/contrib/test_decoding/sql/rewrite.sql
index 62dead3a9b1..945c39eb416 100644
--- a/contrib/test_decoding/sql/rewrite.sql
+++ b/contrib/test_decoding/sql/rewrite.sql
@@ -35,7 +35,7 @@ SELECT pg_relation_size((SELECT reltoastrelid FROM pg_class WHERE oid = 'pg_shde
SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'test_decoding');
CREATE TABLE replication_example(id SERIAL PRIMARY KEY, somedata int, text varchar(120));
INSERT INTO replication_example(somedata) VALUES (1);
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
BEGIN;
INSERT INTO replication_example(somedata) VALUES (2);
@@ -98,7 +98,7 @@ VACUUM FULL pg_proc; VACUUM FULL pg_description; VACUUM FULL pg_shdescription; V
INSERT INTO replication_example(somedata, testcolumn1, testcolumn3) VALUES (8, 6, 1);
VACUUM FULL pg_proc; VACUUM FULL pg_description; VACUUM FULL pg_shdescription; VACUUM FULL iamalargetable;
INSERT INTO replication_example(somedata, testcolumn1, testcolumn3) VALUES (9, 7, 1);
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
SELECT pg_drop_replication_slot('regression_slot');
DROP TABLE IF EXISTS replication_example;
diff --git a/contrib/test_decoding/sql/sequence.sql b/contrib/test_decoding/sql/sequence.sql
new file mode 100644
index 00000000000..42ad9113bf4
--- /dev/null
+++ b/contrib/test_decoding/sql/sequence.sql
@@ -0,0 +1,119 @@
+-- predictability
+SET synchronous_commit = on;
+SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'test_decoding');
+
+CREATE SEQUENCE test_sequence;
+
+-- test the sequence changes by several nextval() calls
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+
+-- test the sequence changes by several ALTER commands
+ALTER SEQUENCE test_sequence INCREMENT BY 10;
+SELECT nextval('test_sequence');
+
+ALTER SEQUENCE test_sequence START WITH 3000;
+ALTER SEQUENCE test_sequence MAXVALUE 10000;
+ALTER SEQUENCE test_sequence RESTART WITH 4000;
+SELECT nextval('test_sequence');
+
+-- test the sequence changes by several setval() calls
+SELECT setval('test_sequence', 3500);
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3500, true);
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3500, false);
+SELECT nextval('test_sequence');
+
+-- show results and drop sequence
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0');
+DROP SEQUENCE test_sequence;
+
+-- rollback on sequence creation and update
+BEGIN;
+CREATE SEQUENCE test_sequence;
+CREATE TABLE test_table (a INT);
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3000);
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+ALTER SEQUENCE test_sequence RESTART WITH 6000;
+INSERT INTO test_table VALUES( (SELECT nextval('test_sequence')) );
+SELECT nextval('test_sequence');
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+
+-- rollback on table creation with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+
+-- rollback on table with serial column
+CREATE TABLE test_table (a SERIAL, b INT);
+
+BEGIN;
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+
+-- check table and sequence values after rollback
+SELECT * from test_table_a_seq;
+SELECT nextval('test_table_a_seq');
+
+DROP TABLE test_table;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0');
+
+-- savepoint test on table with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+SAVEPOINT a;
+INSERT INTO test_table (b) VALUES (300);
+ROLLBACK TO SAVEPOINT a;
+DROP TABLE test_table;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0');
+
+-- savepoint test on sequence
+BEGIN;
+CREATE SEQUENCE test_sequence;
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3000);
+SELECT nextval('test_sequence');
+SAVEPOINT a;
+ALTER SEQUENCE test_sequence START WITH 7000;
+SELECT setval('test_sequence', 5000);
+ROLLBACK TO SAVEPOINT a;
+SELECT * FROM test_sequence;
+DROP SEQUENCE test_sequence;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0');
+
+SELECT pg_drop_replication_slot('regression_slot');
diff --git a/contrib/test_decoding/sql/slot.sql b/contrib/test_decoding/sql/slot.sql
index 1aa27c56674..70ea1603e67 100644
--- a/contrib/test_decoding/sql/slot.sql
+++ b/contrib/test_decoding/sql/slot.sql
@@ -50,7 +50,7 @@ SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot2', 'test_
INSERT INTO replication_example(somedata, text) VALUES (1, 3);
-SELECT data FROM pg_logical_slot_get_changes('regression_slot1', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot1', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
SELECT data FROM pg_logical_slot_get_changes('regression_slot2', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
INSERT INTO replication_example(somedata, text) VALUES (1, 4);
diff --git a/contrib/test_decoding/sql/toast.sql b/contrib/test_decoding/sql/toast.sql
index d1c560a174d..f5d4fc082ed 100644
--- a/contrib/test_decoding/sql/toast.sql
+++ b/contrib/test_decoding/sql/toast.sql
@@ -264,7 +264,7 @@ ALTER TABLE toasted_copy ALTER COLUMN data SET STORAGE EXTERNAL;
202 untoasted199
203 untoasted200
\.
-SELECT substr(data, 1, 200) FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT substr(data, 1, 200) FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
-- test we can decode "old" tuples bigger than the max heap tuple size correctly
DROP TABLE IF EXISTS toasted_several;
@@ -287,13 +287,13 @@ UPDATE pg_attribute SET attstorage = 'x' WHERE attrelid = 'toasted_several_pkey'
INSERT INTO toasted_several(toasted_key) VALUES(repeat('9876543210', 10000));
SELECT pg_column_size(toasted_key) > 2^16 FROM toasted_several;
-SELECT regexp_replace(data, '^(.{100}).*(.{100})$', '\1..\2') FROM pg_logical_slot_peek_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT regexp_replace(data, '^(.{100}).*(.{100})$', '\1..\2') FROM pg_logical_slot_peek_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
-- test update of a toasted key without changing it
UPDATE toasted_several SET toasted_col1 = toasted_key;
UPDATE toasted_several SET toasted_col2 = toasted_col1;
-SELECT regexp_replace(data, '^(.{100}).*(.{100})$', '\1..\2') FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT regexp_replace(data, '^(.{100}).*(.{100})$', '\1..\2') FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
/*
* update with large tuplebuf, in a transaction large enough to force to spool to disk
@@ -306,7 +306,7 @@ COMMIT;
DROP TABLE toasted_several;
-SELECT regexp_replace(data, '^(.{100}).*(.{100})$', '\1..\2') FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1')
+SELECT regexp_replace(data, '^(.{100}).*(.{100})$', '\1..\2') FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0')
WHERE data NOT LIKE '%INSERT: %';
/*
@@ -322,6 +322,6 @@ INSERT INTO tbl1 VALUES(1, repeat('a', 4000)) ;
ALTER TABLE tbl1 ADD COLUMN id serial primary key;
INSERT INTO tbl2 VALUES(1);
commit;
-SELECT substr(data, 1, 200) FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT substr(data, 1, 200) FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
SELECT pg_drop_replication_slot('regression_slot');
diff --git a/contrib/test_decoding/sql/truncate.sql b/contrib/test_decoding/sql/truncate.sql
index 5633854e0df..b0f36fd681b 100644
--- a/contrib/test_decoding/sql/truncate.sql
+++ b/contrib/test_decoding/sql/truncate.sql
@@ -10,5 +10,5 @@ TRUNCATE tab1;
TRUNCATE tab1, tab1 RESTART IDENTITY CASCADE;
TRUNCATE tab1, tab2;
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
SELECT pg_drop_replication_slot('regression_slot');
diff --git a/contrib/test_decoding/test_decoding.c b/contrib/test_decoding/test_decoding.c
index 42fe91a2f9c..526080f3bbb 100644
--- a/contrib/test_decoding/test_decoding.c
+++ b/contrib/test_decoding/test_decoding.c
@@ -35,6 +35,7 @@ typedef struct
bool include_timestamp;
bool skip_empty_xacts;
bool only_local;
+ bool include_sequences;
} TestDecodingData;
/*
@@ -76,6 +77,10 @@ static void pg_decode_message(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn, XLogRecPtr message_lsn,
bool transactional, const char *prefix,
Size sz, const char *message);
+static void pg_decode_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation rel, bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called);
static bool pg_decode_filter_prepare(LogicalDecodingContext *ctx,
TransactionId xid,
const char *gid);
@@ -116,6 +121,10 @@ static void pg_decode_stream_message(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn, XLogRecPtr message_lsn,
bool transactional, const char *prefix,
Size sz, const char *message);
+static void pg_decode_stream_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation rel, bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called);
static void pg_decode_stream_truncate(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn,
int nrelations, Relation relations[],
@@ -141,6 +150,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->filter_by_origin_cb = pg_decode_filter;
cb->shutdown_cb = pg_decode_shutdown;
cb->message_cb = pg_decode_message;
+ cb->sequence_cb = pg_decode_sequence;
cb->filter_prepare_cb = pg_decode_filter_prepare;
cb->begin_prepare_cb = pg_decode_begin_prepare_txn;
cb->prepare_cb = pg_decode_prepare_txn;
@@ -153,6 +163,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->stream_commit_cb = pg_decode_stream_commit;
cb->stream_change_cb = pg_decode_stream_change;
cb->stream_message_cb = pg_decode_stream_message;
+ cb->stream_sequence_cb = pg_decode_stream_sequence;
cb->stream_truncate_cb = pg_decode_stream_truncate;
}
@@ -173,6 +184,7 @@ pg_decode_startup(LogicalDecodingContext *ctx, OutputPluginOptions *opt,
data->include_xids = true;
data->include_timestamp = false;
data->skip_empty_xacts = false;
+ data->include_sequences = true;
data->only_local = false;
ctx->output_plugin_private = data;
@@ -265,6 +277,17 @@ pg_decode_startup(LogicalDecodingContext *ctx, OutputPluginOptions *opt,
errmsg("could not parse value \"%s\" for parameter \"%s\"",
strVal(elem->arg), elem->defname)));
}
+ else if (strcmp(elem->defname, "include-sequences") == 0)
+ {
+
+ if (elem->arg == NULL)
+ data->include_sequences = false;
+ else if (!parse_bool(strVal(elem->arg), &data->include_sequences))
+ ereport(ERROR,
+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ errmsg("could not parse value \"%s\" for parameter \"%s\"",
+ strVal(elem->arg), elem->defname)));
+ }
else
{
ereport(ERROR,
@@ -744,6 +767,28 @@ pg_decode_message(LogicalDecodingContext *ctx,
OutputPluginWrite(ctx, true);
}
+static void
+pg_decode_sequence(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ TestDecodingData *data = ctx->output_plugin_private;
+
+ /* return if requested to skip sequences */
+ if (!data->include_sequences)
+ return;
+
+ OutputPluginPrepareWrite(ctx, true);
+ appendStringInfoString(ctx->out, "sequence ");
+ appendStringInfoString(ctx->out,
+ quote_qualified_identifier(get_namespace_name(get_rel_namespace(RelationGetRelid(rel))),
+ RelationGetRelationName(rel)));
+ appendStringInfo(ctx->out, ": transactional:%d last_value: " INT64_FORMAT " log_cnt: " INT64_FORMAT " is_called:%d",
+ transactional, last_value, log_cnt, is_called);
+ OutputPluginWrite(ctx, true);
+}
+
static void
pg_decode_stream_start(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn)
@@ -943,6 +988,28 @@ pg_decode_stream_message(LogicalDecodingContext *ctx,
OutputPluginWrite(ctx, true);
}
+static void
+pg_decode_stream_sequence(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ TestDecodingData *data = ctx->output_plugin_private;
+
+ /* return if requested to skip sequences */
+ if (!data->include_sequences)
+ return;
+
+ OutputPluginPrepareWrite(ctx, true);
+ appendStringInfoString(ctx->out, "streaming sequence ");
+ appendStringInfoString(ctx->out,
+ quote_qualified_identifier(get_namespace_name(get_rel_namespace(RelationGetRelid(rel))),
+ RelationGetRelationName(rel)));
+ appendStringInfo(ctx->out, ": transactional:%d last_value: " INT64_FORMAT " log_cnt: " INT64_FORMAT " is_called:%d",
+ transactional, last_value, log_cnt, is_called);
+ OutputPluginWrite(ctx, true);
+}
+
/*
* In streaming mode, we don't display the detailed information of Truncate.
* See pg_decode_stream_change.
--
2.31.1
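
FWIW here is roughly how the new test_decoding option is meant to be used
(just a sketch, the slot name is arbitrary and the values in the expected
output are illustrative; only the 'include-sequences' option and the output
format come from the patch):

CREATE SEQUENCE s;
SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'test_decoding');
SELECT nextval('s');
-- sequence increments are printed by default, pass 'include-sequences', '0' to hide them
SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'skip-empty-xacts', '1', 'include-sequences', '1');
-- expected to produce something like:
--   sequence public.s: transactional:0 last_value: 33 log_cnt: 0 is_called:1
SELECT pg_drop_replication_slot('regression_slot');
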
Attachment: 0003-Add-support-for-decoding-sequences-to-built-20220128.patch (text/x-patch)
From 32019b209b65a50cc74dad95253f379406fa3f33 Mon Sep 17 00:00:00 2001
From: Tomas Vondra <tomas.vondra@postgresql.org>
Date: Wed, 26 Jan 2022 02:48:21 +0100
Subject: [PATCH 3/3] Add support for decoding sequences to built-in
replication
---
doc/src/sgml/catalogs.sgml | 71 ++++
doc/src/sgml/ref/alter_publication.sgml | 24 +-
doc/src/sgml/ref/alter_subscription.sgml | 4 +-
src/backend/catalog/pg_publication.c | 149 ++++++++-
src/backend/catalog/system_views.sql | 10 +
src/backend/commands/publicationcmds.c | 350 +++++++++++++++++++-
src/backend/commands/sequence.c | 79 +++++
src/backend/commands/subscriptioncmds.c | 272 +++++++++++++++
src/backend/executor/execReplication.c | 2 +-
src/backend/nodes/copyfuncs.c | 1 +
src/backend/nodes/equalfuncs.c | 1 +
src/backend/parser/gram.y | 32 ++
src/backend/replication/logical/proto.c | 52 +++
src/backend/replication/logical/tablesync.c | 118 ++++++-
src/backend/replication/logical/worker.c | 60 ++++
src/backend/replication/pgoutput/pgoutput.c | 85 ++++-
src/backend/utils/cache/relcache.c | 4 +-
src/bin/psql/tab-complete.c | 6 +-
src/include/catalog/pg_proc.dat | 5 +
src/include/catalog/pg_publication.h | 14 +
src/include/commands/sequence.h | 1 +
src/include/nodes/parsenodes.h | 6 +
src/include/replication/logicalproto.h | 19 ++
src/include/replication/pgoutput.h | 1 +
src/test/regress/expected/rules.out | 8 +
src/test/subscription/t/027_sequences.pl | 196 +++++++++++
26 files changed, 1538 insertions(+), 32 deletions(-)
create mode 100644 src/test/subscription/t/027_sequences.pl
diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 7d5b0b1656c..458e16e8be5 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -9527,6 +9527,11 @@ SCRAM-SHA-256$<replaceable><iteration count></replaceable>:<replaceable>&l
<entry>prepared transactions</entry>
</row>
+ <row>
+ <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+ <entry>publications and their associated sequences</entry>
+ </row>
+
<row>
<entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
<entry>publications and their associated tables</entry>
@@ -11362,6 +11367,72 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
</sect1>
+ <sect1 id="view-pg-publication-sequences">
+ <title><structname>pg_publication_sequences</structname></title>
+
+ <indexterm zone="view-pg-publication-sequences">
+ <primary>pg_publication_sequences</primary>
+ </indexterm>
+
+ <para>
+ The view <structname>pg_publication_sequences</structname> provides
+ information about the mapping between publications and the sequences they
+ contain. Unlike the underlying catalog
+ <link linkend="catalog-pg-publication-rel"><structname>pg_publication_rel</structname></link>,
+ this view expands
+ publications defined as <literal>FOR ALL SEQUENCES</literal>, so for such
+ publications there will be a row for each eligible sequence.
+ </para>
+
+ <table>
+ <title><structname>pg_publication_sequences</structname> Columns</title>
+ <tgroup cols="1">
+ <thead>
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ Column Type
+ </para>
+ <para>
+ Description
+ </para></entry>
+ </row>
+ </thead>
+
+ <tbody>
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>pubname</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+ </para>
+ <para>
+ Name of publication
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>schemaname</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+ </para>
+ <para>
+ Name of schema containing sequence
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>sequencename</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+ </para>
+ <para>
+ Name of sequence
+ </para></entry>
+ </row>
+ </tbody>
+ </tgroup>
+ </table>
+ </sect1>
+
<sect1 id="view-pg-publication-tables">
<title><structname>pg_publication_tables</structname></title>
diff --git a/doc/src/sgml/ref/alter_publication.sgml b/doc/src/sgml/ref/alter_publication.sgml
index 7c7c27bf7ce..9da8274ae2c 100644
--- a/doc/src/sgml/ref/alter_publication.sgml
+++ b/doc/src/sgml/ref/alter_publication.sgml
@@ -31,7 +31,9 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
<phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [, ... ]
+ SEQUENCE <replaceable class="parameter">sequence_name</replaceable> [ * ] [, ... ]
ALL TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+ ALL SEQUENCES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
</synopsis>
</refsynopsisdiv>
@@ -56,7 +58,18 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
</para>
<para>
- The fourth variant of this command listed in the synopsis can change
+ The next three variants change which sequences are part of the publication.
+ The <literal>SET SEQUENCE</literal> clause will replace the list of sequences
+ in the publication with the specified one. The <literal>ADD SEQUENCE</literal>
+ and <literal>DROP SEQUENCE</literal> clauses will add and remove one or more
+ sequences from the publication. Note that adding sequences to a publication
+ that is already subscribed to will require an <literal>ALTER SUBSCRIPTION
+ ... REFRESH PUBLICATION</literal> action on the subscribing side in order
+ to become effective.
+ </para>
+
+ <para>
+ The seventh variant of this command listed in the synopsis can change
all of the publication properties specified in
<xref linkend="sql-createpublication"/>. Properties not mentioned in the
command retain their previous settings.
@@ -123,6 +136,15 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><replaceable class="parameter">sequence_name</replaceable></term>
+ <listitem>
+ <para>
+ Name of an existing sequence.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><literal>SET ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
<listitem>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 0b027cc3462..8f28cf03f40 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -147,7 +147,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
<listitem>
<para>
Fetch missing table information from publisher. This will start
- replication of tables that were added to the subscribed-to publications
+ replication of tables and sequences that were added to the subscribed-to publications
since <command>CREATE SUBSCRIPTION</command> or
the last invocation of <command>REFRESH PUBLICATION</command>.
</para>
@@ -164,7 +164,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
Specifies whether to copy pre-existing data in the publications
that are being subscribed to when the replication starts.
The default is <literal>true</literal>. (Previously-subscribed
- tables are not copied.)
+ tables and sequences are not copied.)
</para>
</listitem>
</varlistentry>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index e14ca2f5630..1a9e05ba98b 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -54,7 +54,8 @@ check_publication_add_relation(Relation targetrel)
{
/* Must be a regular or partitioned table */
if (RelationGetForm(targetrel)->relkind != RELKIND_RELATION &&
- RelationGetForm(targetrel)->relkind != RELKIND_PARTITIONED_TABLE)
+ RelationGetForm(targetrel)->relkind != RELKIND_PARTITIONED_TABLE &&
+ RelationGetForm(targetrel)->relkind != RELKIND_SEQUENCE)
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
errmsg("cannot add relation \"%s\" to publication",
@@ -131,7 +132,8 @@ static bool
is_publishable_class(Oid relid, Form_pg_class reltuple)
{
return (reltuple->relkind == RELKIND_RELATION ||
- reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+ reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+ reltuple->relkind == RELKIND_SEQUENCE) &&
!IsCatalogRelationOid(relid) &&
reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
relid >= FirstNormalObjectId;
@@ -503,6 +505,11 @@ GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
Form_pg_publication_rel pubrel;
pubrel = (Form_pg_publication_rel) GETSTRUCT(tup);
+
+ /* skip sequences here */
+ if (get_rel_relkind(pubrel->prrelid) == RELKIND_SEQUENCE)
+ continue;
+
result = GetPubPartitionOptionRelations(result, pub_partopt,
pubrel->prrelid);
}
@@ -517,6 +524,49 @@ GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
return result;
}
+/*
+ * Gets list of relation oids for a publication (sequences only).
+ *
+ * This should only be used for normal publications; FOR ALL SEQUENCES
+ * publications should use GetAllSequencesPublicationRelations().
+ */
+List *
+GetPublicationSequenceRelations(Oid pubid)
+{
+ List *result;
+ Relation pubrelsrel;
+ ScanKeyData scankey;
+ SysScanDesc scan;
+ HeapTuple tup;
+
+ /* Find all sequences associated with the publication. */
+ pubrelsrel = table_open(PublicationRelRelationId, AccessShareLock);
+
+ ScanKeyInit(&scankey,
+ Anum_pg_publication_rel_prpubid,
+ BTEqualStrategyNumber, F_OIDEQ,
+ ObjectIdGetDatum(pubid));
+
+ scan = systable_beginscan(pubrelsrel, PublicationRelPrrelidPrpubidIndexId,
+ true, NULL, 1, &scankey);
+
+ result = NIL;
+ while (HeapTupleIsValid(tup = systable_getnext(scan)))
+ {
+ Form_pg_publication_rel pubrel;
+
+ pubrel = (Form_pg_publication_rel) GETSTRUCT(tup);
+
+ if (get_rel_relkind(pubrel->prrelid) == RELKIND_SEQUENCE)
+ result = lappend_oid(result, pubrel->prrelid);
+ }
+
+ systable_endscan(scan);
+ table_close(pubrelsrel, AccessShareLock);
+
+ return result;
+}
+
/*
* Gets list of publication oids for publications marked as FOR ALL TABLES.
*/
@@ -762,6 +812,46 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
return result;
}
+/*
+ * Gets list of all sequences published by FOR ALL SEQUENCES publication(s).
+ *
+ * Unlike for tables, there is no partitioning to worry about here, so we
+ * simply return all publishable sequences from pg_class (sequences cannot
+ * be partitioned).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+ Relation classRel;
+ ScanKeyData key[1];
+ TableScanDesc scan;
+ HeapTuple tuple;
+ List *result = NIL;
+
+ classRel = table_open(RelationRelationId, AccessShareLock);
+
+ ScanKeyInit(&key[0],
+ Anum_pg_class_relkind,
+ BTEqualStrategyNumber, F_CHAREQ,
+ CharGetDatum(RELKIND_SEQUENCE));
+
+ scan = table_beginscan_catalog(classRel, 1, key);
+
+ while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+ {
+ Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+ Oid relid = relForm->oid;
+
+ if (is_publishable_class(relid, relForm))
+ result = lappend_oid(result, relid);
+ }
+
+ table_endscan(scan);
+
+ table_close(classRel, AccessShareLock);
+ return result;
+}
+
/*
* Get publication using oid
*
@@ -784,10 +874,12 @@ GetPublication(Oid pubid)
pub->oid = pubid;
pub->name = pstrdup(NameStr(pubform->pubname));
pub->alltables = pubform->puballtables;
+ pub->allsequences = pubform->puballsequences;
pub->pubactions.pubinsert = pubform->pubinsert;
pub->pubactions.pubupdate = pubform->pubupdate;
pub->pubactions.pubdelete = pubform->pubdelete;
pub->pubactions.pubtruncate = pubform->pubtruncate;
+ pub->pubactions.pubsequence = pubform->pubsequence;
pub->pubviaroot = pubform->pubviaroot;
ReleaseSysCache(tup);
@@ -937,3 +1029,56 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
SRF_RETURN_DONE(funcctx);
}
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+ FuncCallContext *funcctx;
+ char *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+ Publication *publication;
+ List *sequences;
+
+ /* stuff done only on the first call of the function */
+ if (SRF_IS_FIRSTCALL())
+ {
+ MemoryContext oldcontext;
+
+ /* create a function context for cross-call persistence */
+ funcctx = SRF_FIRSTCALL_INIT();
+
+ /* switch to memory context appropriate for multiple function calls */
+ oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+ publication = GetPublicationByName(pubname, false);
+
+ /*
+ * For FOR ALL SEQUENCES publications we return all sequences in the
+ * database, otherwise just the sequences explicitly added to the
+ * publication.
+ */
+ if (publication->allsequences)
+ sequences = GetAllSequencesPublicationRelations();
+ else
+ sequences = GetPublicationSequenceRelations(publication->oid);
+
+ funcctx->user_fctx = (void *) sequences;
+
+ MemoryContextSwitchTo(oldcontext);
+ }
+
+ /* stuff done on every call of the function */
+ funcctx = SRF_PERCALL_SETUP();
+ sequences = (List *) funcctx->user_fctx;
+
+ if (funcctx->call_cntr < list_length(sequences))
+ {
+ Oid relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+ SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+ }
+
+ SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 3cb69b1f87b..b5cc33aca34 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -374,6 +374,16 @@ CREATE VIEW pg_publication_tables AS
pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
WHERE C.oid = GPT.relid;
+CREATE VIEW pg_publication_sequences AS
+ SELECT
+ P.pubname AS pubname,
+ N.nspname AS schemaname,
+ C.relname AS sequencename
+ FROM pg_publication P,
+ LATERAL pg_get_publication_sequences(P.pubname) GPT,
+ pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+ WHERE C.oid = GPT.relid;
+
CREATE VIEW pg_locks AS
SELECT * FROM pg_lock_status() AS L;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 0e4bb97fb73..3bc2e8ccb66 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -16,6 +16,7 @@
#include "access/genam.h"
#include "access/htup_details.h"
+#include "access/relation.h"
#include "access/table.h"
#include "access/xact.h"
#include "catalog/catalog.h"
@@ -59,6 +60,12 @@ static void PublicationAddSchemas(Oid pubid, List *schemas, bool if_not_exists,
AlterPublicationStmt *stmt);
static void PublicationDropSchemas(Oid pubid, List *schemas, bool missing_ok);
+static List *OpenSequenceList(List *sequences);
+static void CloseSequenceList(List *rels);
+static void PublicationAddSequences(Oid pubid, List *rels, bool if_not_exists,
+ AlterPublicationStmt *stmt);
+static void PublicationDropSequences(Oid pubid, List *rels, bool missing_ok);
+
static void
parse_publication_options(ParseState *pstate,
List *options,
@@ -77,6 +84,7 @@ parse_publication_options(ParseState *pstate,
pubactions->pubupdate = true;
pubactions->pubdelete = true;
pubactions->pubtruncate = true;
+ pubactions->pubsequence = true;
*publish_via_partition_root = false;
/* Parse options */
@@ -101,6 +109,7 @@ parse_publication_options(ParseState *pstate,
pubactions->pubupdate = false;
pubactions->pubdelete = false;
pubactions->pubtruncate = false;
+ pubactions->pubsequence = false;
*publish_given = true;
publish = defGetString(defel);
@@ -123,6 +132,8 @@ parse_publication_options(ParseState *pstate,
pubactions->pubdelete = true;
else if (strcmp(publish_opt, "truncate") == 0)
pubactions->pubtruncate = true;
+ else if (strcmp(publish_opt, "sequence") == 0)
+ pubactions->pubsequence = true;
else
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
@@ -149,7 +160,9 @@ parse_publication_options(ParseState *pstate,
*/
static void
ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
- List **rels, List **schemas)
+ List **tables, List **sequences,
+ List **tables_schemas, List **sequences_schemas,
+ List **schemas)
{
ListCell *cell;
PublicationObjSpec *pubobj;
@@ -167,12 +180,23 @@ ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
switch (pubobj->pubobjtype)
{
case PUBLICATIONOBJ_TABLE:
- *rels = lappend(*rels, pubobj->pubtable);
+ *tables = lappend(*tables, pubobj->pubtable);
+ break;
+ case PUBLICATIONOBJ_SEQUENCE:
+ *sequences = lappend(*sequences, pubobj->pubtable);
break;
case PUBLICATIONOBJ_TABLES_IN_SCHEMA:
schemaid = get_namespace_oid(pubobj->name, false);
/* Filter out duplicates if user specifies "sch1, sch1" */
+ *tables_schemas = list_append_unique_oid(*tables_schemas, schemaid);
+ *schemas = list_append_unique_oid(*schemas, schemaid);
+ break;
+ case PUBLICATIONOBJ_SEQUENCES_IN_SCHEMA:
+ schemaid = get_namespace_oid(pubobj->name, false);
+
+ /* Filter out duplicates if user specifies "sch1, sch1" */
+ *sequences_schemas = list_append_unique_oid(*sequences_schemas, schemaid);
*schemas = list_append_unique_oid(*schemas, schemaid);
break;
case PUBLICATIONOBJ_TABLES_IN_CUR_SCHEMA:
@@ -186,6 +210,21 @@ ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
list_free(search_path);
/* Filter out duplicates if user specifies "sch1, sch1" */
+ *tables_schemas = list_append_unique_oid(*tables_schemas, schemaid);
+ *schemas = list_append_unique_oid(*schemas, schemaid);
+ break;
+ case PUBLICATIONOBJ_SEQUENCES_IN_CUR_SCHEMA:
+ search_path = fetch_search_path(false);
+ if (search_path == NIL) /* nothing valid in search_path? */
+ ereport(ERROR,
+ errcode(ERRCODE_UNDEFINED_SCHEMA),
+ errmsg("no schema has been selected for CURRENT_SCHEMA"));
+
+ schemaid = linitial_oid(search_path);
+ list_free(search_path);
+
+ /* Filter out duplicates if user specifies "sch1, sch1" */
+ *sequences_schemas = list_append_unique_oid(*sequences_schemas, schemaid);
*schemas = list_append_unique_oid(*schemas, schemaid);
break;
default:
@@ -251,7 +290,10 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
bool publish_via_partition_root_given;
bool publish_via_partition_root;
AclResult aclresult;
- List *relations = NIL;
+ List *tables = NIL;
+ List *sequences = NIL;
+ List *tables_schemaidlist = NIL;
+ List *sequences_schemaidlist = NIL;
List *schemaidlist = NIL;
/* must have CREATE privilege on database */
@@ -306,6 +348,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
BoolGetDatum(pubactions.pubdelete);
values[Anum_pg_publication_pubtruncate - 1] =
BoolGetDatum(pubactions.pubtruncate);
+ values[Anum_pg_publication_pubsequence - 1] =
+ BoolGetDatum(pubactions.pubsequence);
values[Anum_pg_publication_pubviaroot - 1] =
BoolGetDatum(publish_via_partition_root);
@@ -330,26 +374,40 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
}
else
{
- ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
+ ObjectsInPublicationToOids(stmt->pubobjects, pstate,
+ &tables, &sequences,
+ &tables_schemaidlist,
+ &sequences_schemaidlist,
&schemaidlist);
/* FOR ALL TABLES IN SCHEMA requires superuser */
- if (list_length(schemaidlist) > 0 && !superuser())
+ if (list_length(tables_schemaidlist) > 0 && !superuser())
ereport(ERROR,
errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
errmsg("must be superuser to create FOR ALL TABLES IN SCHEMA publication"));
- if (list_length(relations) > 0)
+ if (list_length(tables) > 0)
{
List *rels;
- rels = OpenTableList(relations);
- CheckObjSchemaNotAlreadyInPublication(rels, schemaidlist,
+ rels = OpenTableList(tables);
+ CheckObjSchemaNotAlreadyInPublication(rels, tables_schemaidlist,
PUBLICATIONOBJ_TABLE);
PublicationAddTables(puboid, rels, true, NULL);
CloseTableList(rels);
}
+ if (list_length(sequences) > 0)
+ {
+ List *rels;
+
+ rels = OpenSequenceList(sequences);
+ CheckObjSchemaNotAlreadyInPublication(rels, sequences_schemaidlist,
+ PUBLICATIONOBJ_SEQUENCE);
+ PublicationAddSequences(puboid, rels, true, NULL);
+ CloseSequenceList(rels);
+ }
+
if (list_length(schemaidlist) > 0)
{
/*
@@ -653,12 +711,13 @@ AlterPublicationSchemas(AlterPublicationStmt *stmt,
*/
static void
CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
- List *tables, List *schemaidlist)
+ List *tables, List *tables_schemaidlist,
+ List *sequences, List *sequences_schemaidlist)
{
Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
if ((stmt->action == AP_AddObjects || stmt->action == AP_SetObjects) &&
- schemaidlist && !superuser())
+ (tables_schemaidlist || sequences_schemaidlist) && !superuser())
ereport(ERROR,
(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
errmsg("must be superuser to add or set schemas")));
@@ -667,13 +726,24 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
* Check that user is allowed to manipulate the publication tables in
* schema
*/
- if (schemaidlist && pubform->puballtables)
+ if (tables_schemaidlist && pubform->puballtables)
ereport(ERROR,
(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
errmsg("publication \"%s\" is defined as FOR ALL TABLES",
NameStr(pubform->pubname)),
errdetail("Tables from schema cannot be added to, dropped from, or set on FOR ALL TABLES publications.")));
+ /*
+ * Check that user is allowed to manipulate the publication sequences in
+ * schema
+ */
+ if (sequences_schemaidlist && pubform->puballsequences)
+ ereport(ERROR,
+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("publication \"%s\" is defined as FOR ALL SEQUENCES",
+ NameStr(pubform->pubname)),
+ errdetail("Sequences from schema cannot be added to, dropped from, or set on FOR ALL SEQUENCES publications.")));
+
/* Check that user is allowed to manipulate the publication tables. */
if (tables && pubform->puballtables)
ereport(ERROR,
@@ -681,6 +751,108 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
errmsg("publication \"%s\" is defined as FOR ALL TABLES",
NameStr(pubform->pubname)),
errdetail("Tables cannot be added to or dropped from FOR ALL TABLES publications.")));
+
+ /* Check that user is allowed to manipulate the publication sequences. */
+ if (sequences && pubform->puballsequences)
+ ereport(ERROR,
+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("publication \"%s\" is defined as FOR ALL SEQUENCES",
+ NameStr(pubform->pubname)),
+ errdetail("Sequences cannot be added to or dropped from FOR ALL SEQUENCES publications.")));
+}
+
+/*
+ * Add or remove sequence to/from publication.
+ */
+static void
+AlterPublicationSequences(AlterPublicationStmt *stmt, HeapTuple tup,
+ List *sequences, List *schemaidlist)
+{
+ List *rels = NIL;
+ Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
+ Oid pubid = pubform->oid;
+
+ /*
+ * It is quite possible that for the SET case user has not specified any
+ * sequences in which case we need to remove all the existing sequences.
+ */
+ if (!sequences && stmt->action != AP_SetObjects)
+ return;
+
+ rels = OpenSequenceList(sequences);
+
+ if (stmt->action == AP_AddObjects)
+ {
+ List *schemas = NIL;
+
+ /*
+ * Check if the relation is member of the existing schema in the
+ * publication or member of the schema list specified.
+ */
+ schemas = list_concat_copy(schemaidlist, GetPublicationSchemas(pubid));
+ CheckObjSchemaNotAlreadyInPublication(rels, schemas,
+ PUBLICATIONOBJ_SEQUENCE);
+ PublicationAddSequences(pubid, rels, false, stmt);
+ }
+ else if (stmt->action == AP_DropObjects)
+ PublicationDropSequences(pubid, rels, false);
+ else /* DEFELEM_SET */
+ {
+ List *oldrelids = GetPublicationRelations(pubid,
+ PUBLICATION_PART_ROOT);
+ List *delrels = NIL;
+ ListCell *oldlc;
+
+ CheckObjSchemaNotAlreadyInPublication(rels, schemaidlist,
+ PUBLICATIONOBJ_SEQUENCE);
+
+ /* Calculate which relations to drop. */
+ foreach(oldlc, oldrelids)
+ {
+ Oid oldrelid = lfirst_oid(oldlc);
+ ListCell *newlc;
+ bool found = false;
+
+ foreach(newlc, rels)
+ {
+ PublicationRelInfo *newpubrel;
+
+ newpubrel = (PublicationRelInfo *) lfirst(newlc);
+ if (RelationGetRelid(newpubrel->relation) == oldrelid)
+ {
+ found = true;
+ break;
+ }
+ }
+ /* Not yet in the list, open it and add to the list */
+ if (!found)
+ {
+ Relation oldrel;
+ PublicationRelInfo *pubrel;
+
+ /* Wrap relation into PublicationRelInfo */
+ oldrel = table_open(oldrelid, ShareUpdateExclusiveLock);
+
+ pubrel = palloc(sizeof(PublicationRelInfo));
+ pubrel->relation = oldrel;
+
+ delrels = lappend(delrels, pubrel);
+ }
+ }
+
+ /* And drop them. */
+ PublicationDropSequences(pubid, delrels, true);
+
+ /*
+ * Don't bother calculating the difference for adding, we'll catch and
+ * skip existing ones when doing catalog update.
+ */
+ PublicationAddSequences(pubid, rels, true, stmt);
+
+ CloseSequenceList(delrels);
+ }
+
+ CloseSequenceList(rels);
}
/*
@@ -718,13 +890,21 @@ AlterPublication(ParseState *pstate, AlterPublicationStmt *stmt)
AlterPublicationOptions(pstate, stmt, rel, tup);
else
{
- List *relations = NIL;
+ List *tables = NIL;
+ List *sequences = NIL;
+ List *tables_schemaidlist = NIL;
+ List *sequences_schemaidlist = NIL;
List *schemaidlist = NIL;
- ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
+ ObjectsInPublicationToOids(stmt->pubobjects, pstate,
+ &tables, &sequences,
+ &tables_schemaidlist,
+ &sequences_schemaidlist,
&schemaidlist);
- CheckAlterPublication(stmt, tup, relations, schemaidlist);
+ CheckAlterPublication(stmt, tup,
+ tables, tables_schemaidlist,
+ sequences, sequences_schemaidlist);
/*
* Lock the publication so nobody else can do anything with it. This
@@ -749,7 +929,9 @@ AlterPublication(ParseState *pstate, AlterPublicationStmt *stmt)
errmsg("publication \"%s\" does not exist",
stmt->pubname));
- AlterPublicationTables(stmt, tup, relations, schemaidlist);
+ AlterPublicationTables(stmt, tup, tables, tables_schemaidlist);
+ AlterPublicationSequences(stmt, tup, sequences, sequences_schemaidlist);
+
AlterPublicationSchemas(stmt, tup, schemaidlist);
}
@@ -1157,6 +1339,144 @@ PublicationDropSchemas(Oid pubid, List *schemas, bool missing_ok)
}
}
+/*
+ * Open sequences specified by a PublicationTable list.
+ * In the returned list of PublicationRelInfo, sequences are locked
+ * in ShareUpdateExclusiveLock mode in order to add them to a publication.
+ */
+static List *
+OpenSequenceList(List *sequences)
+{
+ List *relids = NIL;
+ List *rels = NIL;
+ ListCell *lc;
+
+ /*
+ * Open, share-lock, and check all the explicitly-specified relations
+ */
+ foreach(lc, sequences)
+ {
+ PublicationTable *s = lfirst_node(PublicationTable, lc);
+ Relation rel;
+ Oid myrelid;
+ PublicationRelInfo *pub_rel;
+
+ /* Allow query cancel in case this takes a long time */
+ CHECK_FOR_INTERRUPTS();
+
+ rel = table_openrv(s->relation, ShareUpdateExclusiveLock);
+ myrelid = RelationGetRelid(rel);
+
+ /*
+ * Filter out duplicates if user specifies "foo, foo".
+ *
+ * Note that this algorithm is known to not be very efficient (O(N^2))
+ * but given that it only works on list of tables given to us by user
+ * it's deemed acceptable.
+ */
+ if (list_member_oid(relids, myrelid))
+ {
+ table_close(rel, ShareUpdateExclusiveLock);
+ continue;
+ }
+
+ pub_rel = palloc(sizeof(PublicationRelInfo));
+ pub_rel->relation = rel;
+ rels = lappend(rels, pub_rel);
+ relids = lappend_oid(relids, myrelid);
+ }
+
+ list_free(relids);
+
+ return rels;
+}
+
+/*
+ * Close all relations in the list.
+ */
+static void
+CloseSequenceList(List *rels)
+{
+ ListCell *lc;
+
+ foreach(lc, rels)
+ {
+ PublicationRelInfo *pub_rel;
+
+ pub_rel = (PublicationRelInfo *) lfirst(lc);
+ table_close(pub_rel->relation, NoLock);
+ }
+}
+
+/*
+ * Add listed sequences to the publication.
+ */
+static void
+PublicationAddSequences(Oid pubid, List *rels, bool if_not_exists,
+ AlterPublicationStmt *stmt)
+{
+ ListCell *lc;
+
+ Assert(!stmt || !stmt->for_all_sequences);
+
+ foreach(lc, rels)
+ {
+ PublicationRelInfo *pub_rel = (PublicationRelInfo *) lfirst(lc);
+ Relation rel = pub_rel->relation;
+ ObjectAddress obj;
+
+ /* Must be owner of the sequence or superuser. */
+ if (!pg_class_ownercheck(RelationGetRelid(rel), GetUserId()))
+ aclcheck_error(ACLCHECK_NOT_OWNER, get_relkind_objtype(rel->rd_rel->relkind),
+ RelationGetRelationName(rel));
+
+ obj = publication_add_relation(pubid, pub_rel, if_not_exists);
+ if (stmt)
+ {
+ EventTriggerCollectSimpleCommand(obj, InvalidObjectAddress,
+ (Node *) stmt);
+
+ InvokeObjectPostCreateHook(PublicationRelRelationId,
+ obj.objectId, 0);
+ }
+ }
+}
+
+/*
+ * Remove listed sequences from the publication.
+ */
+static void
+PublicationDropSequences(Oid pubid, List *rels, bool missing_ok)
+{
+ ObjectAddress obj;
+ ListCell *lc;
+ Oid prid;
+
+ foreach(lc, rels)
+ {
+ Relation rel = (Relation) lfirst(lc);
+ Oid relid = RelationGetRelid(rel);
+
+ prid = GetSysCacheOid2(PUBLICATIONRELMAP, Anum_pg_publication_rel_oid,
+ ObjectIdGetDatum(relid),
+ ObjectIdGetDatum(pubid));
+ if (!OidIsValid(prid))
+ {
+ if (missing_ok)
+ continue;
+
+ ereport(ERROR,
+ (errcode(ERRCODE_UNDEFINED_OBJECT),
+ errmsg("relation \"%s\" is not part of the publication",
+ RelationGetRelationName(rel))));
+ }
+
+ ObjectAddressSet(obj, PublicationRelRelationId, prid);
+ performDeletion(&obj, DROP_CASCADE, 0);
+ }
+}
+
+
/*
* Internal workhorse for changing a publication owner
*/
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 358fb28b5bd..ef155e24db4 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -336,6 +336,85 @@ ResetSequence(Oid seq_relid)
relation_close(seq_rel, NoLock);
}
+/*
+ * Set a sequence to the given state (last_value, log_cnt, is_called).
+ *
+ * The change is made transactionally, so that on failure of the current
+ * transaction, the sequence will be restored to its previous state.
+ * We do that by creating a whole new relfilenode for the sequence; so this
+ * works much like the rewriting forms of ALTER TABLE.
+ *
+ * Caller is assumed to have acquired AccessExclusiveLock on the sequence,
+ * which must not be released until end of transaction. Caller is also
+ * responsible for permissions checking.
+ */
+void
+ResetSequence2(Oid seq_relid, int64 last_value, int64 log_cnt, bool is_called)
+{
+ Relation seq_rel;
+ SeqTable elm;
+ Form_pg_sequence_data seq;
+ Buffer buf;
+ HeapTupleData seqdatatuple;
+ HeapTuple tuple;
+
+ /*
+ * Read the old sequence. This does a bit more work than really
+ * necessary, but it's simple, and we do want to double-check that it's
+ * indeed a sequence.
+ */
+ init_sequence(seq_relid, &elm, &seq_rel);
+ (void) read_seq_tuple(seq_rel, &buf, &seqdatatuple);
+
+ /*
+ * Copy the existing sequence tuple.
+ */
+ tuple = heap_copytuple(&seqdatatuple);
+
+ /* Now we're done with the old page */
+ UnlockReleaseBuffer(buf);
+
+ /*
+ * Modify the copied tuple to execute the restart (compare the RESTART
+ * action in AlterSequence)
+ */
+ seq = (Form_pg_sequence_data) GETSTRUCT(tuple);
+ seq->last_value = last_value;
+ seq->is_called = is_called;
+ seq->log_cnt = log_cnt;
+
+ /*
+ * Create a new storage file for the sequence.
+ */
+ RelationSetNewRelfilenode(seq_rel, seq_rel->rd_rel->relpersistence);
+
+ /*
+ * Ensure sequence's relfrozenxid is at 0, since it won't contain any
+ * unfrozen XIDs. Same with relminmxid, since a sequence will never
+ * contain multixacts.
+ */
+ Assert(seq_rel->rd_rel->relfrozenxid == InvalidTransactionId);
+ Assert(seq_rel->rd_rel->relminmxid == InvalidMultiXactId);
+
+ /*
+ * Insert the modified tuple into the new storage file.
+ *
+ * XXX Maybe this should also use created=true, just like the other places
+ * calling fill_seq_with_data. That's probably needed for correct cascading
+ * replication.
+ *
+ * XXX That'd mean all fill_seq_with_data callers use created=true, making
+ * the parameter unnecessary.
+ */
+ fill_seq_with_data(seq_rel, tuple);
+
+ /* Clear local cache so that we don't think we have cached numbers */
+ /* Note that we do not change the currval() state */
+ elm->cached = elm->last;
+
+ relation_close(seq_rel, NoLock);
+}
+
/*
* Initialize a sequence's relation with the specified tuple as content
*/
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 3ef6607d246..5beb67e7652 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -85,6 +85,7 @@ typedef struct SubOpts
} SubOpts;
static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
static void check_duplicates_in_publist(List *publist, Datum *datums);
static List *merge_publications(List *oldpublist, List *newpublist, bool addpub, const char *subname);
static void ReportSlotConnectionError(List *rstates, Oid subid, char *slotname, char *err);
@@ -496,6 +497,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
char *err;
WalReceiverConn *wrconn;
List *tables;
+ List *sequences;
ListCell *lc;
char table_state;
@@ -534,6 +536,26 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
InvalidXLogRecPtr);
}
+ /*
+ * Get the sequence list from publisher and build local sequence
+ * status info.
+ */
+ sequences = fetch_sequence_list(wrconn, publications);
+ foreach(lc, sequences)
+ {
+ RangeVar *rv = (RangeVar *) lfirst(lc);
+ Oid relid;
+
+ relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+ /* Check for supported relkind. */
+ CheckSubscriptionRelkind(get_rel_relkind(relid),
+ rv->schemaname, rv->relname);
+
+ AddSubscriptionRelState(subid, relid, table_state,
+ InvalidXLogRecPtr);
+ }
+
/*
* If requested, create permanent slot for the subscription. We
* won't use the initial snapshot for anything, so no need to
@@ -706,6 +728,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data)
{
Oid relid = subrel_local_oids[off];
+ /* XXX ignore sequences - maybe do this in GetSubscriptionRelations? */
+ if (get_rel_relkind(relid) == RELKIND_SEQUENCE)
+ continue;
+
if (!bsearch(&relid, pubrel_local_oids,
list_length(pubrel_names), sizeof(Oid), oid_cmp))
{
@@ -797,6 +823,183 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data)
ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
}
}
+
+ /*
+ * XXX now do the same thing for sequences, maybe before the preceding
+ * block, or earlier?
+ */
+
+ /* Get the sequence list from publisher. */
+ pubrel_names = fetch_sequence_list(wrconn, sub->publications);
+
+ /* Get local table list. */
+ subrel_states = GetSubscriptionRelations(sub->oid);
+
+ /*
+ * Build qsorted array of local table oids for faster lookup. This can
+ * potentially contain all tables in the database so speed of lookup
+ * is important.
+ */
+ subrel_local_oids = palloc(list_length(subrel_states) * sizeof(Oid));
+ off = 0;
+ foreach(lc, subrel_states)
+ {
+ SubscriptionRelState *relstate = (SubscriptionRelState *) lfirst(lc);
+
+ subrel_local_oids[off++] = relstate->relid;
+ }
+ qsort(subrel_local_oids, list_length(subrel_states),
+ sizeof(Oid), oid_cmp);
+
+ /*
+ * Rels that we want to remove from subscription and drop any slots
+ * and origins corresponding to them.
+ */
+ sub_remove_rels = palloc(list_length(subrel_states) * sizeof(SubRemoveRels));
+
+ /*
+ * Walk over the remote sequences and try to match them to locally known
+ * sequences. If the sequence is not known locally create a new state
+ * for it.
+ *
+ * Also builds array of local oids of remote sequences for the next step.
+ */
+ off = 0;
+ pubrel_local_oids = palloc(list_length(pubrel_names) * sizeof(Oid));
+
+ foreach(lc, pubrel_names)
+ {
+ RangeVar *rv = (RangeVar *) lfirst(lc);
+ Oid relid;
+
+ relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+ /* Check for supported relkind. */
+ CheckSubscriptionRelkind(get_rel_relkind(relid),
+ rv->schemaname, rv->relname);
+
+ pubrel_local_oids[off++] = relid;
+
+ if (!bsearch(&relid, subrel_local_oids,
+ list_length(subrel_states), sizeof(Oid), oid_cmp))
+ {
+ AddSubscriptionRelState(sub->oid, relid,
+ copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
+ InvalidXLogRecPtr);
+ ereport(DEBUG1,
+ (errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
+ rv->schemaname, rv->relname, sub->name)));
+ }
+ }
+
+ /*
+ * Next remove state for tables we should not care about anymore using
+ * the data we collected above
+ */
+ qsort(pubrel_local_oids, list_length(pubrel_names),
+ sizeof(Oid), oid_cmp);
+
+ remove_rel_len = 0;
+ for (off = 0; off < list_length(subrel_states); off++)
+ {
+ Oid relid = subrel_local_oids[off];
+
+ /* XXX ignore non-sequences - maybe do this in GetSubscriptionRelations? */
+ if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+ continue;
+
+ if (!bsearch(&relid, pubrel_local_oids,
+ list_length(pubrel_names), sizeof(Oid), oid_cmp))
+ {
+ char state;
+ XLogRecPtr statelsn;
+
+ /*
+ * Lock pg_subscription_rel with AccessExclusiveLock to
+ * prevent any race conditions with the apply worker
+ * re-launching workers at the same time this code is trying
+ * to remove those tables.
+ *
+ * Even if new worker for this particular rel is restarted it
+ * won't be able to make any progress as we hold exclusive
+ * lock on subscription_rel till the transaction end. It will
+ * simply exit as there is no corresponding rel entry.
+ *
+ * This locking also ensures that the state of rels won't
+ * change till we are done with this refresh operation.
+ */
+ if (!rel)
+ rel = table_open(SubscriptionRelRelationId, AccessExclusiveLock);
+
+ /* Last known rel state. */
+ state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
+
+ sub_remove_rels[remove_rel_len].relid = relid;
+ sub_remove_rels[remove_rel_len++].state = state;
+
+ RemoveSubscriptionRel(sub->oid, relid);
+
+ logicalrep_worker_stop(sub->oid, relid);
+
+ /*
+ * For READY state, we would have already dropped the
+ * tablesync origin.
+ */
+ if (state != SUBREL_STATE_READY)
+ {
+ char originname[NAMEDATALEN];
+
+ /*
+ * Drop the tablesync's origin tracking if exists.
+ *
+ * It is possible that the origin is not yet created for
+ * tablesync worker, this can happen for the states before
+ * SUBREL_STATE_FINISHEDCOPY. The apply worker can also
+ * concurrently try to drop the origin and by this time
+ * the origin might be already removed. For these reasons,
+ * passing missing_ok = true.
+ */
+ ReplicationOriginNameForTablesync(sub->oid, relid, originname,
+ sizeof(originname));
+ replorigin_drop_by_name(originname, true, false);
+ }
+
+ ereport(DEBUG1,
+ (errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
+ get_namespace_name(get_rel_namespace(relid)),
+ get_rel_name(relid),
+ sub->name)));
+ }
+ }
+
+ /*
+ * Drop the tablesync slots associated with removed tables. This has
+ * to be at the end because otherwise if there is an error while doing
+ * the database operations we won't be able to rollback dropped slots.
+ */
+ for (off = 0; off < remove_rel_len; off++)
+ {
+ if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
+ sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+ {
+ char syncslotname[NAMEDATALEN] = {0};
+
+ /*
+ * For READY/SYNCDONE states we know the tablesync slot has
+ * already been dropped by the tablesync worker.
+ *
+ * For other states, there is no certainty, maybe the slot
+ * does not exist yet. Also, if we fail after removing some of
+ * the slots, next time, it will again try to drop already
+ * dropped slots and fail. For these reasons, we allow
+ * missing_ok = true for the drop.
+ */
+ ReplicationSlotNameForTablesync(sub->oid, sub_remove_rels[off].relid,
+ syncslotname, sizeof(syncslotname));
+ ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
+ }
+ }
+
}
PG_FINALLY();
{
@@ -1616,6 +1819,75 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
return tablelist;
}
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+ WalRcvExecResult *res;
+ StringInfoData cmd;
+ TupleTableSlot *slot;
+ Oid tableRow[2] = {TEXTOID, TEXTOID};
+ ListCell *lc;
+ bool first;
+ List *tablelist = NIL;
+
+ Assert(list_length(publications) > 0);
+
+ initStringInfo(&cmd);
+ appendStringInfoString(&cmd, "SELECT DISTINCT s.schemaname, s.sequencename\n"
+ " FROM pg_catalog.pg_publication_sequences s\n"
+ " WHERE s.pubname IN (");
+ first = true;
+ foreach(lc, publications)
+ {
+ char *pubname = strVal(lfirst(lc));
+
+ if (first)
+ first = false;
+ else
+ appendStringInfoString(&cmd, ", ");
+
+ appendStringInfoString(&cmd, quote_literal_cstr(pubname));
+ }
+ appendStringInfoChar(&cmd, ')');
+
+ res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+ pfree(cmd.data);
+
+ if (res->status != WALRCV_OK_TUPLES)
+ ereport(ERROR,
+ (errmsg("could not receive list of replicated tables from the publisher: %s",
+ res->err)));
+
+ /* Process sequences. */
+ slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+ while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+ {
+ char *nspname;
+ char *relname;
+ bool isnull;
+ RangeVar *rv;
+
+ nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+ Assert(!isnull);
+ relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+ Assert(!isnull);
+
+ rv = makeRangeVar(nspname, relname, -1);
+ tablelist = lappend(tablelist, rv);
+
+ ExecClearTuple(slot);
+ }
+ ExecDropSingleTupleTableSlot(slot);
+
+ walrcv_clear_result(res);
+
+ return tablelist;
+}
+
/*
* This is to report the connection failure while dropping replication slots.
* Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index 313c87398b2..78f14119d98 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -608,7 +608,7 @@ void
CheckSubscriptionRelkind(char relkind, const char *nspname,
const char *relname)
{
- if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+ if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE && relkind != RELKIND_SEQUENCE)
ereport(ERROR,
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 90b5da51c95..64d64d338d3 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -4850,6 +4850,7 @@ _copyCreatePublicationStmt(const CreatePublicationStmt *from)
COPY_NODE_FIELD(options);
COPY_NODE_FIELD(pubobjects);
COPY_SCALAR_FIELD(for_all_tables);
+ COPY_SCALAR_FIELD(for_all_sequences);
return newnode;
}
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index 06345da3ba8..0ad40ebe4a8 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -2336,6 +2336,7 @@ _equalAlterPublicationStmt(const AlterPublicationStmt *a,
COMPARE_NODE_FIELD(options);
COMPARE_NODE_FIELD(pubobjects);
COMPARE_SCALAR_FIELD(for_all_tables);
+ COMPARE_SCALAR_FIELD(for_all_sequences);
COMPARE_SCALAR_FIELD(action);
return true;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index b5966712ce1..d26528bd6ef 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -9760,6 +9760,26 @@ PublicationObjSpec:
$$->pubobjtype = PUBLICATIONOBJ_TABLES_IN_CUR_SCHEMA;
$$->location = @5;
}
+ | SEQUENCE relation_expr
+ {
+ $$ = makeNode(PublicationObjSpec);
+ $$->pubobjtype = PUBLICATIONOBJ_SEQUENCE;
+ $$->pubtable = makeNode(PublicationTable);
+ $$->pubtable->relation = $2;
+ }
+ | ALL SEQUENCES IN_P SCHEMA ColId
+ {
+ $$ = makeNode(PublicationObjSpec);
+ $$->pubobjtype = PUBLICATIONOBJ_SEQUENCES_IN_SCHEMA;
+ $$->name = $5;
+ $$->location = @5;
+ }
+ | ALL SEQUENCES IN_P SCHEMA CURRENT_SCHEMA
+ {
+ $$ = makeNode(PublicationObjSpec);
+ $$->pubobjtype = PUBLICATIONOBJ_SEQUENCES_IN_CUR_SCHEMA;
+ $$->location = @5;
+ }
| ColId
{
$$ = makeNode(PublicationObjSpec);
@@ -10095,6 +10115,12 @@ UnlistenStmt:
}
;
+/*
+ * FIXME
+ *
+ * opt_publication_for_sequences and publication_for_sequences should be
+ * copies for sequences
+ */
/*****************************************************************************
*
@@ -10103,6 +10129,12 @@ UnlistenStmt:
* BEGIN / COMMIT / ROLLBACK
* (also older versions END / ABORT)
*
+ * ALTER PUBLICATION name ADD SEQUENCE sequence [, sequence2]
+ *
+ * ALTER PUBLICATION name DROP SEQUENCE sequence [, sequence2]
+ *
+ * ALTER PUBLICATION name SET SEQUENCE sequence [, sequence2]
+ *
*****************************************************************************/
TransactionStmt:
diff --git a/src/backend/replication/logical/proto.c b/src/backend/replication/logical/proto.c
index 953942692ce..e8ead1387ae 100644
--- a/src/backend/replication/logical/proto.c
+++ b/src/backend/replication/logical/proto.c
@@ -647,6 +647,56 @@ logicalrep_write_message(StringInfo out, TransactionId xid, XLogRecPtr lsn,
pq_sendbytes(out, message, sz);
}
+/*
+ * Write SEQUENCE to stream
+ */
+void
+logicalrep_write_sequence(StringInfo out, Relation rel, TransactionId xid,
+ XLogRecPtr lsn, bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ uint8 flags = 0;
+ char *relname;
+
+ pq_sendbyte(out, LOGICAL_REP_MSG_SEQUENCE);
+
+ /* transaction ID (if not valid, we're not streaming) */
+ if (TransactionIdIsValid(xid))
+ pq_sendint32(out, xid);
+
+ pq_sendint8(out, flags);
+ pq_sendint64(out, lsn);
+
+ logicalrep_write_namespace(out, RelationGetNamespace(rel));
+ relname = RelationGetRelationName(rel);
+ pq_sendstring(out, relname);
+
+ pq_sendint8(out, transactional);
+ pq_sendint64(out, last_value);
+ pq_sendint64(out, log_cnt);
+ pq_sendint8(out, is_called);
+}
+
+/*
+ * Read SEQUENCE from the stream.
+ */
+void
+logicalrep_read_sequence(StringInfo in, LogicalRepSequence *seqdata)
+{
+ /* XXX skipping flags and lsn */
+ pq_getmsgint(in, 1);
+ pq_getmsgint64(in);
+
+ /* Read relation name from stream */
+ seqdata->nspname = pstrdup(logicalrep_read_namespace(in));
+ seqdata->seqname = pstrdup(pq_getmsgstring(in));
+
+ seqdata->transactional = pq_getmsgint(in, 1);
+ seqdata->last_value = pq_getmsgint64(in);
+ seqdata->log_cnt = pq_getmsgint64(in);
+ seqdata->is_called = pq_getmsgint(in, 1);
+}
+
/*
* Write relation description to the output stream.
*/
@@ -1203,6 +1253,8 @@ logicalrep_message_type(LogicalRepMsgType action)
return "STREAM ABORT";
case LOGICAL_REP_MSG_STREAM_PREPARE:
return "STREAM PREPARE";
+ case LOGICAL_REP_MSG_SEQUENCE:
+ return "SEQUENCE";
}
elog(ERROR, "invalid logical replication message type \"%c\"", action);
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index e596b69d466..c85867ee46b 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -100,6 +100,7 @@
#include "catalog/pg_subscription_rel.h"
#include "catalog/pg_type.h"
#include "commands/copy.h"
+#include "commands/sequence.h"
#include "miscadmin.h"
#include "parser/parse_relation.h"
#include "pgstat.h"
@@ -359,6 +360,12 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
*
* If the synchronization position is reached (SYNCDONE), then the table can
* be marked as READY and is no longer tracked.
+ *
+ * XXX This needs to handle sequences too - after AlterSubscription_refresh
+ * starts caring about sequences, GetSubscriptionNotReadyRelations won't
+ * return just tables, and we'll have to sync them here. Not sure it's worth
+ * creating a new "sync" worker per sequence, maybe we should just sync them
+ * in the current process (it's pretty light-weight).
*/
static void
process_syncing_tables_for_apply(XLogRecPtr current_lsn)
@@ -873,6 +880,99 @@ copy_table(Relation rel)
logicalrep_rel_close(relmapentry, NoLock);
}
+
+
+/*
+ * Fetch the current state (last_value, log_cnt, is_called) of a sequence from the publisher.
+ */
+static void
+fetch_sequence_data(char *nspname, char *relname,
+ int64 *last_value, int64 *log_cnt, bool *is_called)
+{
+ WalRcvExecResult *res;
+ StringInfoData cmd;
+ TupleTableSlot *slot;
+ Oid tableRow[3] = {INT8OID, INT8OID, BOOLOID};
+
+ initStringInfo(&cmd);
+ appendStringInfo(&cmd, "SELECT last_value, log_cnt, is_called\n"
+ " FROM %s", quote_qualified_identifier(nspname, relname));
+
+ res = walrcv_exec(LogRepWorkerWalRcvConn, cmd.data, 3, tableRow);
+ pfree(cmd.data);
+
+ if (res->status != WALRCV_OK_TUPLES)
+ ereport(ERROR,
+ (errmsg("could not receive list of replicated tables from the publisher: %s",
+ res->err)));
+
+ /* Process the sequence. */
+ slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+ while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+ {
+ bool isnull;
+
+ *last_value = DatumGetInt64(slot_getattr(slot, 1, &isnull));
+ Assert(!isnull);
+
+ *log_cnt = DatumGetInt64(slot_getattr(slot, 2, &isnull));
+ Assert(!isnull);
+
+ *is_called = DatumGetBool(slot_getattr(slot, 3, &isnull));
+ Assert(!isnull);
+
+ ExecClearTuple(slot);
+ }
+ ExecDropSingleTupleTableSlot(slot);
+
+ walrcv_clear_result(res);
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Caller is responsible for locking the local relation.
+ */
+static void
+copy_sequence(Relation rel)
+{
+ LogicalRepRelMapEntry *relmapentry;
+ LogicalRepRelation lrel;
+ StringInfoData cmd;
+ int64 last_value = 0,
+ log_cnt = 0;
+ bool is_called = 0;
+
+ /* Get the publisher relation info. */
+ fetch_remote_table_info(get_namespace_name(RelationGetNamespace(rel)),
+ RelationGetRelationName(rel), &lrel);
+
+ /* Put the relation into relmap. */
+ logicalrep_relmap_update(&lrel);
+
+ /* Map the publisher relation to local one. */
+ relmapentry = logicalrep_rel_open(lrel.remoteid, NoLock);
+ Assert(rel == relmapentry->localrel);
+
+ /* Start copy on the publisher. */
+ initStringInfo(&cmd);
+
+ Assert(lrel.relkind == RELKIND_SEQUENCE);
+
+ fetch_sequence_data(lrel.nspname, lrel.relname, &last_value, &log_cnt, &is_called);
+
+ elog(WARNING, "sequence %s info last_value %ld log_cnt %ld is_called %d",
+ quote_qualified_identifier(lrel.nspname, lrel.relname),
+ last_value, log_cnt, is_called);
+
+ ResetSequence2(RelationGetRelid(rel), last_value, log_cnt, is_called);
+
+ logicalrep_rel_close(relmapentry, NoLock);
+}
+
+
+
+
/*
* Determine the tablesync slot name.
*
@@ -1134,10 +1234,20 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
originname)));
}
- /* Now do the initial data copy */
- PushActiveSnapshot(GetTransactionSnapshot());
- copy_table(rel);
- PopActiveSnapshot();
+ if (get_rel_relkind(RelationGetRelid(rel)) == RELKIND_SEQUENCE)
+ {
+ /* Now do the initial sequence copy */
+ PushActiveSnapshot(GetTransactionSnapshot());
+ copy_sequence(rel);
+ PopActiveSnapshot();
+ }
+ else
+ {
+ /* Now do the initial data copy */
+ PushActiveSnapshot(GetTransactionSnapshot());
+ copy_table(rel);
+ PopActiveSnapshot();
+ }
res = walrcv_exec(LogRepWorkerWalRcvConn, "COMMIT", 0, NULL);
if (res->status != WALRCV_OK_COMMAND)
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index d77bb32bb9e..68708d3907f 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -144,6 +144,7 @@
#include "catalog/pg_tablespace.h"
#include "commands/tablecmds.h"
#include "commands/tablespace.h"
+#include "commands/sequence.h"
#include "commands/trigger.h"
#include "executor/executor.h"
#include "executor/execPartition.h"
@@ -1093,6 +1094,61 @@ apply_handle_origin(StringInfo s)
errmsg_internal("ORIGIN message sent out of order")));
}
+/*
+ * Handle SEQUENCE message.
+ */
+static void
+apply_handle_sequence(StringInfo s)
+{
+ LogicalRepSequence seq;
+ Oid relid;
+
+ if (handle_streamed_transaction(LOGICAL_REP_MSG_SEQUENCE, s))
+ return;
+
+ logicalrep_read_sequence(s, &seq);
+
+ /*
+ * Non-transactional sequence updates should not be part of a remote
+ * transaction. There should not be any running transaction.
+ */
+ Assert((!seq.transactional) || in_remote_transaction);
+ Assert(!(!seq.transactional && in_remote_transaction));
+ Assert(!(!seq.transactional && IsTransactionState()));
+
+ /*
+ * Make sure we're in a transaction (needed by ResetSequence2). For
+ * non-transactional updates we're guaranteed to start a new one,
+ * and we'll commit it at the end.
+ */
+ if (!IsTransactionState())
+ {
+ StartTransactionCommand();
+ maybe_reread_subscription();
+ }
+
+ relid = RangeVarGetRelid(makeRangeVar(seq.nspname,
+ seq.seqname, -1),
+ RowExclusiveLock, false);
+
+ /* lock the sequence in AccessExclusiveLock, as expected by ResetSequence2 */
+ elog(WARNING, "locking sequence %d in exclusive mode", relid);
+ LockRelationOid(relid, AccessExclusiveLock);
+
+ elog(WARNING, "applying sequence %s.%s transactional %d last_value %ld log_cnt %ld is_called %d",
+ seq.nspname, seq.seqname, seq.transactional, seq.last_value, seq.log_cnt, seq.is_called);
+
+ /* apply the sequence change */
+ ResetSequence2(relid, seq.last_value, seq.log_cnt, seq.is_called);
+
+ /*
+ * Commit the transaction we started above (we only do this when not in a
+ * remote transaction, i.e. for non-transactional sequence updates).
+ */
+ if (!in_remote_transaction)
+ CommitTransactionCommand();
+}
+
/*
* Handle STREAM START message.
*/
@@ -2421,6 +2477,10 @@ apply_dispatch(StringInfo s)
*/
break;
+ case LOGICAL_REP_MSG_SEQUENCE:
+ apply_handle_sequence(s);
+ return;
+
case LOGICAL_REP_MSG_STREAM_START:
apply_handle_stream_start(s);
break;
diff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c
index af8d51aee99..03a41d7ce07 100644
--- a/src/backend/replication/pgoutput/pgoutput.c
+++ b/src/backend/replication/pgoutput/pgoutput.c
@@ -49,6 +49,10 @@ static void pgoutput_message(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn, XLogRecPtr message_lsn,
bool transactional, const char *prefix,
Size sz, const char *message);
+static void pgoutput_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation rel, bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called);
static bool pgoutput_origin_filter(LogicalDecodingContext *ctx,
RepOriginId origin_id);
static void pgoutput_begin_prepare_txn(LogicalDecodingContext *ctx,
@@ -159,6 +163,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->change_cb = pgoutput_change;
cb->truncate_cb = pgoutput_truncate;
cb->message_cb = pgoutput_message;
+ cb->sequence_cb = pgoutput_sequence;
cb->commit_cb = pgoutput_commit_txn;
cb->begin_prepare_cb = pgoutput_begin_prepare_txn;
@@ -175,6 +180,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->stream_commit_cb = pgoutput_stream_commit;
cb->stream_change_cb = pgoutput_change;
cb->stream_message_cb = pgoutput_message;
+ cb->stream_sequence_cb = pgoutput_sequence;
cb->stream_truncate_cb = pgoutput_truncate;
/* transaction streaming - two-phase commit */
cb->stream_prepare_cb = pgoutput_stream_prepare_txn;
@@ -188,6 +194,7 @@ parse_output_parameters(List *options, PGOutputData *data)
bool publication_names_given = false;
bool binary_option_given = false;
bool messages_option_given = false;
+ bool sequences_option_given = false;
bool streaming_given = false;
bool two_phase_option_given = false;
@@ -195,6 +202,7 @@ parse_output_parameters(List *options, PGOutputData *data)
data->streaming = false;
data->messages = false;
data->two_phase = false;
+ data->sequences = true;
foreach(lc, options)
{
@@ -260,6 +268,16 @@ parse_output_parameters(List *options, PGOutputData *data)
data->messages = defGetBoolean(defel);
}
+ else if (strcmp(defel->defname, "sequences") == 0)
+ {
+ if (sequences_option_given)
+ ereport(ERROR,
+ (errcode(ERRCODE_SYNTAX_ERROR),
+ errmsg("conflicting or redundant options")));
+ sequences_option_given = true;
+
+ data->sequences = defGetBoolean(defel);
+ }
else if (strcmp(defel->defname, "streaming") == 0)
{
if (streaming_given)
@@ -856,6 +874,51 @@ pgoutput_message(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
OutputPluginWrite(ctx, true);
}
+static void
+pgoutput_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation rel, bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ PGOutputData *data = (PGOutputData *) ctx->output_plugin_private;
+ TransactionId xid = InvalidTransactionId;
+ RelationSyncEntry *relentry;
+
+ if (!data->sequences)
+ return;
+
+ if (!is_publishable_relation(rel))
+ return;
+
+ /*
+ * Remember the xid for the message in streaming mode. See
+ * pgoutput_change.
+ */
+ if (in_streaming)
+ xid = txn->xid;
+
+ relentry = get_rel_sync_entry(data, RelationGetRelid(rel));
+
+ /*
+ * First check the sequence filter.
+ *
+ * We handle just REORDER_BUFFER_CHANGE_SEQUENCE here.
+ */
+ if (!relentry->pubactions.pubsequence)
+ return;
+
+ OutputPluginPrepareWrite(ctx, true);
+ logicalrep_write_sequence(ctx->out,
+ rel,
+ xid,
+ sequence_lsn,
+ transactional,
+ last_value,
+ log_cnt,
+ is_called);
+ OutputPluginWrite(ctx, true);
+}
+
/*
* Currently we always forward.
*/
@@ -1138,7 +1201,8 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
entry->streamed_txns = NIL;
entry->replicate_valid = false;
entry->pubactions.pubinsert = entry->pubactions.pubupdate =
- entry->pubactions.pubdelete = entry->pubactions.pubtruncate = false;
+ entry->pubactions.pubdelete = entry->pubactions.pubtruncate =
+ entry->pubactions.pubsequence = false;
entry->publish_as_relid = InvalidOid;
entry->map = NULL; /* will be set by maybe_send_schema() if
* needed */
@@ -1160,6 +1224,7 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
Oid publish_as_relid = relid;
bool am_partition = get_rel_relispartition(relid);
char relkind = get_rel_relkind(relid);
+ bool is_sequence = (get_rel_relkind(relid) == RELKIND_SEQUENCE);
/* Reload publications if needed before use. */
if (!publications_valid)
@@ -1183,12 +1248,23 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
Publication *pub = lfirst(lc);
bool publish = false;
- if (pub->alltables)
+ if (pub->alltables && (!is_sequence))
{
publish = true;
if (pub->pubviaroot && am_partition)
publish_as_relid = llast_oid(get_partition_ancestors(relid));
}
+ else if (pub->allsequences && is_sequence)
+ {
+ publish = true;
+ }
+
+ /* if a sequence, just cross-check the list of publications */
+ if (!publish && is_sequence)
+ {
+ if (list_member_oid(pubids, pub->oid))
+ publish = true;
+ }
if (!publish)
{
@@ -1243,10 +1319,12 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
entry->pubactions.pubupdate |= pub->pubactions.pubupdate;
entry->pubactions.pubdelete |= pub->pubactions.pubdelete;
entry->pubactions.pubtruncate |= pub->pubactions.pubtruncate;
+ entry->pubactions.pubsequence |= pub->pubactions.pubsequence;
}
if (entry->pubactions.pubinsert && entry->pubactions.pubupdate &&
- entry->pubactions.pubdelete && entry->pubactions.pubtruncate)
+ entry->pubactions.pubdelete && entry->pubactions.pubtruncate &&
+ entry->pubactions.pubsequence)
break;
}
@@ -1391,6 +1469,7 @@ rel_sync_cache_publication_cb(Datum arg, int cacheid, uint32 hashvalue)
entry->pubactions.pubupdate = false;
entry->pubactions.pubdelete = false;
entry->pubactions.pubtruncate = false;
+ entry->pubactions.pubsequence = false;
}
}
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index 2e760e8a3bd..28fb78eb690 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -5585,6 +5585,7 @@ GetRelationPublicationActions(Relation relation)
pubactions->pubupdate |= pubform->pubupdate;
pubactions->pubdelete |= pubform->pubdelete;
pubactions->pubtruncate |= pubform->pubtruncate;
+ pubactions->pubsequence |= pubform->pubsequence;
ReleaseSysCache(tup);
@@ -5593,7 +5594,8 @@ GetRelationPublicationActions(Relation relation)
* other publications.
*/
if (pubactions->pubinsert && pubactions->pubupdate &&
- pubactions->pubdelete && pubactions->pubtruncate)
+ pubactions->pubdelete && pubactions->pubtruncate &&
+ pubactions->pubsequence)
break;
}
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 4c62e7b1b41..110c472cf72 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1691,13 +1691,13 @@ psql_completion(const char *text, int start, int end)
COMPLETE_WITH("ADD", "DROP", "OWNER TO", "RENAME TO", "SET");
/* ALTER PUBLICATION <name> ADD */
else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD"))
- COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE");
+ COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE", "SEQUENCE");
/* ALTER PUBLICATION <name> DROP */
else if (Matches("ALTER", "PUBLICATION", MatchAny, "DROP"))
- COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE");
+ COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE", "SEQUENCE");
/* ALTER PUBLICATION <name> SET */
else if (Matches("ALTER", "PUBLICATION", MatchAny, "SET"))
- COMPLETE_WITH("(", "ALL TABLES IN SCHEMA", "TABLE");
+ COMPLETE_WITH("(", "ALL TABLES IN SCHEMA", "TABLE", "SEQUENCE");
else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD|DROP|SET", "ALL", "TABLES", "IN", "SCHEMA"))
COMPLETE_WITH_QUERY(Query_for_list_of_schemas
" AND nspname != 'pg_catalog' "
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 0859dc81cac..ea86c6e3933 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -11529,6 +11529,11 @@
provolatile => 's', prorettype => 'oid', proargtypes => 'text',
proallargtypes => '{text,oid}', proargmodes => '{i,o}',
proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_tables' },
+{ oid => '8000', descr => 'get OIDs of sequences in a publication',
+ proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+ provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+ proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+ proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
{ oid => '6121',
descr => 'returns whether a relation can be part of a publication',
proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 841b9b6c253..e56286772f4 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
*/
bool puballtables;
+ /*
+ * indicates that this is a special publication which should encompass all
+ * sequences in the database (except for the unlogged and temp ones)
+ */
+ bool puballsequences;
+
/* true if inserts are published */
bool pubinsert;
@@ -52,6 +58,9 @@ CATALOG(pg_publication,6104,PublicationRelationId)
/* true if truncates are published */
bool pubtruncate;
+ /* true if sequences are published */
+ bool pubsequence;
+
/* true if partition changes are published using root schema */
bool pubviaroot;
} FormData_pg_publication;
@@ -72,6 +81,7 @@ typedef struct PublicationActions
bool pubupdate;
bool pubdelete;
bool pubtruncate;
+ bool pubsequence;
} PublicationActions;
typedef struct Publication
@@ -79,6 +89,7 @@ typedef struct Publication
Oid oid;
char *name;
bool alltables;
+ bool allsequences;
bool pubviaroot;
PublicationActions pubactions;
} Publication;
@@ -121,6 +132,9 @@ extern List *GetPubPartitionOptionRelations(List *result,
PublicationPartOpt pub_partopt,
Oid relid);
+extern List *GetAllSequencesPublicationRelations(void);
+extern List *GetPublicationSequenceRelations(Oid pubid);
+
extern bool is_publishable_relation(Relation rel);
extern bool is_schema_publication(Oid pubid);
extern ObjectAddress publication_add_relation(Oid pubid, PublicationRelInfo *targetrel,
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index b9015a6f710..b7ea1dae9c4 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -60,6 +60,7 @@ extern ObjectAddress DefineSequence(ParseState *pstate, CreateSeqStmt *stmt);
extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
extern void DeleteSequenceTuple(Oid relid);
extern void ResetSequence(Oid seq_relid);
+extern void ResetSequence2(Oid seq_relid, int64 last_value, int64 log_cnt, bool is_called);
extern void ResetSequenceCaches(void);
extern void seq_redo(XLogReaderState *rptr);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 3e9bdc781f9..6dcb01b036a 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -3654,6 +3654,10 @@ typedef enum PublicationObjSpecType
PUBLICATIONOBJ_TABLES_IN_SCHEMA, /* All tables in schema */
PUBLICATIONOBJ_TABLES_IN_CUR_SCHEMA, /* All tables in first element of
* search_path */
+ PUBLICATIONOBJ_SEQUENCE, /* A sequence */
+ PUBLICATIONOBJ_SEQUENCES_IN_SCHEMA, /* All sequences in schema */
+ PUBLICATIONOBJ_SEQUENCES_IN_CUR_SCHEMA, /* All sequences in first element
+ * of search_path */
PUBLICATIONOBJ_CONTINUATION /* Continuation of previous type */
} PublicationObjSpecType;
@@ -3673,6 +3677,7 @@ typedef struct CreatePublicationStmt
List *options; /* List of DefElem nodes */
List *pubobjects; /* Optional list of publication objects */
bool for_all_tables; /* Special publication for all tables in db */
+ bool for_all_sequences; /* Special publication for all sequences in db */
} CreatePublicationStmt;
typedef enum AlterPublicationAction
@@ -3696,6 +3701,7 @@ typedef struct AlterPublicationStmt
*/
List *pubobjects; /* Optional list of publication objects */
bool for_all_tables; /* Special publication for all tables in db */
+ bool for_all_sequences; /* Special publication for all sequences in db */
AlterPublicationAction action; /* What action to perform with the given
* objects */
} AlterPublicationStmt;
diff --git a/src/include/replication/logicalproto.h b/src/include/replication/logicalproto.h
index 22fffaca62d..8f8c325522d 100644
--- a/src/include/replication/logicalproto.h
+++ b/src/include/replication/logicalproto.h
@@ -60,6 +60,7 @@ typedef enum LogicalRepMsgType
LOGICAL_REP_MSG_RELATION = 'R',
LOGICAL_REP_MSG_TYPE = 'Y',
LOGICAL_REP_MSG_MESSAGE = 'M',
+ LOGICAL_REP_MSG_SEQUENCE = 'X', /* FIXME change */
LOGICAL_REP_MSG_BEGIN_PREPARE = 'b',
LOGICAL_REP_MSG_PREPARE = 'P',
LOGICAL_REP_MSG_COMMIT_PREPARED = 'K',
@@ -117,6 +118,18 @@ typedef struct LogicalRepTyp
char *typname; /* name of the remote type */
} LogicalRepTyp;
+/* Sequence info */
+typedef struct LogicalRepSequence
+{
+ Oid remoteid; /* unique id of the remote sequence */
+ char *nspname; /* schema name of remote sequence */
+ char *seqname; /* name of the remote sequence */
+ bool transactional;
+ int64 last_value;
+ int64 log_cnt;
+ bool is_called;
+} LogicalRepSequence;
+
/* Transaction info */
typedef struct LogicalRepBeginData
{
@@ -227,6 +240,12 @@ extern List *logicalrep_read_truncate(StringInfo in,
bool *cascade, bool *restart_seqs);
extern void logicalrep_write_message(StringInfo out, TransactionId xid, XLogRecPtr lsn,
bool transactional, const char *prefix, Size sz, const char *message);
+extern void logicalrep_write_sequence(StringInfo out, Relation rel,
+ TransactionId xid, XLogRecPtr lsn,
+ bool transactional,
+ int64 last_value, int64 log_cnt,
+ bool is_called);
+extern void logicalrep_read_sequence(StringInfo in, LogicalRepSequence *seqdata);
extern void logicalrep_write_rel(StringInfo out, TransactionId xid,
Relation rel);
extern LogicalRepRelation *logicalrep_read_rel(StringInfo in);
diff --git a/src/include/replication/pgoutput.h b/src/include/replication/pgoutput.h
index 78aa9151ef5..a6f6843ada6 100644
--- a/src/include/replication/pgoutput.h
+++ b/src/include/replication/pgoutput.h
@@ -28,6 +28,7 @@ typedef struct PGOutputData
bool streaming;
bool messages;
bool two_phase;
+ bool sequences;
} PGOutputData;
#endif /* PGOUTPUT_H */
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index d652f7b5fb4..dba7fadd038 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1451,6 +1451,14 @@ pg_prepared_xacts| SELECT p.transaction,
FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+ n.nspname AS schemaname,
+ c.relname AS sequencename
+ FROM pg_publication p,
+ LATERAL pg_get_publication_sequences((p.pubname)::text) gpt(relid),
+ (pg_class c
+ JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+ WHERE (c.oid = gpt.relid);
pg_publication_tables| SELECT p.pubname,
n.nspname AS schemaname,
c.relname AS tablename
diff --git a/src/test/subscription/t/027_sequences.pl b/src/test/subscription/t/027_sequences.pl
new file mode 100644
index 00000000000..fd941619071
--- /dev/null
+++ b/src/test/subscription/t/027_sequences.pl
@@ -0,0 +1,196 @@
+
+# Copyright (c) 2021, PostgreSQL Global Development Group
+
+# This tests that sequences are replicated correctly by logical replication
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More tests => 6;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->start;
+
+# Create subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Create some preexisting content on publisher
+my $ddl = qq(
+ CREATE SEQUENCE s;
+);
+
+# Setup structure on the publisher
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Create the same structure on the subscriber, and an extra sequence that
+# we'll create on the publisher later
+$ddl = qq(
+ CREATE SEQUENCE s;
+ CREATE SEQUENCE s2;
+);
+
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Setup logical replication
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+ "CREATE PUBLICATION seq_pub");
+
+$node_publisher->safe_psql('postgres',
+ "ALTER PUBLICATION seq_pub ADD SEQUENCE s");
+
+$node_subscriber->safe_psql('postgres',
+ "CREATE SUBSCRIPTION seq_sub CONNECTION '$publisher_connstr' PUBLICATION seq_pub WITH (slot_name = seq_sub_slot)"
+);
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Wait for initial sync to finish as well
+my $synced_query =
+ "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('s', 'r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+ or die "Timed out while waiting for subscriber to synchronize data";
+
+# Insert initial test data
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ -- generate a number of values using the sequence
+ SELECT nextval('s') FROM generate_series(1,100);
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+my $result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s;
+));
+
+is( $result, '132|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+# advance the sequence in a rolled-back transaction - should not be replicated
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ SELECT nextval('s') FROM generate_series(1,100);
+ ROLLBACK;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s;
+));
+
+is( $result, '231|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+# create a new sequence and roll it back - should not be replicated, due to
+# the transactional behavior
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ CREATE SEQUENCE s2;
+ ALTER PUBLICATION seq_pub ADD SEQUENCE s2;
+ SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '1|0|f',
+ 'check replicated sequence values on subscriber');
+
+
+# create a new sequence, advance it in a rolled-back subtransaction, but commit
+# the create - the advance should be replicated nevertheless
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ CREATE SEQUENCE s2;
+ ALTER PUBLICATION seq_pub ADD SEQUENCE s2;
+ SAVEPOINT sp1;
+ SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK TO sp1;
+ COMMIT;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Wait for sync of the second sequence we just added to finish
+$synced_query =
+ "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('s', 'r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+ or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '132|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+# advance the new sequence in a transaction, and roll it back - in this case
+# it should not be replicated at commit
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '132|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+# advance the sequence in a subtransaction - the subtransaction gets rolled
+# back, but commit the main one - the changes should still be replicated
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ SAVEPOINT s1;
+ SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK TO s1;
+ COMMIT;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '330|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+$node_subscriber->stop('fast');
+$node_publisher->stop('fast');
--
2.31.1
I've polished & pushed the first part adding sequence decoding
infrastructure etc. Attached are the two remaining parts.
I plan to wait a day or two and then push the test_decoding part. The
last part (for built-in replication) will need more work and maybe
rethinking the grammar etc.
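For anyone wanting to poke at the attached parts, here's a minimal sketch of
how they are exercised, pieced together from the SQL used in the patches (the
'include-sequences' option comes from the test_decoding patch and the ADD
SEQUENCE syntax from the built-in replication one, so treat the exact spelling
as tentative; the connection string is just a placeholder):

  -- test_decoding: decode sequence increments from a logical slot
  SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'test_decoding');
  CREATE SEQUENCE s;
  SELECT nextval('s');
  SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL,
         'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '1');
  SELECT pg_drop_replication_slot('regression_slot');

  -- built-in replication: publish a sequence (as in the TAP test)
  CREATE PUBLICATION seq_pub;
  ALTER PUBLICATION seq_pub ADD SEQUENCE s;
  -- on the subscriber, with a matching sequence already created:
  CREATE SUBSCRIPTION seq_sub CONNECTION '...' PUBLICATION seq_pub;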
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Attachments:
0001-Add-support-for-decoding-sequences-to-test_-20220210.patch (text/x-patch)
From e22abd743b923f8b590948db0eac3e296d1b7b04 Mon Sep 17 00:00:00 2001
From: Tomas Vondra <tomas.vondra@postgresql.org>
Date: Fri, 24 Sep 2021 00:42:04 +0200
Subject: [PATCH 1/2] Add support for decoding sequences to test_decoding
---
contrib/test_decoding/Makefile | 3 +-
contrib/test_decoding/expected/ddl.out | 12 +-
.../expected/decoding_in_xact.out | 2 +-
.../expected/decoding_into_rel.out | 10 +-
contrib/test_decoding/expected/mxact.out | 8 +-
.../test_decoding/expected/ondisk_startup.out | 4 +-
contrib/test_decoding/expected/replorigin.out | 4 +-
contrib/test_decoding/expected/rewrite.out | 4 +-
contrib/test_decoding/expected/sequence.out | 327 ++++++++++++++++++
contrib/test_decoding/expected/slot.out | 2 +-
contrib/test_decoding/expected/toast.out | 10 +-
contrib/test_decoding/expected/truncate.out | 2 +-
contrib/test_decoding/specs/mxact.spec | 2 +-
.../test_decoding/specs/ondisk_startup.spec | 2 +-
contrib/test_decoding/sql/ddl.sql | 12 +-
.../test_decoding/sql/decoding_in_xact.sql | 2 +-
.../test_decoding/sql/decoding_into_rel.sql | 10 +-
contrib/test_decoding/sql/replorigin.sql | 4 +-
contrib/test_decoding/sql/rewrite.sql | 4 +-
contrib/test_decoding/sql/sequence.sql | 119 +++++++
contrib/test_decoding/sql/slot.sql | 2 +-
contrib/test_decoding/sql/toast.sql | 10 +-
contrib/test_decoding/sql/truncate.sql | 2 +-
contrib/test_decoding/test_decoding.c | 67 ++++
24 files changed, 569 insertions(+), 55 deletions(-)
create mode 100644 contrib/test_decoding/expected/sequence.out
create mode 100644 contrib/test_decoding/sql/sequence.sql
diff --git a/contrib/test_decoding/Makefile b/contrib/test_decoding/Makefile
index 9a31e0b8795..56ddc3abaeb 100644
--- a/contrib/test_decoding/Makefile
+++ b/contrib/test_decoding/Makefile
@@ -5,7 +5,8 @@ PGFILEDESC = "test_decoding - example of a logical decoding output plugin"
REGRESS = ddl xact rewrite toast permissions decoding_in_xact \
decoding_into_rel binary prepared replorigin time messages \
- spill slot truncate stream stats twophase twophase_stream
+ spill slot truncate stream stats twophase twophase_stream \
+ sequence
ISOLATION = mxact delayed_startup ondisk_startup concurrent_ddl_dml \
oldest_xmin snapshot_transfer subxact_without_top concurrent_stream \
twophase_snapshot
diff --git a/contrib/test_decoding/expected/ddl.out b/contrib/test_decoding/expected/ddl.out
index 4ff0044c787..82898201ca0 100644
--- a/contrib/test_decoding/expected/ddl.out
+++ b/contrib/test_decoding/expected/ddl.out
@@ -89,7 +89,7 @@ COMMIT;
ALTER TABLE replication_example RENAME COLUMN text TO somenum;
INSERT INTO replication_example(somedata, somenum) VALUES (4, 1);
-- collect all changes
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
data
---------------------------------------------------------------------------------------------------------------------------
BEGIN
@@ -196,7 +196,7 @@ CREATE TABLE tr_unique(id2 serial unique NOT NULL, data int);
INSERT INTO tr_unique(data) VALUES(10);
ALTER TABLE tr_unique RENAME TO tr_pkey;
ALTER TABLE tr_pkey ADD COLUMN id serial primary key;
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-rewrites', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-rewrites', '1', 'include-sequences', '0');
data
-----------------------------------------------------------------------------
BEGIN
@@ -242,7 +242,7 @@ INSERT INTO tr_oddlength VALUES('ab', 'foo');
COMMIT;
/* display results, but hide most of the output */
SELECT count(*), min(data), max(data)
-FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1')
+FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0')
GROUP BY substring(data, 1, 24)
ORDER BY 1,2;
count | min | max
@@ -309,7 +309,7 @@ RELEASE SAVEPOINT c;
INSERT INTO tr_sub(path) VALUES ('1-top-2-#1');
RELEASE SAVEPOINT b;
COMMIT;
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
data
----------------------------------------------------------------------
BEGIN
@@ -496,7 +496,7 @@ Options: user_catalog_table=false
INSERT INTO replication_metadata(relation, options)
VALUES ('zaphod', NULL);
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
data
------------------------------------------------------------------------------------------------------------------------------------
BEGIN
@@ -613,7 +613,7 @@ INSERT INTO toasttable(toasted_col2) SELECT repeat(string_agg(to_char(g.i, 'FM00
UPDATE toasttable
SET toasted_col1 = (SELECT string_agg(g.i::text, '') FROM generate_series(1, 2000) g(i))
WHERE id = 1;
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
data
 -------------------------------------------------------------------------- ... (separator and data lines for the toasted rows omitted; the column is far too wide to quote here)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
BEGIN
diff --git a/contrib/test_decoding/expected/decoding_in_xact.out b/contrib/test_decoding/expected/decoding_in_xact.out
index b65253f4630..6e97b6e34bc 100644
--- a/contrib/test_decoding/expected/decoding_in_xact.out
+++ b/contrib/test_decoding/expected/decoding_in_xact.out
@@ -58,7 +58,7 @@ SELECT pg_current_xact_id() = '0';
-- don't show yet, haven't committed
INSERT INTO nobarf(data) VALUES('2');
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
data
-----------------------------------------------------------
BEGIN
diff --git a/contrib/test_decoding/expected/decoding_into_rel.out b/contrib/test_decoding/expected/decoding_into_rel.out
index 8fd3390066d..03966b8b1ca 100644
--- a/contrib/test_decoding/expected/decoding_into_rel.out
+++ b/contrib/test_decoding/expected/decoding_into_rel.out
@@ -19,7 +19,7 @@ SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'inc
CREATE TABLE somechange(id serial primary key);
INSERT INTO somechange DEFAULT VALUES;
CREATE TABLE changeresult AS
- SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+ SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
SELECT * FROM changeresult;
data
------------------------------------------------
@@ -29,9 +29,9 @@ SELECT * FROM changeresult;
(3 rows)
INSERT INTO changeresult
- SELECT data FROM pg_logical_slot_peek_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+ SELECT data FROM pg_logical_slot_peek_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
INSERT INTO changeresult
- SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+ SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
SELECT * FROM changeresult;
data
--------------------------------------------------------------------------------------------------------------------------------------------------
@@ -63,7 +63,7 @@ DROP TABLE somechange;
CREATE FUNCTION slot_changes_wrapper(slot_name name) RETURNS SETOF TEXT AS $$
BEGIN
RETURN QUERY
- SELECT data FROM pg_logical_slot_peek_changes(slot_name, NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+ SELECT data FROM pg_logical_slot_peek_changes(slot_name, NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
END$$ LANGUAGE plpgsql;
SELECT * FROM slot_changes_wrapper('regression_slot');
slot_changes_wrapper
@@ -84,7 +84,7 @@ SELECT * FROM slot_changes_wrapper('regression_slot');
COMMIT
(14 rows)
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
data
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
BEGIN
diff --git a/contrib/test_decoding/expected/mxact.out b/contrib/test_decoding/expected/mxact.out
index 03ad3df0999..c5bc26c8044 100644
--- a/contrib/test_decoding/expected/mxact.out
+++ b/contrib/test_decoding/expected/mxact.out
@@ -7,7 +7,7 @@ step s0init: SELECT 'init' FROM pg_create_logical_replication_slot('isolation_sl
init
(1 row)
-step s0start: SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false');
+step s0start: SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false', 'include-sequences', 'false');
data
----
(0 rows)
@@ -27,7 +27,7 @@ t
(1 row)
step s0w: INSERT INTO do_write DEFAULT VALUES;
-step s0start: SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false');
+step s0start: SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false', 'include-sequences', 'false');
data
--------------------------------------------
BEGIN
@@ -50,7 +50,7 @@ step s0init: SELECT 'init' FROM pg_create_logical_replication_slot('isolation_sl
init
(1 row)
-step s0start: SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false');
+step s0start: SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false', 'include-sequences', 'false');
data
----
(0 rows)
@@ -71,7 +71,7 @@ t
step s0alter: ALTER TABLE do_write ADD column ts timestamptz;
step s0w: INSERT INTO do_write DEFAULT VALUES;
-step s0start: SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false');
+step s0start: SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false', 'include-sequences', 'false');
data
------------------------------------------------------------------------------
BEGIN
diff --git a/contrib/test_decoding/expected/ondisk_startup.out b/contrib/test_decoding/expected/ondisk_startup.out
index bc7ff071648..3d2fa4a92e0 100644
--- a/contrib/test_decoding/expected/ondisk_startup.out
+++ b/contrib/test_decoding/expected/ondisk_startup.out
@@ -35,7 +35,7 @@ init
step s2c: COMMIT;
step s1insert: INSERT INTO do_write DEFAULT VALUES;
step s1checkpoint: CHECKPOINT;
-step s1start: SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false');
+step s1start: SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false', 'include-sequences', 'false');
data
--------------------------------------------------------------------
BEGIN
@@ -46,7 +46,7 @@ COMMIT
step s1insert: INSERT INTO do_write DEFAULT VALUES;
step s1alter: ALTER TABLE do_write ADD COLUMN addedbys1 int;
step s1insert: INSERT INTO do_write DEFAULT VALUES;
-step s1start: SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false');
+step s1start: SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false', 'include-sequences', 'false');
data
--------------------------------------------------------------------------------------------
BEGIN
diff --git a/contrib/test_decoding/expected/replorigin.out b/contrib/test_decoding/expected/replorigin.out
index 2e9ef7c823b..fb96aa31727 100644
--- a/contrib/test_decoding/expected/replorigin.out
+++ b/contrib/test_decoding/expected/replorigin.out
@@ -72,9 +72,9 @@ SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'test_d
-- origin tx
INSERT INTO origin_tbl(data) VALUES ('will be replicated and decoded and decoded again');
INSERT INTO target_tbl(data)
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
-- as is normal, the insert into target_tbl shows up
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
data
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
BEGIN
diff --git a/contrib/test_decoding/expected/rewrite.out b/contrib/test_decoding/expected/rewrite.out
index b30999c436b..0b5eade41fa 100644
--- a/contrib/test_decoding/expected/rewrite.out
+++ b/contrib/test_decoding/expected/rewrite.out
@@ -64,7 +64,7 @@ SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'test_d
CREATE TABLE replication_example(id SERIAL PRIMARY KEY, somedata int, text varchar(120));
INSERT INTO replication_example(somedata) VALUES (1);
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
data
----------------------------------------------------------------------------------------------------------
BEGIN
@@ -141,7 +141,7 @@ VACUUM FULL pg_proc; VACUUM FULL pg_description; VACUUM FULL pg_shdescription; V
INSERT INTO replication_example(somedata, testcolumn1, testcolumn3) VALUES (8, 6, 1);
VACUUM FULL pg_proc; VACUUM FULL pg_description; VACUUM FULL pg_shdescription; VACUUM FULL iamalargetable;
INSERT INTO replication_example(somedata, testcolumn1, testcolumn3) VALUES (9, 7, 1);
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
data
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
BEGIN
diff --git a/contrib/test_decoding/expected/sequence.out b/contrib/test_decoding/expected/sequence.out
new file mode 100644
index 00000000000..cf21f3a1f4f
--- /dev/null
+++ b/contrib/test_decoding/expected/sequence.out
@@ -0,0 +1,327 @@
+-- predictability
+SET synchronous_commit = on;
+SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'test_decoding');
+ ?column?
+----------
+ init
+(1 row)
+
+CREATE SEQUENCE test_sequence;
+-- test the sequence changes by several nextval() calls
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 1
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 2
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 4
+(1 row)
+
+-- test the sequence changes by several ALTER commands
+ALTER SEQUENCE test_sequence INCREMENT BY 10;
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 14
+(1 row)
+
+ALTER SEQUENCE test_sequence START WITH 3000;
+ALTER SEQUENCE test_sequence MAXVALUE 10000;
+ALTER SEQUENCE test_sequence RESTART WITH 4000;
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 4000
+(1 row)
+
+-- test the sequence changes by several setval() calls
+SELECT setval('test_sequence', 3500);
+ setval
+--------
+ 3500
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3510
+(1 row)
+
+SELECT setval('test_sequence', 3500, true);
+ setval
+--------
+ 3500
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3510
+(1 row)
+
+SELECT setval('test_sequence', 3500, false);
+ setval
+--------
+ 3500
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3500
+(1 row)
+
+-- show results and drop sequence
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0');
+ data
+----------------------------------------------------------------------------------------
+ BEGIN
+ sequence public.test_sequence: transactional:1 last_value: 1 log_cnt: 0 is_called:0
+ COMMIT
+ sequence public.test_sequence: transactional:0 last_value: 33 log_cnt: 0 is_called:1
+ BEGIN
+ sequence public.test_sequence: transactional:1 last_value: 4 log_cnt: 0 is_called:1
+ COMMIT
+ sequence public.test_sequence: transactional:0 last_value: 334 log_cnt: 0 is_called:1
+ BEGIN
+ sequence public.test_sequence: transactional:1 last_value: 14 log_cnt: 32 is_called:1
+ COMMIT
+ BEGIN
+ sequence public.test_sequence: transactional:1 last_value: 14 log_cnt: 0 is_called:1
+ COMMIT
+ BEGIN
+ sequence public.test_sequence: transactional:1 last_value: 4000 log_cnt: 0 is_called:0
+ COMMIT
+ sequence public.test_sequence: transactional:0 last_value: 4320 log_cnt: 0 is_called:1
+ sequence public.test_sequence: transactional:0 last_value: 3500 log_cnt: 0 is_called:1
+ sequence public.test_sequence: transactional:0 last_value: 3830 log_cnt: 0 is_called:1
+ sequence public.test_sequence: transactional:0 last_value: 3500 log_cnt: 0 is_called:1
+ sequence public.test_sequence: transactional:0 last_value: 3830 log_cnt: 0 is_called:1
+ sequence public.test_sequence: transactional:0 last_value: 3500 log_cnt: 0 is_called:0
+ sequence public.test_sequence: transactional:0 last_value: 3820 log_cnt: 0 is_called:1
+(24 rows)
+
+DROP SEQUENCE test_sequence;
+-- rollback on sequence creation and update
+BEGIN;
+CREATE SEQUENCE test_sequence;
+CREATE TABLE test_table (a INT);
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 1
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 2
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3
+(1 row)
+
+SELECT setval('test_sequence', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3001
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3002
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3003
+(1 row)
+
+ALTER SEQUENCE test_sequence RESTART WITH 6000;
+INSERT INTO test_table VALUES( (SELECT nextval('test_sequence')) );
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 6001
+(1 row)
+
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+ data
+------
+(0 rows)
+
+-- rollback on table creation with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+ data
+------
+(0 rows)
+
+-- rollback on table with serial column
+CREATE TABLE test_table (a SERIAL, b INT);
+BEGIN;
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+-- check table and sequence values after rollback
+SELECT * from test_table_a_seq;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+ 3003 | 30 | t
+(1 row)
+
+SELECT nextval('test_table_a_seq');
+ nextval
+---------
+ 3004
+(1 row)
+
+DROP TABLE test_table;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0');
+ data
+-------------------------------------------------------------------------------------------
+ BEGIN
+ sequence public.test_table_a_seq: transactional:1 last_value: 1 log_cnt: 0 is_called:0
+ COMMIT
+ sequence public.test_table_a_seq: transactional:0 last_value: 33 log_cnt: 0 is_called:1
+ sequence public.test_table_a_seq: transactional:0 last_value: 3000 log_cnt: 0 is_called:1
+ sequence public.test_table_a_seq: transactional:0 last_value: 3033 log_cnt: 0 is_called:1
+ BEGIN
+ COMMIT
+(8 rows)
+
+-- savepoint test on table with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+SAVEPOINT a;
+INSERT INTO test_table (b) VALUES (300);
+ROLLBACK TO SAVEPOINT a;
+DROP TABLE test_table;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0');
+ data
+-----------------------------------------------------------------------------------------
+ BEGIN
+ sequence public.test_table_a_seq: transactional:1 last_value: 1 log_cnt: 0 is_called:0
+ sequence public.test_table_a_seq: transactional:1 last_value: 33 log_cnt: 0 is_called:1
+ table public.test_table: INSERT: a[integer]:1 b[integer]:100
+ table public.test_table: INSERT: a[integer]:2 b[integer]:200
+ COMMIT
+(6 rows)
+
+-- savepoint test on table with serial column
+BEGIN;
+CREATE SEQUENCE test_sequence;
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 1
+(1 row)
+
+SELECT setval('test_sequence', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3001
+(1 row)
+
+SAVEPOINT a;
+ALTER SEQUENCE test_sequence START WITH 7000;
+SELECT setval('test_sequence', 5000);
+ setval
+--------
+ 5000
+(1 row)
+
+ROLLBACK TO SAVEPOINT a;
+SELECT * FROM test_sequence;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+ 3001 | 32 | t
+(1 row)
+
+DROP SEQUENCE test_sequence;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0');
+ data
+----------------------------------------------------------------------------------------
+ BEGIN
+ sequence public.test_sequence: transactional:1 last_value: 1 log_cnt: 0 is_called:0
+ sequence public.test_sequence: transactional:1 last_value: 33 log_cnt: 0 is_called:1
+ sequence public.test_sequence: transactional:1 last_value: 3000 log_cnt: 0 is_called:1
+ sequence public.test_sequence: transactional:1 last_value: 3033 log_cnt: 0 is_called:1
+ COMMIT
+(6 rows)
+
+SELECT pg_drop_replication_slot('regression_slot');
+ pg_drop_replication_slot
+--------------------------
+
+(1 row)
+
diff --git a/contrib/test_decoding/expected/slot.out b/contrib/test_decoding/expected/slot.out
index 63a9940f73a..93d7b95d47a 100644
--- a/contrib/test_decoding/expected/slot.out
+++ b/contrib/test_decoding/expected/slot.out
@@ -95,7 +95,7 @@ SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot2', 'test_
(1 row)
INSERT INTO replication_example(somedata, text) VALUES (1, 3);
-SELECT data FROM pg_logical_slot_get_changes('regression_slot1', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot1', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
data
---------------------------------------------------------------------------------------------------------
BEGIN
diff --git a/contrib/test_decoding/expected/toast.out b/contrib/test_decoding/expected/toast.out
index cd03e9d50a1..a060e3b5e52 100644
--- a/contrib/test_decoding/expected/toast.out
+++ b/contrib/test_decoding/expected/toast.out
@@ -52,7 +52,7 @@ CREATE TABLE toasted_copy (
);
ALTER TABLE toasted_copy ALTER COLUMN data SET STORAGE EXTERNAL;
\copy toasted_copy FROM STDIN
-SELECT substr(data, 1, 200) FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT substr(data, 1, 200) FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
substr
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
BEGIN
@@ -316,7 +316,7 @@ SELECT pg_column_size(toasted_key) > 2^16 FROM toasted_several;
t
(1 row)
-SELECT regexp_replace(data, '^(.{100}).*(.{100})$', '\1..\2') FROM pg_logical_slot_peek_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT regexp_replace(data, '^(.{100}).*(.{100})$', '\1..\2') FROM pg_logical_slot_peek_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
regexp_replace
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
BEGIN
@@ -327,7 +327,7 @@ SELECT regexp_replace(data, '^(.{100}).*(.{100})$', '\1..\2') FROM pg_logical_sl
-- test update of a toasted key without changing it
UPDATE toasted_several SET toasted_col1 = toasted_key;
UPDATE toasted_several SET toasted_col2 = toasted_col1;
-SELECT regexp_replace(data, '^(.{100}).*(.{100})$', '\1..\2') FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT regexp_replace(data, '^(.{100}).*(.{100})$', '\1..\2') FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
regexp_replace
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
BEGIN
@@ -350,7 +350,7 @@ UPDATE toasted_several SET toasted_col1 = toasted_col2 WHERE id = 1;
DELETE FROM toasted_several WHERE id = 1;
COMMIT;
DROP TABLE toasted_several;
-SELECT regexp_replace(data, '^(.{100}).*(.{100})$', '\1..\2') FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1')
+SELECT regexp_replace(data, '^(.{100}).*(.{100})$', '\1..\2') FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0')
WHERE data NOT LIKE '%INSERT: %';
regexp_replace
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
@@ -373,7 +373,7 @@ INSERT INTO tbl1 VALUES(1, repeat('a', 4000)) ;
ALTER TABLE tbl1 ADD COLUMN id serial primary key;
INSERT INTO tbl2 VALUES(1);
commit;
-SELECT substr(data, 1, 200) FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT substr(data, 1, 200) FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
substr
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
BEGIN
diff --git a/contrib/test_decoding/expected/truncate.out b/contrib/test_decoding/expected/truncate.out
index e64d377214a..961b0ebdc89 100644
--- a/contrib/test_decoding/expected/truncate.out
+++ b/contrib/test_decoding/expected/truncate.out
@@ -11,7 +11,7 @@ CREATE TABLE tab2 (a int primary key, b int);
TRUNCATE tab1;
TRUNCATE tab1, tab1 RESTART IDENTITY CASCADE;
TRUNCATE tab1, tab2;
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
data
------------------------------------------------------
BEGIN
diff --git a/contrib/test_decoding/specs/mxact.spec b/contrib/test_decoding/specs/mxact.spec
index ea5b1aa2d67..19f3af14b30 100644
--- a/contrib/test_decoding/specs/mxact.spec
+++ b/contrib/test_decoding/specs/mxact.spec
@@ -13,7 +13,7 @@ teardown
session "s0"
setup { SET synchronous_commit=on; }
step "s0init" {SELECT 'init' FROM pg_create_logical_replication_slot('isolation_slot', 'test_decoding');}
-step "s0start" {SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false');}
+step "s0start" {SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false', 'include-sequences', 'false');}
step "s0alter" {ALTER TABLE do_write ADD column ts timestamptz; }
step "s0w" { INSERT INTO do_write DEFAULT VALUES; }
diff --git a/contrib/test_decoding/specs/ondisk_startup.spec b/contrib/test_decoding/specs/ondisk_startup.spec
index 96ce87f1a45..fb43cbc5e15 100644
--- a/contrib/test_decoding/specs/ondisk_startup.spec
+++ b/contrib/test_decoding/specs/ondisk_startup.spec
@@ -16,7 +16,7 @@ session "s1"
setup { SET synchronous_commit=on; }
step "s1init" {SELECT 'init' FROM pg_create_logical_replication_slot('isolation_slot', 'test_decoding');}
-step "s1start" {SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false');}
+step "s1start" {SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false', 'include-sequences', 'false');}
step "s1insert" { INSERT INTO do_write DEFAULT VALUES; }
step "s1checkpoint" { CHECKPOINT; }
step "s1alter" { ALTER TABLE do_write ADD COLUMN addedbys1 int; }
diff --git a/contrib/test_decoding/sql/ddl.sql b/contrib/test_decoding/sql/ddl.sql
index 1b3866d0153..f677460d349 100644
--- a/contrib/test_decoding/sql/ddl.sql
+++ b/contrib/test_decoding/sql/ddl.sql
@@ -64,7 +64,7 @@ ALTER TABLE replication_example RENAME COLUMN text TO somenum;
INSERT INTO replication_example(somedata, somenum) VALUES (4, 1);
-- collect all changes
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
ALTER TABLE replication_example ALTER COLUMN somenum TYPE int4 USING (somenum::int4);
-- check that this doesn't produce any changes from the heap rewrite
@@ -97,7 +97,7 @@ CREATE TABLE tr_unique(id2 serial unique NOT NULL, data int);
INSERT INTO tr_unique(data) VALUES(10);
ALTER TABLE tr_unique RENAME TO tr_pkey;
ALTER TABLE tr_pkey ADD COLUMN id serial primary key;
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-rewrites', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-rewrites', '1', 'include-sequences', '0');
INSERT INTO tr_pkey(data) VALUES(1);
--show deletion with primary key
@@ -121,7 +121,7 @@ COMMIT;
/* display results, but hide most of the output */
SELECT count(*), min(data), max(data)
-FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1')
+FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0')
GROUP BY substring(data, 1, 24)
ORDER BY 1,2;
@@ -173,7 +173,7 @@ INSERT INTO tr_sub(path) VALUES ('1-top-2-#1');
RELEASE SAVEPOINT b;
COMMIT;
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
-- check that we handle xlog assignments correctly
BEGIN;
@@ -287,7 +287,7 @@ ALTER TABLE replication_metadata SET (user_catalog_table = false);
INSERT INTO replication_metadata(relation, options)
VALUES ('zaphod', NULL);
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
/*
* check whether we handle updates/deletes correct with & without a pkey
@@ -398,7 +398,7 @@ UPDATE toasttable
SET toasted_col1 = (SELECT string_agg(g.i::text, '') FROM generate_series(1, 2000) g(i))
WHERE id = 1;
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
INSERT INTO toasttable(toasted_col1) SELECT string_agg(g.i::text, '') FROM generate_series(1, 2000) g(i);
diff --git a/contrib/test_decoding/sql/decoding_in_xact.sql b/contrib/test_decoding/sql/decoding_in_xact.sql
index 108782dc2e9..33a9c4a6c77 100644
--- a/contrib/test_decoding/sql/decoding_in_xact.sql
+++ b/contrib/test_decoding/sql/decoding_in_xact.sql
@@ -32,7 +32,7 @@ BEGIN;
SELECT pg_current_xact_id() = '0';
-- don't show yet, haven't committed
INSERT INTO nobarf(data) VALUES('2');
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
COMMIT;
INSERT INTO nobarf(data) VALUES('3');
diff --git a/contrib/test_decoding/sql/decoding_into_rel.sql b/contrib/test_decoding/sql/decoding_into_rel.sql
index 1068cec5888..27d5d08879e 100644
--- a/contrib/test_decoding/sql/decoding_into_rel.sql
+++ b/contrib/test_decoding/sql/decoding_into_rel.sql
@@ -15,14 +15,14 @@ CREATE TABLE somechange(id serial primary key);
INSERT INTO somechange DEFAULT VALUES;
CREATE TABLE changeresult AS
- SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+ SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
SELECT * FROM changeresult;
INSERT INTO changeresult
- SELECT data FROM pg_logical_slot_peek_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+ SELECT data FROM pg_logical_slot_peek_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
INSERT INTO changeresult
- SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+ SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
SELECT * FROM changeresult;
DROP TABLE changeresult;
@@ -32,11 +32,11 @@ DROP TABLE somechange;
CREATE FUNCTION slot_changes_wrapper(slot_name name) RETURNS SETOF TEXT AS $$
BEGIN
RETURN QUERY
- SELECT data FROM pg_logical_slot_peek_changes(slot_name, NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+ SELECT data FROM pg_logical_slot_peek_changes(slot_name, NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
END$$ LANGUAGE plpgsql;
SELECT * FROM slot_changes_wrapper('regression_slot');
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
SELECT 'stop' FROM pg_drop_replication_slot('regression_slot');
diff --git a/contrib/test_decoding/sql/replorigin.sql b/contrib/test_decoding/sql/replorigin.sql
index 2e28a487773..f0f4dd49642 100644
--- a/contrib/test_decoding/sql/replorigin.sql
+++ b/contrib/test_decoding/sql/replorigin.sql
@@ -41,10 +41,10 @@ SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'test_d
-- origin tx
INSERT INTO origin_tbl(data) VALUES ('will be replicated and decoded and decoded again');
INSERT INTO target_tbl(data)
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
-- as is normal, the insert into target_tbl shows up
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
INSERT INTO origin_tbl(data) VALUES ('will be replicated, but not decoded again');
diff --git a/contrib/test_decoding/sql/rewrite.sql b/contrib/test_decoding/sql/rewrite.sql
index 62dead3a9b1..945c39eb416 100644
--- a/contrib/test_decoding/sql/rewrite.sql
+++ b/contrib/test_decoding/sql/rewrite.sql
@@ -35,7 +35,7 @@ SELECT pg_relation_size((SELECT reltoastrelid FROM pg_class WHERE oid = 'pg_shde
SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'test_decoding');
CREATE TABLE replication_example(id SERIAL PRIMARY KEY, somedata int, text varchar(120));
INSERT INTO replication_example(somedata) VALUES (1);
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
BEGIN;
INSERT INTO replication_example(somedata) VALUES (2);
@@ -98,7 +98,7 @@ VACUUM FULL pg_proc; VACUUM FULL pg_description; VACUUM FULL pg_shdescription; V
INSERT INTO replication_example(somedata, testcolumn1, testcolumn3) VALUES (8, 6, 1);
VACUUM FULL pg_proc; VACUUM FULL pg_description; VACUUM FULL pg_shdescription; VACUUM FULL iamalargetable;
INSERT INTO replication_example(somedata, testcolumn1, testcolumn3) VALUES (9, 7, 1);
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
SELECT pg_drop_replication_slot('regression_slot');
DROP TABLE IF EXISTS replication_example;
diff --git a/contrib/test_decoding/sql/sequence.sql b/contrib/test_decoding/sql/sequence.sql
new file mode 100644
index 00000000000..42ad9113bf4
--- /dev/null
+++ b/contrib/test_decoding/sql/sequence.sql
@@ -0,0 +1,119 @@
+-- predictability
+SET synchronous_commit = on;
+SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'test_decoding');
+
+CREATE SEQUENCE test_sequence;
+
+-- test the sequence changes by several nextval() calls
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+
+-- test the sequence changes by several ALTER commands
+ALTER SEQUENCE test_sequence INCREMENT BY 10;
+SELECT nextval('test_sequence');
+
+ALTER SEQUENCE test_sequence START WITH 3000;
+ALTER SEQUENCE test_sequence MAXVALUE 10000;
+ALTER SEQUENCE test_sequence RESTART WITH 4000;
+SELECT nextval('test_sequence');
+
+-- test the sequence changes by several setval() calls
+SELECT setval('test_sequence', 3500);
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3500, true);
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3500, false);
+SELECT nextval('test_sequence');
+
+-- show results and drop sequence
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0');
+DROP SEQUENCE test_sequence;
+
+-- rollback on sequence creation and update
+BEGIN;
+CREATE SEQUENCE test_sequence;
+CREATE TABLE test_table (a INT);
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3000);
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+ALTER SEQUENCE test_sequence RESTART WITH 6000;
+INSERT INTO test_table VALUES( (SELECT nextval('test_sequence')) );
+SELECT nextval('test_sequence');
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+
+-- rollback on table creation with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+
+-- rollback on table with serial column
+CREATE TABLE test_table (a SERIAL, b INT);
+
+BEGIN;
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+
+-- check table and sequence values after rollback
+SELECT * from test_table_a_seq;
+SELECT nextval('test_table_a_seq');
+
+DROP TABLE test_table;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0');
+
+-- savepoint test on table with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+SAVEPOINT a;
+INSERT INTO test_table (b) VALUES (300);
+ROLLBACK TO SAVEPOINT a;
+DROP TABLE test_table;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0');
+
+-- savepoint test on table with serial column
+BEGIN;
+CREATE SEQUENCE test_sequence;
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3000);
+SELECT nextval('test_sequence');
+SAVEPOINT a;
+ALTER SEQUENCE test_sequence START WITH 7000;
+SELECT setval('test_sequence', 5000);
+ROLLBACK TO SAVEPOINT a;
+SELECT * FROM test_sequence;
+DROP SEQUENCE test_sequence;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '0');
+
+SELECT pg_drop_replication_slot('regression_slot');
diff --git a/contrib/test_decoding/sql/slot.sql b/contrib/test_decoding/sql/slot.sql
index 1aa27c56674..70ea1603e67 100644
--- a/contrib/test_decoding/sql/slot.sql
+++ b/contrib/test_decoding/sql/slot.sql
@@ -50,7 +50,7 @@ SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot2', 'test_
INSERT INTO replication_example(somedata, text) VALUES (1, 3);
-SELECT data FROM pg_logical_slot_get_changes('regression_slot1', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot1', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
SELECT data FROM pg_logical_slot_get_changes('regression_slot2', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
INSERT INTO replication_example(somedata, text) VALUES (1, 4);
diff --git a/contrib/test_decoding/sql/toast.sql b/contrib/test_decoding/sql/toast.sql
index d1c560a174d..f5d4fc082ed 100644
--- a/contrib/test_decoding/sql/toast.sql
+++ b/contrib/test_decoding/sql/toast.sql
@@ -264,7 +264,7 @@ ALTER TABLE toasted_copy ALTER COLUMN data SET STORAGE EXTERNAL;
202 untoasted199
203 untoasted200
\.
-SELECT substr(data, 1, 200) FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT substr(data, 1, 200) FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
-- test we can decode "old" tuples bigger than the max heap tuple size correctly
DROP TABLE IF EXISTS toasted_several;
@@ -287,13 +287,13 @@ UPDATE pg_attribute SET attstorage = 'x' WHERE attrelid = 'toasted_several_pkey'
INSERT INTO toasted_several(toasted_key) VALUES(repeat('9876543210', 10000));
SELECT pg_column_size(toasted_key) > 2^16 FROM toasted_several;
-SELECT regexp_replace(data, '^(.{100}).*(.{100})$', '\1..\2') FROM pg_logical_slot_peek_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT regexp_replace(data, '^(.{100}).*(.{100})$', '\1..\2') FROM pg_logical_slot_peek_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
-- test update of a toasted key without changing it
UPDATE toasted_several SET toasted_col1 = toasted_key;
UPDATE toasted_several SET toasted_col2 = toasted_col1;
-SELECT regexp_replace(data, '^(.{100}).*(.{100})$', '\1..\2') FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT regexp_replace(data, '^(.{100}).*(.{100})$', '\1..\2') FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
/*
* update with large tuplebuf, in a transaction large enough to force to spool to disk
@@ -306,7 +306,7 @@ COMMIT;
DROP TABLE toasted_several;
-SELECT regexp_replace(data, '^(.{100}).*(.{100})$', '\1..\2') FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1')
+SELECT regexp_replace(data, '^(.{100}).*(.{100})$', '\1..\2') FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0')
WHERE data NOT LIKE '%INSERT: %';
/*
@@ -322,6 +322,6 @@ INSERT INTO tbl1 VALUES(1, repeat('a', 4000)) ;
ALTER TABLE tbl1 ADD COLUMN id serial primary key;
INSERT INTO tbl2 VALUES(1);
commit;
-SELECT substr(data, 1, 200) FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT substr(data, 1, 200) FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
SELECT pg_drop_replication_slot('regression_slot');
diff --git a/contrib/test_decoding/sql/truncate.sql b/contrib/test_decoding/sql/truncate.sql
index 5633854e0df..b0f36fd681b 100644
--- a/contrib/test_decoding/sql/truncate.sql
+++ b/contrib/test_decoding/sql/truncate.sql
@@ -10,5 +10,5 @@ TRUNCATE tab1;
TRUNCATE tab1, tab1 RESTART IDENTITY CASCADE;
TRUNCATE tab1, tab2;
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
SELECT pg_drop_replication_slot('regression_slot');
diff --git a/contrib/test_decoding/test_decoding.c b/contrib/test_decoding/test_decoding.c
index 42fe91a2f9c..526080f3bbb 100644
--- a/contrib/test_decoding/test_decoding.c
+++ b/contrib/test_decoding/test_decoding.c
@@ -35,6 +35,7 @@ typedef struct
bool include_timestamp;
bool skip_empty_xacts;
bool only_local;
+ bool include_sequences;
} TestDecodingData;
/*
@@ -76,6 +77,10 @@ static void pg_decode_message(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn, XLogRecPtr message_lsn,
bool transactional, const char *prefix,
Size sz, const char *message);
+static void pg_decode_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation rel, bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called);
static bool pg_decode_filter_prepare(LogicalDecodingContext *ctx,
TransactionId xid,
const char *gid);
@@ -116,6 +121,10 @@ static void pg_decode_stream_message(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn, XLogRecPtr message_lsn,
bool transactional, const char *prefix,
Size sz, const char *message);
+static void pg_decode_stream_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation rel, bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called);
static void pg_decode_stream_truncate(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn,
int nrelations, Relation relations[],
@@ -141,6 +150,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->filter_by_origin_cb = pg_decode_filter;
cb->shutdown_cb = pg_decode_shutdown;
cb->message_cb = pg_decode_message;
+ cb->sequence_cb = pg_decode_sequence;
cb->filter_prepare_cb = pg_decode_filter_prepare;
cb->begin_prepare_cb = pg_decode_begin_prepare_txn;
cb->prepare_cb = pg_decode_prepare_txn;
@@ -153,6 +163,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->stream_commit_cb = pg_decode_stream_commit;
cb->stream_change_cb = pg_decode_stream_change;
cb->stream_message_cb = pg_decode_stream_message;
+ cb->stream_sequence_cb = pg_decode_stream_sequence;
cb->stream_truncate_cb = pg_decode_stream_truncate;
}
@@ -173,6 +184,7 @@ pg_decode_startup(LogicalDecodingContext *ctx, OutputPluginOptions *opt,
data->include_xids = true;
data->include_timestamp = false;
data->skip_empty_xacts = false;
+ data->include_sequences = true;
data->only_local = false;
ctx->output_plugin_private = data;
@@ -265,6 +277,17 @@ pg_decode_startup(LogicalDecodingContext *ctx, OutputPluginOptions *opt,
errmsg("could not parse value \"%s\" for parameter \"%s\"",
strVal(elem->arg), elem->defname)));
}
+ else if (strcmp(elem->defname, "include-sequences") == 0)
+ {
+ if (elem->arg == NULL)
+ data->include_sequences = false;
+ else if (!parse_bool(strVal(elem->arg), &data->include_sequences))
+ ereport(ERROR,
+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ errmsg("could not parse value \"%s\" for parameter \"%s\"",
+ strVal(elem->arg), elem->defname)));
+ }
else
{
ereport(ERROR,
@@ -744,6 +767,28 @@ pg_decode_message(LogicalDecodingContext *ctx,
OutputPluginWrite(ctx, true);
}
+static void
+pg_decode_sequence(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ TestDecodingData *data = ctx->output_plugin_private;
+
+ /* return if requested to skip sequences */
+ if (!data->include_sequences)
+ return;
+
+ OutputPluginPrepareWrite(ctx, true);
+ appendStringInfoString(ctx->out, "sequence ");
+ appendStringInfoString(ctx->out,
+ quote_qualified_identifier(get_namespace_name(get_rel_namespace(RelationGetRelid(rel))),
+ RelationGetRelationName(rel)));
+ appendStringInfo(ctx->out, ": transactional:%d last_value: " INT64_FORMAT " log_cnt: " INT64_FORMAT " is_called:%d",
+ transactional, last_value, log_cnt, is_called);
+ OutputPluginWrite(ctx, true);
+}
+
static void
pg_decode_stream_start(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn)
@@ -943,6 +988,28 @@ pg_decode_stream_message(LogicalDecodingContext *ctx,
OutputPluginWrite(ctx, true);
}
+static void
+pg_decode_stream_sequence(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ TestDecodingData *data = ctx->output_plugin_private;
+
+ /* return if requested to skip sequences */
+ if (!data->include_sequences)
+ return;
+
+ OutputPluginPrepareWrite(ctx, true);
+ appendStringInfoString(ctx->out, "streaming sequence ");
+ appendStringInfoString(ctx->out,
+ quote_qualified_identifier(get_namespace_name(get_rel_namespace(RelationGetRelid(rel))),
+ RelationGetRelationName(rel)));
+ appendStringInfo(ctx->out, ": transactional:%d last_value: " INT64_FORMAT " log_cnt: " INT64_FORMAT " is_called:%d",
+ transactional, last_value, log_cnt, is_called);
+ OutputPluginWrite(ctx, true);
+}
+
/*
* In streaming mode, we don't display the detailed information of Truncate.
* See pg_decode_stream_change.
--
2.34.1
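Just for illustration (not part of the patch): with 0001 applied, a decoded increment shows up in the test_decoding output roughly like this; the exact values depend on how the cached values get WAL-logged, so treat it as a sketch:
SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'test_decoding');
CREATE SEQUENCE s;
SELECT nextval('s');
SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
  sequence public.s: transactional:0 last_value: 33 log_cnt: 0 is_called:1
The new 'include-sequences' option defaults to true; the existing regression tests above pass '0' so their expected output stays unchanged.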
0002-Add-support-for-decoding-sequences-to-built-20220210.patch
From 0e21d24429c95ac7769a3f66b4693c307cf942aa Mon Sep 17 00:00:00 2001
From: Tomas Vondra <tomas.vondra@postgresql.org>
Date: Thu, 10 Feb 2022 15:18:59 +0100
Subject: [PATCH 2/2] Add support for decoding sequences to built-in
replication
---
doc/src/sgml/catalogs.sgml | 71 ++++
doc/src/sgml/ref/alter_publication.sgml | 24 +-
doc/src/sgml/ref/alter_subscription.sgml | 4 +-
src/backend/catalog/pg_publication.c | 149 ++++++++-
src/backend/catalog/system_views.sql | 10 +
src/backend/commands/publicationcmds.c | 350 +++++++++++++++++++-
src/backend/commands/sequence.c | 79 +++++
src/backend/commands/subscriptioncmds.c | 272 +++++++++++++++
src/backend/executor/execReplication.c | 2 +-
src/backend/nodes/copyfuncs.c | 1 +
src/backend/nodes/equalfuncs.c | 1 +
src/backend/parser/gram.y | 32 ++
src/backend/replication/logical/proto.c | 52 +++
src/backend/replication/logical/tablesync.c | 118 ++++++-
src/backend/replication/logical/worker.c | 60 ++++
src/backend/replication/pgoutput/pgoutput.c | 85 ++++-
src/backend/utils/cache/relcache.c | 4 +-
src/bin/psql/tab-complete.c | 14 +-
src/include/catalog/pg_proc.dat | 5 +
src/include/catalog/pg_publication.h | 14 +
src/include/commands/sequence.h | 1 +
src/include/nodes/parsenodes.h | 6 +
src/include/replication/logicalproto.h | 19 ++
src/include/replication/pgoutput.h | 1 +
src/test/regress/expected/rules.out | 8 +
src/test/subscription/t/027_sequences.pl | 196 +++++++++++
26 files changed, 1542 insertions(+), 36 deletions(-)
create mode 100644 src/test/subscription/t/027_sequences.pl
diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 879d2dbce03..271dc03e5a2 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -9540,6 +9540,11 @@ SCRAM-SHA-256$<replaceable><iteration count></replaceable>:<replaceable>&l
<entry>prepared transactions</entry>
</row>
+ <row>
+ <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+ <entry>publications and their associated sequences</entry>
+ </row>
+
<row>
<entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
<entry>publications and their associated tables</entry>
@@ -11375,6 +11380,72 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
</sect1>
+ <sect1 id="view-pg-publication-sequences">
+ <title><structname>pg_publication_sequences</structname></title>
+
+ <indexterm zone="view-pg-publication-sequences">
+ <primary>pg_publication_sequences</primary>
+ </indexterm>
+
+ <para>
+ The view <structname>pg_publication_sequences</structname> provides
+ information about the mapping between publications and the sequences they
+ contain. Unlike the underlying catalog
+ <link linkend="catalog-pg-publication-rel"><structname>pg_publication_rel</structname></link>,
+ this view expands
+ publications defined as <literal>FOR ALL SEQUENCES</literal>, so for such
+ publications there will be a row for each eligible sequence.
+ </para>
+
+ <table>
+ <title><structname>pg_publication_sequences</structname> Columns</title>
+ <tgroup cols="1">
+ <thead>
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ Column Type
+ </para>
+ <para>
+ Description
+ </para></entry>
+ </row>
+ </thead>
+
+ <tbody>
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>pubname</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+ </para>
+ <para>
+ Name of publication
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>schemaname</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+ </para>
+ <para>
+ Name of schema containing sequence
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>sequencename</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+ </para>
+ <para>
+ Name of sequence
+ </para></entry>
+ </row>
+ </tbody>
+ </tgroup>
+ </table>
+ </sect1>
+
<sect1 id="view-pg-publication-tables">
<title><structname>pg_publication_tables</structname></title>
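-- Illustration only, not part of the patch (object names are made up): with
-- the grammar changes in 0002, a sequence can be added to a publication and
-- then shows up in the new view, e.g.
CREATE SEQUENCE s;
CREATE PUBLICATION seq_pub FOR SEQUENCE s;
SELECT * FROM pg_publication_sequences;
--  pubname | schemaname | sequencename
-- ---------+------------+--------------
--  seq_pub | public     | s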
diff --git a/doc/src/sgml/ref/alter_publication.sgml b/doc/src/sgml/ref/alter_publication.sgml
index 7c7c27bf7ce..9da8274ae2c 100644
--- a/doc/src/sgml/ref/alter_publication.sgml
+++ b/doc/src/sgml/ref/alter_publication.sgml
@@ -31,7 +31,9 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
<phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [, ... ]
+ SEQUENCE <replaceable class="parameter">sequence_name</replaceable> [ * ] [, ... ]
ALL TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+ ALL SEQUENCES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
</synopsis>
</refsynopsisdiv>
@@ -56,7 +58,18 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
</para>
<para>
- The fourth variant of this command listed in the synopsis can change
+ The next three variants change which sequences are part of the publication.
+ The <literal>SET SEQUENCE</literal> clause will replace the list of sequences
+ in the publication with the specified one. The <literal>ADD SEQUENCE</literal>
+ and <literal>DROP SEQUENCE</literal> clauses will add and remove one or more
+ sequences from the publication. Note that adding sequences to a publication
+ that is already subscribed to will require an <literal>ALTER SUBSCRIPTION
+ ... REFRESH PUBLICATION</literal> action on the subscribing side in order
+ to become effective.
+ </para>
+
+ <para>
+ The seventh variant of this command listed in the synopsis can change
all of the publication properties specified in
<xref linkend="sql-createpublication"/>. Properties not mentioned in the
command retain their previous settings.
@@ -123,6 +136,15 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><replaceable class="parameter">sequence_name</replaceable></term>
+ <listitem>
+ <para>
+ Name of an existing sequence.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><literal>SET ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
<listitem>
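-- Illustration only, not part of the patch (names are made up): the clauses
-- documented above can be used like
ALTER PUBLICATION seq_pub ADD SEQUENCE s2, s3;
ALTER PUBLICATION seq_pub ADD ALL SEQUENCES IN SCHEMA public;
ALTER PUBLICATION seq_pub DROP SEQUENCE s3;
-- followed by ALTER SUBSCRIPTION ... REFRESH PUBLICATION on the subscriber
-- to make the additions take effect.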
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 0b027cc3462..8f28cf03f40 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -147,7 +147,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
<listitem>
<para>
Fetch missing table information from publisher. This will start
- replication of tables that were added to the subscribed-to publications
+ replication of tables and sequences that were added to the subscribed-to publications
since <command>CREATE SUBSCRIPTION</command> or
the last invocation of <command>REFRESH PUBLICATION</command>.
</para>
@@ -164,7 +164,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
Specifies whether to copy pre-existing data in the publications
that are being subscribed to when the replication starts.
The default is <literal>true</literal>. (Previously-subscribed
- tables are not copied.)
+ tables and sequences are not copied.)
</para>
</listitem>
</varlistentry>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index e14ca2f5630..1a9e05ba98b 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -54,7 +54,8 @@ check_publication_add_relation(Relation targetrel)
{
/* Must be a regular or partitioned table */
if (RelationGetForm(targetrel)->relkind != RELKIND_RELATION &&
- RelationGetForm(targetrel)->relkind != RELKIND_PARTITIONED_TABLE)
+ RelationGetForm(targetrel)->relkind != RELKIND_PARTITIONED_TABLE &&
+ RelationGetForm(targetrel)->relkind != RELKIND_SEQUENCE)
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
errmsg("cannot add relation \"%s\" to publication",
@@ -131,7 +132,8 @@ static bool
is_publishable_class(Oid relid, Form_pg_class reltuple)
{
return (reltuple->relkind == RELKIND_RELATION ||
- reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+ reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+ reltuple->relkind == RELKIND_SEQUENCE) &&
!IsCatalogRelationOid(relid) &&
reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
relid >= FirstNormalObjectId;
@@ -503,6 +505,11 @@ GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
Form_pg_publication_rel pubrel;
pubrel = (Form_pg_publication_rel) GETSTRUCT(tup);
+
+ /* skip sequences here */
+ if (get_rel_relkind(pubrel->prrelid) == RELKIND_SEQUENCE)
+ continue;
+
result = GetPubPartitionOptionRelations(result, pub_partopt,
pubrel->prrelid);
}
@@ -517,6 +524,49 @@ GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
return result;
}
+/*
+ * Gets list of relation oids for a publication (sequences only).
+ *
+ * This should only be used for normal publications; FOR ALL SEQUENCES
+ * publications should use GetAllSequencesPublicationRelations().
+ */
+List *
+GetPublicationSequenceRelations(Oid pubid)
+{
+ List *result;
+ Relation pubrelsrel;
+ ScanKeyData scankey;
+ SysScanDesc scan;
+ HeapTuple tup;
+
+ /* Find all sequences associated with the publication. */
+ pubrelsrel = table_open(PublicationRelRelationId, AccessShareLock);
+
+ ScanKeyInit(&scankey,
+ Anum_pg_publication_rel_prpubid,
+ BTEqualStrategyNumber, F_OIDEQ,
+ ObjectIdGetDatum(pubid));
+
+ scan = systable_beginscan(pubrelsrel, PublicationRelPrrelidPrpubidIndexId,
+ true, NULL, 1, &scankey);
+
+ result = NIL;
+ while (HeapTupleIsValid(tup = systable_getnext(scan)))
+ {
+ Form_pg_publication_rel pubrel;
+
+ pubrel = (Form_pg_publication_rel) GETSTRUCT(tup);
+
+ if (get_rel_relkind(pubrel->prrelid) == RELKIND_SEQUENCE)
+ result = lappend_oid(result, pubrel->prrelid);
+ }
+
+ systable_endscan(scan);
+ table_close(pubrelsrel, AccessShareLock);
+
+ return result;
+}
+
/*
* Gets list of publication oids for publications marked as FOR ALL TABLES.
*/
@@ -762,6 +812,46 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
return result;
}
+/*
+ * Gets list of all sequences published by FOR ALL SEQUENCES publication(s).
+ *
+ * Only sequences considered publishable (see is_publishable_class) are
+ * included in the result.
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+ Relation classRel;
+ ScanKeyData key[1];
+ TableScanDesc scan;
+ HeapTuple tuple;
+ List *result = NIL;
+
+ classRel = table_open(RelationRelationId, AccessShareLock);
+
+ ScanKeyInit(&key[0],
+ Anum_pg_class_relkind,
+ BTEqualStrategyNumber, F_CHAREQ,
+ CharGetDatum(RELKIND_SEQUENCE));
+
+ scan = table_beginscan_catalog(classRel, 1, key);
+
+ while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+ {
+ Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+ Oid relid = relForm->oid;
+
+ if (is_publishable_class(relid, relForm))
+ result = lappend_oid(result, relid);
+ }
+
+ table_endscan(scan);
+
+ table_close(classRel, AccessShareLock);
+ return result;
+}
+
/*
* Get publication using oid
*
@@ -784,10 +874,12 @@ GetPublication(Oid pubid)
pub->oid = pubid;
pub->name = pstrdup(NameStr(pubform->pubname));
pub->alltables = pubform->puballtables;
+ pub->allsequences = pubform->puballsequences;
pub->pubactions.pubinsert = pubform->pubinsert;
pub->pubactions.pubupdate = pubform->pubupdate;
pub->pubactions.pubdelete = pubform->pubdelete;
pub->pubactions.pubtruncate = pubform->pubtruncate;
+ pub->pubactions.pubsequence = pubform->pubsequence;
pub->pubviaroot = pubform->pubviaroot;
ReleaseSysCache(tup);
@@ -937,3 +1029,56 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
SRF_RETURN_DONE(funcctx);
}
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+ FuncCallContext *funcctx;
+ char *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+ Publication *publication;
+ List *sequences;
+
+ /* stuff done only on the first call of the function */
+ if (SRF_IS_FIRSTCALL())
+ {
+ MemoryContext oldcontext;
+
+ /* create a function context for cross-call persistence */
+ funcctx = SRF_FIRSTCALL_INIT();
+
+ /* switch to memory context appropriate for multiple function calls */
+ oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+ publication = GetPublicationByName(pubname, false);
+
+ /*
+ * For FOR ALL SEQUENCES publications return all sequences in the
+ * database, otherwise only the sequences explicitly added to the
+ * publication.
+ */
+ if (publication->allsequences)
+ sequences = GetAllSequencesPublicationRelations();
+ else
+ sequences = GetPublicationSequenceRelations(publication->oid);
+
+ funcctx->user_fctx = (void *) sequences;
+
+ MemoryContextSwitchTo(oldcontext);
+ }
+
+ /* stuff done on every call of the function */
+ funcctx = SRF_PERCALL_SETUP();
+ sequences = (List *) funcctx->user_fctx;
+
+ if (funcctx->call_cntr < list_length(sequences))
+ {
+ Oid relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+ SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+ }
+
+ SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 3cb69b1f87b..b5cc33aca34 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -374,6 +374,16 @@ CREATE VIEW pg_publication_tables AS
pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
WHERE C.oid = GPT.relid;
+CREATE VIEW pg_publication_sequences AS
+ SELECT
+ P.pubname AS pubname,
+ N.nspname AS schemaname,
+ C.relname AS sequencename
+ FROM pg_publication P,
+ LATERAL pg_get_publication_sequences(P.pubname) GPT,
+ pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+ WHERE C.oid = GPT.relid;
+
CREATE VIEW pg_locks AS
SELECT * FROM pg_lock_status() AS L;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 0e4bb97fb73..3bc2e8ccb66 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -16,6 +16,7 @@
#include "access/genam.h"
#include "access/htup_details.h"
+#include "access/relation.h"
#include "access/table.h"
#include "access/xact.h"
#include "catalog/catalog.h"
@@ -59,6 +60,12 @@ static void PublicationAddSchemas(Oid pubid, List *schemas, bool if_not_exists,
AlterPublicationStmt *stmt);
static void PublicationDropSchemas(Oid pubid, List *schemas, bool missing_ok);
+static List *OpenSequenceList(List *sequences);
+static void CloseSequenceList(List *rels);
+static void PublicationAddSequences(Oid pubid, List *rels, bool if_not_exists,
+ AlterPublicationStmt *stmt);
+static void PublicationDropSequences(Oid pubid, List *rels, bool missing_ok);
+
static void
parse_publication_options(ParseState *pstate,
List *options,
@@ -77,6 +84,7 @@ parse_publication_options(ParseState *pstate,
pubactions->pubupdate = true;
pubactions->pubdelete = true;
pubactions->pubtruncate = true;
+ pubactions->pubsequence = true;
*publish_via_partition_root = false;
/* Parse options */
@@ -101,6 +109,7 @@ parse_publication_options(ParseState *pstate,
pubactions->pubupdate = false;
pubactions->pubdelete = false;
pubactions->pubtruncate = false;
+ pubactions->pubsequence = false;
*publish_given = true;
publish = defGetString(defel);
@@ -123,6 +132,8 @@ parse_publication_options(ParseState *pstate,
pubactions->pubdelete = true;
else if (strcmp(publish_opt, "truncate") == 0)
pubactions->pubtruncate = true;
+ else if (strcmp(publish_opt, "sequence") == 0)
+ pubactions->pubsequence = true;
else
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
@@ -149,7 +160,9 @@ parse_publication_options(ParseState *pstate,
*/
static void
ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
- List **rels, List **schemas)
+ List **tables, List **sequences,
+ List **tables_schemas, List **sequences_schemas,
+ List **schemas)
{
ListCell *cell;
PublicationObjSpec *pubobj;
@@ -167,12 +180,23 @@ ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
switch (pubobj->pubobjtype)
{
case PUBLICATIONOBJ_TABLE:
- *rels = lappend(*rels, pubobj->pubtable);
+ *tables = lappend(*tables, pubobj->pubtable);
+ break;
+ case PUBLICATIONOBJ_SEQUENCE:
+ *sequences = lappend(*sequences, pubobj->pubtable);
break;
case PUBLICATIONOBJ_TABLES_IN_SCHEMA:
schemaid = get_namespace_oid(pubobj->name, false);
/* Filter out duplicates if user specifies "sch1, sch1" */
+ *tables_schemas = list_append_unique_oid(*tables_schemas, schemaid);
+ *schemas = list_append_unique_oid(*schemas, schemaid);
+ break;
+ case PUBLICATIONOBJ_SEQUENCES_IN_SCHEMA:
+ schemaid = get_namespace_oid(pubobj->name, false);
+
+ /* Filter out duplicates if user specifies "sch1, sch1" */
+ *sequences_schemas = list_append_unique_oid(*sequences_schemas, schemaid);
*schemas = list_append_unique_oid(*schemas, schemaid);
break;
case PUBLICATIONOBJ_TABLES_IN_CUR_SCHEMA:
@@ -186,6 +210,21 @@ ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
list_free(search_path);
/* Filter out duplicates if user specifies "sch1, sch1" */
+ *tables_schemas = list_append_unique_oid(*tables_schemas, schemaid);
+ *schemas = list_append_unique_oid(*schemas, schemaid);
+ break;
+ case PUBLICATIONOBJ_SEQUENCES_IN_CUR_SCHEMA:
+ search_path = fetch_search_path(false);
+ if (search_path == NIL) /* nothing valid in search_path? */
+ ereport(ERROR,
+ errcode(ERRCODE_UNDEFINED_SCHEMA),
+ errmsg("no schema has been selected for CURRENT_SCHEMA"));
+
+ schemaid = linitial_oid(search_path);
+ list_free(search_path);
+
+ /* Filter out duplicates if user specifies "sch1, sch1" */
+ *sequences_schemas = list_append_unique_oid(*sequences_schemas, schemaid);
*schemas = list_append_unique_oid(*schemas, schemaid);
break;
default:
@@ -251,7 +290,10 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
bool publish_via_partition_root_given;
bool publish_via_partition_root;
AclResult aclresult;
- List *relations = NIL;
+ List *tables = NIL;
+ List *sequences = NIL;
+ List *tables_schemaidlist = NIL;
+ List *sequences_schemaidlist = NIL;
List *schemaidlist = NIL;
/* must have CREATE privilege on database */
@@ -306,6 +348,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
BoolGetDatum(pubactions.pubdelete);
values[Anum_pg_publication_pubtruncate - 1] =
BoolGetDatum(pubactions.pubtruncate);
+ values[Anum_pg_publication_pubsequence - 1] =
+ BoolGetDatum(pubactions.pubsequence);
values[Anum_pg_publication_pubviaroot - 1] =
BoolGetDatum(publish_via_partition_root);
@@ -330,26 +374,40 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
}
else
{
- ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
+ ObjectsInPublicationToOids(stmt->pubobjects, pstate,
+ &tables, &sequences,
+ &tables_schemaidlist,
+ &sequences_schemaidlist,
&schemaidlist);
/* FOR ALL TABLES IN SCHEMA requires superuser */
- if (list_length(schemaidlist) > 0 && !superuser())
+ if (list_length(tables_schemaidlist) > 0 && !superuser())
ereport(ERROR,
errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
errmsg("must be superuser to create FOR ALL TABLES IN SCHEMA publication"));
- if (list_length(relations) > 0)
+ if (list_length(tables) > 0)
{
List *rels;
- rels = OpenTableList(relations);
- CheckObjSchemaNotAlreadyInPublication(rels, schemaidlist,
+ rels = OpenTableList(tables);
+ CheckObjSchemaNotAlreadyInPublication(rels, tables_schemaidlist,
PUBLICATIONOBJ_TABLE);
PublicationAddTables(puboid, rels, true, NULL);
CloseTableList(rels);
}
+ if (list_length(sequences) > 0)
+ {
+ List *rels;
+
+ rels = OpenSequenceList(sequences);
+ CheckObjSchemaNotAlreadyInPublication(rels, sequences_schemaidlist,
+ PUBLICATIONOBJ_SEQUENCE);
+ PublicationAddSequences(puboid, rels, true, NULL);
+ CloseSequenceList(rels);
+ }
+
if (list_length(schemaidlist) > 0)
{
/*
@@ -653,12 +711,13 @@ AlterPublicationSchemas(AlterPublicationStmt *stmt,
*/
static void
CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
- List *tables, List *schemaidlist)
+ List *tables, List *tables_schemaidlist,
+ List *sequences, List *sequences_schemaidlist)
{
Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
if ((stmt->action == AP_AddObjects || stmt->action == AP_SetObjects) &&
- schemaidlist && !superuser())
+ (tables_schemaidlist || sequences_schemaidlist) && !superuser())
ereport(ERROR,
(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
errmsg("must be superuser to add or set schemas")));
@@ -667,13 +726,24 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
* Check that user is allowed to manipulate the publication tables in
* schema
*/
- if (schemaidlist && pubform->puballtables)
+ if (tables_schemaidlist && pubform->puballtables)
ereport(ERROR,
(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
errmsg("publication \"%s\" is defined as FOR ALL TABLES",
NameStr(pubform->pubname)),
errdetail("Tables from schema cannot be added to, dropped from, or set on FOR ALL TABLES publications.")));
+ /*
+ * Check that user is allowed to manipulate the publication sequences in
+ * schema
+ */
+ if (sequences_schemaidlist && pubform->puballsequences)
+ ereport(ERROR,
+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("publication \"%s\" is defined as FOR ALL SEQUENCES",
+ NameStr(pubform->pubname)),
+ errdetail("Sequences from schema cannot be added to, dropped from, or set on FOR ALL SEQUENCES publications.")));
+
/* Check that user is allowed to manipulate the publication tables. */
if (tables && pubform->puballtables)
ereport(ERROR,
@@ -681,6 +751,108 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
errmsg("publication \"%s\" is defined as FOR ALL TABLES",
NameStr(pubform->pubname)),
errdetail("Tables cannot be added to or dropped from FOR ALL TABLES publications.")));
+
+ /* Check that user is allowed to manipulate the publication sequences. */
+ if (sequences && pubform->puballsequences)
+ ereport(ERROR,
+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("publication \"%s\" is defined as FOR ALL SEQUENCES",
+ NameStr(pubform->pubname)),
+ errdetail("Sequences cannot be added to or dropped from FOR ALL SEQUENCES publications.")));
+}
+
+/*
+ * Add or remove sequence to/from publication.
+ */
+static void
+AlterPublicationSequences(AlterPublicationStmt *stmt, HeapTuple tup,
+ List *sequences, List *schemaidlist)
+{
+ List *rels = NIL;
+ Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
+ Oid pubid = pubform->oid;
+
+ /*
+ * It is quite possible that for the SET case the user has not specified
+ * any sequences, in which case we need to remove all the existing ones.
+ */
+ if (!sequences && stmt->action != AP_SetObjects)
+ return;
+
+ rels = OpenSequenceList(sequences);
+
+ if (stmt->action == AP_AddObjects)
+ {
+ List *schemas = NIL;
+
+ /*
+ * Check if the relation is member of the existing schema in the
+ * publication or member of the schema list specified.
+ */
+ schemas = list_concat_copy(schemaidlist, GetPublicationSchemas(pubid));
+ CheckObjSchemaNotAlreadyInPublication(rels, schemas,
+ PUBLICATIONOBJ_SEQUENCE);
+ PublicationAddSequences(pubid, rels, false, stmt);
+ }
+ else if (stmt->action == AP_DropObjects)
+ PublicationDropSequences(pubid, rels, false);
+ else /* DEFELEM_SET */
+ {
+ List *oldrelids = GetPublicationSequenceRelations(pubid);
+ List *delrels = NIL;
+ ListCell *oldlc;
+
+ CheckObjSchemaNotAlreadyInPublication(rels, schemaidlist,
+ PUBLICATIONOBJ_SEQUENCE);
+
+ /* Calculate which relations to drop. */
+ foreach(oldlc, oldrelids)
+ {
+ Oid oldrelid = lfirst_oid(oldlc);
+ ListCell *newlc;
+ bool found = false;
+
+ foreach(newlc, rels)
+ {
+ PublicationRelInfo *newpubrel;
+
+ newpubrel = (PublicationRelInfo *) lfirst(newlc);
+ if (RelationGetRelid(newpubrel->relation) == oldrelid)
+ {
+ found = true;
+ break;
+ }
+ }
+ /* Not yet in the list, open it and add to the list */
+ if (!found)
+ {
+ Relation oldrel;
+ PublicationRelInfo *pubrel;
+
+ /* Wrap relation into PublicationRelInfo */
+ oldrel = table_open(oldrelid, ShareUpdateExclusiveLock);
+
+ pubrel = palloc(sizeof(PublicationRelInfo));
+ pubrel->relation = oldrel;
+
+ delrels = lappend(delrels, pubrel);
+ }
+ }
+
+ /* And drop them. */
+ PublicationDropSequences(pubid, delrels, true);
+
+ /*
+ * Don't bother calculating the difference for adding, we'll catch and
+ * skip existing ones when doing catalog update.
+ */
+ PublicationAddSequences(pubid, rels, true, stmt);
+
+ CloseSequenceList(delrels);
+ }
+
+ CloseSequenceList(rels);
}
/*
@@ -718,13 +890,21 @@ AlterPublication(ParseState *pstate, AlterPublicationStmt *stmt)
AlterPublicationOptions(pstate, stmt, rel, tup);
else
{
- List *relations = NIL;
+ List *tables = NIL;
+ List *sequences = NIL;
+ List *tables_schemaidlist = NIL;
+ List *sequences_schemaidlist = NIL;
List *schemaidlist = NIL;
- ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
+ ObjectsInPublicationToOids(stmt->pubobjects, pstate,
+ &tables, &sequences,
+ &tables_schemaidlist,
+ &sequences_schemaidlist,
&schemaidlist);
- CheckAlterPublication(stmt, tup, relations, schemaidlist);
+ CheckAlterPublication(stmt, tup,
+ tables, tables_schemaidlist,
+ sequences, sequences_schemaidlist);
/*
* Lock the publication so nobody else can do anything with it. This
@@ -749,7 +929,9 @@ AlterPublication(ParseState *pstate, AlterPublicationStmt *stmt)
errmsg("publication \"%s\" does not exist",
stmt->pubname));
- AlterPublicationTables(stmt, tup, relations, schemaidlist);
+ AlterPublicationTables(stmt, tup, tables, tables_schemaidlist);
+ AlterPublicationSequences(stmt, tup, sequences, sequences_schemaidlist);
+
AlterPublicationSchemas(stmt, tup, schemaidlist);
}
@@ -1157,6 +1339,144 @@ PublicationDropSchemas(Oid pubid, List *schemas, bool missing_ok)
}
}
+/*
+ * Open sequences specified by a PublicationTable list.
+ * In the returned list of PublicationRelInfo, sequences are locked
+ * in ShareUpdateExclusiveLock mode in order to add them to a publication.
+ */
+static List *
+OpenSequenceList(List *sequences)
+{
+ List *relids = NIL;
+ List *rels = NIL;
+ ListCell *lc;
+
+ /*
+ * Open, share-lock, and check all the explicitly-specified relations
+ */
+ foreach(lc, sequences)
+ {
+ PublicationTable *s = lfirst_node(PublicationTable, lc);
+ Relation rel;
+ Oid myrelid;
+ PublicationRelInfo *pub_rel;
+
+ /* Allow query cancel in case this takes a long time */
+ CHECK_FOR_INTERRUPTS();
+
+ rel = table_openrv(s->relation, ShareUpdateExclusiveLock);
+ myrelid = RelationGetRelid(rel);
+
+ /*
+ * Filter out duplicates if user specifies "foo, foo".
+ *
+ * Note that this algorithm is known to not be very efficient (O(N^2))
+ * but given that it only works on the list of sequences given to us by
+ * the user it's deemed acceptable.
+ */
+ if (list_member_oid(relids, myrelid))
+ {
+ table_close(rel, ShareUpdateExclusiveLock);
+ continue;
+ }
+
+ pub_rel = palloc(sizeof(PublicationRelInfo));
+ pub_rel->relation = rel;
+ rels = lappend(rels, pub_rel);
+ relids = lappend_oid(relids, myrelid);
+ }
+
+ list_free(relids);
+
+ return rels;
+}
+
+/*
+ * Close all relations in the list.
+ */
+static void
+CloseSequenceList(List *rels)
+{
+ ListCell *lc;
+
+ foreach(lc, rels)
+ {
+ PublicationRelInfo *pub_rel;
+
+ pub_rel = (PublicationRelInfo *) lfirst(lc);
+ table_close(pub_rel->relation, NoLock);
+ }
+}
+
+/*
+ * Add listed sequences to the publication.
+ */
+static void
+PublicationAddSequences(Oid pubid, List *rels, bool if_not_exists,
+ AlterPublicationStmt *stmt)
+{
+ ListCell *lc;
+
+ Assert(!stmt || !stmt->for_all_sequences);
+
+ foreach(lc, rels)
+ {
+ PublicationRelInfo *pub_rel = (PublicationRelInfo *) lfirst(lc);
+ Relation rel = pub_rel->relation;
+ ObjectAddress obj;
+
+ /* Must be owner of the sequence or superuser. */
+ if (!pg_class_ownercheck(RelationGetRelid(rel), GetUserId()))
+ aclcheck_error(ACLCHECK_NOT_OWNER, get_relkind_objtype(rel->rd_rel->relkind),
+ RelationGetRelationName(rel));
+
+ obj = publication_add_relation(pubid, pub_rel, if_not_exists);
+ if (stmt)
+ {
+ EventTriggerCollectSimpleCommand(obj, InvalidObjectAddress,
+ (Node *) stmt);
+
+ InvokeObjectPostCreateHook(PublicationRelRelationId,
+ obj.objectId, 0);
+ }
+ }
+}
+
+/*
+ * Remove listed sequences from the publication.
+ */
+static void
+PublicationDropSequences(Oid pubid, List *rels, bool missing_ok)
+{
+ ObjectAddress obj;
+ ListCell *lc;
+ Oid prid;
+
+ foreach(lc, rels)
+ {
+ PublicationRelInfo *pubrel = (PublicationRelInfo *) lfirst(lc);
+ Relation rel = pubrel->relation;
+ Oid relid = RelationGetRelid(rel);
+
+ prid = GetSysCacheOid2(PUBLICATIONRELMAP, Anum_pg_publication_rel_oid,
+ ObjectIdGetDatum(relid),
+ ObjectIdGetDatum(pubid));
+ if (!OidIsValid(prid))
+ {
+ if (missing_ok)
+ continue;
+
+ ereport(ERROR,
+ (errcode(ERRCODE_UNDEFINED_OBJECT),
+ errmsg("relation \"%s\" is not part of the publication",
+ RelationGetRelationName(rel))));
+ }
+
+ ObjectAddressSet(obj, PublicationRelRelationId, prid);
+ performDeletion(&obj, DROP_CASCADE, 0);
+ }
+}
+
+
/*
* Internal workhorse for changing a publication owner
*/
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index ab592ce2f15..fe4f21ec438 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -336,6 +336,85 @@ ResetSequence(Oid seq_relid)
relation_close(seq_rel, NoLock);
}
+/*
+ * Set a sequence to the specified state (last_value, log_cnt, is_called).
+ *
+ * The change is made transactionally, so that on failure of the current
+ * transaction, the sequence will be restored to its previous state.
+ * We do that by creating a whole new relfilenode for the sequence; so this
+ * works much like the rewriting forms of ALTER TABLE.
+ *
+ * Caller is assumed to have acquired AccessExclusiveLock on the sequence,
+ * which must not be released until end of transaction. Caller is also
+ * responsible for permissions checking.
+ */
+void
+ResetSequence2(Oid seq_relid, int64 last_value, int64 log_cnt, bool is_called)
+{
+ Relation seq_rel;
+ SeqTable elm;
+ Form_pg_sequence_data seq;
+ Buffer buf;
+ HeapTupleData seqdatatuple;
+ HeapTuple tuple;
+
+ /*
+ * Read the old sequence. This does a bit more work than really
+ * necessary, but it's simple, and we do want to double-check that it's
+ * indeed a sequence.
+ */
+ init_sequence(seq_relid, &elm, &seq_rel);
+ (void) read_seq_tuple(seq_rel, &buf, &seqdatatuple);
+
+ /*
+ * Copy the existing sequence tuple.
+ */
+ tuple = heap_copytuple(&seqdatatuple);
+
+ /* Now we're done with the old page */
+ UnlockReleaseBuffer(buf);
+
+ /*
+ * Modify the copied tuple to execute the restart (compare the RESTART
+ * action in AlterSequence)
+ */
+ seq = (Form_pg_sequence_data) GETSTRUCT(tuple);
+ seq->last_value = last_value;
+ seq->is_called = is_called;
+ seq->log_cnt = log_cnt;
+
+ /*
+ * Create a new storage file for the sequence.
+ */
+ RelationSetNewRelfilenode(seq_rel, seq_rel->rd_rel->relpersistence);
+
+ /*
+ * Ensure sequence's relfrozenxid is at 0, since it won't contain any
+ * unfrozen XIDs. Same with relminmxid, since a sequence will never
+ * contain multixacts.
+ */
+ Assert(seq_rel->rd_rel->relfrozenxid == InvalidTransactionId);
+ Assert(seq_rel->rd_rel->relminmxid == InvalidMultiXactId);
+
+ /*
+ * Insert the modified tuple into the new storage file.
+ *
+ * XXX Maybe this should also use created=true, just like the other places
+ * calling fill_seq_with_data. That's probably needed for correct cascading
+ * replication.
+ *
+ * XXX That'd mean all fill_seq_with_data callers use created=true, making
+ * the parameter unnecessary.
+ */
+ fill_seq_with_data(seq_rel, tuple);
+
+ /* Clear local cache so that we don't think we have cached numbers */
+ /* Note that we do not change the currval() state */
+ elm->cached = elm->last;
+
+ relation_close(seq_rel, NoLock);
+}
+
/*
* Initialize a sequence's relation with the specified tuple as content
*/
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 3ef6607d246..5beb67e7652 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -85,6 +85,7 @@ typedef struct SubOpts
} SubOpts;
static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
static void check_duplicates_in_publist(List *publist, Datum *datums);
static List *merge_publications(List *oldpublist, List *newpublist, bool addpub, const char *subname);
static void ReportSlotConnectionError(List *rstates, Oid subid, char *slotname, char *err);
@@ -496,6 +497,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
char *err;
WalReceiverConn *wrconn;
List *tables;
+ List *sequences;
ListCell *lc;
char table_state;
@@ -534,6 +536,26 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
InvalidXLogRecPtr);
}
+ /*
+ * Get the sequence list from publisher and build local sequence
+ * status info.
+ */
+ sequences = fetch_sequence_list(wrconn, publications);
+ foreach(lc, sequences)
+ {
+ RangeVar *rv = (RangeVar *) lfirst(lc);
+ Oid relid;
+
+ relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+ /* Check for supported relkind. */
+ CheckSubscriptionRelkind(get_rel_relkind(relid),
+ rv->schemaname, rv->relname);
+
+ AddSubscriptionRelState(subid, relid, table_state,
+ InvalidXLogRecPtr);
+ }
+
/*
* If requested, create permanent slot for the subscription. We
* won't use the initial snapshot for anything, so no need to
@@ -706,6 +728,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data)
{
Oid relid = subrel_local_oids[off];
+ /* XXX ignore sequences - maybe do this in GetSubscriptionRelations? */
+ if (get_rel_relkind(relid) == RELKIND_SEQUENCE)
+ continue;
+
if (!bsearch(&relid, pubrel_local_oids,
list_length(pubrel_names), sizeof(Oid), oid_cmp))
{
@@ -797,6 +823,183 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data)
ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
}
}
+
+ /*
+ * XXX now do the same thing for sequences, maybe before the preceding
+ * block, or earlier?
+ */
+
+ /* Get the sequence list from publisher. */
+ pubrel_names = fetch_sequence_list(wrconn, sub->publications);
+
+ /* Get the local list of subscribed relations. */
+ subrel_states = GetSubscriptionRelations(sub->oid);
+
+ /*
+ * Build qsorted array of local table oids for faster lookup. This can
+ * potentially contain all tables in the database so speed of lookup
+ * is important.
+ */
+ subrel_local_oids = palloc(list_length(subrel_states) * sizeof(Oid));
+ off = 0;
+ foreach(lc, subrel_states)
+ {
+ SubscriptionRelState *relstate = (SubscriptionRelState *) lfirst(lc);
+
+ subrel_local_oids[off++] = relstate->relid;
+ }
+ qsort(subrel_local_oids, list_length(subrel_states),
+ sizeof(Oid), oid_cmp);
+
+ /*
+ * Rels that we want to remove from subscription and drop any slots
+ * and origins corresponding to them.
+ */
+ sub_remove_rels = palloc(list_length(subrel_states) * sizeof(SubRemoveRels));
+
+ /*
+ * Walk over the remote sequences and try to match them to locally known
+ * relations. If the sequence is not known locally, create a new state
+ * for it.
+ *
+ * Also builds array of local oids of remote sequences for the next step.
+ */
+ off = 0;
+ pubrel_local_oids = palloc(list_length(pubrel_names) * sizeof(Oid));
+
+ foreach(lc, pubrel_names)
+ {
+ RangeVar *rv = (RangeVar *) lfirst(lc);
+ Oid relid;
+
+ relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+ /* Check for supported relkind. */
+ CheckSubscriptionRelkind(get_rel_relkind(relid),
+ rv->schemaname, rv->relname);
+
+ pubrel_local_oids[off++] = relid;
+
+ if (!bsearch(&relid, subrel_local_oids,
+ list_length(subrel_states), sizeof(Oid), oid_cmp))
+ {
+ AddSubscriptionRelState(sub->oid, relid,
+ copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
+ InvalidXLogRecPtr);
+ ereport(DEBUG1,
+ (errmsg_internal("sequence \"%s.%s\" added to subscription \"%s\"",
+ rv->schemaname, rv->relname, sub->name)));
+ }
+ }
+
+ /*
+ * Next remove state for sequences we should not care about anymore using
+ * the data we collected above
+ */
+ qsort(pubrel_local_oids, list_length(pubrel_names),
+ sizeof(Oid), oid_cmp);
+
+ remove_rel_len = 0;
+ for (off = 0; off < list_length(subrel_states); off++)
+ {
+ Oid relid = subrel_local_oids[off];
+
+ /* XXX ignore non-sequences - maybe do this in GetSubscriptionRelations? */
+ if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+ continue;
+
+ if (!bsearch(&relid, pubrel_local_oids,
+ list_length(pubrel_names), sizeof(Oid), oid_cmp))
+ {
+ char state;
+ XLogRecPtr statelsn;
+
+ /*
+ * Lock pg_subscription_rel with AccessExclusiveLock to
+ * prevent any race conditions with the apply worker
+ * re-launching workers at the same time this code is trying
+ * to remove those tables.
+ *
+ * Even if new worker for this particular rel is restarted it
+ * won't be able to make any progress as we hold exclusive
+ * lock on subscription_rel till the transaction end. It will
+ * simply exit as there is no corresponding rel entry.
+ *
+ * This locking also ensures that the state of rels won't
+ * change till we are done with this refresh operation.
+ */
+ if (!rel)
+ rel = table_open(SubscriptionRelRelationId, AccessExclusiveLock);
+
+ /* Last known rel state. */
+ state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
+
+ sub_remove_rels[remove_rel_len].relid = relid;
+ sub_remove_rels[remove_rel_len++].state = state;
+
+ RemoveSubscriptionRel(sub->oid, relid);
+
+ logicalrep_worker_stop(sub->oid, relid);
+
+ /*
+ * For READY state, we would have already dropped the
+ * tablesync origin.
+ */
+ if (state != SUBREL_STATE_READY)
+ {
+ char originname[NAMEDATALEN];
+
+ /*
+ * Drop the tablesync's origin tracking if exists.
+ *
+ * It is possible that the origin is not yet created for
+ * tablesync worker, this can happen for the states before
+ * SUBREL_STATE_FINISHEDCOPY. The apply worker can also
+ * concurrently try to drop the origin and by this time
+ * the origin might be already removed. For these reasons,
+ * passing missing_ok = true.
+ */
+ ReplicationOriginNameForTablesync(sub->oid, relid, originname,
+ sizeof(originname));
+ replorigin_drop_by_name(originname, true, false);
+ }
+
+ ereport(DEBUG1,
+ (errmsg_internal("sequence \"%s.%s\" removed from subscription \"%s\"",
+ get_namespace_name(get_rel_namespace(relid)),
+ get_rel_name(relid),
+ sub->name)));
+ }
+ }
+
+ /*
+ * Drop the tablesync slots associated with removed tables. This has
+ * to be at the end because otherwise if there is an error while doing
+ * the database operations we won't be able to rollback dropped slots.
+ */
+ for (off = 0; off < remove_rel_len; off++)
+ {
+ if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
+ sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+ {
+ char syncslotname[NAMEDATALEN] = {0};
+
+ /*
+ * For READY/SYNCDONE states we know the tablesync slot has
+ * already been dropped by the tablesync worker.
+ *
+ * For other states, there is no certainty, maybe the slot
+ * does not exist yet. Also, if we fail after removing some of
+ * the slots, next time, it will again try to drop already
+ * dropped slots and fail. For these reasons, we allow
+ * missing_ok = true for the drop.
+ */
+ ReplicationSlotNameForTablesync(sub->oid, sub_remove_rels[off].relid,
+ syncslotname, sizeof(syncslotname));
+ ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
+ }
+ }
+
}
PG_FINALLY();
{
@@ -1616,6 +1819,75 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
return tablelist;
}
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+ WalRcvExecResult *res;
+ StringInfoData cmd;
+ TupleTableSlot *slot;
+ Oid tableRow[2] = {TEXTOID, TEXTOID};
+ ListCell *lc;
+ bool first;
+ List *tablelist = NIL;
+
+ Assert(list_length(publications) > 0);
+
+ initStringInfo(&cmd);
+ appendStringInfoString(&cmd, "SELECT DISTINCT s.schemaname, s.sequencename\n"
+ " FROM pg_catalog.pg_publication_sequences s\n"
+ " WHERE s.pubname IN (");
+ first = true;
+ foreach(lc, publications)
+ {
+ char *pubname = strVal(lfirst(lc));
+
+ if (first)
+ first = false;
+ else
+ appendStringInfoString(&cmd, ", ");
+
+ appendStringInfoString(&cmd, quote_literal_cstr(pubname));
+ }
+ appendStringInfoChar(&cmd, ')');
+
+ res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+ pfree(cmd.data);
+
+ if (res->status != WALRCV_OK_TUPLES)
+ ereport(ERROR,
+ (errmsg("could not receive list of replicated sequences from the publisher: %s",
+ res->err)));
+
+ /* Process the returned sequences. */
+ slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+ while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+ {
+ char *nspname;
+ char *relname;
+ bool isnull;
+ RangeVar *rv;
+
+ nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+ Assert(!isnull);
+ relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+ Assert(!isnull);
+
+ rv = makeRangeVar(nspname, relname, -1);
+ tablelist = lappend(tablelist, rv);
+
+ ExecClearTuple(slot);
+ }
+ ExecDropSingleTupleTableSlot(slot);
+
+ walrcv_clear_result(res);
+
+ return tablelist;
+}
+
/*
* This is to report the connection failure while dropping replication slots.
* Here, we report the WARNING for all tablesync slots so that user can drop
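-- Illustration only, not part of the patch (names are made up): on the
-- subscriber side the sequence state is copied at CREATE SUBSCRIPTION time,
-- and sequences added to the publication later are picked up by
ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION;
-- after which the local sequence should match the last synced publisher
-- state:
SELECT last_value, log_cnt, is_called FROM s;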
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index 313c87398b2..78f14119d98 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -608,7 +608,7 @@ void
CheckSubscriptionRelkind(char relkind, const char *nspname,
const char *relname)
{
- if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+ if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE && relkind != RELKIND_SEQUENCE)
ereport(ERROR,
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 6bd95bbce24..8b7e9710401 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -4852,6 +4852,7 @@ _copyCreatePublicationStmt(const CreatePublicationStmt *from)
COPY_NODE_FIELD(options);
COPY_NODE_FIELD(pubobjects);
COPY_SCALAR_FIELD(for_all_tables);
+ COPY_SCALAR_FIELD(for_all_sequences);
return newnode;
}
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index 4126516222b..55a7dbbddf3 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -2337,6 +2337,7 @@ _equalAlterPublicationStmt(const AlterPublicationStmt *a,
COMPARE_NODE_FIELD(options);
COMPARE_NODE_FIELD(pubobjects);
COMPARE_SCALAR_FIELD(for_all_tables);
+ COMPARE_SCALAR_FIELD(for_all_sequences);
COMPARE_SCALAR_FIELD(action);
return true;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index c4f32425060..bbde765cf56 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -9771,6 +9771,26 @@ PublicationObjSpec:
$$->pubobjtype = PUBLICATIONOBJ_TABLES_IN_CUR_SCHEMA;
$$->location = @5;
}
+ | SEQUENCE relation_expr
+ {
+ $$ = makeNode(PublicationObjSpec);
+ $$->pubobjtype = PUBLICATIONOBJ_SEQUENCE;
+ $$->pubtable = makeNode(PublicationTable);
+ $$->pubtable->relation = $2;
+ }
+ | ALL SEQUENCES IN_P SCHEMA ColId
+ {
+ $$ = makeNode(PublicationObjSpec);
+ $$->pubobjtype = PUBLICATIONOBJ_SEQUENCES_IN_SCHEMA;
+ $$->name = $5;
+ $$->location = @5;
+ }
+ | ALL SEQUENCES IN_P SCHEMA CURRENT_SCHEMA
+ {
+ $$ = makeNode(PublicationObjSpec);
+ $$->pubobjtype = PUBLICATIONOBJ_SEQUENCES_IN_CUR_SCHEMA;
+ $$->location = @5;
+ }
| ColId
{
$$ = makeNode(PublicationObjSpec);
@@ -10106,6 +10126,12 @@ UnlistenStmt:
}
;
+/*
+ * FIXME
+ *
+ * opt_publication_for_sequences and publication_for_sequences should be
+ * copies for sequences
+ */
/*****************************************************************************
*
@@ -10114,6 +10140,12 @@ UnlistenStmt:
* BEGIN / COMMIT / ROLLBACK
* (also older versions END / ABORT)
*
+ * ALTER PUBLICATION name ADD SEQUENCE sequence [, sequence2]
+ *
+ * ALTER PUBLICATION name DROP SEQUENCE sequence [, sequence2]
+ *
+ * ALTER PUBLICATION name SET SEQUENCE sequence [, sequence2]
+ *
*****************************************************************************/
TransactionStmt:
diff --git a/src/backend/replication/logical/proto.c b/src/backend/replication/logical/proto.c
index 953942692ce..e8ead1387ae 100644
--- a/src/backend/replication/logical/proto.c
+++ b/src/backend/replication/logical/proto.c
@@ -647,6 +647,56 @@ logicalrep_write_message(StringInfo out, TransactionId xid, XLogRecPtr lsn,
pq_sendbytes(out, message, sz);
}
+/*
+ * Write SEQUENCE to stream
+ */
+void
+logicalrep_write_sequence(StringInfo out, Relation rel, TransactionId xid,
+ XLogRecPtr lsn, bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ uint8 flags = 0;
+ char *relname;
+
+ pq_sendbyte(out, LOGICAL_REP_MSG_SEQUENCE);
+
+ /* transaction ID (if not valid, we're not streaming) */
+ if (TransactionIdIsValid(xid))
+ pq_sendint32(out, xid);
+
+ pq_sendint8(out, flags);
+ pq_sendint64(out, lsn);
+
+ logicalrep_write_namespace(out, RelationGetNamespace(rel));
+ relname = RelationGetRelationName(rel);
+ pq_sendstring(out, relname);
+
+ pq_sendint8(out, transactional);
+ pq_sendint64(out, last_value);
+ pq_sendint64(out, log_cnt);
+ pq_sendint8(out, is_called);
+}
+
+/*
+ * Read SEQUENCE from the stream.
+ */
+void
+logicalrep_read_sequence(StringInfo in, LogicalRepSequence *seqdata)
+{
+ /* XXX skipping flags and lsn */
+ pq_getmsgint(in, 1);
+ pq_getmsgint64(in);
+
+ /* Read relation name from stream */
+ seqdata->nspname = pstrdup(logicalrep_read_namespace(in));
+ seqdata->seqname = pstrdup(pq_getmsgstring(in));
+
+ seqdata->transactional = pq_getmsgint(in, 1);
+ seqdata->last_value = pq_getmsgint64(in);
+ seqdata->log_cnt = pq_getmsgint64(in);
+ seqdata->is_called = pq_getmsgint(in, 1);
+}
+
/*
* Write relation description to the output stream.
*/
@@ -1203,6 +1253,8 @@ logicalrep_message_type(LogicalRepMsgType action)
return "STREAM ABORT";
case LOGICAL_REP_MSG_STREAM_PREPARE:
return "STREAM PREPARE";
+ case LOGICAL_REP_MSG_SEQUENCE:
+ return "SEQUENCE";
}
elog(ERROR, "invalid logical replication message type \"%c\"", action);
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index e596b69d466..c85867ee46b 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -100,6 +100,7 @@
#include "catalog/pg_subscription_rel.h"
#include "catalog/pg_type.h"
#include "commands/copy.h"
+#include "commands/sequence.h"
#include "miscadmin.h"
#include "parser/parse_relation.h"
#include "pgstat.h"
@@ -359,6 +360,12 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
*
* If the synchronization position is reached (SYNCDONE), then the table can
* be marked as READY and is no longer tracked.
+ *
+ * XXX This needs to handle sequences too - after AlterSubscription_refresh
+ * starts caring about sequences, GetSubscriptionNotReadyRelations won't
+ * return just tables, and we'll have to sync them here. Not sure it's worth
+ * creating a new "sync" worker per sequence, maybe we should just sync them
+ * in the current process (it's pretty light-weight).
*/
static void
process_syncing_tables_for_apply(XLogRecPtr current_lsn)
@@ -873,6 +880,99 @@ copy_table(Relation rel)
logicalrep_rel_close(relmapentry, NoLock);
}
+
+
+/*
+ * Fetch the current state (last_value, log_cnt, is_called) of a sequence
+ * from the publisher, using a plain SELECT on the sequence relation.
+ */
+static void
+fetch_sequence_data(char *nspname, char *relname,
+ int64 *last_value, int64 *log_cnt, bool *is_called)
+{
+ WalRcvExecResult *res;
+ StringInfoData cmd;
+ TupleTableSlot *slot;
+ Oid tableRow[3] = {INT8OID, INT8OID, BOOLOID};
+
+ initStringInfo(&cmd);
+ appendStringInfo(&cmd, "SELECT last_value, log_cnt, is_called\n"
+ " FROM %s", quote_qualified_identifier(nspname, relname));
+
+ res = walrcv_exec(LogRepWorkerWalRcvConn, cmd.data, 3, tableRow);
+ pfree(cmd.data);
+
+ if (res->status != WALRCV_OK_TUPLES)
+ ereport(ERROR,
+ (errmsg("could not receive sequence state from the publisher: %s",
+ res->err)));
+
+ /* Process the sequence. */
+ slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+ while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+ {
+ bool isnull;
+
+ *last_value = DatumGetInt64(slot_getattr(slot, 1, &isnull));
+ Assert(!isnull);
+
+ *log_cnt = DatumGetInt64(slot_getattr(slot, 2, &isnull));
+ Assert(!isnull);
+
+ *is_called = DatumGetBool(slot_getattr(slot, 3, &isnull));
+ Assert(!isnull);
+
+ ExecClearTuple(slot);
+ }
+ ExecDropSingleTupleTableSlot(slot);
+
+ walrcv_clear_result(res);
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Caller is responsible for locking the local relation.
+ */
+static void
+copy_sequence(Relation rel)
+{
+ LogicalRepRelMapEntry *relmapentry;
+ LogicalRepRelation lrel;
+ int64 last_value = 0,
+ log_cnt = 0;
+ bool is_called = false;
+
+ /* Get the publisher relation info. */
+ fetch_remote_table_info(get_namespace_name(RelationGetNamespace(rel)),
+ RelationGetRelationName(rel), &lrel);
+
+ /* Put the relation into relmap. */
+ logicalrep_relmap_update(&lrel);
+
+ /* Map the publisher relation to local one. */
+ relmapentry = logicalrep_rel_open(lrel.remoteid, NoLock);
+ Assert(rel == relmapentry->localrel);
+
+ Assert(lrel.relkind == RELKIND_SEQUENCE);
+
+ fetch_sequence_data(lrel.nspname, lrel.relname, &last_value, &log_cnt, &is_called);
+
+ elog(WARNING, "sequence %s info last_value %ld log_cnt %ld is_called %d",
+ quote_qualified_identifier(lrel.nspname, lrel.relname),
+ last_value, log_cnt, is_called);
+
+ ResetSequence2(RelationGetRelid(rel), last_value, log_cnt, is_called);
+
+ logicalrep_rel_close(relmapentry, NoLock);
+}
+
+
+
+
/*
* Determine the tablesync slot name.
*
@@ -1134,10 +1234,20 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
originname)));
}
- /* Now do the initial data copy */
- PushActiveSnapshot(GetTransactionSnapshot());
- copy_table(rel);
- PopActiveSnapshot();
+ if (get_rel_relkind(RelationGetRelid(rel)) == RELKIND_SEQUENCE)
+ {
+ /* Now do the initial sequence copy */
+ PushActiveSnapshot(GetTransactionSnapshot());
+ copy_sequence(rel);
+ PopActiveSnapshot();
+ }
+ else
+ {
+ /* Now do the initial data copy */
+ PushActiveSnapshot(GetTransactionSnapshot());
+ copy_table(rel);
+ PopActiveSnapshot();
+ }
res = walrcv_exec(LogRepWorkerWalRcvConn, "COMMIT", 0, NULL);
if (res->status != WALRCV_OK_COMMAND)
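-- Illustration only: the sync worker reads the publisher's sequence state
-- with a plain query (see fetch_sequence_data above)
SELECT last_value, log_cnt, is_called FROM public.s;
-- and applies it locally through ResetSequence2().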
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index d77bb32bb9e..68708d3907f 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -144,6 +144,7 @@
#include "catalog/pg_tablespace.h"
#include "commands/tablecmds.h"
#include "commands/tablespace.h"
+#include "commands/sequence.h"
#include "commands/trigger.h"
#include "executor/executor.h"
#include "executor/execPartition.h"
@@ -1093,6 +1094,61 @@ apply_handle_origin(StringInfo s)
errmsg_internal("ORIGIN message sent out of order")));
}
+/*
+ * Handle SEQUENCE message.
+ */
+static void
+apply_handle_sequence(StringInfo s)
+{
+ LogicalRepSequence seq;
+ Oid relid;
+
+ if (handle_streamed_transaction(LOGICAL_REP_MSG_SEQUENCE, s))
+ return;
+
+ logicalrep_read_sequence(s, &seq);
+
+ /*
+ * Transactional sequence updates have to arrive as part of a remote
+ * transaction; non-transactional ones must not, and there must not be
+ * any local transaction in progress for them.
+ */
+ Assert((!seq.transactional) || in_remote_transaction);
+ Assert(!(!seq.transactional && in_remote_transaction));
+ Assert(!(!seq.transactional && IsTransactionState()));
+
+ /*
+ * Make sure we're in a transaction (needed by ResetSequence2). For
+ * non-transactional updates we start a new one here and commit it at
+ * the end of this function.
+ */
+ if (!IsTransactionState())
+ {
+ StartTransactionCommand();
+ maybe_reread_subscription();
+ }
+
+ relid = RangeVarGetRelid(makeRangeVar(seq.nspname,
+ seq.seqname, -1),
+ RowExclusiveLock, false);
+
+ /* lock the sequence in AccessExclusiveLock, as expected by ResetSequence2 */
+ elog(WARNING, "locking sequence %d in exclusive mode", relid);
+ LockRelationOid(relid, AccessExclusiveLock);
+
+ elog(WARNING, "applying sequence %s.%s transactional %d last_value %ld log_cnt %ld is_called %d",
+ seq.nspname, seq.seqname, seq.transactional, seq.last_value, seq.log_cnt, seq.is_called);
+
+ /* apply the sequence change */
+ ResetSequence2(relid, seq.last_value, seq.log_cnt, seq.is_called);
+
+ /*
+ * Commit the transaction we started above (we only do this when not in
+ * a remote transaction, i.e. for non-transactional sequence updates).
+ */
+ if (!in_remote_transaction)
+ CommitTransactionCommand();
+}
+
/*
* Handle STREAM START message.
*/
@@ -2421,6 +2477,10 @@ apply_dispatch(StringInfo s)
*/
break;
+ case LOGICAL_REP_MSG_SEQUENCE:
+ apply_handle_sequence(s);
+ return;
+
case LOGICAL_REP_MSG_STREAM_START:
apply_handle_stream_start(s);
break;
diff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c
index 6df705f90ff..b82c6b10305 100644
--- a/src/backend/replication/pgoutput/pgoutput.c
+++ b/src/backend/replication/pgoutput/pgoutput.c
@@ -49,6 +49,10 @@ static void pgoutput_message(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn, XLogRecPtr message_lsn,
bool transactional, const char *prefix,
Size sz, const char *message);
+static void pgoutput_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation rel, bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called);
static bool pgoutput_origin_filter(LogicalDecodingContext *ctx,
RepOriginId origin_id);
static void pgoutput_begin_prepare_txn(LogicalDecodingContext *ctx,
@@ -161,6 +165,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->change_cb = pgoutput_change;
cb->truncate_cb = pgoutput_truncate;
cb->message_cb = pgoutput_message;
+ cb->sequence_cb = pgoutput_sequence;
cb->commit_cb = pgoutput_commit_txn;
cb->begin_prepare_cb = pgoutput_begin_prepare_txn;
@@ -177,6 +182,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->stream_commit_cb = pgoutput_stream_commit;
cb->stream_change_cb = pgoutput_change;
cb->stream_message_cb = pgoutput_message;
+ cb->stream_sequence_cb = pgoutput_sequence;
cb->stream_truncate_cb = pgoutput_truncate;
/* transaction streaming - two-phase commit */
cb->stream_prepare_cb = pgoutput_stream_prepare_txn;
@@ -190,6 +196,7 @@ parse_output_parameters(List *options, PGOutputData *data)
bool publication_names_given = false;
bool binary_option_given = false;
bool messages_option_given = false;
+ bool sequences_option_given = false;
bool streaming_given = false;
bool two_phase_option_given = false;
@@ -197,6 +204,7 @@ parse_output_parameters(List *options, PGOutputData *data)
data->streaming = false;
data->messages = false;
data->two_phase = false;
+ data->sequences = true;
foreach(lc, options)
{
@@ -262,6 +270,16 @@ parse_output_parameters(List *options, PGOutputData *data)
data->messages = defGetBoolean(defel);
}
+ else if (strcmp(defel->defname, "sequences") == 0)
+ {
+ if (sequences_option_given)
+ ereport(ERROR,
+ (errcode(ERRCODE_SYNTAX_ERROR),
+ errmsg("conflicting or redundant options")));
+ sequences_option_given = true;
+
+ data->sequences = defGetBoolean(defel);
+ }
else if (strcmp(defel->defname, "streaming") == 0)
{
if (streaming_given)
@@ -858,6 +876,51 @@ pgoutput_message(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
OutputPluginWrite(ctx, true);
}
+static void
+pgoutput_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation rel, bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ PGOutputData *data = (PGOutputData *) ctx->output_plugin_private;
+ TransactionId xid = InvalidTransactionId;
+ RelationSyncEntry *relentry;
+
+ if (!data->sequences)
+ return;
+
+ if (!is_publishable_relation(rel))
+ return;
+
+ /*
+ * Remember the xid for the message in streaming mode. See
+ * pgoutput_change.
+ */
+ if (in_streaming)
+ xid = txn->xid;
+
+ relentry = get_rel_sync_entry(data, RelationGetRelid(rel));
+
+ /*
+ * First check the sequence filter.
+ *
+ * We handle just REORDER_BUFFER_CHANGE_SEQUENCE here.
+ */
+ if (!relentry->pubactions.pubsequence)
+ return;
+
+ OutputPluginPrepareWrite(ctx, true);
+ logicalrep_write_sequence(ctx->out,
+ rel,
+ xid,
+ sequence_lsn,
+ transactional,
+ last_value,
+ log_cnt,
+ is_called);
+ OutputPluginWrite(ctx, true);
+}
+
/*
* Currently we always forward.
*/
@@ -1141,7 +1204,8 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
entry->schema_sent = false;
entry->streamed_txns = NIL;
entry->pubactions.pubinsert = entry->pubactions.pubupdate =
- entry->pubactions.pubdelete = entry->pubactions.pubtruncate = false;
+ entry->pubactions.pubdelete = entry->pubactions.pubtruncate =
+ entry->pubactions.pubsequence = false;
entry->publish_as_relid = InvalidOid;
entry->map = NULL; /* will be set by maybe_send_schema() if
* needed */
@@ -1163,6 +1227,7 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
Oid publish_as_relid = relid;
bool am_partition = get_rel_relispartition(relid);
char relkind = get_rel_relkind(relid);
+ bool is_sequence = (get_rel_relkind(relid) == RELKIND_SEQUENCE);
/* Reload publications if needed before use. */
if (!publications_valid)
@@ -1191,6 +1256,7 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
entry->pubactions.pubupdate = false;
entry->pubactions.pubdelete = false;
entry->pubactions.pubtruncate = false;
+ entry->pubactions.pubsequence = false;
if (entry->map)
{
/*
@@ -1213,12 +1279,23 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
Publication *pub = lfirst(lc);
bool publish = false;
- if (pub->alltables)
+ if (pub->alltables && (!is_sequence))
{
publish = true;
if (pub->pubviaroot && am_partition)
publish_as_relid = llast_oid(get_partition_ancestors(relid));
}
+ else if (pub->allsequences && is_sequence)
+ {
+ publish = true;
+ }
+
+ /* if a sequence, just cross-check the list of publications */
+ if (!publish && is_sequence)
+ {
+ if (list_member_oid(pubids, pub->oid))
+ publish = true;
+ }
if (!publish)
{
@@ -1275,10 +1352,12 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
entry->pubactions.pubupdate |= pub->pubactions.pubupdate;
entry->pubactions.pubdelete |= pub->pubactions.pubdelete;
entry->pubactions.pubtruncate |= pub->pubactions.pubtruncate;
+ entry->pubactions.pubsequence |= pub->pubactions.pubsequence;
}
if (entry->pubactions.pubinsert && entry->pubactions.pubupdate &&
- entry->pubactions.pubdelete && entry->pubactions.pubtruncate)
+ entry->pubactions.pubdelete && entry->pubactions.pubtruncate &&
+ entry->pubactions.pubsequence)
break;
}
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index 2707fed12f4..45a8b3e490a 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -5586,6 +5586,7 @@ GetRelationPublicationActions(Relation relation)
pubactions->pubupdate |= pubform->pubupdate;
pubactions->pubdelete |= pubform->pubdelete;
pubactions->pubtruncate |= pubform->pubtruncate;
+ pubactions->pubsequence |= pubform->pubsequence;
ReleaseSysCache(tup);
@@ -5594,7 +5595,8 @@ GetRelationPublicationActions(Relation relation)
* other publications.
*/
if (pubactions->pubinsert && pubactions->pubupdate &&
- pubactions->pubdelete && pubactions->pubtruncate)
+ pubactions->pubdelete && pubactions->pubtruncate &&
+ pubactions->pubsequence)
break;
}
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 98882272130..4900c48ff19 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1782,20 +1782,20 @@ psql_completion(const char *text, int start, int end)
COMPLETE_WITH("ADD", "DROP", "OWNER TO", "RENAME TO", "SET");
/* ALTER PUBLICATION <name> ADD */
else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD"))
- COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE");
- else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD|SET", "TABLE") ||
- (HeadMatches("ALTER", "PUBLICATION", MatchAny, "ADD|SET", "TABLE") &&
+ COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE|SEQUENCE");
+ else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD|SET", "TABLE|SEQUENCE") ||
+ (HeadMatches("ALTER", "PUBLICATION", MatchAny, "ADD|SET", "TABLE|SEQUENCE") &&
ends_with(prev_wd, ',')))
COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_tables);
- else if (HeadMatches("ALTER", "PUBLICATION", MatchAny, "ADD|SET", "TABLE"))
+ else if (HeadMatches("ALTER", "PUBLICATION", MatchAny, "ADD|SET", "TABLE|SEQUENCE"))
COMPLETE_WITH(",");
/* ALTER PUBLICATION <name> DROP */
else if (Matches("ALTER", "PUBLICATION", MatchAny, "DROP"))
- COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE");
+ COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE|SEQUENCE");
/* ALTER PUBLICATION <name> SET */
else if (Matches("ALTER", "PUBLICATION", MatchAny, "SET"))
- COMPLETE_WITH("(", "ALL TABLES IN SCHEMA", "TABLE");
- else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD|DROP|SET", "ALL", "TABLES", "IN", "SCHEMA"))
+ COMPLETE_WITH("(", "ALL TABLES IN SCHEMA", "TABLE|SEQUENCE");
+ else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD|DROP|SET", "ALL", "TABLES|SEQUENCES", "IN", "SCHEMA"))
COMPLETE_WITH_QUERY_PLUS(Query_for_list_of_schemas
" AND nspname NOT LIKE E'pg\\\\_%'",
"CURRENT_SCHEMA");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 62f36daa981..a66eb8109d0 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -11546,6 +11546,11 @@
provolatile => 's', prorettype => 'oid', proargtypes => 'text',
proallargtypes => '{text,oid}', proargmodes => '{i,o}',
proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_tables' },
+{ oid => '8000', descr => 'get OIDs of sequences in a publication',
+ proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+ provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+ proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+ proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
{ oid => '6121',
descr => 'returns whether a relation can be part of a publication',
proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 841b9b6c253..e56286772f4 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
*/
bool puballtables;
+ /*
+ * indicates that this is special publication which should encompass all
+ * sequences in the database (except for the unlogged and temp ones)
+ */
+ bool puballsequences;
+
/* true if inserts are published */
bool pubinsert;
@@ -52,6 +58,9 @@ CATALOG(pg_publication,6104,PublicationRelationId)
/* true if truncates are published */
bool pubtruncate;
+ /* true if sequences are published */
+ bool pubsequence;
+
/* true if partition changes are published using root schema */
bool pubviaroot;
} FormData_pg_publication;
@@ -72,6 +81,7 @@ typedef struct PublicationActions
bool pubupdate;
bool pubdelete;
bool pubtruncate;
+ bool pubsequence;
} PublicationActions;
typedef struct Publication
@@ -79,6 +89,7 @@ typedef struct Publication
Oid oid;
char *name;
bool alltables;
+ bool allsequences;
bool pubviaroot;
PublicationActions pubactions;
} Publication;
@@ -121,6 +132,9 @@ extern List *GetPubPartitionOptionRelations(List *result,
PublicationPartOpt pub_partopt,
Oid relid);
+extern List *GetAllSequencesPublicationRelations(void);
+extern List *GetPublicationSequenceRelations(Oid pubid);
+
extern bool is_publishable_relation(Relation rel);
extern bool is_schema_publication(Oid pubid);
extern ObjectAddress publication_add_relation(Oid pubid, PublicationRelInfo *targetrel,
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9fecc41954e..d8c255a7af5 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -60,6 +60,7 @@ extern ObjectAddress DefineSequence(ParseState *pstate, CreateSeqStmt *stmt);
extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
extern void DeleteSequenceTuple(Oid relid);
extern void ResetSequence(Oid seq_relid);
+extern void ResetSequence2(Oid seq_relid, int64 last_value, int64 log_cnt, bool is_called);
extern void ResetSequenceCaches(void);
extern void seq_redo(XLogReaderState *rptr);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 37fcc4c9b5a..4a990364e4a 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -3656,6 +3656,10 @@ typedef enum PublicationObjSpecType
PUBLICATIONOBJ_TABLES_IN_SCHEMA, /* All tables in schema */
PUBLICATIONOBJ_TABLES_IN_CUR_SCHEMA, /* All tables in first element of
* search_path */
+ PUBLICATIONOBJ_SEQUENCE, /* Sequence type */
+ PUBLICATIONOBJ_SEQUENCES_IN_SCHEMA, /* Sequences in schema type */
+ PUBLICATIONOBJ_SEQUENCES_IN_CUR_SCHEMA, /* All sequences in first element
+ * of search_path */
PUBLICATIONOBJ_CONTINUATION /* Continuation of previous type */
} PublicationObjSpecType;
@@ -3675,6 +3679,7 @@ typedef struct CreatePublicationStmt
List *options; /* List of DefElem nodes */
List *pubobjects; /* Optional list of publication objects */
bool for_all_tables; /* Special publication for all tables in db */
+ bool for_all_sequences; /* Special publication for all sequences in db */
} CreatePublicationStmt;
typedef enum AlterPublicationAction
@@ -3698,6 +3703,7 @@ typedef struct AlterPublicationStmt
*/
List *pubobjects; /* Optional list of publication objects */
bool for_all_tables; /* Special publication for all tables in db */
+ bool for_all_sequences; /* Special publication for all sequences in db */
AlterPublicationAction action; /* What action to perform with the given
* objects */
} AlterPublicationStmt;
diff --git a/src/include/replication/logicalproto.h b/src/include/replication/logicalproto.h
index 22fffaca62d..8f8c325522d 100644
--- a/src/include/replication/logicalproto.h
+++ b/src/include/replication/logicalproto.h
@@ -60,6 +60,7 @@ typedef enum LogicalRepMsgType
LOGICAL_REP_MSG_RELATION = 'R',
LOGICAL_REP_MSG_TYPE = 'Y',
LOGICAL_REP_MSG_MESSAGE = 'M',
+ LOGICAL_REP_MSG_SEQUENCE = 'X', /* FIXME change */
LOGICAL_REP_MSG_BEGIN_PREPARE = 'b',
LOGICAL_REP_MSG_PREPARE = 'P',
LOGICAL_REP_MSG_COMMIT_PREPARED = 'K',
@@ -117,6 +118,18 @@ typedef struct LogicalRepTyp
char *typname; /* name of the remote type */
} LogicalRepTyp;
+/* Sequence info */
+typedef struct LogicalRepSequence
+{
+ Oid remoteid; /* unique id of the remote sequence */
+ char *nspname; /* schema name of remote sequence */
+ char *seqname; /* name of the remote sequence */
+ bool transactional;
+ int64 last_value;
+ int64 log_cnt;
+ bool is_called;
+} LogicalRepSequence;
+
/* Transaction info */
typedef struct LogicalRepBeginData
{
@@ -227,6 +240,12 @@ extern List *logicalrep_read_truncate(StringInfo in,
bool *cascade, bool *restart_seqs);
extern void logicalrep_write_message(StringInfo out, TransactionId xid, XLogRecPtr lsn,
bool transactional, const char *prefix, Size sz, const char *message);
+extern void logicalrep_write_sequence(StringInfo out, Relation rel,
+ TransactionId xid, XLogRecPtr lsn,
+ bool transactional,
+ int64 last_value, int64 log_cnt,
+ bool is_called);
+extern void logicalrep_read_sequence(StringInfo in, LogicalRepSequence *seqdata);
extern void logicalrep_write_rel(StringInfo out, TransactionId xid,
Relation rel);
extern LogicalRepRelation *logicalrep_read_rel(StringInfo in);
diff --git a/src/include/replication/pgoutput.h b/src/include/replication/pgoutput.h
index 78aa9151ef5..a6f6843ada6 100644
--- a/src/include/replication/pgoutput.h
+++ b/src/include/replication/pgoutput.h
@@ -28,6 +28,7 @@ typedef struct PGOutputData
bool streaming;
bool messages;
bool two_phase;
+ bool sequences;
} PGOutputData;
#endif /* PGOUTPUT_H */
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 1420288d67b..3f50e100f8d 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1429,6 +1429,14 @@ pg_prepared_xacts| SELECT p.transaction,
FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+ n.nspname AS schemaname,
+ c.relname AS sequencename
+ FROM pg_publication p,
+ LATERAL pg_get_publication_sequences((p.pubname)::text) gpt(relid),
+ (pg_class c
+ JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+ WHERE (c.oid = gpt.relid);
pg_publication_tables| SELECT p.pubname,
n.nspname AS schemaname,
c.relname AS tablename
diff --git a/src/test/subscription/t/027_sequences.pl b/src/test/subscription/t/027_sequences.pl
new file mode 100644
index 00000000000..fd941619071
--- /dev/null
+++ b/src/test/subscription/t/027_sequences.pl
@@ -0,0 +1,196 @@
+
+# Copyright (c) 2021, PostgreSQL Global Development Group
+
+# This tests that sequences are replicated correctly by logical replication
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More tests => 6;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->start;
+
+# Create subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Create some preexisting content on publisher
+my $ddl = qq(
+ CREATE SEQUENCE s;
+);
+
+# Setup structure on the publisher
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Create the same structure on the subscriber, plus an extra sequence that
+# we'll create on the publisher later
+$ddl = qq(
+ CREATE SEQUENCE s;
+ CREATE SEQUENCE s2;
+);
+
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Setup logical replication
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+ "CREATE PUBLICATION seq_pub");
+
+$node_publisher->safe_psql('postgres',
+ "ALTER PUBLICATION seq_pub ADD SEQUENCE s");
+
+$node_subscriber->safe_psql('postgres',
+ "CREATE SUBSCRIPTION seq_sub CONNECTION '$publisher_connstr' PUBLICATION seq_pub WITH (slot_name = seq_sub_slot)"
+);
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Wait for initial sync to finish as well
+my $synced_query =
+ "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('s', 'r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+ or die "Timed out while waiting for subscriber to synchronize data";
+
+# Insert initial test data
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ -- generate a number of values using the sequence
+ SELECT nextval('s') FROM generate_series(1,100);
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+my $result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s;
+));
+
+is( $result, '132|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+# advance the sequence in a rolled-back transaction - the increments are
+# non-transactional, so they should still be replicated
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ SELECT nextval('s') FROM generate_series(1,100);
+ ROLLBACK;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s;
+));
+
+is( $result, '231|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+# create a new sequence and roll it back - should not be replicated, due to
+# the transactional behavior
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ CREATE SEQUENCE s2;
+ ALTER PUBLICATION seq_pub ADD SEQUENCE s2;
+ SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '1|0|f',
+ 'check replicated sequence values on subscriber');
+
+
+# create a new sequence, advance it in a rolled-back transaction, but commit
+# the create - the advance should be replicated nevertheless
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ CREATE SEQUENCE s2;
+ ALTER PUBLICATION seq_pub ADD SEQUENCE s2;
+ SAVEPOINT sp1;
+ SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK TO sp1;
+ COMMIT;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Wait for sync of the second sequence we just added to finish
+$synced_query =
+ "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('s', 'r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+ or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '132|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+# advance the new sequence in a transaction, and roll it back - in this case
+# it should not be replicated at commit
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '132|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+# advance the sequence in a subtransaction - the subtransaction gets rolled
+# back, but commit the main one - the changes should still be replicated
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ SAVEPOINT s1;
+ SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK TO s1;
+ COMMIT;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '330|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+$node_subscriber->stop('fast');
+$node_publisher->stop('fast');
--
2.34.1
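
A side note on the values asserted by the TAP test above: sequence changes are WAL-logged in batches (nextval() writes a WAL record only once per 32 values, pre-logging 32 values ahead), so the state decoded and applied on the subscriber may run slightly ahead of the publisher's current last_value - hence the expected 132 after 100 nextval() calls. A rough way to observe this by hand, assuming a publication/subscription pair like the one in the test (illustrative only, not part of the patch):

-- on the publisher
SELECT nextval('s') FROM generate_series(1,100);
SELECT last_value, log_cnt, is_called FROM s;
-- on the subscriber, once the increments have been applied
SELECT last_value, log_cnt, is_called FROM s;  -- last_value reflects the last WAL-logged batch (e.g. 132)
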
On 2/10/22 19:17, Tomas Vondra wrote:
I've polished & pushed the first part adding sequence decoding
infrastructure etc. Attached are the two remaining parts. I plan to wait a day or two and then push the test_decoding part. The
last part (for built-in replication) will need more work and maybe
rethinking the grammar etc.
I've pushed the second part, adding sequences to test_decoding.
Here's the remaining part, rebased, with a small tweak in the TAP test
to eliminate the issue with not waiting for sequence increments. I've
kept the tweak in a separate patch, so that we can throw it away easily
if we happen to resolve the issue.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Attachments:
0001-Add-support-for-decoding-sequences-to-built-20220212.patch (text/x-patch)
From c456270469cdb6ad769455eb3d16aea6db2c02af Mon Sep 17 00:00:00 2001
From: Tomas Vondra <tomas.vondra@postgresql.org>
Date: Thu, 10 Feb 2022 15:18:59 +0100
Subject: [PATCH 1/2] Add support for decoding sequences to built-in
replication
---
doc/src/sgml/catalogs.sgml | 71 ++++
doc/src/sgml/ref/alter_publication.sgml | 24 +-
doc/src/sgml/ref/alter_subscription.sgml | 4 +-
src/backend/catalog/pg_publication.c | 149 ++++++++-
src/backend/catalog/system_views.sql | 10 +
src/backend/commands/publicationcmds.c | 350 +++++++++++++++++++-
src/backend/commands/sequence.c | 79 +++++
src/backend/commands/subscriptioncmds.c | 272 +++++++++++++++
src/backend/executor/execReplication.c | 2 +-
src/backend/nodes/copyfuncs.c | 1 +
src/backend/nodes/equalfuncs.c | 1 +
src/backend/parser/gram.y | 32 ++
src/backend/replication/logical/proto.c | 52 +++
src/backend/replication/logical/tablesync.c | 118 ++++++-
src/backend/replication/logical/worker.c | 60 ++++
src/backend/replication/pgoutput/pgoutput.c | 85 ++++-
src/backend/utils/cache/relcache.c | 4 +-
src/bin/psql/tab-complete.c | 14 +-
src/include/catalog/pg_proc.dat | 5 +
src/include/catalog/pg_publication.h | 14 +
src/include/commands/sequence.h | 1 +
src/include/nodes/parsenodes.h | 6 +
src/include/replication/logicalproto.h | 19 ++
src/include/replication/pgoutput.h | 1 +
src/test/regress/expected/rules.out | 8 +
src/test/subscription/t/028_sequences.pl | 196 +++++++++++
26 files changed, 1542 insertions(+), 36 deletions(-)
create mode 100644 src/test/subscription/t/028_sequences.pl
diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 879d2dbce03..271dc03e5a2 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -9540,6 +9540,11 @@ SCRAM-SHA-256$<replaceable><iteration count></replaceable>:<replaceable>&l
<entry>prepared transactions</entry>
</row>
+ <row>
+ <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+ <entry>publications and their associated sequences</entry>
+ </row>
+
<row>
<entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
<entry>publications and their associated tables</entry>
@@ -11375,6 +11380,72 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
</sect1>
+ <sect1 id="view-pg-publication-sequences">
+ <title><structname>pg_publication_sequences</structname></title>
+
+ <indexterm zone="view-pg-publication-sequences">
+ <primary>pg_publication_sequences</primary>
+ </indexterm>
+
+ <para>
+ The view <structname>pg_publication_sequences</structname> provides
+ information about the mapping between publications and the sequences they
+ contain. Unlike the underlying catalog
+ <link linkend="catalog-pg-publication-rel"><structname>pg_publication_rel</structname></link>,
+ this view expands
+ publications defined as <literal>FOR ALL SEQUENCES</literal>, so for such
+ publications there will be a row for each eligible sequence.
+ </para>
+
+ <table>
+ <title><structname>pg_publication_sequences</structname> Columns</title>
+ <tgroup cols="1">
+ <thead>
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ Column Type
+ </para>
+ <para>
+ Description
+ </para></entry>
+ </row>
+ </thead>
+
+ <tbody>
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>pubname</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+ </para>
+ <para>
+ Name of publication
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>schemaname</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+ </para>
+ <para>
+ Name of schema containing sequence
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>sequencename</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+ </para>
+ <para>
+ Name of sequence
+ </para></entry>
+ </row>
+ </tbody>
+ </tgroup>
+ </table>
+ </sect1>
+
<sect1 id="view-pg-publication-tables">
<title><structname>pg_publication_tables</structname></title>
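
As an aside (not part of the patch; the publication name is illustrative), the new view makes it easy to see which sequences a publication covers, and a FOR ALL SEQUENCES publication expands to one row per eligible sequence:

SELECT * FROM pg_publication_sequences WHERE pubname = 'seq_pub';
-- columns: pubname, schemaname, sequencename
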
diff --git a/doc/src/sgml/ref/alter_publication.sgml b/doc/src/sgml/ref/alter_publication.sgml
index 7c7c27bf7ce..9da8274ae2c 100644
--- a/doc/src/sgml/ref/alter_publication.sgml
+++ b/doc/src/sgml/ref/alter_publication.sgml
@@ -31,7 +31,9 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
<phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [, ... ]
+ SEQUENCE <replaceable class="parameter">sequence_name</replaceable> [ * ] [, ... ]
ALL TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+ ALL SEQUENCES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
</synopsis>
</refsynopsisdiv>
@@ -56,7 +58,18 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
</para>
<para>
- The fourth variant of this command listed in the synopsis can change
+ The next three variants change which sequences are part of the publication.
+ The <literal>SET SEQUENCE</literal> clause will replace the list of sequences
+ in the publication with the specified one. The <literal>ADD SEQUENCE</literal>
+ and <literal>DROP SEQUENCE</literal> clauses will add and remove one or more
+ sequences from the publication. Note that adding sequences to a publication
+ that is already subscribed to will require a <literal>ALTER SUBSCRIPTION
+ ... REFRESH PUBLICATION</literal> action on the subscribing side in order
+ to become effective.
+ </para>
+
+ <para>
+ The seventh variant of this command listed in the synopsis can change
all of the publication properties specified in
<xref linkend="sql-createpublication"/>. Properties not mentioned in the
command retain their previous settings.
@@ -123,6 +136,15 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><replaceable class="parameter">sequence_name</replaceable></term>
+ <listitem>
+ <para>
+ Name of an existing sequence.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><literal>SET ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
<listitem>
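
To make the documented syntax concrete, a short usage sketch (object names are illustrative, not part of the patch):

CREATE PUBLICATION seq_pub;
ALTER PUBLICATION seq_pub ADD SEQUENCE s;
ALTER PUBLICATION seq_pub ADD ALL SEQUENCES IN SCHEMA public;
ALTER PUBLICATION seq_pub DROP SEQUENCE s;
-- on the subscriber, pick up sequences added after the subscription was created
ALTER SUBSCRIPTION seq_sub REFRESH PUBLICATION;
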
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 0b027cc3462..8f28cf03f40 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -147,7 +147,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
<listitem>
<para>
Fetch missing table information from publisher. This will start
- replication of tables that were added to the subscribed-to publications
+ replication of tables and sequences that were added to the subscribed-to publications
since <command>CREATE SUBSCRIPTION</command> or
the last invocation of <command>REFRESH PUBLICATION</command>.
</para>
@@ -164,7 +164,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
Specifies whether to copy pre-existing data in the publications
that are being subscribed to when the replication starts.
The default is <literal>true</literal>. (Previously-subscribed
- tables are not copied.)
+ tables and sequences are not copied.)
</para>
</listitem>
</varlistentry>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index e14ca2f5630..1a9e05ba98b 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -54,7 +54,8 @@ check_publication_add_relation(Relation targetrel)
{
/* Must be a regular or partitioned table */
if (RelationGetForm(targetrel)->relkind != RELKIND_RELATION &&
- RelationGetForm(targetrel)->relkind != RELKIND_PARTITIONED_TABLE)
+ RelationGetForm(targetrel)->relkind != RELKIND_PARTITIONED_TABLE &&
+ RelationGetForm(targetrel)->relkind != RELKIND_SEQUENCE)
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
errmsg("cannot add relation \"%s\" to publication",
@@ -131,7 +132,8 @@ static bool
is_publishable_class(Oid relid, Form_pg_class reltuple)
{
return (reltuple->relkind == RELKIND_RELATION ||
- reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+ reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+ reltuple->relkind == RELKIND_SEQUENCE) &&
!IsCatalogRelationOid(relid) &&
reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
relid >= FirstNormalObjectId;
@@ -503,6 +505,11 @@ GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
Form_pg_publication_rel pubrel;
pubrel = (Form_pg_publication_rel) GETSTRUCT(tup);
+
+ /* skip sequences here */
+ if (get_rel_relkind(pubrel->prrelid) == RELKIND_SEQUENCE)
+ continue;
+
result = GetPubPartitionOptionRelations(result, pub_partopt,
pubrel->prrelid);
}
@@ -517,6 +524,49 @@ GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
return result;
}
+/*
+ * Gets list of relation oids for a publication (sequences only).
+ *
+ * This should only be used for normal publications; FOR ALL SEQUENCES
+ * publications should use GetAllSequencesPublicationRelations().
+ */
+List *
+GetPublicationSequenceRelations(Oid pubid)
+{
+ List *result;
+ Relation pubrelsrel;
+ ScanKeyData scankey;
+ SysScanDesc scan;
+ HeapTuple tup;
+
+ /* Find all sequences associated with the publication. */
+ pubrelsrel = table_open(PublicationRelRelationId, AccessShareLock);
+
+ ScanKeyInit(&scankey,
+ Anum_pg_publication_rel_prpubid,
+ BTEqualStrategyNumber, F_OIDEQ,
+ ObjectIdGetDatum(pubid));
+
+ scan = systable_beginscan(pubrelsrel, PublicationRelPrrelidPrpubidIndexId,
+ true, NULL, 1, &scankey);
+
+ result = NIL;
+ while (HeapTupleIsValid(tup = systable_getnext(scan)))
+ {
+ Form_pg_publication_rel pubrel;
+
+ pubrel = (Form_pg_publication_rel) GETSTRUCT(tup);
+
+ if (get_rel_relkind(pubrel->prrelid) == RELKIND_SEQUENCE)
+ result = lappend_oid(result, pubrel->prrelid);
+ }
+
+ systable_endscan(scan);
+ table_close(pubrelsrel, AccessShareLock);
+
+ return result;
+}
+
/*
* Gets list of publication oids for publications marked as FOR ALL TABLES.
*/
@@ -762,6 +812,46 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
return result;
}
+/*
+ * Gets list of all sequences published by FOR ALL SEQUENCES publication(s).
+ *
+ * This returns every publishable sequence in the database, i.e. permanent
+ * sequences that are not part of the system catalogs (temporary and
+ * unlogged sequences are filtered out by is_publishable_class).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+ Relation classRel;
+ ScanKeyData key[1];
+ TableScanDesc scan;
+ HeapTuple tuple;
+ List *result = NIL;
+
+ classRel = table_open(RelationRelationId, AccessShareLock);
+
+ ScanKeyInit(&key[0],
+ Anum_pg_class_relkind,
+ BTEqualStrategyNumber, F_CHAREQ,
+ CharGetDatum(RELKIND_SEQUENCE));
+
+ scan = table_beginscan_catalog(classRel, 1, key);
+
+ while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+ {
+ Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+ Oid relid = relForm->oid;
+
+ if (is_publishable_class(relid, relForm))
+ result = lappend_oid(result, relid);
+ }
+
+ table_endscan(scan);
+
+ table_close(classRel, AccessShareLock);
+ return result;
+}
+
/*
* Get publication using oid
*
@@ -784,10 +874,12 @@ GetPublication(Oid pubid)
pub->oid = pubid;
pub->name = pstrdup(NameStr(pubform->pubname));
pub->alltables = pubform->puballtables;
+ pub->allsequences = pubform->puballsequences;
pub->pubactions.pubinsert = pubform->pubinsert;
pub->pubactions.pubupdate = pubform->pubupdate;
pub->pubactions.pubdelete = pubform->pubdelete;
pub->pubactions.pubtruncate = pubform->pubtruncate;
+ pub->pubactions.pubsequence = pubform->pubsequence;
pub->pubviaroot = pubform->pubviaroot;
ReleaseSysCache(tup);
@@ -937,3 +1029,56 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
SRF_RETURN_DONE(funcctx);
}
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+ FuncCallContext *funcctx;
+ char *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+ Publication *publication;
+ List *sequences;
+
+ /* stuff done only on the first call of the function */
+ if (SRF_IS_FIRSTCALL())
+ {
+ MemoryContext oldcontext;
+
+ /* create a function context for cross-call persistence */
+ funcctx = SRF_FIRSTCALL_INIT();
+
+ /* switch to memory context appropriate for multiple function calls */
+ oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+ publication = GetPublicationByName(pubname, false);
+
+ /*
+ * For FOR ALL SEQUENCES publications return all sequences in the
+ * database, otherwise just the sequences explicitly added to this
+ * publication.
+ */
+ if (publication->allsequences)
+ sequences = GetAllSequencesPublicationRelations();
+ else
+ sequences = GetPublicationSequenceRelations(publication->oid);
+
+ funcctx->user_fctx = (void *) sequences;
+
+ MemoryContextSwitchTo(oldcontext);
+ }
+
+ /* stuff done on every call of the function */
+ funcctx = SRF_PERCALL_SETUP();
+ sequences = (List *) funcctx->user_fctx;
+
+ if (funcctx->call_cntr < list_length(sequences))
+ {
+ Oid relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+ SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+ }
+
+ SRF_RETURN_DONE(funcctx);
+}
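
The function backing the pg_publication_sequences view can also be called directly; a small sketch (publication name is illustrative, not part of the patch):

SELECT relid::regclass FROM pg_get_publication_sequences('seq_pub');
-- a FOR ALL SEQUENCES publication returns every publishable sequence in the
-- database, other publications return only the sequences added explicitly
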
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 3cb69b1f87b..b5cc33aca34 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -374,6 +374,16 @@ CREATE VIEW pg_publication_tables AS
pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
WHERE C.oid = GPT.relid;
+CREATE VIEW pg_publication_sequences AS
+ SELECT
+ P.pubname AS pubname,
+ N.nspname AS schemaname,
+ C.relname AS sequencename
+ FROM pg_publication P,
+ LATERAL pg_get_publication_sequences(P.pubname) GPT,
+ pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+ WHERE C.oid = GPT.relid;
+
CREATE VIEW pg_locks AS
SELECT * FROM pg_lock_status() AS L;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 0e4bb97fb73..3bc2e8ccb66 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -16,6 +16,7 @@
#include "access/genam.h"
#include "access/htup_details.h"
+#include "access/relation.h"
#include "access/table.h"
#include "access/xact.h"
#include "catalog/catalog.h"
@@ -59,6 +60,12 @@ static void PublicationAddSchemas(Oid pubid, List *schemas, bool if_not_exists,
AlterPublicationStmt *stmt);
static void PublicationDropSchemas(Oid pubid, List *schemas, bool missing_ok);
+static List *OpenSequenceList(List *sequences);
+static void CloseSequenceList(List *rels);
+static void PublicationAddSequences(Oid pubid, List *rels, bool if_not_exists,
+ AlterPublicationStmt *stmt);
+static void PublicationDropSequences(Oid pubid, List *rels, bool missing_ok);
+
static void
parse_publication_options(ParseState *pstate,
List *options,
@@ -77,6 +84,7 @@ parse_publication_options(ParseState *pstate,
pubactions->pubupdate = true;
pubactions->pubdelete = true;
pubactions->pubtruncate = true;
+ pubactions->pubsequence = true;
*publish_via_partition_root = false;
/* Parse options */
@@ -101,6 +109,7 @@ parse_publication_options(ParseState *pstate,
pubactions->pubupdate = false;
pubactions->pubdelete = false;
pubactions->pubtruncate = false;
+ pubactions->pubsequence = false;
*publish_given = true;
publish = defGetString(defel);
@@ -123,6 +132,8 @@ parse_publication_options(ParseState *pstate,
pubactions->pubdelete = true;
else if (strcmp(publish_opt, "truncate") == 0)
pubactions->pubtruncate = true;
+ else if (strcmp(publish_opt, "sequence") == 0)
+ pubactions->pubsequence = true;
else
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
@@ -149,7 +160,9 @@ parse_publication_options(ParseState *pstate,
*/
static void
ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
- List **rels, List **schemas)
+ List **tables, List **sequences,
+ List **tables_schemas, List **sequences_schemas,
+ List **schemas)
{
ListCell *cell;
PublicationObjSpec *pubobj;
@@ -167,12 +180,23 @@ ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
switch (pubobj->pubobjtype)
{
case PUBLICATIONOBJ_TABLE:
- *rels = lappend(*rels, pubobj->pubtable);
+ *tables = lappend(*tables, pubobj->pubtable);
+ break;
+ case PUBLICATIONOBJ_SEQUENCE:
+ *sequences = lappend(*sequences, pubobj->pubtable);
break;
case PUBLICATIONOBJ_TABLES_IN_SCHEMA:
schemaid = get_namespace_oid(pubobj->name, false);
/* Filter out duplicates if user specifies "sch1, sch1" */
+ *tables_schemas = list_append_unique_oid(*tables_schemas, schemaid);
+ *schemas = list_append_unique_oid(*schemas, schemaid);
+ break;
+ case PUBLICATIONOBJ_SEQUENCES_IN_SCHEMA:
+ schemaid = get_namespace_oid(pubobj->name, false);
+
+ /* Filter out duplicates if user specifies "sch1, sch1" */
+ *sequences_schemas = list_append_unique_oid(*sequences_schemas, schemaid);
*schemas = list_append_unique_oid(*schemas, schemaid);
break;
case PUBLICATIONOBJ_TABLES_IN_CUR_SCHEMA:
@@ -186,6 +210,21 @@ ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
list_free(search_path);
/* Filter out duplicates if user specifies "sch1, sch1" */
+ *tables_schemas = list_append_unique_oid(*tables_schemas, schemaid);
+ *schemas = list_append_unique_oid(*schemas, schemaid);
+ break;
+ case PUBLICATIONOBJ_SEQUENCES_IN_CUR_SCHEMA:
+ search_path = fetch_search_path(false);
+ if (search_path == NIL) /* nothing valid in search_path? */
+ ereport(ERROR,
+ errcode(ERRCODE_UNDEFINED_SCHEMA),
+ errmsg("no schema has been selected for CURRENT_SCHEMA"));
+
+ schemaid = linitial_oid(search_path);
+ list_free(search_path);
+
+ /* Filter out duplicates if user specifies "sch1, sch1" */
+ *sequences_schemas = list_append_unique_oid(*sequences_schemas, schemaid);
*schemas = list_append_unique_oid(*schemas, schemaid);
break;
default:
@@ -251,7 +290,10 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
bool publish_via_partition_root_given;
bool publish_via_partition_root;
AclResult aclresult;
- List *relations = NIL;
+ List *tables = NIL;
+ List *sequences = NIL;
+ List *tables_schemaidlist = NIL;
+ List *sequences_schemaidlist = NIL;
List *schemaidlist = NIL;
/* must have CREATE privilege on database */
@@ -306,6 +348,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
BoolGetDatum(pubactions.pubdelete);
values[Anum_pg_publication_pubtruncate - 1] =
BoolGetDatum(pubactions.pubtruncate);
+ values[Anum_pg_publication_pubsequence - 1] =
+ BoolGetDatum(pubactions.pubsequence);
values[Anum_pg_publication_pubviaroot - 1] =
BoolGetDatum(publish_via_partition_root);
@@ -330,26 +374,40 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
}
else
{
- ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
+ ObjectsInPublicationToOids(stmt->pubobjects, pstate,
+ &tables, &sequences,
+ &tables_schemaidlist,
+ &sequences_schemaidlist,
&schemaidlist);
/* FOR ALL TABLES IN SCHEMA requires superuser */
- if (list_length(schemaidlist) > 0 && !superuser())
+ if (list_length(tables_schemaidlist) > 0 && !superuser())
ereport(ERROR,
errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
errmsg("must be superuser to create FOR ALL TABLES IN SCHEMA publication"));
- if (list_length(relations) > 0)
+ if (list_length(tables) > 0)
{
List *rels;
- rels = OpenTableList(relations);
- CheckObjSchemaNotAlreadyInPublication(rels, schemaidlist,
+ rels = OpenTableList(tables);
+ CheckObjSchemaNotAlreadyInPublication(rels, tables_schemaidlist,
PUBLICATIONOBJ_TABLE);
PublicationAddTables(puboid, rels, true, NULL);
CloseTableList(rels);
}
+ if (list_length(sequences) > 0)
+ {
+ List *rels;
+
+ rels = OpenSequenceList(sequences);
+ CheckObjSchemaNotAlreadyInPublication(rels, sequences_schemaidlist,
+ PUBLICATIONOBJ_SEQUENCE);
+ PublicationAddSequences(puboid, rels, true, NULL);
+ CloseSequenceList(rels);
+ }
+
if (list_length(schemaidlist) > 0)
{
/*
@@ -653,12 +711,13 @@ AlterPublicationSchemas(AlterPublicationStmt *stmt,
*/
static void
CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
- List *tables, List *schemaidlist)
+ List *tables, List *tables_schemaidlist,
+ List *sequences, List *sequences_schemaidlist)
{
Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
if ((stmt->action == AP_AddObjects || stmt->action == AP_SetObjects) &&
- schemaidlist && !superuser())
+ (tables_schemaidlist || sequences_schemaidlist) && !superuser())
ereport(ERROR,
(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
errmsg("must be superuser to add or set schemas")));
@@ -667,13 +726,24 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
* Check that user is allowed to manipulate the publication tables in
* schema
*/
- if (schemaidlist && pubform->puballtables)
+ if (tables_schemaidlist && pubform->puballtables)
ereport(ERROR,
(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
errmsg("publication \"%s\" is defined as FOR ALL TABLES",
NameStr(pubform->pubname)),
errdetail("Tables from schema cannot be added to, dropped from, or set on FOR ALL TABLES publications.")));
+ /*
+ * Check that user is allowed to manipulate the publication sequences in
+ * schema
+ */
+ if (sequences_schemaidlist && pubform->puballsequences)
+ ereport(ERROR,
+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("publication \"%s\" is defined as FOR ALL SEQUENCES",
+ NameStr(pubform->pubname)),
+ errdetail("Sequences from schema cannot be added to, dropped from, or set on FOR ALL SEQUENCES publications.")));
+
/* Check that user is allowed to manipulate the publication tables. */
if (tables && pubform->puballtables)
ereport(ERROR,
@@ -681,6 +751,108 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
errmsg("publication \"%s\" is defined as FOR ALL TABLES",
NameStr(pubform->pubname)),
errdetail("Tables cannot be added to or dropped from FOR ALL TABLES publications.")));
+
+ /* Check that user is allowed to manipulate the publication sequences. */
+ if (sequences && pubform->puballsequences)
+ ereport(ERROR,
+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("publication \"%s\" is defined as FOR ALL SEQUENCES",
+ NameStr(pubform->pubname)),
+ errdetail("Sequences cannot be added to or dropped from FOR ALL SEQUENCES publications.")));
+}
+
+/*
+ * Add or remove sequence to/from publication.
+ */
+static void
+AlterPublicationSequences(AlterPublicationStmt *stmt, HeapTuple tup,
+ List *sequences, List *schemaidlist)
+{
+ List *rels = NIL;
+ Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
+ Oid pubid = pubform->oid;
+
+ /*
+ * It is quite possible that for the SET case the user has not specified
+ * any sequences, in which case we need to remove all the existing sequences.
+ */
+ if (!sequences && stmt->action != AP_SetObjects)
+ return;
+
+ rels = OpenSequenceList(sequences);
+
+ if (stmt->action == AP_AddObjects)
+ {
+ List *schemas = NIL;
+
+ /*
+ * Check if the relation is member of the existing schema in the
+ * publication or member of the schema list specified.
+ */
+ schemas = list_concat_copy(schemaidlist, GetPublicationSchemas(pubid));
+ CheckObjSchemaNotAlreadyInPublication(rels, schemas,
+ PUBLICATIONOBJ_SEQUENCE);
+ PublicationAddSequences(pubid, rels, false, stmt);
+ }
+ else if (stmt->action == AP_DropObjects)
+ PublicationDropSequences(pubid, rels, false);
+ else /* DEFELEM_SET */
+ {
+ List *oldrelids =
+ GetPublicationSequenceRelations(pubid);
+ List *delrels = NIL;
+ ListCell *oldlc;
+
+ CheckObjSchemaNotAlreadyInPublication(rels, schemaidlist,
+ PUBLICATIONOBJ_SEQUENCE);
+
+ /* Calculate which relations to drop. */
+ foreach(oldlc, oldrelids)
+ {
+ Oid oldrelid = lfirst_oid(oldlc);
+ ListCell *newlc;
+ bool found = false;
+
+ foreach(newlc, rels)
+ {
+ PublicationRelInfo *newpubrel;
+
+ newpubrel = (PublicationRelInfo *) lfirst(newlc);
+ if (RelationGetRelid(newpubrel->relation) == oldrelid)
+ {
+ found = true;
+ break;
+ }
+ }
+ /* Not yet in the list, open it and add to the list */
+ if (!found)
+ {
+ Relation oldrel;
+ PublicationRelInfo *pubrel;
+
+ /* Wrap relation into PublicationRelInfo */
+ oldrel = table_open(oldrelid, ShareUpdateExclusiveLock);
+
+ pubrel = palloc(sizeof(PublicationRelInfo));
+ pubrel->relation = oldrel;
+
+ delrels = lappend(delrels, pubrel);
+ }
+ }
+
+ /* And drop them. */
+ PublicationDropSequences(pubid, delrels, true);
+
+ /*
+ * Don't bother calculating the difference for adding, we'll catch and
+ * skip existing ones when doing catalog update.
+ */
+ PublicationAddSequences(pubid, rels, true, stmt);
+
+ CloseSequenceList(delrels);
+ }
+
+ CloseSequenceList(rels);
}
/*
@@ -718,13 +890,21 @@ AlterPublication(ParseState *pstate, AlterPublicationStmt *stmt)
AlterPublicationOptions(pstate, stmt, rel, tup);
else
{
- List *relations = NIL;
+ List *tables = NIL;
+ List *sequences = NIL;
+ List *tables_schemaidlist = NIL;
+ List *sequences_schemaidlist = NIL;
List *schemaidlist = NIL;
- ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
+ ObjectsInPublicationToOids(stmt->pubobjects, pstate,
+ &tables, &sequences,
+ &tables_schemaidlist,
+ &sequences_schemaidlist,
&schemaidlist);
- CheckAlterPublication(stmt, tup, relations, schemaidlist);
+ CheckAlterPublication(stmt, tup,
+ tables, tables_schemaidlist,
+ sequences, sequences_schemaidlist);
/*
* Lock the publication so nobody else can do anything with it. This
@@ -749,7 +929,9 @@ AlterPublication(ParseState *pstate, AlterPublicationStmt *stmt)
errmsg("publication \"%s\" does not exist",
stmt->pubname));
- AlterPublicationTables(stmt, tup, relations, schemaidlist);
+ AlterPublicationTables(stmt, tup, tables, tables_schemaidlist);
+ AlterPublicationSequences(stmt, tup, sequences, sequences_schemaidlist);
+
AlterPublicationSchemas(stmt, tup, schemaidlist);
}
@@ -1157,6 +1339,144 @@ PublicationDropSchemas(Oid pubid, List *schemas, bool missing_ok)
}
}
+/*
+ * Open sequences specified by a PublicationTable list.
+ * In the returned list of PublicationRelInfo, sequences are locked
+ * in ShareUpdateExclusiveLock mode in order to add them to a publication.
+ */
+static List *
+OpenSequenceList(List *sequences)
+{
+ List *relids = NIL;
+ List *rels = NIL;
+ ListCell *lc;
+
+ /*
+ * Open, share-lock, and check all the explicitly-specified relations
+ */
+ foreach(lc, sequences)
+ {
+ PublicationTable *s = lfirst_node(PublicationTable, lc);
+ Relation rel;
+ Oid myrelid;
+ PublicationRelInfo *pub_rel;
+
+ /* Allow query cancel in case this takes a long time */
+ CHECK_FOR_INTERRUPTS();
+
+ rel = table_openrv(s->relation, ShareUpdateExclusiveLock);
+ myrelid = RelationGetRelid(rel);
+
+ /*
+ * Filter out duplicates if user specifies "foo, foo".
+ *
+ * Note that this algorithm is known to not be very efficient (O(N^2))
+ * but given that it only works on list of tables given to us by user
+ * it's deemed acceptable.
+ */
+ if (list_member_oid(relids, myrelid))
+ {
+ table_close(rel, ShareUpdateExclusiveLock);
+ continue;
+ }
+
+ pub_rel = palloc(sizeof(PublicationRelInfo));
+ pub_rel->relation = rel;
+ rels = lappend(rels, pub_rel);
+ relids = lappend_oid(relids, myrelid);
+ }
+
+ list_free(relids);
+
+ return rels;
+}
+
+/*
+ * Close all relations in the list.
+ */
+static void
+CloseSequenceList(List *rels)
+{
+ ListCell *lc;
+
+ foreach(lc, rels)
+ {
+ PublicationRelInfo *pub_rel;
+
+ pub_rel = (PublicationRelInfo *) lfirst(lc);
+ table_close(pub_rel->relation, NoLock);
+ }
+}
+
+/*
+ * Add listed sequences to the publication.
+ */
+static void
+PublicationAddSequences(Oid pubid, List *rels, bool if_not_exists,
+ AlterPublicationStmt *stmt)
+{
+ ListCell *lc;
+
+ Assert(!stmt || !stmt->for_all_sequences);
+
+ foreach(lc, rels)
+ {
+ PublicationRelInfo *pub_rel = (PublicationRelInfo *) lfirst(lc);
+ Relation rel = pub_rel->relation;
+ ObjectAddress obj;
+
+ /* Must be owner of the sequence or superuser. */
+ if (!pg_class_ownercheck(RelationGetRelid(rel), GetUserId()))
+ aclcheck_error(ACLCHECK_NOT_OWNER, get_relkind_objtype(rel->rd_rel->relkind),
+ RelationGetRelationName(rel));
+
+ obj = publication_add_relation(pubid, pub_rel, if_not_exists);
+ if (stmt)
+ {
+ EventTriggerCollectSimpleCommand(obj, InvalidObjectAddress,
+ (Node *) stmt);
+
+ InvokeObjectPostCreateHook(PublicationRelRelationId,
+ obj.objectId, 0);
+ }
+ }
+}
+
+/*
+ * Remove listed sequences from the publication.
+ */
+static void
+PublicationDropSequences(Oid pubid, List *rels, bool missing_ok)
+{
+ ObjectAddress obj;
+ ListCell *lc;
+ Oid prid;
+
+ foreach(lc, rels)
+ {
+ Relation rel = ((PublicationRelInfo *) lfirst(lc))->relation;
+ Oid relid = RelationGetRelid(rel);
+
+ prid = GetSysCacheOid2(PUBLICATIONRELMAP, Anum_pg_publication_rel_oid,
+ ObjectIdGetDatum(relid),
+ ObjectIdGetDatum(pubid));
+ if (!OidIsValid(prid))
+ {
+ if (missing_ok)
+ continue;
+
+ ereport(ERROR,
+ (errcode(ERRCODE_UNDEFINED_OBJECT),
+ errmsg("relation \"%s\" is not part of the publication",
+ RelationGetRelationName(rel))));
+ }
+
+ ObjectAddressSet(obj, PublicationRelRelationId, prid);
+ performDeletion(&obj, DROP_CASCADE, 0);
+ }
+}
+
+
/*
* Internal workhorse for changing a publication owner
*/
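
With these changes the 'publish' parameter accepts a new 'sequence' value, and pubsequence is enabled by default. A brief sketch of how that might be used (illustrative, not part of the patch):

-- publish only sequence increments, no table DML
CREATE PUBLICATION seq_only WITH (publish = 'sequence');
ALTER PUBLICATION seq_only ADD SEQUENCE s;
-- the default 'publish' value includes insert, update, delete, truncate and sequence
CREATE PUBLICATION everything FOR ALL TABLES;
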
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index ab592ce2f15..fe4f21ec438 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -336,6 +336,85 @@ ResetSequence(Oid seq_relid)
relation_close(seq_rel, NoLock);
}
+/*
+ * Set a sequence to the specified state (last_value, log_cnt, is_called).
+ *
+ * The change is made transactionally, so that on failure of the current
+ * transaction, the sequence will be restored to its previous state.
+ * We do that by creating a whole new relfilenode for the sequence; so this
+ * works much like the rewriting forms of ALTER TABLE.
+ *
+ * Caller is assumed to have acquired AccessExclusiveLock on the sequence,
+ * which must not be released until end of transaction. Caller is also
+ * responsible for permissions checking.
+ */
+void
+ResetSequence2(Oid seq_relid, int64 last_value, int64 log_cnt, bool is_called)
+{
+ Relation seq_rel;
+ SeqTable elm;
+ Form_pg_sequence_data seq;
+ Buffer buf;
+ HeapTupleData seqdatatuple;
+ HeapTuple tuple;
+
+ /*
+ * Read the old sequence. This does a bit more work than really
+ * necessary, but it's simple, and we do want to double-check that it's
+ * indeed a sequence.
+ */
+ init_sequence(seq_relid, &elm, &seq_rel);
+ (void) read_seq_tuple(seq_rel, &buf, &seqdatatuple);
+
+ /*
+ * Copy the existing sequence tuple.
+ */
+ tuple = heap_copytuple(&seqdatatuple);
+
+ /* Now we're done with the old page */
+ UnlockReleaseBuffer(buf);
+
+ /*
+ * Modify the copied tuple to execute the restart (compare the RESTART
+ * action in AlterSequence)
+ */
+ seq = (Form_pg_sequence_data) GETSTRUCT(tuple);
+ seq->last_value = last_value;
+ seq->is_called = is_called;
+ seq->log_cnt = log_cnt;
+
+ /*
+ * Create a new storage file for the sequence.
+ */
+ RelationSetNewRelfilenode(seq_rel, seq_rel->rd_rel->relpersistence);
+
+ /*
+ * Ensure sequence's relfrozenxid is at 0, since it won't contain any
+ * unfrozen XIDs. Same with relminmxid, since a sequence will never
+ * contain multixacts.
+ */
+ Assert(seq_rel->rd_rel->relfrozenxid == InvalidTransactionId);
+ Assert(seq_rel->rd_rel->relminmxid == InvalidMultiXactId);
+
+ /*
+ * Insert the modified tuple into the new storage file.
+ *
+ * XXX Maybe this should also use created=true, just like the other places
+ * calling fill_seq_with_data. That's probably needed for correct cascading
+ * replication.
+ *
+ * XXX That'd mean all fill_seq_with_data callers use created=true, making
+ * the parameter unnecessary.
+ */
+ fill_seq_with_data(seq_rel, tuple);
+
+ /* Clear local cache so that we don't think we have cached numbers */
+ /* Note that we do not change the currval() state */
+ elm->cached = elm->last;
+
+ relation_close(seq_rel, NoLock);
+}
+
/*
* Initialize a sequence's relation with the specified tuple as content
*/
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 3ef6607d246..5beb67e7652 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -85,6 +85,7 @@ typedef struct SubOpts
} SubOpts;
static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
static void check_duplicates_in_publist(List *publist, Datum *datums);
static List *merge_publications(List *oldpublist, List *newpublist, bool addpub, const char *subname);
static void ReportSlotConnectionError(List *rstates, Oid subid, char *slotname, char *err);
@@ -496,6 +497,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
char *err;
WalReceiverConn *wrconn;
List *tables;
+ List *sequences;
ListCell *lc;
char table_state;
@@ -534,6 +536,26 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
InvalidXLogRecPtr);
}
+ /*
+ * Get the sequence list from publisher and build local sequence
+ * status info.
+ */
+ sequences = fetch_sequence_list(wrconn, publications);
+ foreach(lc, sequences)
+ {
+ RangeVar *rv = (RangeVar *) lfirst(lc);
+ Oid relid;
+
+ relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+ /* Check for supported relkind. */
+ CheckSubscriptionRelkind(get_rel_relkind(relid),
+ rv->schemaname, rv->relname);
+
+ AddSubscriptionRelState(subid, relid, table_state,
+ InvalidXLogRecPtr);
+ }
+
/*
* If requested, create permanent slot for the subscription. We
* won't use the initial snapshot for anything, so no need to
@@ -706,6 +728,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data)
{
Oid relid = subrel_local_oids[off];
+ /* XXX ignore sequences - maybe do this in GetSubscriptionRelations? */
+ if (get_rel_relkind(relid) == RELKIND_SEQUENCE)
+ continue;
+
if (!bsearch(&relid, pubrel_local_oids,
list_length(pubrel_names), sizeof(Oid), oid_cmp))
{
@@ -797,6 +823,183 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data)
ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
}
}
+
+ /*
+ * XXX now do the same thing for sequences, maybe before the preceding
+ * block, or earlier?
+ */
+
+ /* Get the sequence list from publisher. */
+ pubrel_names = fetch_sequence_list(wrconn, sub->publications);
+
+ /* Get local table list. */
+ subrel_states = GetSubscriptionRelations(sub->oid);
+
+ /*
+ * Build qsorted array of local table oids for faster lookup. This can
+ * potentially contain all tables in the database so speed of lookup
+ * is important.
+ */
+ subrel_local_oids = palloc(list_length(subrel_states) * sizeof(Oid));
+ off = 0;
+ foreach(lc, subrel_states)
+ {
+ SubscriptionRelState *relstate = (SubscriptionRelState *) lfirst(lc);
+
+ subrel_local_oids[off++] = relstate->relid;
+ }
+ qsort(subrel_local_oids, list_length(subrel_states),
+ sizeof(Oid), oid_cmp);
+
+ /*
+ * Rels that we want to remove from subscription and drop any slots
+ * and origins corresponding to them.
+ */
+ sub_remove_rels = palloc(list_length(subrel_states) * sizeof(SubRemoveRels));
+
+ /*
+ * Walk over the remote tables and try to match them to locally known
+ * tables. If the table is not known locally create a new state for
+ * it.
+ *
+ * Also builds array of local oids of remote tables for the next step.
+ */
+ off = 0;
+ pubrel_local_oids = palloc(list_length(pubrel_names) * sizeof(Oid));
+
+ foreach(lc, pubrel_names)
+ {
+ RangeVar *rv = (RangeVar *) lfirst(lc);
+ Oid relid;
+
+ relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+ /* Check for supported relkind. */
+ CheckSubscriptionRelkind(get_rel_relkind(relid),
+ rv->schemaname, rv->relname);
+
+ pubrel_local_oids[off++] = relid;
+
+ if (!bsearch(&relid, subrel_local_oids,
+ list_length(subrel_states), sizeof(Oid), oid_cmp))
+ {
+ AddSubscriptionRelState(sub->oid, relid,
+ copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
+ InvalidXLogRecPtr);
+ ereport(DEBUG1,
+ (errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
+ rv->schemaname, rv->relname, sub->name)));
+ }
+ }
+
+ /*
+ * Next remove state for sequences we should not care about anymore
+ * using the data we collected above
+ */
+ qsort(pubrel_local_oids, list_length(pubrel_names),
+ sizeof(Oid), oid_cmp);
+
+ remove_rel_len = 0;
+ for (off = 0; off < list_length(subrel_states); off++)
+ {
+ Oid relid = subrel_local_oids[off];
+
+ /* XXX ignore non-sequences - maybe do this in GetSubscriptionRelations? */
+ if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+ continue;
+
+ if (!bsearch(&relid, pubrel_local_oids,
+ list_length(pubrel_names), sizeof(Oid), oid_cmp))
+ {
+ char state;
+ XLogRecPtr statelsn;
+
+ /*
+ * Lock pg_subscription_rel with AccessExclusiveLock to
+ * prevent any race conditions with the apply worker
+ * re-launching workers at the same time this code is trying
+ * to remove those tables.
+ *
+ * Even if new worker for this particular rel is restarted it
+ * won't be able to make any progress as we hold exclusive
+ * lock on subscription_rel till the transaction end. It will
+ * simply exit as there is no corresponding rel entry.
+ *
+ * This locking also ensures that the state of rels won't
+ * change till we are done with this refresh operation.
+ */
+ if (!rel)
+ rel = table_open(SubscriptionRelRelationId, AccessExclusiveLock);
+
+ /* Last known rel state. */
+ state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
+
+ sub_remove_rels[remove_rel_len].relid = relid;
+ sub_remove_rels[remove_rel_len++].state = state;
+
+ RemoveSubscriptionRel(sub->oid, relid);
+
+ logicalrep_worker_stop(sub->oid, relid);
+
+ /*
+ * For READY state, we would have already dropped the
+ * tablesync origin.
+ */
+ if (state != SUBREL_STATE_READY)
+ {
+ char originname[NAMEDATALEN];
+
+ /*
+ * Drop the tablesync's origin tracking if exists.
+ *
+ * It is possible that the origin is not yet created for
+ * tablesync worker, this can happen for the states before
+ * SUBREL_STATE_FINISHEDCOPY. The apply worker can also
+ * concurrently try to drop the origin and by this time
+ * the origin might be already removed. For these reasons,
+ * passing missing_ok = true.
+ */
+ ReplicationOriginNameForTablesync(sub->oid, relid, originname,
+ sizeof(originname));
+ replorigin_drop_by_name(originname, true, false);
+ }
+
+ ereport(DEBUG1,
+ (errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
+ get_namespace_name(get_rel_namespace(relid)),
+ get_rel_name(relid),
+ sub->name)));
+ }
+ }
+
+ /*
+ * Drop the tablesync slots associated with removed tables. This has
+ * to be at the end because otherwise if there is an error while doing
+ * the database operations we won't be able to rollback dropped slots.
+ */
+ for (off = 0; off < remove_rel_len; off++)
+ {
+ if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
+ sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+ {
+ char syncslotname[NAMEDATALEN] = {0};
+
+ /*
+ * For READY/SYNCDONE states we know the tablesync slot has
+ * already been dropped by the tablesync worker.
+ *
+ * For other states, there is no certainty, maybe the slot
+ * does not exist yet. Also, if we fail after removing some of
+ * the slots, next time, it will again try to drop already
+ * dropped slots and fail. For these reasons, we allow
+ * missing_ok = true for the drop.
+ */
+ ReplicationSlotNameForTablesync(sub->oid, sub_remove_rels[off].relid,
+ syncslotname, sizeof(syncslotname));
+ ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
+ }
+ }
+
}
PG_FINALLY();
{
@@ -1616,6 +1819,75 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
return tablelist;
}
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+ WalRcvExecResult *res;
+ StringInfoData cmd;
+ TupleTableSlot *slot;
+ Oid tableRow[2] = {TEXTOID, TEXTOID};
+ ListCell *lc;
+ bool first;
+ List *tablelist = NIL;
+
+ Assert(list_length(publications) > 0);
+
+ initStringInfo(&cmd);
+ appendStringInfoString(&cmd, "SELECT DISTINCT s.schemaname, s.sequencename\n"
+ " FROM pg_catalog.pg_publication_sequences s\n"
+ " WHERE s.pubname IN (");
+ first = true;
+ foreach(lc, publications)
+ {
+ char *pubname = strVal(lfirst(lc));
+
+ if (first)
+ first = false;
+ else
+ appendStringInfoString(&cmd, ", ");
+
+ appendStringInfoString(&cmd, quote_literal_cstr(pubname));
+ }
+ appendStringInfoChar(&cmd, ')');
+
+ res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+ pfree(cmd.data);
+
+ if (res->status != WALRCV_OK_TUPLES)
+ ereport(ERROR,
+ (errmsg("could not receive list of replicated tables from the publisher: %s",
+ res->err)));
+
+ /* Process sequences. */
+ slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+ while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+ {
+ char *nspname;
+ char *relname;
+ bool isnull;
+ RangeVar *rv;
+
+ nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+ Assert(!isnull);
+ relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+ Assert(!isnull);
+
+ rv = makeRangeVar(nspname, relname, -1);
+ tablelist = lappend(tablelist, rv);
+
+ ExecClearTuple(slot);
+ }
+ ExecDropSingleTupleTableSlot(slot);
+
+ walrcv_clear_result(res);
+
+ return tablelist;
+}
+
/*
* This is to report the connection failure while dropping replication slots.
* Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index 313c87398b2..78f14119d98 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -608,7 +608,7 @@ void
CheckSubscriptionRelkind(char relkind, const char *nspname,
const char *relname)
{
- if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+ if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE && relkind != RELKIND_SEQUENCE)
ereport(ERROR,
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 6bd95bbce24..8b7e9710401 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -4852,6 +4852,7 @@ _copyCreatePublicationStmt(const CreatePublicationStmt *from)
COPY_NODE_FIELD(options);
COPY_NODE_FIELD(pubobjects);
COPY_SCALAR_FIELD(for_all_tables);
+ COPY_SCALAR_FIELD(for_all_sequences);
return newnode;
}
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index 4126516222b..55a7dbbddf3 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -2337,6 +2337,7 @@ _equalAlterPublicationStmt(const AlterPublicationStmt *a,
COMPARE_NODE_FIELD(options);
COMPARE_NODE_FIELD(pubobjects);
COMPARE_SCALAR_FIELD(for_all_tables);
+ COMPARE_SCALAR_FIELD(for_all_sequences);
COMPARE_SCALAR_FIELD(action);
return true;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index c4f32425060..bbde765cf56 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -9771,6 +9771,26 @@ PublicationObjSpec:
$$->pubobjtype = PUBLICATIONOBJ_TABLES_IN_CUR_SCHEMA;
$$->location = @5;
}
+ | SEQUENCE relation_expr
+ {
+ $$ = makeNode(PublicationObjSpec);
+ $$->pubobjtype = PUBLICATIONOBJ_SEQUENCE;
+ $$->pubtable = makeNode(PublicationTable);
+ $$->pubtable->relation = $2;
+ }
+ | ALL SEQUENCES IN_P SCHEMA ColId
+ {
+ $$ = makeNode(PublicationObjSpec);
+ $$->pubobjtype = PUBLICATIONOBJ_SEQUENCES_IN_SCHEMA;
+ $$->name = $5;
+ $$->location = @5;
+ }
+ | ALL SEQUENCES IN_P SCHEMA CURRENT_SCHEMA
+ {
+ $$ = makeNode(PublicationObjSpec);
+ $$->pubobjtype = PUBLICATIONOBJ_SEQUENCES_IN_CUR_SCHEMA;
+ $$->location = @5;
+ }
| ColId
{
$$ = makeNode(PublicationObjSpec);
@@ -10106,6 +10126,12 @@ UnlistenStmt:
}
;
+/*
+ * FIXME
+ *
+ * opt_publication_for_sequences and publication_for_sequences should be
+ * copies for sequences
+ */
/*****************************************************************************
*
@@ -10114,6 +10140,12 @@ UnlistenStmt:
* BEGIN / COMMIT / ROLLBACK
* (also older versions END / ABORT)
*
+ * ALTER PUBLICATION name ADD SEQUENCE sequence [, sequence2]
+ *
+ * ALTER PUBLICATION name DROP SEQUENCE sequence [, sequence2]
+ *
+ * ALTER PUBLICATION name SET SEQUENCE sequence [, sequence2]
+ *
*****************************************************************************/
TransactionStmt:
diff --git a/src/backend/replication/logical/proto.c b/src/backend/replication/logical/proto.c
index 953942692ce..e8ead1387ae 100644
--- a/src/backend/replication/logical/proto.c
+++ b/src/backend/replication/logical/proto.c
@@ -647,6 +647,56 @@ logicalrep_write_message(StringInfo out, TransactionId xid, XLogRecPtr lsn,
pq_sendbytes(out, message, sz);
}
+/*
+ * Write SEQUENCE to stream
+ */
+void
+logicalrep_write_sequence(StringInfo out, Relation rel, TransactionId xid,
+ XLogRecPtr lsn, bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ uint8 flags = 0;
+ char *relname;
+
+ pq_sendbyte(out, LOGICAL_REP_MSG_SEQUENCE);
+
+ /* transaction ID (if not valid, we're not streaming) */
+ if (TransactionIdIsValid(xid))
+ pq_sendint32(out, xid);
+
+ pq_sendint8(out, flags);
+ pq_sendint64(out, lsn);
+
+ logicalrep_write_namespace(out, RelationGetNamespace(rel));
+ relname = RelationGetRelationName(rel);
+ pq_sendstring(out, relname);
+
+ pq_sendint8(out, transactional);
+ pq_sendint64(out, last_value);
+ pq_sendint64(out, log_cnt);
+ pq_sendint8(out, is_called);
+}
+
+/*
+ * Read SEQUENCE from the stream.
+ */
+void
+logicalrep_read_sequence(StringInfo in, LogicalRepSequence *seqdata)
+{
+ /* XXX skipping flags and lsn */
+ pq_getmsgint(in, 1);
+ pq_getmsgint64(in);
+
+ /* Read relation name from stream */
+ seqdata->nspname = pstrdup(logicalrep_read_namespace(in));
+ seqdata->seqname = pstrdup(pq_getmsgstring(in));
+
+ seqdata->transactional = pq_getmsgint(in, 1);
+ seqdata->last_value = pq_getmsgint64(in);
+ seqdata->log_cnt = pq_getmsgint64(in);
+ seqdata->is_called = pq_getmsgint(in, 1);
+}
+
/*
* Write relation description to the output stream.
*/
@@ -1203,6 +1253,8 @@ logicalrep_message_type(LogicalRepMsgType action)
return "STREAM ABORT";
case LOGICAL_REP_MSG_STREAM_PREPARE:
return "STREAM PREPARE";
+ case LOGICAL_REP_MSG_SEQUENCE:
+ return "SEQUENCE";
}
elog(ERROR, "invalid logical replication message type \"%c\"", action);
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index e596b69d466..c85867ee46b 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -100,6 +100,7 @@
#include "catalog/pg_subscription_rel.h"
#include "catalog/pg_type.h"
#include "commands/copy.h"
+#include "commands/sequence.h"
#include "miscadmin.h"
#include "parser/parse_relation.h"
#include "pgstat.h"
@@ -359,6 +360,12 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
*
* If the synchronization position is reached (SYNCDONE), then the table can
* be marked as READY and is no longer tracked.
+ *
+ * XXX This needs to handle sequences too - after AlterSubscription_refresh
+ * starts caring about sequences, GetSubscriptionNotReadyRelations won't
+ * return just tables, and we'll have to sync them here. Not sure it's worth
+ * creating a new "sync" worker per sequence, maybe we should just sync them
+ * in the current process (it's pretty light-weight).
*/
static void
process_syncing_tables_for_apply(XLogRecPtr current_lsn)
@@ -873,6 +880,99 @@ copy_table(Relation rel)
logicalrep_rel_close(relmapentry, NoLock);
}
+
+
+/*
+ * FIXME add comment
+ */
+static void
+fetch_sequence_data(char *nspname, char *relname,
+ int64 *last_value, int64 *log_cnt, bool *is_called)
+{
+ WalRcvExecResult *res;
+ StringInfoData cmd;
+ TupleTableSlot *slot;
+ Oid tableRow[3] = {INT8OID, INT8OID, BOOLOID};
+
+ initStringInfo(&cmd);
+ appendStringInfo(&cmd, "SELECT last_value, log_cnt, is_called\n"
+ " FROM %s", quote_qualified_identifier(nspname, relname));
+
+ res = walrcv_exec(LogRepWorkerWalRcvConn, cmd.data, 3, tableRow);
+ pfree(cmd.data);
+
+ if (res->status != WALRCV_OK_TUPLES)
+ ereport(ERROR,
+ (errmsg("could not receive list of replicated tables from the publisher: %s",
+ res->err)));
+
+ /* Process the sequence. */
+ slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+ while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+ {
+ bool isnull;
+
+ *last_value = DatumGetInt64(slot_getattr(slot, 1, &isnull));
+ Assert(!isnull);
+
+ *log_cnt = DatumGetInt64(slot_getattr(slot, 2, &isnull));
+ Assert(!isnull);
+
+ *is_called = DatumGetBool(slot_getattr(slot, 3, &isnull));
+ Assert(!isnull);
+
+ ExecClearTuple(slot);
+ }
+ ExecDropSingleTupleTableSlot(slot);
+
+ walrcv_clear_result(res);
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Caller is responsible for locking the local relation.
+ */
+static void
+copy_sequence(Relation rel)
+{
+ LogicalRepRelMapEntry *relmapentry;
+ LogicalRepRelation lrel;
+ StringInfoData cmd;
+ int64 last_value = 0,
+ log_cnt = 0;
+ bool is_called = 0;
+
+ /* Get the publisher relation info. */
+ fetch_remote_table_info(get_namespace_name(RelationGetNamespace(rel)),
+ RelationGetRelationName(rel), &lrel);
+
+ /* Put the relation into relmap. */
+ logicalrep_relmap_update(&lrel);
+
+ /* Map the publisher relation to local one. */
+ relmapentry = logicalrep_rel_open(lrel.remoteid, NoLock);
+ Assert(rel == relmapentry->localrel);
+
+ /* Start copy on the publisher. */
+ initStringInfo(&cmd);
+
+ Assert(lrel.relkind == RELKIND_SEQUENCE);
+
+ fetch_sequence_data(lrel.nspname, lrel.relname, &last_value, &log_cnt, &is_called);
+
+ elog(WARNING, "sequence %s info last_value %ld log_cnt %ld is_called %d",
+ quote_qualified_identifier(lrel.nspname, lrel.relname),
+ last_value, log_cnt, is_called);
+
+ ResetSequence2(RelationGetRelid(rel), last_value, log_cnt, is_called);
+
+ logicalrep_rel_close(relmapentry, NoLock);
+}
+
+
+
+
/*
* Determine the tablesync slot name.
*
@@ -1134,10 +1234,20 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
originname)));
}
- /* Now do the initial data copy */
- PushActiveSnapshot(GetTransactionSnapshot());
- copy_table(rel);
- PopActiveSnapshot();
+ if (get_rel_relkind(RelationGetRelid(rel)) == RELKIND_SEQUENCE)
+ {
+ /* Now do the initial sequence copy */
+ PushActiveSnapshot(GetTransactionSnapshot());
+ copy_sequence(rel);
+ PopActiveSnapshot();
+ }
+ else
+ {
+ /* Now do the initial data copy */
+ PushActiveSnapshot(GetTransactionSnapshot());
+ copy_table(rel);
+ PopActiveSnapshot();
+ }
res = walrcv_exec(LogRepWorkerWalRcvConn, "COMMIT", 0, NULL);
if (res->status != WALRCV_OK_COMMAND)
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index d77bb32bb9e..68708d3907f 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -144,6 +144,7 @@
#include "catalog/pg_tablespace.h"
#include "commands/tablecmds.h"
#include "commands/tablespace.h"
+#include "commands/sequence.h"
#include "commands/trigger.h"
#include "executor/executor.h"
#include "executor/execPartition.h"
@@ -1093,6 +1094,61 @@ apply_handle_origin(StringInfo s)
errmsg_internal("ORIGIN message sent out of order")));
}
+/*
+ * Handle SEQUENCE message.
+ */
+static void
+apply_handle_sequence(StringInfo s)
+{
+ LogicalRepSequence seq;
+ Oid relid;
+
+ if (handle_streamed_transaction(LOGICAL_REP_MSG_SEQUENCE, s))
+ return;
+
+ logicalrep_read_sequence(s, &seq);
+
+ /*
+ * Non-transactional sequence updates should not be part of a remote
+ * transaction. There should not be any running transaction.
+ */
+ Assert((!seq.transactional) || in_remote_transaction);
+ Assert(!(!seq.transactional && in_remote_transaction));
+ Assert(!(!seq.transactional && IsTransactionState()));
+
+ /*
+ * Make sure we're in a transaction (needed by ResetSequence2). For
+ * non-transactional updates we're guaranteed to start a new one,
+ * and we'll commit it at the end.
+ */
+ if (!IsTransactionState())
+ {
+ StartTransactionCommand();
+ maybe_reread_subscription();
+ }
+
+ relid = RangeVarGetRelid(makeRangeVar(seq.nspname,
+ seq.seqname, -1),
+ RowExclusiveLock, false);
+
+ /* lock the sequence in AccessExclusiveLock, as expected by ResetSequence2 */
+ elog(WARNING, "locking sequence %d in exclusive mode", relid);
+ LockRelationOid(relid, AccessExclusiveLock);
+
+ elog(WARNING, "applying sequence %s.%s transactional %d last_value %ld log_cnt %ld is_called %d",
+ seq.nspname, seq.seqname, seq.transactional, seq.last_value, seq.log_cnt, seq.is_called);
+
+ /* apply the sequence change */
+ ResetSequence2(relid, seq.last_value, seq.log_cnt, seq.is_called);
+
+ /*
+ * Commit the per-stream transaction (we only do this when not in
+ * remote transaction, i.e. for non-transactional sequence updates).
+ */
+ if (!in_remote_transaction)
+ CommitTransactionCommand();
+}
+
/*
* Handle STREAM START message.
*/
@@ -2421,6 +2477,10 @@ apply_dispatch(StringInfo s)
*/
break;
+ case LOGICAL_REP_MSG_SEQUENCE:
+ apply_handle_sequence(s);
+ return;
+
case LOGICAL_REP_MSG_STREAM_START:
apply_handle_stream_start(s);
break;
diff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c
index 6df705f90ff..b82c6b10305 100644
--- a/src/backend/replication/pgoutput/pgoutput.c
+++ b/src/backend/replication/pgoutput/pgoutput.c
@@ -49,6 +49,10 @@ static void pgoutput_message(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn, XLogRecPtr message_lsn,
bool transactional, const char *prefix,
Size sz, const char *message);
+static void pgoutput_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation rel, bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called);
static bool pgoutput_origin_filter(LogicalDecodingContext *ctx,
RepOriginId origin_id);
static void pgoutput_begin_prepare_txn(LogicalDecodingContext *ctx,
@@ -161,6 +165,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->change_cb = pgoutput_change;
cb->truncate_cb = pgoutput_truncate;
cb->message_cb = pgoutput_message;
+ cb->sequence_cb = pgoutput_sequence;
cb->commit_cb = pgoutput_commit_txn;
cb->begin_prepare_cb = pgoutput_begin_prepare_txn;
@@ -177,6 +182,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->stream_commit_cb = pgoutput_stream_commit;
cb->stream_change_cb = pgoutput_change;
cb->stream_message_cb = pgoutput_message;
+ cb->stream_sequence_cb = pgoutput_sequence;
cb->stream_truncate_cb = pgoutput_truncate;
/* transaction streaming - two-phase commit */
cb->stream_prepare_cb = pgoutput_stream_prepare_txn;
@@ -190,6 +196,7 @@ parse_output_parameters(List *options, PGOutputData *data)
bool publication_names_given = false;
bool binary_option_given = false;
bool messages_option_given = false;
+ bool sequences_option_given = false;
bool streaming_given = false;
bool two_phase_option_given = false;
@@ -197,6 +204,7 @@ parse_output_parameters(List *options, PGOutputData *data)
data->streaming = false;
data->messages = false;
data->two_phase = false;
+ data->sequences = true;
foreach(lc, options)
{
@@ -262,6 +270,16 @@ parse_output_parameters(List *options, PGOutputData *data)
data->messages = defGetBoolean(defel);
}
+ else if (strcmp(defel->defname, "sequences") == 0)
+ {
+ if (sequences_option_given)
+ ereport(ERROR,
+ (errcode(ERRCODE_SYNTAX_ERROR),
+ errmsg("conflicting or redundant options")));
+ sequences_option_given = true;
+
+ data->sequences = defGetBoolean(defel);
+ }
else if (strcmp(defel->defname, "streaming") == 0)
{
if (streaming_given)
@@ -858,6 +876,51 @@ pgoutput_message(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
OutputPluginWrite(ctx, true);
}
+static void
+pgoutput_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation rel, bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ PGOutputData *data = (PGOutputData *) ctx->output_plugin_private;
+ TransactionId xid = InvalidTransactionId;
+ RelationSyncEntry *relentry;
+
+ if (!data->sequences)
+ return;
+
+ if (!is_publishable_relation(rel))
+ return;
+
+ /*
+ * Remember the xid for the message in streaming mode. See
+ * pgoutput_change.
+ */
+ if (in_streaming)
+ xid = txn->xid;
+
+ relentry = get_rel_sync_entry(data, RelationGetRelid(rel));
+
+ /*
+ * First check the sequence filter.
+ *
+ * We handle just REORDER_BUFFER_CHANGE_SEQUENCE here.
+ */
+ if (!relentry->pubactions.pubsequence)
+ return;
+
+ OutputPluginPrepareWrite(ctx, true);
+ logicalrep_write_sequence(ctx->out,
+ rel,
+ xid,
+ sequence_lsn,
+ transactional,
+ last_value,
+ log_cnt,
+ is_called);
+ OutputPluginWrite(ctx, true);
+}
+
/*
* Currently we always forward.
*/
@@ -1141,7 +1204,8 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
entry->schema_sent = false;
entry->streamed_txns = NIL;
entry->pubactions.pubinsert = entry->pubactions.pubupdate =
- entry->pubactions.pubdelete = entry->pubactions.pubtruncate = false;
+ entry->pubactions.pubdelete = entry->pubactions.pubtruncate =
+ entry->pubactions.pubsequence = false;
entry->publish_as_relid = InvalidOid;
entry->map = NULL; /* will be set by maybe_send_schema() if
* needed */
@@ -1163,6 +1227,7 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
Oid publish_as_relid = relid;
bool am_partition = get_rel_relispartition(relid);
char relkind = get_rel_relkind(relid);
+ bool is_sequence = (get_rel_relkind(relid) == RELKIND_SEQUENCE);
/* Reload publications if needed before use. */
if (!publications_valid)
@@ -1191,6 +1256,7 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
entry->pubactions.pubupdate = false;
entry->pubactions.pubdelete = false;
entry->pubactions.pubtruncate = false;
+ entry->pubactions.pubsequence = false;
if (entry->map)
{
/*
@@ -1213,12 +1279,23 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
Publication *pub = lfirst(lc);
bool publish = false;
- if (pub->alltables)
+ if (pub->alltables && (!is_sequence))
{
publish = true;
if (pub->pubviaroot && am_partition)
publish_as_relid = llast_oid(get_partition_ancestors(relid));
}
+ else if (pub->allsequences && is_sequence)
+ {
+ publish = true;
+ }
+
+ /* if a sequence, just cross-check the list of publications */
+ if (!publish && is_sequence)
+ {
+ if (list_member_oid(pubids, pub->oid))
+ publish = true;
+ }
if (!publish)
{
@@ -1275,10 +1352,12 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
entry->pubactions.pubupdate |= pub->pubactions.pubupdate;
entry->pubactions.pubdelete |= pub->pubactions.pubdelete;
entry->pubactions.pubtruncate |= pub->pubactions.pubtruncate;
+ entry->pubactions.pubsequence |= pub->pubactions.pubsequence;
}
if (entry->pubactions.pubinsert && entry->pubactions.pubupdate &&
- entry->pubactions.pubdelete && entry->pubactions.pubtruncate)
+ entry->pubactions.pubdelete && entry->pubactions.pubtruncate &&
+ entry->pubactions.pubsequence)
break;
}
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index 2707fed12f4..45a8b3e490a 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -5586,6 +5586,7 @@ GetRelationPublicationActions(Relation relation)
pubactions->pubupdate |= pubform->pubupdate;
pubactions->pubdelete |= pubform->pubdelete;
pubactions->pubtruncate |= pubform->pubtruncate;
+ pubactions->pubsequence |= pubform->pubsequence;
ReleaseSysCache(tup);
@@ -5594,7 +5595,8 @@ GetRelationPublicationActions(Relation relation)
* other publications.
*/
if (pubactions->pubinsert && pubactions->pubupdate &&
- pubactions->pubdelete && pubactions->pubtruncate)
+ pubactions->pubdelete && pubactions->pubtruncate &&
+ pubactions->pubsequence)
break;
}
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 98882272130..4900c48ff19 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1782,20 +1782,20 @@ psql_completion(const char *text, int start, int end)
COMPLETE_WITH("ADD", "DROP", "OWNER TO", "RENAME TO", "SET");
/* ALTER PUBLICATION <name> ADD */
else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD"))
- COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE");
- else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD|SET", "TABLE") ||
- (HeadMatches("ALTER", "PUBLICATION", MatchAny, "ADD|SET", "TABLE") &&
+ COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE|SEQUENCE");
+ else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD|SET", "TABLE|SEQUENCE") ||
+ (HeadMatches("ALTER", "PUBLICATION", MatchAny, "ADD|SET", "TABLE|SEQUENCE") &&
ends_with(prev_wd, ',')))
COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_tables);
- else if (HeadMatches("ALTER", "PUBLICATION", MatchAny, "ADD|SET", "TABLE"))
+ else if (HeadMatches("ALTER", "PUBLICATION", MatchAny, "ADD|SET", "TABLE|SEQUENCE"))
COMPLETE_WITH(",");
/* ALTER PUBLICATION <name> DROP */
else if (Matches("ALTER", "PUBLICATION", MatchAny, "DROP"))
- COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE");
+ COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE|SEQUENCE");
/* ALTER PUBLICATION <name> SET */
else if (Matches("ALTER", "PUBLICATION", MatchAny, "SET"))
- COMPLETE_WITH("(", "ALL TABLES IN SCHEMA", "TABLE");
- else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD|DROP|SET", "ALL", "TABLES", "IN", "SCHEMA"))
+ COMPLETE_WITH("(", "ALL TABLES IN SCHEMA", "TABLE|SEQUENCE");
+ else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD|DROP|SET", "ALL", "TABLES|SEQUENCES", "IN", "SCHEMA"))
COMPLETE_WITH_QUERY_PLUS(Query_for_list_of_schemas
" AND nspname NOT LIKE E'pg\\\\_%'",
"CURRENT_SCHEMA");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 62f36daa981..a66eb8109d0 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -11546,6 +11546,11 @@
provolatile => 's', prorettype => 'oid', proargtypes => 'text',
proallargtypes => '{text,oid}', proargmodes => '{i,o}',
proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_tables' },
+{ oid => '8000', descr => 'get OIDs of sequences in a publication',
+ proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+ provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+ proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+ proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
{ oid => '6121',
descr => 'returns whether a relation can be part of a publication',
proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 841b9b6c253..e56286772f4 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
*/
bool puballtables;
+ /*
+ * indicates that this is special publication which should encompass all
+ * sequences in the database (except for the unlogged and temp ones)
+ */
+ bool puballsequences;
+
/* true if inserts are published */
bool pubinsert;
@@ -52,6 +58,9 @@ CATALOG(pg_publication,6104,PublicationRelationId)
/* true if truncates are published */
bool pubtruncate;
+ /* true if sequences are published */
+ bool pubsequence;
+
/* true if partition changes are published using root schema */
bool pubviaroot;
} FormData_pg_publication;
@@ -72,6 +81,7 @@ typedef struct PublicationActions
bool pubupdate;
bool pubdelete;
bool pubtruncate;
+ bool pubsequence;
} PublicationActions;
typedef struct Publication
@@ -79,6 +89,7 @@ typedef struct Publication
Oid oid;
char *name;
bool alltables;
+ bool allsequences;
bool pubviaroot;
PublicationActions pubactions;
} Publication;
@@ -121,6 +132,9 @@ extern List *GetPubPartitionOptionRelations(List *result,
PublicationPartOpt pub_partopt,
Oid relid);
+extern List *GetAllSequencesPublicationRelations(void);
+extern List *GetPublicationSequenceRelations(Oid pubid);
+
extern bool is_publishable_relation(Relation rel);
extern bool is_schema_publication(Oid pubid);
extern ObjectAddress publication_add_relation(Oid pubid, PublicationRelInfo *targetrel,
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9fecc41954e..d8c255a7af5 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -60,6 +60,7 @@ extern ObjectAddress DefineSequence(ParseState *pstate, CreateSeqStmt *stmt);
extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
extern void DeleteSequenceTuple(Oid relid);
extern void ResetSequence(Oid seq_relid);
+extern void ResetSequence2(Oid seq_relid, int64 last_value, int64 log_cnt, bool is_called);
extern void ResetSequenceCaches(void);
extern void seq_redo(XLogReaderState *rptr);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 37fcc4c9b5a..4a990364e4a 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -3656,6 +3656,10 @@ typedef enum PublicationObjSpecType
PUBLICATIONOBJ_TABLES_IN_SCHEMA, /* All tables in schema */
PUBLICATIONOBJ_TABLES_IN_CUR_SCHEMA, /* All tables in first element of
* search_path */
+ PUBLICATIONOBJ_SEQUENCE, /* Sequence type */
+ PUBLICATIONOBJ_SEQUENCES_IN_SCHEMA, /* Sequences in schema type */
+ PUBLICATIONOBJ_SEQUENCES_IN_CUR_SCHEMA, /* Get the first element of
+ * search_path */
PUBLICATIONOBJ_CONTINUATION /* Continuation of previous type */
} PublicationObjSpecType;
@@ -3675,6 +3679,7 @@ typedef struct CreatePublicationStmt
List *options; /* List of DefElem nodes */
List *pubobjects; /* Optional list of publication objects */
bool for_all_tables; /* Special publication for all tables in db */
+ bool for_all_sequences; /* Special publication for all sequences in db */
} CreatePublicationStmt;
typedef enum AlterPublicationAction
@@ -3698,6 +3703,7 @@ typedef struct AlterPublicationStmt
*/
List *pubobjects; /* Optional list of publication objects */
bool for_all_tables; /* Special publication for all tables in db */
+ bool for_all_sequences; /* Special publication for all sequences in db */
AlterPublicationAction action; /* What action to perform with the given
* objects */
} AlterPublicationStmt;
diff --git a/src/include/replication/logicalproto.h b/src/include/replication/logicalproto.h
index 22fffaca62d..8f8c325522d 100644
--- a/src/include/replication/logicalproto.h
+++ b/src/include/replication/logicalproto.h
@@ -60,6 +60,7 @@ typedef enum LogicalRepMsgType
LOGICAL_REP_MSG_RELATION = 'R',
LOGICAL_REP_MSG_TYPE = 'Y',
LOGICAL_REP_MSG_MESSAGE = 'M',
+ LOGICAL_REP_MSG_SEQUENCE = 'X', /* FIXME change */
LOGICAL_REP_MSG_BEGIN_PREPARE = 'b',
LOGICAL_REP_MSG_PREPARE = 'P',
LOGICAL_REP_MSG_COMMIT_PREPARED = 'K',
@@ -117,6 +118,18 @@ typedef struct LogicalRepTyp
char *typname; /* name of the remote type */
} LogicalRepTyp;
+/* Sequence info */
+typedef struct LogicalRepSequence
+{
+ Oid remoteid; /* unique id of the remote sequence */
+ char *nspname; /* schema name of remote sequence */
+ char *seqname; /* name of the remote sequence */
+ bool transactional;
+ int64 last_value;
+ int64 log_cnt;
+ bool is_called;
+} LogicalRepSequence;
+
/* Transaction info */
typedef struct LogicalRepBeginData
{
@@ -227,6 +240,12 @@ extern List *logicalrep_read_truncate(StringInfo in,
bool *cascade, bool *restart_seqs);
extern void logicalrep_write_message(StringInfo out, TransactionId xid, XLogRecPtr lsn,
bool transactional, const char *prefix, Size sz, const char *message);
+extern void logicalrep_write_sequence(StringInfo out, Relation rel,
+ TransactionId xid, XLogRecPtr lsn,
+ bool transactional,
+ int64 last_value, int64 log_cnt,
+ bool is_called);
+extern void logicalrep_read_sequence(StringInfo in, LogicalRepSequence *seqdata);
extern void logicalrep_write_rel(StringInfo out, TransactionId xid,
Relation rel);
extern LogicalRepRelation *logicalrep_read_rel(StringInfo in);
diff --git a/src/include/replication/pgoutput.h b/src/include/replication/pgoutput.h
index 78aa9151ef5..a6f6843ada6 100644
--- a/src/include/replication/pgoutput.h
+++ b/src/include/replication/pgoutput.h
@@ -28,6 +28,7 @@ typedef struct PGOutputData
bool streaming;
bool messages;
bool two_phase;
+ bool sequences;
} PGOutputData;
#endif /* PGOUTPUT_H */
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 1420288d67b..3f50e100f8d 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1429,6 +1429,14 @@ pg_prepared_xacts| SELECT p.transaction,
FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+ n.nspname AS schemaname,
+ c.relname AS sequencename
+ FROM pg_publication p,
+ LATERAL pg_get_publication_sequences((p.pubname)::text) gpt(relid),
+ (pg_class c
+ JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+ WHERE (c.oid = gpt.relid);
pg_publication_tables| SELECT p.pubname,
n.nspname AS schemaname,
c.relname AS tablename
diff --git a/src/test/subscription/t/028_sequences.pl b/src/test/subscription/t/028_sequences.pl
new file mode 100644
index 00000000000..58775769cdc
--- /dev/null
+++ b/src/test/subscription/t/028_sequences.pl
@@ -0,0 +1,196 @@
+
+# Copyright (c) 2021, PostgreSQL Global Development Group
+
+# This tests that sequences are replicated correctly by logical replication
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More tests => 6;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->start;
+
+# Create subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Create some preexisting content on publisher
+my $ddl = qq(
+ CREATE SEQUENCE s;
+);
+
+# Setup structure on the publisher
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Create the same structure on the subscriber, plus an extra sequence that
+# we'll create on the publisher later
+$ddl = qq(
+ CREATE SEQUENCE s;
+ CREATE SEQUENCE s2;
+);
+
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Setup logical replication
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+ "CREATE PUBLICATION seq_pub");
+
+$node_publisher->safe_psql('postgres',
+ "ALTER PUBLICATION seq_pub ADD SEQUENCE s");
+
+$node_subscriber->safe_psql('postgres',
+ "CREATE SUBSCRIPTION seq_sub CONNECTION '$publisher_connstr' PUBLICATION seq_pub WITH (slot_name = seq_sub_slot)"
+);
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Wait for initial sync to finish as well
+my $synced_query =
+ "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('s', 'r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+ or die "Timed out while waiting for subscriber to synchronize data";
+
+# Insert initial test data
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ -- generate a number of values using the sequence
+ SELECT nextval('s') FROM generate_series(1,100);
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+my $result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s;
+));
+
+is( $result, '132|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+# advance the sequence in a rolled-back transaction - should not be replicated
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ SELECT nextval('s') FROM generate_series(1,100);
+ ROLLBACK;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s;
+));
+
+is( $result, '231|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+# create a new sequence and roll it back - should not be replicated, due to
+# the transactional behavior
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ CREATE SEQUENCE s2;
+ ALTER PUBLICATION seq_pub ADD SEQUENCE s2;
+ SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '1|0|f',
+ 'check replicated sequence values on subscriber');
+
+
+# create a new sequence, advance it in a rolled-back transaction, but commit
+# the create - the advance should be replicated nevertheless
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ CREATE SEQUENCE s2;
+ ALTER PUBLICATION seq_pub ADD SEQUENCE s2;
+ SAVEPOINT sp1;
+ SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK TO sp1;
+ COMMIT;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Wait for sync of the second sequence we just added to finish
+$synced_query =
+ "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('s', 'r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+ or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '132|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+# advance the new sequence in a transaction, and roll it back - in this case
+# it should not be replicated at commit
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '231|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+# advance the sequence in a subtransaction - the subtransaction gets rolled
+# back, but commit the main one - the changes should still be replicated
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ SAVEPOINT s1;
+ SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK TO s1;
+ COMMIT;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '330|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+$node_subscriber->stop('fast');
+$node_publisher->stop('fast');
--
2.34.1
0002-tweak-test-20220212.patch
From aa91b8e9a469a4fa13a8b185dfda98e45ee4b8c3 Mon Sep 17 00:00:00 2001
From: Tomas Vondra <tomas@2ndquadrant.com>
Date: Sat, 12 Feb 2022 01:24:47 +0100
Subject: [PATCH 2/2] tweak test
---
src/test/subscription/t/028_sequences.pl | 14 ++++++++------
1 file changed, 8 insertions(+), 6 deletions(-)
diff --git a/src/test/subscription/t/028_sequences.pl b/src/test/subscription/t/028_sequences.pl
index 58775769cdc..58ed02462d8 100644
--- a/src/test/subscription/t/028_sequences.pl
+++ b/src/test/subscription/t/028_sequences.pl
@@ -20,6 +20,7 @@ $node_subscriber->start;
# Create some preexisting content on publisher
my $ddl = qq(
+ CREATE TABLE seq_test (v BIGINT);
CREATE SEQUENCE s;
);
@@ -29,6 +30,7 @@ $node_publisher->safe_psql('postgres', $ddl);
# Create the same structure on the subscriber, plus an extra sequence that
# we'll create on the publisher later
$ddl = qq(
+ CREATE TABLE seq_test (v BIGINT);
CREATE SEQUENCE s;
CREATE SEQUENCE s2;
);
@@ -59,7 +61,7 @@ $node_subscriber->poll_query_until('postgres', $synced_query)
$node_publisher->safe_psql(
'postgres', qq(
-- generate a number of values using the sequence
- SELECT nextval('s') FROM generate_series(1,100);
+ INSERT INTO seq_test SELECT nextval('s') FROM generate_series(1,100);
));
$node_publisher->wait_for_catchup('seq_sub');
@@ -78,7 +80,7 @@ is( $result, '132|0|t',
$node_publisher->safe_psql(
'postgres', qq(
BEGIN;
- SELECT nextval('s') FROM generate_series(1,100);
+ INSERT INTO seq_test SELECT nextval('s') FROM generate_series(1,100);
ROLLBACK;
));
@@ -101,7 +103,7 @@ $node_publisher->safe_psql(
BEGIN;
CREATE SEQUENCE s2;
ALTER PUBLICATION seq_pub ADD SEQUENCE s2;
- SELECT nextval('s2') FROM generate_series(1,100);
+ INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);
ROLLBACK;
));
@@ -125,7 +127,7 @@ $node_publisher->safe_psql(
CREATE SEQUENCE s2;
ALTER PUBLICATION seq_pub ADD SEQUENCE s2;
SAVEPOINT sp1;
- SELECT nextval('s2') FROM generate_series(1,100);
+ INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);
ROLLBACK TO sp1;
COMMIT;
));
@@ -153,7 +155,7 @@ is( $result, '132|0|t',
$node_publisher->safe_psql(
'postgres', qq(
BEGIN;
- SELECT nextval('s2') FROM generate_series(1,100);
+ INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);
ROLLBACK;
));
@@ -175,7 +177,7 @@ $node_publisher->safe_psql(
'postgres', qq(
BEGIN;
SAVEPOINT s1;
- SELECT nextval('s2') FROM generate_series(1,100);
+ INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);
ROLLBACK TO s1;
COMMIT;
));
--
2.34.1
On 2/12/22 01:34, Tomas Vondra wrote:
> On 2/10/22 19:17, Tomas Vondra wrote:
>> I've polished & pushed the first part adding sequence decoding
>> infrastructure etc. Attached are the two remaining parts.
>>
>> I plan to wait a day or two and then push the test_decoding part. The
>> last part (for built-in replication) will need more work and maybe
>> rethinking the grammar etc.
>
> I've pushed the second part, adding sequences to test_decoding.
>
> Here's the remaining part, rebased, with a small tweak in the TAP test
> to eliminate the issue with not waiting for sequence increments. I've
> kept the tweak in a separate patch, so that we can throw it away easily
> if we happen to resolve the issue.
Hmm, cfbot was not happy about this, so here's a version fixing the
elog() format issue reported by CirrusCI/mingw by ditching the log
message. It was useful for debugging, but otherwise just noise.
I'm a bit puzzled about the macOS failure, though. It seems as if the
test does not wait for the subscriber long enough, but this is with the
tweaked test variant, so it should not have the rollback issue. And I
haven't seen this failure on any other machine.
Regarding adding decoding of sequences to the built-in replication,
there are a couple of questions that we need to discuss first before
cleaning up the code etc. Most of them are related to syntax and
handling of various sequence variants.
1) Firstly, what about implicit sequences? That is, if you create a
table with a SERIAL or BIGSERIAL column, it'll have a sequence attached.
Should those sequences be added to the publication when the table gets
added? Or should we require adding them separately? Or should that be
specified in the grammar, somehow? Should we have INCLUDING SEQUENCES
for ALTER PUBLICATION ... ADD TABLE ...?
I think we shouldn't require replicating the sequence, because who knows
what the schema is on the subscriber? We want to allow differences, so
maybe the sequence is not there. I'd start with just adding them
separately, because that just seems simpler, but maybe there are good
reasons to support adding them in ADD TABLE.
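
To make that concrete, here's a minimal sketch of what adding them
separately could look like with the grammar from the attached patch. The
publication name "pub" and the default sequence name "t_id_seq" are just
for illustration, and pg_get_serial_sequence() is only shown as a way to
identify the implicit sequence:

CREATE TABLE t (id bigserial PRIMARY KEY, val text);

-- the implicit sequence backing the serial column (usually "t_id_seq")
SELECT pg_get_serial_sequence('t', 'id');

-- add the table and its sequence to the publication separately
ALTER PUBLICATION pub ADD TABLE t;
ALTER PUBLICATION pub ADD SEQUENCE t_id_seq;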
2) Should it be possible to add sequences that are also associated with
a serial column, without the table being replicated too? I'd say yes, if
people want to do that - I don't think it can cause any issues, and it's
possible to just use the sequence directly for non-serial columns anyway,
which is effectively the same thing, but we can't detect that.
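
For illustration, these two schemas (names made up) behave the same - a
sequence feeding a column default - but only the serial variant records
the association in the catalogs, so there's nothing to detect in the
hand-built one:

-- serial column: the sequence is created and marked as owned implicitly
CREATE TABLE t1 (id bigserial);

-- hand-built equivalent: the sequence is used directly, no ownership link
CREATE SEQUENCE t2_id_seq;
CREATE TABLE t2 (id bigint DEFAULT nextval('t2_id_seq'));

-- so publishing just the sequence, without the table, seems fine either way
ALTER PUBLICATION pub ADD SEQUENCE t2_id_seq;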
3) What about sequences for UNLOGGED tables? At the moment we don't
allow sequences to be UNLOGGED (Peter revived his patch [1], but that's
not committed yet). Again, I'd say it's up to the user to decide which
sequences are replicated - it's similar to (2).
4) I wonder if we actually want FOR ALL SEQUENCES. On the one hand it'd
be symmetrical with FOR ALL TABLES, which is the other object type we
can replicate. So it'd seem reasonable to handle them in a similar way.
But it causes shift/reduce conflicts in the grammar, so it'll need
some changes.
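
For reference, a sketch of the syntax variants involved (publication
names are made up). The ADD forms follow the grammar added to gram.y by
the attached patch - the TAP test exercises the ADD SEQUENCE one, though
I haven't verified the ALL SEQUENCES IN SCHEMA form end to end - while
the last statement is the FOR ALL SEQUENCES variant discussed here,
which the attached grammar does not provide yet:

-- per-sequence and per-schema forms, per the grammar in the patch
CREATE PUBLICATION seq_pub;
ALTER PUBLICATION seq_pub ADD SEQUENCE s;
ALTER PUBLICATION seq_pub ADD ALL SEQUENCES IN SCHEMA public;

-- the variant in question, symmetrical with FOR ALL TABLES
CREATE PUBLICATION all_seq_pub FOR ALL SEQUENCES;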
regards
[1]: /messages/by-id/8da92c1f-9117-41bc-731b-ce1477a77d69@enterprisedb.com
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Attachments:
0001-Add-support-for-decoding-sequences-to-buil-20220212b.patch
From 960c90c6849e27eabe09dd3582b6f844c5d5a6a5 Mon Sep 17 00:00:00 2001
From: Tomas Vondra <tomas.vondra@postgresql.org>
Date: Thu, 10 Feb 2022 15:18:59 +0100
Subject: [PATCH 1/2] Add support for decoding sequences to built-in
replication
---
doc/src/sgml/catalogs.sgml | 71 ++++
doc/src/sgml/ref/alter_publication.sgml | 24 +-
doc/src/sgml/ref/alter_subscription.sgml | 4 +-
src/backend/catalog/pg_publication.c | 149 ++++++++-
src/backend/catalog/system_views.sql | 10 +
src/backend/commands/publicationcmds.c | 350 +++++++++++++++++++-
src/backend/commands/sequence.c | 79 +++++
src/backend/commands/subscriptioncmds.c | 272 +++++++++++++++
src/backend/executor/execReplication.c | 2 +-
src/backend/nodes/copyfuncs.c | 1 +
src/backend/nodes/equalfuncs.c | 1 +
src/backend/parser/gram.y | 32 ++
src/backend/replication/logical/proto.c | 52 +++
src/backend/replication/logical/tablesync.c | 114 ++++++-
src/backend/replication/logical/worker.c | 60 ++++
src/backend/replication/pgoutput/pgoutput.c | 85 ++++-
src/backend/utils/cache/relcache.c | 4 +-
src/bin/psql/tab-complete.c | 14 +-
src/include/catalog/pg_proc.dat | 5 +
src/include/catalog/pg_publication.h | 14 +
src/include/commands/sequence.h | 1 +
src/include/nodes/parsenodes.h | 6 +
src/include/replication/logicalproto.h | 19 ++
src/include/replication/pgoutput.h | 1 +
src/test/regress/expected/rules.out | 8 +
src/test/subscription/t/028_sequences.pl | 196 +++++++++++
26 files changed, 1538 insertions(+), 36 deletions(-)
create mode 100644 src/test/subscription/t/028_sequences.pl
diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 879d2dbce03..271dc03e5a2 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -9540,6 +9540,11 @@ SCRAM-SHA-256$<replaceable><iteration count></replaceable>:<replaceable>&l
<entry>prepared transactions</entry>
</row>
+ <row>
+ <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+ <entry>publications and their associated sequences</entry>
+ </row>
+
<row>
<entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
<entry>publications and their associated tables</entry>
@@ -11375,6 +11380,72 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
</sect1>
+ <sect1 id="view-pg-publication-sequences">
+ <title><structname>pg_publication_sequences</structname></title>
+
+ <indexterm zone="view-pg-publication-sequences">
+ <primary>pg_publication_sequences</primary>
+ </indexterm>
+
+ <para>
+ The view <structname>pg_publication_sequences</structname> provides
+ information about the mapping between publications and the sequences they
+ contain. Unlike the underlying catalog
+ <link linkend="catalog-pg-publication-rel"><structname>pg_publication_rel</structname></link>,
+ this view expands
+ publications defined as <literal>FOR ALL SEQUENCES</literal>, so for such
+ publications there will be a row for each eligible sequence.
+ </para>
+
+ <table>
+ <title><structname>pg_publication_sequences</structname> Columns</title>
+ <tgroup cols="1">
+ <thead>
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ Column Type
+ </para>
+ <para>
+ Description
+ </para></entry>
+ </row>
+ </thead>
+
+ <tbody>
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>pubname</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+ </para>
+ <para>
+ Name of publication
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>schemaname</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+ </para>
+ <para>
+ Name of schema containing sequence
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>sequencename</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+ </para>
+ <para>
+ Name of sequence
+ </para></entry>
+ </row>
+ </tbody>
+ </tgroup>
+ </table>
+ </sect1>
+
<sect1 id="view-pg-publication-tables">
<title><structname>pg_publication_tables</structname></title>
diff --git a/doc/src/sgml/ref/alter_publication.sgml b/doc/src/sgml/ref/alter_publication.sgml
index 7c7c27bf7ce..9da8274ae2c 100644
--- a/doc/src/sgml/ref/alter_publication.sgml
+++ b/doc/src/sgml/ref/alter_publication.sgml
@@ -31,7 +31,9 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
<phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [, ... ]
+ SEQUENCE <replaceable class="parameter">sequence_name</replaceable> [ * ] [, ... ]
ALL TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+ ALL SEQUENCES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
</synopsis>
</refsynopsisdiv>
@@ -56,7 +58,18 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
</para>
<para>
- The fourth variant of this command listed in the synopsis can change
+ The next three variants change which sequences are part of the publication.
+ The <literal>SET SEQUENCE</literal> clause will replace the list of sequences
+ in the publication with the specified one. The <literal>ADD SEQUENCE</literal>
+ and <literal>DROP SEQUENCE</literal> clauses will add and remove one or more
+ sequences from the publication. Note that adding sequences to a publication
+ that is already subscribed to will require an <literal>ALTER SUBSCRIPTION
+ ... REFRESH PUBLICATION</literal> action on the subscribing side in order
+ to become effective.
+ </para>
+
+ <para>
+ The seventh variant of this command listed in the synopsis can change
all of the publication properties specified in
<xref linkend="sql-createpublication"/>. Properties not mentioned in the
command retain their previous settings.
@@ -123,6 +136,15 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><replaceable class="parameter">sequence_name</replaceable></term>
+ <listitem>
+ <para>
+ Name of an existing sequence.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><literal>SET ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
<listitem>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 0b027cc3462..8f28cf03f40 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -147,7 +147,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
<listitem>
<para>
Fetch missing table information from publisher. This will start
- replication of tables that were added to the subscribed-to publications
+ replication of tables and sequences that were added to the subscribed-to publications
since <command>CREATE SUBSCRIPTION</command> or
the last invocation of <command>REFRESH PUBLICATION</command>.
</para>
@@ -164,7 +164,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
Specifies whether to copy pre-existing data in the publications
that are being subscribed to when the replication starts.
The default is <literal>true</literal>. (Previously-subscribed
- tables are not copied.)
+ tables and sequences are not copied.)
</para>
</listitem>
</varlistentry>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index e14ca2f5630..1a9e05ba98b 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -54,7 +54,8 @@ check_publication_add_relation(Relation targetrel)
{
/* Must be a regular or partitioned table */
if (RelationGetForm(targetrel)->relkind != RELKIND_RELATION &&
- RelationGetForm(targetrel)->relkind != RELKIND_PARTITIONED_TABLE)
+ RelationGetForm(targetrel)->relkind != RELKIND_PARTITIONED_TABLE &&
+ RelationGetForm(targetrel)->relkind != RELKIND_SEQUENCE)
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
errmsg("cannot add relation \"%s\" to publication",
@@ -131,7 +132,8 @@ static bool
is_publishable_class(Oid relid, Form_pg_class reltuple)
{
return (reltuple->relkind == RELKIND_RELATION ||
- reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+ reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+ reltuple->relkind == RELKIND_SEQUENCE) &&
!IsCatalogRelationOid(relid) &&
reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
relid >= FirstNormalObjectId;
@@ -503,6 +505,11 @@ GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
Form_pg_publication_rel pubrel;
pubrel = (Form_pg_publication_rel) GETSTRUCT(tup);
+
+ /* skip sequences here */
+ if (get_rel_relkind(pubrel->prrelid) == RELKIND_SEQUENCE)
+ continue;
+
result = GetPubPartitionOptionRelations(result, pub_partopt,
pubrel->prrelid);
}
@@ -517,6 +524,49 @@ GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
return result;
}
+/*
+ * Gets list of relation oids for a publication (sequences only).
+ *
+ * This should only be used for normal publications; FOR ALL SEQUENCES
+ * publications should use GetAllSequencesPublicationRelations().
+ */
+List *
+GetPublicationSequenceRelations(Oid pubid)
+{
+ List *result;
+ Relation pubrelsrel;
+ ScanKeyData scankey;
+ SysScanDesc scan;
+ HeapTuple tup;
+
+ /* Find all sequences associated with the publication. */
+ pubrelsrel = table_open(PublicationRelRelationId, AccessShareLock);
+
+ ScanKeyInit(&scankey,
+ Anum_pg_publication_rel_prpubid,
+ BTEqualStrategyNumber, F_OIDEQ,
+ ObjectIdGetDatum(pubid));
+
+ scan = systable_beginscan(pubrelsrel, PublicationRelPrrelidPrpubidIndexId,
+ true, NULL, 1, &scankey);
+
+ result = NIL;
+ while (HeapTupleIsValid(tup = systable_getnext(scan)))
+ {
+ Form_pg_publication_rel pubrel;
+
+ pubrel = (Form_pg_publication_rel) GETSTRUCT(tup);
+
+ if (get_rel_relkind(pubrel->prrelid) == RELKIND_SEQUENCE)
+ result = lappend_oid(result, pubrel->prrelid);
+ }
+
+ systable_endscan(scan);
+ table_close(pubrelsrel, AccessShareLock);
+
+ return result;
+}
+
/*
* Gets list of publication oids for publications marked as FOR ALL TABLES.
*/
@@ -762,6 +812,46 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
return result;
}
+/*
+ * Gets the list of all sequences published by FOR ALL SEQUENCES
+ * publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+ Relation classRel;
+ ScanKeyData key[1];
+ TableScanDesc scan;
+ HeapTuple tuple;
+ List *result = NIL;
+
+ classRel = table_open(RelationRelationId, AccessShareLock);
+
+ ScanKeyInit(&key[0],
+ Anum_pg_class_relkind,
+ BTEqualStrategyNumber, F_CHAREQ,
+ CharGetDatum(RELKIND_SEQUENCE));
+
+ scan = table_beginscan_catalog(classRel, 1, key);
+
+ while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+ {
+ Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+ Oid relid = relForm->oid;
+
+ if (is_publishable_class(relid, relForm))
+ result = lappend_oid(result, relid);
+ }
+
+ table_endscan(scan);
+
+ table_close(classRel, AccessShareLock);
+ return result;
+}
+
/*
* Get publication using oid
*
@@ -784,10 +874,12 @@ GetPublication(Oid pubid)
pub->oid = pubid;
pub->name = pstrdup(NameStr(pubform->pubname));
pub->alltables = pubform->puballtables;
+ pub->allsequences = pubform->puballsequences;
pub->pubactions.pubinsert = pubform->pubinsert;
pub->pubactions.pubupdate = pubform->pubupdate;
pub->pubactions.pubdelete = pubform->pubdelete;
pub->pubactions.pubtruncate = pubform->pubtruncate;
+ pub->pubactions.pubsequence = pubform->pubsequence;
pub->pubviaroot = pubform->pubviaroot;
ReleaseSysCache(tup);
@@ -937,3 +1029,56 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
SRF_RETURN_DONE(funcctx);
}
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+ FuncCallContext *funcctx;
+ char *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+ Publication *publication;
+ List *sequences;
+
+ /* stuff done only on the first call of the function */
+ if (SRF_IS_FIRSTCALL())
+ {
+ MemoryContext oldcontext;
+
+ /* create a function context for cross-call persistence */
+ funcctx = SRF_FIRSTCALL_INIT();
+
+ /* switch to memory context appropriate for multiple function calls */
+ oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+ publication = GetPublicationByName(pubname, false);
+
+ /*
+ * For a FOR ALL SEQUENCES publication return all sequences in the
+ * database, otherwise just the sequences explicitly added to the
+ * publication.
+ */
+ if (publication->allsequences)
+ sequences = GetAllSequencesPublicationRelations();
+ else
+ sequences = GetPublicationSequenceRelations(publication->oid);
+
+ funcctx->user_fctx = (void *) sequences;
+
+ MemoryContextSwitchTo(oldcontext);
+ }
+
+ /* stuff done on every call of the function */
+ funcctx = SRF_PERCALL_SETUP();
+ sequences = (List *) funcctx->user_fctx;
+
+ if (funcctx->call_cntr < list_length(sequences))
+ {
+ Oid relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+ SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+ }
+
+ SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 3cb69b1f87b..b5cc33aca34 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -374,6 +374,16 @@ CREATE VIEW pg_publication_tables AS
pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
WHERE C.oid = GPT.relid;
+CREATE VIEW pg_publication_sequences AS
+ SELECT
+ P.pubname AS pubname,
+ N.nspname AS schemaname,
+ C.relname AS sequencename
+ FROM pg_publication P,
+ LATERAL pg_get_publication_sequences(P.pubname) GPT,
+ pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+ WHERE C.oid = GPT.relid;
+
CREATE VIEW pg_locks AS
SELECT * FROM pg_lock_status() AS L;
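(Side note, not part of the patch: the new view is meant to be used the same way as pg_publication_tables. For illustration, with made-up names:

    CREATE PUBLICATION seq_pub FOR SEQUENCE s;
    SELECT * FROM pg_publication_sequences WHERE pubname = 'seq_pub';

should list the schema and name of "s".)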
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 0e4bb97fb73..3bc2e8ccb66 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -16,6 +16,7 @@
#include "access/genam.h"
#include "access/htup_details.h"
+#include "access/relation.h"
#include "access/table.h"
#include "access/xact.h"
#include "catalog/catalog.h"
@@ -59,6 +60,12 @@ static void PublicationAddSchemas(Oid pubid, List *schemas, bool if_not_exists,
AlterPublicationStmt *stmt);
static void PublicationDropSchemas(Oid pubid, List *schemas, bool missing_ok);
+static List *OpenSequenceList(List *sequences);
+static void CloseSequenceList(List *rels);
+static void PublicationAddSequences(Oid pubid, List *rels, bool if_not_exists,
+ AlterPublicationStmt *stmt);
+static void PublicationDropSequences(Oid pubid, List *rels, bool missing_ok);
+
static void
parse_publication_options(ParseState *pstate,
List *options,
@@ -77,6 +84,7 @@ parse_publication_options(ParseState *pstate,
pubactions->pubupdate = true;
pubactions->pubdelete = true;
pubactions->pubtruncate = true;
+ pubactions->pubsequence = true;
*publish_via_partition_root = false;
/* Parse options */
@@ -101,6 +109,7 @@ parse_publication_options(ParseState *pstate,
pubactions->pubupdate = false;
pubactions->pubdelete = false;
pubactions->pubtruncate = false;
+ pubactions->pubsequence = false;
*publish_given = true;
publish = defGetString(defel);
@@ -123,6 +132,8 @@ parse_publication_options(ParseState *pstate,
pubactions->pubdelete = true;
else if (strcmp(publish_opt, "truncate") == 0)
pubactions->pubtruncate = true;
+ else if (strcmp(publish_opt, "sequence") == 0)
+ pubactions->pubsequence = true;
else
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
@@ -149,7 +160,9 @@ parse_publication_options(ParseState *pstate,
*/
static void
ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
- List **rels, List **schemas)
+ List **tables, List **sequences,
+ List **tables_schemas, List **sequences_schemas,
+ List **schemas)
{
ListCell *cell;
PublicationObjSpec *pubobj;
@@ -167,12 +180,23 @@ ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
switch (pubobj->pubobjtype)
{
case PUBLICATIONOBJ_TABLE:
- *rels = lappend(*rels, pubobj->pubtable);
+ *tables = lappend(*tables, pubobj->pubtable);
+ break;
+ case PUBLICATIONOBJ_SEQUENCE:
+ *sequences = lappend(*sequences, pubobj->pubtable);
break;
case PUBLICATIONOBJ_TABLES_IN_SCHEMA:
schemaid = get_namespace_oid(pubobj->name, false);
/* Filter out duplicates if user specifies "sch1, sch1" */
+ *tables_schemas = list_append_unique_oid(*tables_schemas, schemaid);
+ *schemas = list_append_unique_oid(*schemas, schemaid);
+ break;
+ case PUBLICATIONOBJ_SEQUENCES_IN_SCHEMA:
+ schemaid = get_namespace_oid(pubobj->name, false);
+
+ /* Filter out duplicates if user specifies "sch1, sch1" */
+ *sequences_schemas = list_append_unique_oid(*sequences_schemas, schemaid);
*schemas = list_append_unique_oid(*schemas, schemaid);
break;
case PUBLICATIONOBJ_TABLES_IN_CUR_SCHEMA:
@@ -186,6 +210,21 @@ ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
list_free(search_path);
/* Filter out duplicates if user specifies "sch1, sch1" */
+ *tables_schemas = list_append_unique_oid(*tables_schemas, schemaid);
+ *schemas = list_append_unique_oid(*schemas, schemaid);
+ break;
+ case PUBLICATIONOBJ_SEQUENCES_IN_CUR_SCHEMA:
+ search_path = fetch_search_path(false);
+ if (search_path == NIL) /* nothing valid in search_path? */
+ ereport(ERROR,
+ errcode(ERRCODE_UNDEFINED_SCHEMA),
+ errmsg("no schema has been selected for CURRENT_SCHEMA"));
+
+ schemaid = linitial_oid(search_path);
+ list_free(search_path);
+
+ /* Filter out duplicates if user specifies "sch1, sch1" */
+ *sequences_schemas = list_append_unique_oid(*sequences_schemas, schemaid);
*schemas = list_append_unique_oid(*schemas, schemaid);
break;
default:
@@ -251,7 +290,10 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
bool publish_via_partition_root_given;
bool publish_via_partition_root;
AclResult aclresult;
- List *relations = NIL;
+ List *tables = NIL;
+ List *sequences = NIL;
+ List *tables_schemaidlist = NIL;
+ List *sequences_schemaidlist = NIL;
List *schemaidlist = NIL;
/* must have CREATE privilege on database */
@@ -306,6 +348,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
BoolGetDatum(pubactions.pubdelete);
values[Anum_pg_publication_pubtruncate - 1] =
BoolGetDatum(pubactions.pubtruncate);
+ values[Anum_pg_publication_pubsequence - 1] =
+ BoolGetDatum(pubactions.pubsequence);
values[Anum_pg_publication_pubviaroot - 1] =
BoolGetDatum(publish_via_partition_root);
@@ -330,26 +374,40 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
}
else
{
- ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
+ ObjectsInPublicationToOids(stmt->pubobjects, pstate,
+ &tables, &sequences,
+ &tables_schemaidlist,
+ &sequences_schemaidlist,
&schemaidlist);
/* FOR ALL TABLES IN SCHEMA requires superuser */
- if (list_length(schemaidlist) > 0 && !superuser())
+ if (list_length(tables_schemaidlist) > 0 && !superuser())
ereport(ERROR,
errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
errmsg("must be superuser to create FOR ALL TABLES IN SCHEMA publication"));
- if (list_length(relations) > 0)
+ if (list_length(tables) > 0)
{
List *rels;
- rels = OpenTableList(relations);
- CheckObjSchemaNotAlreadyInPublication(rels, schemaidlist,
+ rels = OpenTableList(tables);
+ CheckObjSchemaNotAlreadyInPublication(rels, tables_schemaidlist,
PUBLICATIONOBJ_TABLE);
PublicationAddTables(puboid, rels, true, NULL);
CloseTableList(rels);
}
+ if (list_length(sequences) > 0)
+ {
+ List *rels;
+
+ rels = OpenSequenceList(sequences);
+ CheckObjSchemaNotAlreadyInPublication(rels, sequences_schemaidlist,
+ PUBLICATIONOBJ_SEQUENCE);
+ PublicationAddSequences(puboid, rels, true, NULL);
+ CloseSequenceList(rels);
+ }
+
if (list_length(schemaidlist) > 0)
{
/*
@@ -653,12 +711,13 @@ AlterPublicationSchemas(AlterPublicationStmt *stmt,
*/
static void
CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
- List *tables, List *schemaidlist)
+ List *tables, List *tables_schemaidlist,
+ List *sequences, List *sequences_schemaidlist)
{
Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
if ((stmt->action == AP_AddObjects || stmt->action == AP_SetObjects) &&
- schemaidlist && !superuser())
+ (tables_schemaidlist || sequences_schemaidlist) && !superuser())
ereport(ERROR,
(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
errmsg("must be superuser to add or set schemas")));
@@ -667,13 +726,24 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
* Check that user is allowed to manipulate the publication tables in
* schema
*/
- if (schemaidlist && pubform->puballtables)
+ if (tables_schemaidlist && pubform->puballtables)
ereport(ERROR,
(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
errmsg("publication \"%s\" is defined as FOR ALL TABLES",
NameStr(pubform->pubname)),
errdetail("Tables from schema cannot be added to, dropped from, or set on FOR ALL TABLES publications.")));
+ /*
+ * Check that user is allowed to manipulate the publication sequences in
+ * schema
+ */
+ if (sequences_schemaidlist && pubform->puballsequences)
+ ereport(ERROR,
+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("publication \"%s\" is defined as FOR ALL SEQUENCES",
+ NameStr(pubform->pubname)),
+ errdetail("Sequences from schema cannot be added to, dropped from, or set on FOR ALL SEQUENCES publications.")));
+
/* Check that user is allowed to manipulate the publication tables. */
if (tables && pubform->puballtables)
ereport(ERROR,
@@ -681,6 +751,108 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
errmsg("publication \"%s\" is defined as FOR ALL TABLES",
NameStr(pubform->pubname)),
errdetail("Tables cannot be added to or dropped from FOR ALL TABLES publications.")));
+
+ /* Check that user is allowed to manipulate the publication sequences. */
+ if (sequences && pubform->puballsequences)
+ ereport(ERROR,
+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("publication \"%s\" is defined as FOR ALL SEQUENCES",
+ NameStr(pubform->pubname)),
+ errdetail("Sequences cannot be added to or dropped from FOR ALL SEQUENCES publications.")));
+}
+
+/*
+ * Add or remove sequence to/from publication.
+ */
+static void
+AlterPublicationSequences(AlterPublicationStmt *stmt, HeapTuple tup,
+ List *sequences, List *schemaidlist)
+{
+ List *rels = NIL;
+ Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
+ Oid pubid = pubform->oid;
+
+ /*
+ * It is quite possible that for the SET case the user has not specified
+ * any sequences, in which case we need to remove all the existing
+ * sequences.
+ */
+ if (!sequences && stmt->action != AP_SetObjects)
+ return;
+
+ rels = OpenSequenceList(sequences);
+
+ if (stmt->action == AP_AddObjects)
+ {
+ List *schemas = NIL;
+
+ /*
+ * Check if the relation is member of the existing schema in the
+ * publication or member of the schema list specified.
+ */
+ schemas = list_concat_copy(schemaidlist, GetPublicationSchemas(pubid));
+ CheckObjSchemaNotAlreadyInPublication(rels, schemas,
+ PUBLICATIONOBJ_SEQUENCE);
+ PublicationAddSequences(pubid, rels, false, stmt);
+ }
+ else if (stmt->action == AP_DropObjects)
+ PublicationDropSequences(pubid, rels, false);
+ else /* DEFELEM_SET */
+ {
+ List *oldrelids = GetPublicationSequenceRelations(pubid);
+ List *delrels = NIL;
+ ListCell *oldlc;
+
+ CheckObjSchemaNotAlreadyInPublication(rels, schemaidlist,
+ PUBLICATIONOBJ_SEQUENCE);
+
+ /* Calculate which relations to drop. */
+ foreach(oldlc, oldrelids)
+ {
+ Oid oldrelid = lfirst_oid(oldlc);
+ ListCell *newlc;
+ bool found = false;
+
+ foreach(newlc, rels)
+ {
+ PublicationRelInfo *newpubrel;
+
+ newpubrel = (PublicationRelInfo *) lfirst(newlc);
+ if (RelationGetRelid(newpubrel->relation) == oldrelid)
+ {
+ found = true;
+ break;
+ }
+ }
+ /* Not yet in the list, open it and add to the list */
+ if (!found)
+ {
+ Relation oldrel;
+ PublicationRelInfo *pubrel;
+
+ /* Wrap relation into PublicationRelInfo */
+ oldrel = table_open(oldrelid, ShareUpdateExclusiveLock);
+
+ pubrel = palloc(sizeof(PublicationRelInfo));
+ pubrel->relation = oldrel;
+
+ delrels = lappend(delrels, pubrel);
+ }
+ }
+
+ /* And drop them. */
+ PublicationDropSequences(pubid, delrels, true);
+
+ /*
+ * Don't bother calculating the difference for adding, we'll catch and
+ * skip existing ones when doing catalog update.
+ */
+ PublicationAddSequences(pubid, rels, true, stmt);
+
+ CloseSequenceList(delrels);
+ }
+
+ CloseSequenceList(rels);
}
/*
@@ -718,13 +890,21 @@ AlterPublication(ParseState *pstate, AlterPublicationStmt *stmt)
AlterPublicationOptions(pstate, stmt, rel, tup);
else
{
- List *relations = NIL;
+ List *tables = NIL;
+ List *sequences = NIL;
+ List *tables_schemaidlist = NIL;
+ List *sequences_schemaidlist = NIL;
List *schemaidlist = NIL;
- ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
+ ObjectsInPublicationToOids(stmt->pubobjects, pstate,
+ &tables, &sequences,
+ &tables_schemaidlist,
+ &sequences_schemaidlist,
&schemaidlist);
- CheckAlterPublication(stmt, tup, relations, schemaidlist);
+ CheckAlterPublication(stmt, tup,
+ tables, tables_schemaidlist,
+ sequences, sequences_schemaidlist);
/*
* Lock the publication so nobody else can do anything with it. This
@@ -749,7 +929,9 @@ AlterPublication(ParseState *pstate, AlterPublicationStmt *stmt)
errmsg("publication \"%s\" does not exist",
stmt->pubname));
- AlterPublicationTables(stmt, tup, relations, schemaidlist);
+ AlterPublicationTables(stmt, tup, tables, tables_schemaidlist);
+ AlterPublicationSequences(stmt, tup, sequences, sequences_schemaidlist);
+
AlterPublicationSchemas(stmt, tup, schemaidlist);
}
@@ -1157,6 +1339,144 @@ PublicationDropSchemas(Oid pubid, List *schemas, bool missing_ok)
}
}
+/*
+ * Open sequences specified by a PublicationTable list.
+ * In the returned list of PublicationRelInfo, sequences are locked
+ * in ShareUpdateExclusiveLock mode in order to add them to a publication.
+ */
+static List *
+OpenSequenceList(List *sequences)
+{
+ List *relids = NIL;
+ List *rels = NIL;
+ ListCell *lc;
+
+ /*
+ * Open, share-lock, and check all the explicitly-specified sequences.
+ */
+ foreach(lc, sequences)
+ {
+ PublicationTable *s = lfirst_node(PublicationTable, lc);
+ Relation rel;
+ Oid myrelid;
+ PublicationRelInfo *pub_rel;
+
+ /* Allow query cancel in case this takes a long time */
+ CHECK_FOR_INTERRUPTS();
+
+ rel = table_openrv(s->relation, ShareUpdateExclusiveLock);
+ myrelid = RelationGetRelid(rel);
+
+ /*
+ * Filter out duplicates if user specifies "foo, foo".
+ *
+ * Note that this algorithm is known to not be very efficient (O(N^2))
+ * but given that it only works on the list of sequences given to us
+ * by the user it's deemed acceptable.
+ */
+ if (list_member_oid(relids, myrelid))
+ {
+ table_close(rel, ShareUpdateExclusiveLock);
+ continue;
+ }
+
+ pub_rel = palloc(sizeof(PublicationRelInfo));
+ pub_rel->relation = rel;
+ rels = lappend(rels, pub_rel);
+ relids = lappend_oid(relids, myrelid);
+ }
+
+ list_free(relids);
+
+ return rels;
+}
+
+/*
+ * Close all relations in the list.
+ */
+static void
+CloseSequenceList(List *rels)
+{
+ ListCell *lc;
+
+ foreach(lc, rels)
+ {
+ PublicationRelInfo *pub_rel;
+
+ pub_rel = (PublicationRelInfo *) lfirst(lc);
+ table_close(pub_rel->relation, NoLock);
+ }
+}
+
+/*
+ * Add listed sequences to the publication.
+ */
+static void
+PublicationAddSequences(Oid pubid, List *rels, bool if_not_exists,
+ AlterPublicationStmt *stmt)
+{
+ ListCell *lc;
+
+ Assert(!stmt || !stmt->for_all_sequences);
+
+ foreach(lc, rels)
+ {
+ PublicationRelInfo *pub_rel = (PublicationRelInfo *) lfirst(lc);
+ Relation rel = pub_rel->relation;
+ ObjectAddress obj;
+
+ /* Must be owner of the sequence or superuser. */
+ if (!pg_class_ownercheck(RelationGetRelid(rel), GetUserId()))
+ aclcheck_error(ACLCHECK_NOT_OWNER, get_relkind_objtype(rel->rd_rel->relkind),
+ RelationGetRelationName(rel));
+
+ obj = publication_add_relation(pubid, pub_rel, if_not_exists);
+ if (stmt)
+ {
+ EventTriggerCollectSimpleCommand(obj, InvalidObjectAddress,
+ (Node *) stmt);
+
+ InvokeObjectPostCreateHook(PublicationRelRelationId,
+ obj.objectId, 0);
+ }
+ }
+}
+
+/*
+ * Remove listed sequences from the publication.
+ */
+static void
+PublicationDropSequences(Oid pubid, List *rels, bool missing_ok)
+{
+ ObjectAddress obj;
+ ListCell *lc;
+ Oid prid;
+
+ foreach(lc, rels)
+ {
+ PublicationRelInfo *pubrel = (PublicationRelInfo *) lfirst(lc);
+ Relation rel = pubrel->relation;
+ Oid relid = RelationGetRelid(rel);
+
+ prid = GetSysCacheOid2(PUBLICATIONRELMAP, Anum_pg_publication_rel_oid,
+ ObjectIdGetDatum(relid),
+ ObjectIdGetDatum(pubid));
+ if (!OidIsValid(prid))
+ {
+ if (missing_ok)
+ continue;
+
+ ereport(ERROR,
+ (errcode(ERRCODE_UNDEFINED_OBJECT),
+ errmsg("relation \"%s\" is not part of the publication",
+ RelationGetRelationName(rel))));
+ }
+
+ ObjectAddressSet(obj, PublicationRelRelationId, prid);
+ performDeletion(&obj, DROP_CASCADE, 0);
+ }
+}
+
+
/*
* Internal workhorse for changing a publication owner
*/
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index ab592ce2f15..fe4f21ec438 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -336,6 +336,85 @@ ResetSequence(Oid seq_relid)
relation_close(seq_rel, NoLock);
}
+/*
+ * Set a sequence to the given state (last_value, log_cnt, is_called).
+ *
+ * The change is made transactionally, so that on failure of the current
+ * transaction, the sequence will be restored to its previous state.
+ * We do that by creating a whole new relfilenode for the sequence; so this
+ * works much like the rewriting forms of ALTER TABLE.
+ *
+ * Caller is assumed to have acquired AccessExclusiveLock on the sequence,
+ * which must not be released until end of transaction. Caller is also
+ * responsible for permissions checking.
+ */
+void
+ResetSequence2(Oid seq_relid, int64 last_value, int64 log_cnt, bool is_called)
+{
+ Relation seq_rel;
+ SeqTable elm;
+ Form_pg_sequence_data seq;
+ Buffer buf;
+ HeapTupleData seqdatatuple;
+ HeapTuple tuple;
+
+ /*
+ * Read the old sequence. This does a bit more work than really
+ * necessary, but it's simple, and we do want to double-check that it's
+ * indeed a sequence.
+ */
+ init_sequence(seq_relid, &elm, &seq_rel);
+ (void) read_seq_tuple(seq_rel, &buf, &seqdatatuple);
+
+ /*
+ * Copy the existing sequence tuple.
+ */
+ tuple = heap_copytuple(&seqdatatuple);
+
+ /* Now we're done with the old page */
+ UnlockReleaseBuffer(buf);
+
+ /*
+ * Modify the copied tuple to execute the restart (compare the RESTART
+ * action in AlterSequence)
+ */
+ seq = (Form_pg_sequence_data) GETSTRUCT(tuple);
+ seq->last_value = last_value;
+ seq->is_called = is_called;
+ seq->log_cnt = log_cnt;
+
+ /*
+ * Create a new storage file for the sequence.
+ */
+ RelationSetNewRelfilenode(seq_rel, seq_rel->rd_rel->relpersistence);
+
+ /*
+ * Ensure sequence's relfrozenxid is at 0, since it won't contain any
+ * unfrozen XIDs. Same with relminmxid, since a sequence will never
+ * contain multixacts.
+ */
+ Assert(seq_rel->rd_rel->relfrozenxid == InvalidTransactionId);
+ Assert(seq_rel->rd_rel->relminmxid == InvalidMultiXactId);
+
+ /*
+ * Insert the modified tuple into the new storage file.
+ *
+ * XXX Maybe this should also use created=true, just like the other places
+ * calling fill_seq_with_data. That's probably needed for correct cascading
+ * replication.
+ *
+ * XXX That'd mean all fill_seq_with_data callers use created=true, making
+ * the parameter unnecessary.
+ */
+ fill_seq_with_data(seq_rel, tuple);
+
+ /* Clear local cache so that we don't think we have cached numbers */
+ /* Note that we do not change the currval() state */
+ elm->cached = elm->last;
+
+ relation_close(seq_rel, NoLock);
+}
+
/*
* Initialize a sequence's relation with the specified tuple as content
*/
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 3ef6607d246..5beb67e7652 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -85,6 +85,7 @@ typedef struct SubOpts
} SubOpts;
static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
static void check_duplicates_in_publist(List *publist, Datum *datums);
static List *merge_publications(List *oldpublist, List *newpublist, bool addpub, const char *subname);
static void ReportSlotConnectionError(List *rstates, Oid subid, char *slotname, char *err);
@@ -496,6 +497,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
char *err;
WalReceiverConn *wrconn;
List *tables;
+ List *sequences;
ListCell *lc;
char table_state;
@@ -534,6 +536,26 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
InvalidXLogRecPtr);
}
+ /*
+ * Get the sequence list from publisher and build local sequence
+ * status info.
+ */
+ sequences = fetch_sequence_list(wrconn, publications);
+ foreach(lc, sequences)
+ {
+ RangeVar *rv = (RangeVar *) lfirst(lc);
+ Oid relid;
+
+ relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+ /* Check for supported relkind. */
+ CheckSubscriptionRelkind(get_rel_relkind(relid),
+ rv->schemaname, rv->relname);
+
+ AddSubscriptionRelState(subid, relid, table_state,
+ InvalidXLogRecPtr);
+ }
+
/*
* If requested, create permanent slot for the subscription. We
* won't use the initial snapshot for anything, so no need to
@@ -706,6 +728,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data)
{
Oid relid = subrel_local_oids[off];
+ /* XXX ignore sequences - maybe do this in GetSubscriptionRelations? */
+ if (get_rel_relkind(relid) == RELKIND_SEQUENCE)
+ continue;
+
if (!bsearch(&relid, pubrel_local_oids,
list_length(pubrel_names), sizeof(Oid), oid_cmp))
{
@@ -797,6 +823,183 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data)
ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
}
}
+
+ /*
+ * XXX now do the same thing for sequences, maybe before the preceding
+ * block, or earlier?
+ */
+
+ /* Get the sequence list from publisher. */
+ pubrel_names = fetch_sequence_list(wrconn, sub->publications);
+
+ /* Get local relation list. */
+ subrel_states = GetSubscriptionRelations(sub->oid);
+
+ /*
+ * Build qsorted array of local relation oids for faster lookup. This
+ * can potentially contain all relations in the database so speed of
+ * lookup is important.
+ */
+ subrel_local_oids = palloc(list_length(subrel_states) * sizeof(Oid));
+ off = 0;
+ foreach(lc, subrel_states)
+ {
+ SubscriptionRelState *relstate = (SubscriptionRelState *) lfirst(lc);
+
+ subrel_local_oids[off++] = relstate->relid;
+ }
+ qsort(subrel_local_oids, list_length(subrel_states),
+ sizeof(Oid), oid_cmp);
+
+ /*
+ * Rels that we want to remove from subscription and drop any slots
+ * and origins corresponding to them.
+ */
+ sub_remove_rels = palloc(list_length(subrel_states) * sizeof(SubRemoveRels));
+
+ /*
+ * Walk over the remote sequences and try to match them to locally
+ * known sequences. If the sequence is not known locally, create a new
+ * state for it.
+ *
+ * Also builds an array of local oids of the remote sequences for the
+ * next step.
+ */
+ off = 0;
+ pubrel_local_oids = palloc(list_length(pubrel_names) * sizeof(Oid));
+
+ foreach(lc, pubrel_names)
+ {
+ RangeVar *rv = (RangeVar *) lfirst(lc);
+ Oid relid;
+
+ relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+ /* Check for supported relkind. */
+ CheckSubscriptionRelkind(get_rel_relkind(relid),
+ rv->schemaname, rv->relname);
+
+ pubrel_local_oids[off++] = relid;
+
+ if (!bsearch(&relid, subrel_local_oids,
+ list_length(subrel_states), sizeof(Oid), oid_cmp))
+ {
+ AddSubscriptionRelState(sub->oid, relid,
+ copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
+ InvalidXLogRecPtr);
+ ereport(DEBUG1,
+ (errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
+ rv->schemaname, rv->relname, sub->name)));
+ }
+ }
+
+ /*
+ * Next remove state for sequences we should not care about anymore using
+ * the data we collected above
+ */
+ qsort(pubrel_local_oids, list_length(pubrel_names),
+ sizeof(Oid), oid_cmp);
+
+ remove_rel_len = 0;
+ for (off = 0; off < list_length(subrel_states); off++)
+ {
+ Oid relid = subrel_local_oids[off];
+
+ /* XXX ignore non-sequences - maybe do this in GetSubscriptionRelations? */
+ if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+ continue;
+
+ if (!bsearch(&relid, pubrel_local_oids,
+ list_length(pubrel_names), sizeof(Oid), oid_cmp))
+ {
+ char state;
+ XLogRecPtr statelsn;
+
+ /*
+ * Lock pg_subscription_rel with AccessExclusiveLock to
+ * prevent any race conditions with the apply worker
+ * re-launching workers at the same time this code is trying
+ * to remove those sequences.
+ *
+ * Even if new worker for this particular rel is restarted it
+ * won't be able to make any progress as we hold exclusive
+ * lock on subscription_rel till the transaction end. It will
+ * simply exit as there is no corresponding rel entry.
+ *
+ * This locking also ensures that the state of rels won't
+ * change till we are done with this refresh operation.
+ */
+ if (!rel)
+ rel = table_open(SubscriptionRelRelationId, AccessExclusiveLock);
+
+ /* Last known rel state. */
+ state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
+
+ sub_remove_rels[remove_rel_len].relid = relid;
+ sub_remove_rels[remove_rel_len++].state = state;
+
+ RemoveSubscriptionRel(sub->oid, relid);
+
+ logicalrep_worker_stop(sub->oid, relid);
+
+ /*
+ * For READY state, we would have already dropped the
+ * tablesync origin.
+ */
+ if (state != SUBREL_STATE_READY)
+ {
+ char originname[NAMEDATALEN];
+
+ /*
+ * Drop the tablesync's origin tracking if exists.
+ *
+ * It is possible that the origin is not yet created for
+ * tablesync worker, this can happen for the states before
+ * SUBREL_STATE_FINISHEDCOPY. The apply worker can also
+ * concurrently try to drop the origin and by this time
+ * the origin might be already removed. For these reasons,
+ * passing missing_ok = true.
+ */
+ ReplicationOriginNameForTablesync(sub->oid, relid, originname,
+ sizeof(originname));
+ replorigin_drop_by_name(originname, true, false);
+ }
+
+ ereport(DEBUG1,
+ (errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
+ get_namespace_name(get_rel_namespace(relid)),
+ get_rel_name(relid),
+ sub->name)));
+ }
+ }
+
+ /*
+ * Drop the tablesync slots associated with removed sequences. This has
+ * to be at the end because otherwise if there is an error while doing
+ * the database operations we won't be able to rollback dropped slots.
+ */
+ for (off = 0; off < remove_rel_len; off++)
+ {
+ if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
+ sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+ {
+ char syncslotname[NAMEDATALEN] = {0};
+
+ /*
+ * For READY/SYNCDONE states we know the tablesync slot has
+ * already been dropped by the tablesync worker.
+ *
+ * For other states, there is no certainty, maybe the slot
+ * does not exist yet. Also, if we fail after removing some of
+ * the slots, next time, it will again try to drop already
+ * dropped slots and fail. For these reasons, we allow
+ * missing_ok = true for the drop.
+ */
+ ReplicationSlotNameForTablesync(sub->oid, sub_remove_rels[off].relid,
+ syncslotname, sizeof(syncslotname));
+ ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
+ }
+ }
+
}
PG_FINALLY();
{
@@ -1616,6 +1819,75 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
return tablelist;
}
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+ WalRcvExecResult *res;
+ StringInfoData cmd;
+ TupleTableSlot *slot;
+ Oid tableRow[2] = {TEXTOID, TEXTOID};
+ ListCell *lc;
+ bool first;
+ List *tablelist = NIL;
+
+ Assert(list_length(publications) > 0);
+
+ initStringInfo(&cmd);
+ appendStringInfoString(&cmd, "SELECT DISTINCT s.schemaname, s.sequencename\n"
+ " FROM pg_catalog.pg_publication_sequences s\n"
+ " WHERE s.pubname IN (");
+ first = true;
+ foreach(lc, publications)
+ {
+ char *pubname = strVal(lfirst(lc));
+
+ if (first)
+ first = false;
+ else
+ appendStringInfoString(&cmd, ", ");
+
+ appendStringInfoString(&cmd, quote_literal_cstr(pubname));
+ }
+ appendStringInfoChar(&cmd, ')');
+
+ res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+ pfree(cmd.data);
+
+ if (res->status != WALRCV_OK_TUPLES)
+ ereport(ERROR,
+ (errmsg("could not receive list of replicated tables from the publisher: %s",
+ res->err)));
+
+ /* Process sequences. */
+ slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+ while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+ {
+ char *nspname;
+ char *relname;
+ bool isnull;
+ RangeVar *rv;
+
+ nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+ Assert(!isnull);
+ relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+ Assert(!isnull);
+
+ rv = makeRangeVar(nspname, relname, -1);
+ tablelist = lappend(tablelist, rv);
+
+ ExecClearTuple(slot);
+ }
+ ExecDropSingleTupleTableSlot(slot);
+
+ walrcv_clear_result(res);
+
+ return tablelist;
+}
+
/*
* This is to report the connection failure while dropping replication slots.
* Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index 313c87398b2..78f14119d98 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -608,7 +608,7 @@ void
CheckSubscriptionRelkind(char relkind, const char *nspname,
const char *relname)
{
- if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+ if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE && relkind != RELKIND_SEQUENCE)
ereport(ERROR,
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 6bd95bbce24..8b7e9710401 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -4852,6 +4852,7 @@ _copyCreatePublicationStmt(const CreatePublicationStmt *from)
COPY_NODE_FIELD(options);
COPY_NODE_FIELD(pubobjects);
COPY_SCALAR_FIELD(for_all_tables);
+ COPY_SCALAR_FIELD(for_all_sequences);
return newnode;
}
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index 4126516222b..55a7dbbddf3 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -2337,6 +2337,7 @@ _equalAlterPublicationStmt(const AlterPublicationStmt *a,
COMPARE_NODE_FIELD(options);
COMPARE_NODE_FIELD(pubobjects);
COMPARE_SCALAR_FIELD(for_all_tables);
+ COMPARE_SCALAR_FIELD(for_all_sequences);
COMPARE_SCALAR_FIELD(action);
return true;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index c4f32425060..bbde765cf56 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -9771,6 +9771,26 @@ PublicationObjSpec:
$$->pubobjtype = PUBLICATIONOBJ_TABLES_IN_CUR_SCHEMA;
$$->location = @5;
}
+ | SEQUENCE relation_expr
+ {
+ $$ = makeNode(PublicationObjSpec);
+ $$->pubobjtype = PUBLICATIONOBJ_SEQUENCE;
+ $$->pubtable = makeNode(PublicationTable);
+ $$->pubtable->relation = $2;
+ }
+ | ALL SEQUENCES IN_P SCHEMA ColId
+ {
+ $$ = makeNode(PublicationObjSpec);
+ $$->pubobjtype = PUBLICATIONOBJ_SEQUENCES_IN_SCHEMA;
+ $$->name = $5;
+ $$->location = @5;
+ }
+ | ALL SEQUENCES IN_P SCHEMA CURRENT_SCHEMA
+ {
+ $$ = makeNode(PublicationObjSpec);
+ $$->pubobjtype = PUBLICATIONOBJ_SEQUENCES_IN_CUR_SCHEMA;
+ $$->location = @5;
+ }
| ColId
{
$$ = makeNode(PublicationObjSpec);
@@ -10106,6 +10126,12 @@ UnlistenStmt:
}
;
+/*
+ * FIXME
+ *
+ * opt_publication_for_sequences and publication_for_sequences should be
+ * added as sequence-specific copies of the existing table rules
+ */
/*****************************************************************************
*
@@ -10114,6 +10140,12 @@ UnlistenStmt:
* BEGIN / COMMIT / ROLLBACK
* (also older versions END / ABORT)
*
+ * ALTER PUBLICATION name ADD SEQUENCE sequence [, sequence2]
+ *
+ * ALTER PUBLICATION name DROP SEQUENCE sequence [, sequence2]
+ *
+ * ALTER PUBLICATION name SET SEQUENCE sequence [, sequence2]
+ *
*****************************************************************************/
TransactionStmt:
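(Side note on the grammar additions above - they are intended to accept the sequence counterparts of the existing TABLE syntax, roughly:

    ALTER PUBLICATION seq_pub ADD SEQUENCE s2;
    ALTER PUBLICATION seq_pub DROP SEQUENCE s2;
    ALTER PUBLICATION seq_pub ADD ALL SEQUENCES IN SCHEMA public;

again with example names only.)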
diff --git a/src/backend/replication/logical/proto.c b/src/backend/replication/logical/proto.c
index 953942692ce..e8ead1387ae 100644
--- a/src/backend/replication/logical/proto.c
+++ b/src/backend/replication/logical/proto.c
@@ -647,6 +647,56 @@ logicalrep_write_message(StringInfo out, TransactionId xid, XLogRecPtr lsn,
pq_sendbytes(out, message, sz);
}
+/*
+ * Write SEQUENCE to stream
+ */
+void
+logicalrep_write_sequence(StringInfo out, Relation rel, TransactionId xid,
+ XLogRecPtr lsn, bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ uint8 flags = 0;
+ char *relname;
+
+ pq_sendbyte(out, LOGICAL_REP_MSG_SEQUENCE);
+
+ /* transaction ID (if not valid, we're not streaming) */
+ if (TransactionIdIsValid(xid))
+ pq_sendint32(out, xid);
+
+ pq_sendint8(out, flags);
+ pq_sendint64(out, lsn);
+
+ logicalrep_write_namespace(out, RelationGetNamespace(rel));
+ relname = RelationGetRelationName(rel);
+ pq_sendstring(out, relname);
+
+ pq_sendint8(out, transactional);
+ pq_sendint64(out, last_value);
+ pq_sendint64(out, log_cnt);
+ pq_sendint8(out, is_called);
+}
+
+/*
+ * Read SEQUENCE from the stream.
+ */
+void
+logicalrep_read_sequence(StringInfo in, LogicalRepSequence *seqdata)
+{
+ /* XXX skipping flags and lsn */
+ pq_getmsgint(in, 1);
+ pq_getmsgint64(in);
+
+ /* Read relation name from stream */
+ seqdata->nspname = pstrdup(logicalrep_read_namespace(in));
+ seqdata->seqname = pstrdup(pq_getmsgstring(in));
+
+ seqdata->transactional = pq_getmsgint(in, 1);
+ seqdata->last_value = pq_getmsgint64(in);
+ seqdata->log_cnt = pq_getmsgint64(in);
+ seqdata->is_called = pq_getmsgint(in, 1);
+}
+
/*
* Write relation description to the output stream.
*/
@@ -1203,6 +1253,8 @@ logicalrep_message_type(LogicalRepMsgType action)
return "STREAM ABORT";
case LOGICAL_REP_MSG_STREAM_PREPARE:
return "STREAM PREPARE";
+ case LOGICAL_REP_MSG_SEQUENCE:
+ return "SEQUENCE";
}
elog(ERROR, "invalid logical replication message type \"%c\"", action);
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index e596b69d466..f746c13ca44 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -100,6 +100,7 @@
#include "catalog/pg_subscription_rel.h"
#include "catalog/pg_type.h"
#include "commands/copy.h"
+#include "commands/sequence.h"
#include "miscadmin.h"
#include "parser/parse_relation.h"
#include "pgstat.h"
@@ -359,6 +360,12 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
*
* If the synchronization position is reached (SYNCDONE), then the table can
* be marked as READY and is no longer tracked.
+ *
+ * XXX This needs to handle sequences too - after AlterSubscription_refresh
+ * starts caring about sequences, GetSubscriptionNotReadyRelations won't
+ * return just tables, and we'll have to sync them here. Not sure it's worth
+ * creating a new "sync" worker per sequence, maybe we should just sync them
+ * in the current process (it's pretty light-weight).
*/
static void
process_syncing_tables_for_apply(XLogRecPtr current_lsn)
@@ -873,6 +880,95 @@ copy_table(Relation rel)
logicalrep_rel_close(relmapentry, NoLock);
}
+
+
+/*
+ * Read the current state (last_value, log_cnt, is_called) of a sequence
+ * from the publisher.
+ */
+static void
+fetch_sequence_data(char *nspname, char *relname,
+ int64 *last_value, int64 *log_cnt, bool *is_called)
+{
+ WalRcvExecResult *res;
+ StringInfoData cmd;
+ TupleTableSlot *slot;
+ Oid tableRow[3] = {INT8OID, INT8OID, BOOLOID};
+
+ initStringInfo(&cmd);
+ appendStringInfo(&cmd, "SELECT last_value, log_cnt, is_called\n"
+ " FROM %s", quote_qualified_identifier(nspname, relname));
+
+ res = walrcv_exec(LogRepWorkerWalRcvConn, cmd.data, 3, tableRow);
+ pfree(cmd.data);
+
+ if (res->status != WALRCV_OK_TUPLES)
+ ereport(ERROR,
+ (errmsg("could not receive list of replicated tables from the publisher: %s",
+ res->err)));
+
+ /* Process the sequence. */
+ slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+ while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+ {
+ bool isnull;
+
+ *last_value = DatumGetInt64(slot_getattr(slot, 1, &isnull));
+ Assert(!isnull);
+
+ *log_cnt = DatumGetInt64(slot_getattr(slot, 2, &isnull));
+ Assert(!isnull);
+
+ *is_called = DatumGetBool(slot_getattr(slot, 3, &isnull));
+ Assert(!isnull);
+
+ ExecClearTuple(slot);
+ }
+ ExecDropSingleTupleTableSlot(slot);
+
+ walrcv_clear_result(res);
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Caller is responsible for locking the local relation.
+ */
+static void
+copy_sequence(Relation rel)
+{
+ LogicalRepRelMapEntry *relmapentry;
+ LogicalRepRelation lrel;
+ int64 last_value = 0,
+ log_cnt = 0;
+ bool is_called = false;
+
+ /* Get the publisher relation info. */
+ fetch_remote_table_info(get_namespace_name(RelationGetNamespace(rel)),
+ RelationGetRelationName(rel), &lrel);
+
+ /* Put the relation into relmap. */
+ logicalrep_relmap_update(&lrel);
+
+ /* Map the publisher relation to local one. */
+ relmapentry = logicalrep_rel_open(lrel.remoteid, NoLock);
+ Assert(rel == relmapentry->localrel);
+
+
+ Assert(lrel.relkind == RELKIND_SEQUENCE);
+
+ fetch_sequence_data(lrel.nspname, lrel.relname, &last_value, &log_cnt, &is_called);
+
+ ResetSequence2(RelationGetRelid(rel), last_value, log_cnt, is_called);
+
+ logicalrep_rel_close(relmapentry, NoLock);
+}
+
+
+
+
/*
* Determine the tablesync slot name.
*
@@ -1134,10 +1230,20 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
originname)));
}
- /* Now do the initial data copy */
- PushActiveSnapshot(GetTransactionSnapshot());
- copy_table(rel);
- PopActiveSnapshot();
+ if (get_rel_relkind(RelationGetRelid(rel)) == RELKIND_SEQUENCE)
+ {
+ /* Now do the initial sequence copy */
+ PushActiveSnapshot(GetTransactionSnapshot());
+ copy_sequence(rel);
+ PopActiveSnapshot();
+ }
+ else
+ {
+ /* Now do the initial data copy */
+ PushActiveSnapshot(GetTransactionSnapshot());
+ copy_table(rel);
+ PopActiveSnapshot();
+ }
res = walrcv_exec(LogRepWorkerWalRcvConn, "COMMIT", 0, NULL);
if (res->status != WALRCV_OK_COMMAND)
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index d77bb32bb9e..68708d3907f 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -144,6 +144,7 @@
#include "catalog/pg_tablespace.h"
#include "commands/tablecmds.h"
#include "commands/tablespace.h"
+#include "commands/sequence.h"
#include "commands/trigger.h"
#include "executor/executor.h"
#include "executor/execPartition.h"
@@ -1093,6 +1094,61 @@ apply_handle_origin(StringInfo s)
errmsg_internal("ORIGIN message sent out of order")));
}
+/*
+ * Handle SEQUENCE message.
+ */
+static void
+apply_handle_sequence(StringInfo s)
+{
+ LogicalRepSequence seq;
+ Oid relid;
+
+ if (handle_streamed_transaction(LOGICAL_REP_MSG_SEQUENCE, s))
+ return;
+
+ logicalrep_read_sequence(s, &seq);
+
+ /*
+ * Transactional sequence updates must be part of a remote transaction;
+ * non-transactional updates must not be, and there must not be any
+ * transaction in progress for them.
+ */
+ Assert((!seq.transactional) || in_remote_transaction);
+ Assert(!(!seq.transactional && in_remote_transaction));
+ Assert(!(!seq.transactional && IsTransactionState()));
+
+ /*
+ * Make sure we're in a transaction (ResetSequence2 needs one). For
+ * non-transactional updates this always starts a new transaction,
+ * which we commit at the end of this function.
+ */
+ if (!IsTransactionState())
+ {
+ StartTransactionCommand();
+ maybe_reread_subscription();
+ }
+
+ relid = RangeVarGetRelid(makeRangeVar(seq.nspname,
+ seq.seqname, -1),
+ RowExclusiveLock, false);
+
+ /* lock the sequence in AccessExclusiveLock, as expected by ResetSequence2 */
+ elog(WARNING, "locking sequence %d in exclusive mode", relid);
+ LockRelationOid(relid, AccessExclusiveLock);
+
+ elog(WARNING, "applying sequence %s.%s transactional %d last_value %ld log_cnt %ld is_called %d",
+ seq.nspname, seq.seqname, seq.transactional, seq.last_value, seq.log_cnt, seq.is_called);
+
+ /* apply the sequence change */
+ ResetSequence2(relid, seq.last_value, seq.log_cnt, seq.is_called);
+
+ /*
+ * Commit the local transaction (we only do this when not in a remote
+ * transaction, i.e. for non-transactional sequence updates).
+ */
+ if (!in_remote_transaction)
+ CommitTransactionCommand();
+}
+
/*
* Handle STREAM START message.
*/
@@ -2421,6 +2477,10 @@ apply_dispatch(StringInfo s)
*/
break;
+ case LOGICAL_REP_MSG_SEQUENCE:
+ apply_handle_sequence(s);
+ return;
+
case LOGICAL_REP_MSG_STREAM_START:
apply_handle_stream_start(s);
break;
diff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c
index 6df705f90ff..b82c6b10305 100644
--- a/src/backend/replication/pgoutput/pgoutput.c
+++ b/src/backend/replication/pgoutput/pgoutput.c
@@ -49,6 +49,10 @@ static void pgoutput_message(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn, XLogRecPtr message_lsn,
bool transactional, const char *prefix,
Size sz, const char *message);
+static void pgoutput_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation rel, bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called);
static bool pgoutput_origin_filter(LogicalDecodingContext *ctx,
RepOriginId origin_id);
static void pgoutput_begin_prepare_txn(LogicalDecodingContext *ctx,
@@ -161,6 +165,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->change_cb = pgoutput_change;
cb->truncate_cb = pgoutput_truncate;
cb->message_cb = pgoutput_message;
+ cb->sequence_cb = pgoutput_sequence;
cb->commit_cb = pgoutput_commit_txn;
cb->begin_prepare_cb = pgoutput_begin_prepare_txn;
@@ -177,6 +182,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->stream_commit_cb = pgoutput_stream_commit;
cb->stream_change_cb = pgoutput_change;
cb->stream_message_cb = pgoutput_message;
+ cb->stream_sequence_cb = pgoutput_sequence;
cb->stream_truncate_cb = pgoutput_truncate;
/* transaction streaming - two-phase commit */
cb->stream_prepare_cb = pgoutput_stream_prepare_txn;
@@ -190,6 +196,7 @@ parse_output_parameters(List *options, PGOutputData *data)
bool publication_names_given = false;
bool binary_option_given = false;
bool messages_option_given = false;
+ bool sequences_option_given = false;
bool streaming_given = false;
bool two_phase_option_given = false;
@@ -197,6 +204,7 @@ parse_output_parameters(List *options, PGOutputData *data)
data->streaming = false;
data->messages = false;
data->two_phase = false;
+ data->sequences = true;
foreach(lc, options)
{
@@ -262,6 +270,16 @@ parse_output_parameters(List *options, PGOutputData *data)
data->messages = defGetBoolean(defel);
}
+ else if (strcmp(defel->defname, "sequences") == 0)
+ {
+ if (sequences_option_given)
+ ereport(ERROR,
+ (errcode(ERRCODE_SYNTAX_ERROR),
+ errmsg("conflicting or redundant options")));
+ sequences_option_given = true;
+
+ data->sequences = defGetBoolean(defel);
+ }
else if (strcmp(defel->defname, "streaming") == 0)
{
if (streaming_given)
@@ -858,6 +876,51 @@ pgoutput_message(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
OutputPluginWrite(ctx, true);
}
+static void
+pgoutput_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation rel, bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ PGOutputData *data = (PGOutputData *) ctx->output_plugin_private;
+ TransactionId xid = InvalidTransactionId;
+ RelationSyncEntry *relentry;
+
+ if (!data->sequences)
+ return;
+
+ if (!is_publishable_relation(rel))
+ return;
+
+ /*
+ * Remember the xid for the message in streaming mode. See
+ * pgoutput_change.
+ */
+ if (in_streaming)
+ xid = txn->xid;
+
+ relentry = get_rel_sync_entry(data, RelationGetRelid(rel));
+
+ /*
+ * First check the sequence filter.
+ *
+ * We handle just REORDER_BUFFER_CHANGE_SEQUENCE here.
+ */
+ if (!relentry->pubactions.pubsequence)
+ return;
+
+ OutputPluginPrepareWrite(ctx, true);
+ logicalrep_write_sequence(ctx->out,
+ rel,
+ xid,
+ sequence_lsn,
+ transactional,
+ last_value,
+ log_cnt,
+ is_called);
+ OutputPluginWrite(ctx, true);
+}
+
/*
* Currently we always forward.
*/
@@ -1141,7 +1204,8 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
entry->schema_sent = false;
entry->streamed_txns = NIL;
entry->pubactions.pubinsert = entry->pubactions.pubupdate =
- entry->pubactions.pubdelete = entry->pubactions.pubtruncate = false;
+ entry->pubactions.pubdelete = entry->pubactions.pubtruncate =
+ entry->pubactions.pubsequence = false;
entry->publish_as_relid = InvalidOid;
entry->map = NULL; /* will be set by maybe_send_schema() if
* needed */
@@ -1163,6 +1227,7 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
Oid publish_as_relid = relid;
bool am_partition = get_rel_relispartition(relid);
char relkind = get_rel_relkind(relid);
+ bool is_sequence = (get_rel_relkind(relid) == RELKIND_SEQUENCE);
/* Reload publications if needed before use. */
if (!publications_valid)
@@ -1191,6 +1256,7 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
entry->pubactions.pubupdate = false;
entry->pubactions.pubdelete = false;
entry->pubactions.pubtruncate = false;
+ entry->pubactions.pubsequence = false;
if (entry->map)
{
/*
@@ -1213,12 +1279,23 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
Publication *pub = lfirst(lc);
bool publish = false;
- if (pub->alltables)
+ if (pub->alltables && (!is_sequence))
{
publish = true;
if (pub->pubviaroot && am_partition)
publish_as_relid = llast_oid(get_partition_ancestors(relid));
}
+ else if (pub->allsequences && is_sequence)
+ {
+ publish = true;
+ }
+
+ /* if a sequence, just cross-check the list of publications */
+ if (!publish && is_sequence)
+ {
+ if (list_member_oid(pubids, pub->oid))
+ publish = true;
+ }
if (!publish)
{
@@ -1275,10 +1352,12 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
entry->pubactions.pubupdate |= pub->pubactions.pubupdate;
entry->pubactions.pubdelete |= pub->pubactions.pubdelete;
entry->pubactions.pubtruncate |= pub->pubactions.pubtruncate;
+ entry->pubactions.pubsequence |= pub->pubactions.pubsequence;
}
if (entry->pubactions.pubinsert && entry->pubactions.pubupdate &&
- entry->pubactions.pubdelete && entry->pubactions.pubtruncate)
+ entry->pubactions.pubdelete && entry->pubactions.pubtruncate &&
+ entry->pubactions.pubsequence)
break;
}
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index 2707fed12f4..45a8b3e490a 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -5586,6 +5586,7 @@ GetRelationPublicationActions(Relation relation)
pubactions->pubupdate |= pubform->pubupdate;
pubactions->pubdelete |= pubform->pubdelete;
pubactions->pubtruncate |= pubform->pubtruncate;
+ pubactions->pubsequence |= pubform->pubsequence;
ReleaseSysCache(tup);
@@ -5594,7 +5595,8 @@ GetRelationPublicationActions(Relation relation)
* other publications.
*/
if (pubactions->pubinsert && pubactions->pubupdate &&
- pubactions->pubdelete && pubactions->pubtruncate)
+ pubactions->pubdelete && pubactions->pubtruncate &&
+ pubactions->pubsequence)
break;
}
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 98882272130..4900c48ff19 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1782,20 +1782,20 @@ psql_completion(const char *text, int start, int end)
COMPLETE_WITH("ADD", "DROP", "OWNER TO", "RENAME TO", "SET");
/* ALTER PUBLICATION <name> ADD */
else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD"))
- COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE");
- else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD|SET", "TABLE") ||
- (HeadMatches("ALTER", "PUBLICATION", MatchAny, "ADD|SET", "TABLE") &&
+ COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE|SEQUENCE");
+ else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD|SET", "TABLE|SEQUENCE") ||
+ (HeadMatches("ALTER", "PUBLICATION", MatchAny, "ADD|SET", "TABLE|SEQUENCE") &&
ends_with(prev_wd, ',')))
COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_tables);
- else if (HeadMatches("ALTER", "PUBLICATION", MatchAny, "ADD|SET", "TABLE"))
+ else if (HeadMatches("ALTER", "PUBLICATION", MatchAny, "ADD|SET", "TABLE|SEQUENCE"))
COMPLETE_WITH(",");
/* ALTER PUBLICATION <name> DROP */
else if (Matches("ALTER", "PUBLICATION", MatchAny, "DROP"))
- COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE");
+ COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE|SEQUENCE");
/* ALTER PUBLICATION <name> SET */
else if (Matches("ALTER", "PUBLICATION", MatchAny, "SET"))
- COMPLETE_WITH("(", "ALL TABLES IN SCHEMA", "TABLE");
- else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD|DROP|SET", "ALL", "TABLES", "IN", "SCHEMA"))
+ COMPLETE_WITH("(", "ALL TABLES IN SCHEMA", "TABLE|SEQUENCE");
+ else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD|DROP|SET", "ALL", "TABLES|SEQUENCES", "IN", "SCHEMA"))
COMPLETE_WITH_QUERY_PLUS(Query_for_list_of_schemas
" AND nspname NOT LIKE E'pg\\\\_%'",
"CURRENT_SCHEMA");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 62f36daa981..a66eb8109d0 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -11546,6 +11546,11 @@
provolatile => 's', prorettype => 'oid', proargtypes => 'text',
proallargtypes => '{text,oid}', proargmodes => '{i,o}',
proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_tables' },
+{ oid => '8000', descr => 'get OIDs of sequences in a publication',
+ proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+ provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+ proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+ proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
{ oid => '6121',
descr => 'returns whether a relation can be part of a publication',
proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 841b9b6c253..e56286772f4 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
*/
bool puballtables;
+ /*
+ * indicates that this is a special publication which should encompass all
+ * sequences in the database (except for the unlogged and temp ones)
+ */
+ bool puballsequences;
+
/* true if inserts are published */
bool pubinsert;
@@ -52,6 +58,9 @@ CATALOG(pg_publication,6104,PublicationRelationId)
/* true if truncates are published */
bool pubtruncate;
+ /* true if sequences are published */
+ bool pubsequence;
+
/* true if partition changes are published using root schema */
bool pubviaroot;
} FormData_pg_publication;
@@ -72,6 +81,7 @@ typedef struct PublicationActions
bool pubupdate;
bool pubdelete;
bool pubtruncate;
+ bool pubsequence;
} PublicationActions;
typedef struct Publication
@@ -79,6 +89,7 @@ typedef struct Publication
Oid oid;
char *name;
bool alltables;
+ bool allsequences;
bool pubviaroot;
PublicationActions pubactions;
} Publication;
@@ -121,6 +132,9 @@ extern List *GetPubPartitionOptionRelations(List *result,
PublicationPartOpt pub_partopt,
Oid relid);
+extern List *GetAllSequencesPublicationRelations(void);
+extern List *GetPublicationSequenceRelations(Oid pubid);
+
extern bool is_publishable_relation(Relation rel);
extern bool is_schema_publication(Oid pubid);
extern ObjectAddress publication_add_relation(Oid pubid, PublicationRelInfo *targetrel,
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9fecc41954e..d8c255a7af5 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -60,6 +60,7 @@ extern ObjectAddress DefineSequence(ParseState *pstate, CreateSeqStmt *stmt);
extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
extern void DeleteSequenceTuple(Oid relid);
extern void ResetSequence(Oid seq_relid);
+extern void ResetSequence2(Oid seq_relid, int64 last_value, int64 log_cnt, bool is_called);
extern void ResetSequenceCaches(void);
extern void seq_redo(XLogReaderState *rptr);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 37fcc4c9b5a..4a990364e4a 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -3656,6 +3656,10 @@ typedef enum PublicationObjSpecType
PUBLICATIONOBJ_TABLES_IN_SCHEMA, /* All tables in schema */
PUBLICATIONOBJ_TABLES_IN_CUR_SCHEMA, /* All tables in first element of
* search_path */
+ PUBLICATIONOBJ_SEQUENCE, /* Sequence type */
+ PUBLICATIONOBJ_SEQUENCES_IN_SCHEMA, /* Sequences in schema type */
+ PUBLICATIONOBJ_SEQUENCES_IN_CUR_SCHEMA, /* Get the first element of
+ * search_path */
PUBLICATIONOBJ_CONTINUATION /* Continuation of previous type */
} PublicationObjSpecType;
@@ -3675,6 +3679,7 @@ typedef struct CreatePublicationStmt
List *options; /* List of DefElem nodes */
List *pubobjects; /* Optional list of publication objects */
bool for_all_tables; /* Special publication for all tables in db */
+ bool for_all_sequences; /* Special publication for all sequences in db */
} CreatePublicationStmt;
typedef enum AlterPublicationAction
@@ -3698,6 +3703,7 @@ typedef struct AlterPublicationStmt
*/
List *pubobjects; /* Optional list of publication objects */
bool for_all_tables; /* Special publication for all tables in db */
+ bool for_all_sequences; /* Special publication for all sequences in db */
AlterPublicationAction action; /* What action to perform with the given
* objects */
} AlterPublicationStmt;
diff --git a/src/include/replication/logicalproto.h b/src/include/replication/logicalproto.h
index 22fffaca62d..8f8c325522d 100644
--- a/src/include/replication/logicalproto.h
+++ b/src/include/replication/logicalproto.h
@@ -60,6 +60,7 @@ typedef enum LogicalRepMsgType
LOGICAL_REP_MSG_RELATION = 'R',
LOGICAL_REP_MSG_TYPE = 'Y',
LOGICAL_REP_MSG_MESSAGE = 'M',
+ LOGICAL_REP_MSG_SEQUENCE = 'X', /* FIXME change */
LOGICAL_REP_MSG_BEGIN_PREPARE = 'b',
LOGICAL_REP_MSG_PREPARE = 'P',
LOGICAL_REP_MSG_COMMIT_PREPARED = 'K',
@@ -117,6 +118,18 @@ typedef struct LogicalRepTyp
char *typname; /* name of the remote type */
} LogicalRepTyp;
+/* Sequence info */
+typedef struct LogicalRepSequence
+{
+ Oid remoteid; /* unique id of the remote sequence */
+ char *nspname; /* schema name of remote sequence */
+ char *seqname; /* name of the remote sequence */
+ bool transactional;
+ int64 last_value;
+ int64 log_cnt;
+ bool is_called;
+} LogicalRepSequence;
+
/* Transaction info */
typedef struct LogicalRepBeginData
{
@@ -227,6 +240,12 @@ extern List *logicalrep_read_truncate(StringInfo in,
bool *cascade, bool *restart_seqs);
extern void logicalrep_write_message(StringInfo out, TransactionId xid, XLogRecPtr lsn,
bool transactional, const char *prefix, Size sz, const char *message);
+extern void logicalrep_write_sequence(StringInfo out, Relation rel,
+ TransactionId xid, XLogRecPtr lsn,
+ bool transactional,
+ int64 last_value, int64 log_cnt,
+ bool is_called);
+extern void logicalrep_read_sequence(StringInfo in, LogicalRepSequence *seqdata);
extern void logicalrep_write_rel(StringInfo out, TransactionId xid,
Relation rel);
extern LogicalRepRelation *logicalrep_read_rel(StringInfo in);
diff --git a/src/include/replication/pgoutput.h b/src/include/replication/pgoutput.h
index 78aa9151ef5..a6f6843ada6 100644
--- a/src/include/replication/pgoutput.h
+++ b/src/include/replication/pgoutput.h
@@ -28,6 +28,7 @@ typedef struct PGOutputData
bool streaming;
bool messages;
bool two_phase;
+ bool sequences;
} PGOutputData;
#endif /* PGOUTPUT_H */
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 1420288d67b..3f50e100f8d 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1429,6 +1429,14 @@ pg_prepared_xacts| SELECT p.transaction,
FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+ n.nspname AS schemaname,
+ c.relname AS sequencename
+ FROM pg_publication p,
+ LATERAL pg_get_publication_sequences((p.pubname)::text) gpt(relid),
+ (pg_class c
+ JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+ WHERE (c.oid = gpt.relid);
pg_publication_tables| SELECT p.pubname,
n.nspname AS schemaname,
c.relname AS tablename
diff --git a/src/test/subscription/t/028_sequences.pl b/src/test/subscription/t/028_sequences.pl
new file mode 100644
index 00000000000..58775769cdc
--- /dev/null
+++ b/src/test/subscription/t/028_sequences.pl
@@ -0,0 +1,196 @@
+
+# Copyright (c) 2021, PostgreSQL Global Development Group
+
+# This tests that sequences are replicated correctly by logical replication
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More tests => 6;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->start;
+
+# Create subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Create some preexisting content on publisher
+my $ddl = qq(
+ CREATE SEQUENCE s;
+);
+
+# Setup structure on the publisher
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Create the same structure on the subscriber, and an extra sequence that
+# we'll create on the publisher later
+$ddl = qq(
+ CREATE SEQUENCE s;
+ CREATE SEQUENCE s2;
+);
+
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Setup logical replication
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+ "CREATE PUBLICATION seq_pub");
+
+$node_publisher->safe_psql('postgres',
+ "ALTER PUBLICATION seq_pub ADD SEQUENCE s");
+
+$node_subscriber->safe_psql('postgres',
+ "CREATE SUBSCRIPTION seq_sub CONNECTION '$publisher_connstr' PUBLICATION seq_pub WITH (slot_name = seq_sub_slot)"
+);
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Wait for initial sync to finish as well
+my $synced_query =
+ "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('s', 'r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+ or die "Timed out while waiting for subscriber to synchronize data";
+
+# Insert initial test data
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ -- generate a number of values using the sequence
+ SELECT nextval('s') FROM generate_series(1,100);
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+my $result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s;
+));
+
+is( $result, '132|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+# advance the sequence in a rolled-back transaction - the non-transactional increments should still be replicated
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ SELECT nextval('s') FROM generate_series(1,100);
+ ROLLBACK;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s;
+));
+
+is( $result, '231|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+# create a new sequence and roll it back - should not be replicated, due to
+# the transactional behavior
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ CREATE SEQUENCE s2;
+ ALTER PUBLICATION seq_pub ADD SEQUENCE s2;
+ SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '1|0|f',
+ 'check replicated sequence values on subscriber');
+
+
+# create a new sequence, advance it in a rolled-back transaction, but commit
+# the create - the advance should be replicated nevertheless
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ CREATE SEQUENCE s2;
+ ALTER PUBLICATION seq_pub ADD SEQUENCE s2;
+ SAVEPOINT sp1;
+ SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK TO sp1;
+ COMMIT;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Wait for sync of the second sequence we just added to finish
+$synced_query =
+ "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('s', 'r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+ or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '132|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+# advance the existing sequence in a rolled-back transaction - the increments
+# are non-transactional, so they should still be replicated
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '231|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+# advance the sequence in a subtransaction - the subtransaction gets rolled
+# back, but commit the main one - the changes should still be replicated
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ SAVEPOINT s1;
+ SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK TO s1;
+ COMMIT;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '330|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+$node_subscriber->stop('fast');
+$node_publisher->stop('fast');
--
2.34.1
0002-tweak-test-20220212b.patch
From bd37ce394dc14d0d2962b28251e058e12a789614 Mon Sep 17 00:00:00 2001
From: Tomas Vondra <tomas@2ndquadrant.com>
Date: Sat, 12 Feb 2022 01:24:47 +0100
Subject: [PATCH 2/2] tweak test
---
src/test/subscription/t/028_sequences.pl | 14 ++++++++------
1 file changed, 8 insertions(+), 6 deletions(-)
diff --git a/src/test/subscription/t/028_sequences.pl b/src/test/subscription/t/028_sequences.pl
index 58775769cdc..58ed02462d8 100644
--- a/src/test/subscription/t/028_sequences.pl
+++ b/src/test/subscription/t/028_sequences.pl
@@ -20,6 +20,7 @@ $node_subscriber->start;
# Create some preexisting content on publisher
my $ddl = qq(
+ CREATE TABLE seq_test (v BIGINT);
CREATE SEQUENCE s;
);
@@ -29,6 +30,7 @@ $node_publisher->safe_psql('postgres', $ddl);
# Create the same structure on the subscriber, and an extra sequence that
# we'll create on the publisher later
$ddl = qq(
+ CREATE TABLE seq_test (v BIGINT);
CREATE SEQUENCE s;
CREATE SEQUENCE s2;
);
@@ -59,7 +61,7 @@ $node_subscriber->poll_query_until('postgres', $synced_query)
$node_publisher->safe_psql(
'postgres', qq(
-- generate a number of values using the sequence
- SELECT nextval('s') FROM generate_series(1,100);
+ INSERT INTO seq_test SELECT nextval('s') FROM generate_series(1,100);
));
$node_publisher->wait_for_catchup('seq_sub');
@@ -78,7 +80,7 @@ is( $result, '132|0|t',
$node_publisher->safe_psql(
'postgres', qq(
BEGIN;
- SELECT nextval('s') FROM generate_series(1,100);
+ INSERT INTO seq_test SELECT nextval('s') FROM generate_series(1,100);
ROLLBACK;
));
@@ -101,7 +103,7 @@ $node_publisher->safe_psql(
BEGIN;
CREATE SEQUENCE s2;
ALTER PUBLICATION seq_pub ADD SEQUENCE s2;
- SELECT nextval('s2') FROM generate_series(1,100);
+ INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);
ROLLBACK;
));
@@ -125,7 +127,7 @@ $node_publisher->safe_psql(
CREATE SEQUENCE s2;
ALTER PUBLICATION seq_pub ADD SEQUENCE s2;
SAVEPOINT sp1;
- SELECT nextval('s2') FROM generate_series(1,100);
+ INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);
ROLLBACK TO sp1;
COMMIT;
));
@@ -153,7 +155,7 @@ is( $result, '132|0|t',
$node_publisher->safe_psql(
'postgres', qq(
BEGIN;
- SELECT nextval('s2') FROM generate_series(1,100);
+ INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);
ROLLBACK;
));
@@ -175,7 +177,7 @@ $node_publisher->safe_psql(
'postgres', qq(
BEGIN;
SAVEPOINT s1;
- SELECT nextval('s2') FROM generate_series(1,100);
+ INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);
ROLLBACK TO s1;
COMMIT;
));
--
2.34.1
On 2/12/22 20:58, Tomas Vondra wrote:
> On 2/12/22 01:34, Tomas Vondra wrote:
>> On 2/10/22 19:17, Tomas Vondra wrote:
>>> I've polished & pushed the first part adding sequence decoding
>>> infrastructure etc. Attached are the two remaining parts.
>>>
>>> I plan to wait a day or two and then push the test_decoding part. The
>>> last part (for built-in replication) will need more work and maybe
>>> rethinking the grammar etc.
>>
>> I've pushed the second part, adding sequences to test_decoding.
>>
>> Here's the remaining part, rebased, with a small tweak in the TAP test
>> to eliminate the issue with not waiting for sequence increments. I've
>> kept the tweak in a separate patch, so that we can throw it away easily
>> if we happen to resolve the issue.
>
> Hmm, cfbot was not happy about this, so here's a version fixing the
> elog() format issue reported by CirrusCI/mingw by ditching the log
> message. It was useful for debugging, but otherwise just noise.
There was another elog() making mingw unhappy, so here's a fix for that.
This should also fix an issue on the macOS machine. This is a thinko in
the tests, because wait_for_catchup() may not wait for all the sequence
increments after a rollback. The default mode is "write" which uses
pg_current_wal_lsn(), and that may be a bit stale after a rollback.
Doing a simple insert after the rollback fixes this (using another LSN,
like pg_current_wal_insert_lsn(), would work too, but it'd cause long
waits in the test).
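
To make the idea easier to see outside the diff, here is a minimal sketch
of the "simple insert after the rollback" approach, reusing the sequence s,
the seq_test table and the seq_sub subscription from the attached TAP test.
This is just an illustration, not the attached patch; the actual 0002 tweak
folds the insert into the nextval() statements instead:

    # Advance the sequence in a transaction that is rolled back. The
    # increments are WAL-logged anyway, but pg_current_wal_lsn(), which
    # wait_for_catchup() targets by default, may still point before them.
    $node_publisher->safe_psql(
        'postgres', qq(
        BEGIN;
        SELECT nextval('s') FROM generate_series(1,100);
        ROLLBACK;
    ));

    # A trivial committed write pushes the write LSN past the sequence
    # WAL records, so the catchup wait below actually covers the decoded
    # increments.
    $node_publisher->safe_psql('postgres',
        "INSERT INTO seq_test VALUES (1)");

    $node_publisher->wait_for_catchup('seq_sub');

Waiting on pg_current_wal_insert_lsn() instead would make the extra insert
unnecessary, but as noted above it leads to long waits in the test.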
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Attachments:
0001-Add-support-for-decoding-sequences-to-built-20220213.patch
From c2ab633ba457f5563769bd313dac6078b41e439b Mon Sep 17 00:00:00 2001
From: Tomas Vondra <tomas.vondra@postgresql.org>
Date: Thu, 10 Feb 2022 15:18:59 +0100
Subject: [PATCH 1/2] Add support for decoding sequences to built-in
replication
---
doc/src/sgml/catalogs.sgml | 71 ++++
doc/src/sgml/ref/alter_publication.sgml | 24 +-
doc/src/sgml/ref/alter_subscription.sgml | 4 +-
src/backend/catalog/pg_publication.c | 149 ++++++++-
src/backend/catalog/system_views.sql | 10 +
src/backend/commands/publicationcmds.c | 350 +++++++++++++++++++-
src/backend/commands/sequence.c | 79 +++++
src/backend/commands/subscriptioncmds.c | 272 +++++++++++++++
src/backend/executor/execReplication.c | 2 +-
src/backend/nodes/copyfuncs.c | 1 +
src/backend/nodes/equalfuncs.c | 1 +
src/backend/parser/gram.y | 32 ++
src/backend/replication/logical/proto.c | 52 +++
src/backend/replication/logical/tablesync.c | 114 ++++++-
src/backend/replication/logical/worker.c | 56 ++++
src/backend/replication/pgoutput/pgoutput.c | 85 ++++-
src/backend/utils/cache/relcache.c | 4 +-
src/bin/psql/tab-complete.c | 14 +-
src/include/catalog/pg_proc.dat | 5 +
src/include/catalog/pg_publication.h | 14 +
src/include/commands/sequence.h | 1 +
src/include/nodes/parsenodes.h | 6 +
src/include/replication/logicalproto.h | 19 ++
src/include/replication/pgoutput.h | 1 +
src/test/regress/expected/rules.out | 8 +
src/test/subscription/t/028_sequences.pl | 201 +++++++++++
26 files changed, 1539 insertions(+), 36 deletions(-)
create mode 100644 src/test/subscription/t/028_sequences.pl
diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 879d2dbce03..271dc03e5a2 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -9540,6 +9540,11 @@ SCRAM-SHA-256$<replaceable><iteration count></replaceable>:<replaceable>&l
<entry>prepared transactions</entry>
</row>
+ <row>
+ <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+ <entry>publications and their associated sequences</entry>
+ </row>
+
<row>
<entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
<entry>publications and their associated tables</entry>
@@ -11375,6 +11380,72 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
</sect1>
+ <sect1 id="view-pg-publication-sequences">
+ <title><structname>pg_publication_sequences</structname></title>
+
+ <indexterm zone="view-pg-publication-sequences">
+ <primary>pg_publication_sequences</primary>
+ </indexterm>
+
+ <para>
+ The view <structname>pg_publication_sequences</structname> provides
+ information about the mapping between publications and the sequences they
+ contain. Unlike the underlying catalog
+ <link linkend="catalog-pg-publication-rel"><structname>pg_publication_rel</structname></link>,
+ this view expands
+ publications defined as <literal>FOR ALL SEQUENCES</literal>, so for such
+ publications there will be a row for each eligible sequence.
+ </para>
+
+ <table>
+ <title><structname>pg_publication_sequences</structname> Columns</title>
+ <tgroup cols="1">
+ <thead>
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ Column Type
+ </para>
+ <para>
+ Description
+ </para></entry>
+ </row>
+ </thead>
+
+ <tbody>
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>pubname</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+ </para>
+ <para>
+ Name of publication
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>schemaname</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+ </para>
+ <para>
+ Name of schema containing sequence
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>sequencename</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+ </para>
+ <para>
+ Name of sequence
+ </para></entry>
+ </row>
+ </tbody>
+ </tgroup>
+ </table>
+ </sect1>
+
<sect1 id="view-pg-publication-tables">
<title><structname>pg_publication_tables</structname></title>
diff --git a/doc/src/sgml/ref/alter_publication.sgml b/doc/src/sgml/ref/alter_publication.sgml
index 7c7c27bf7ce..9da8274ae2c 100644
--- a/doc/src/sgml/ref/alter_publication.sgml
+++ b/doc/src/sgml/ref/alter_publication.sgml
@@ -31,7 +31,9 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
<phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [, ... ]
+ SEQUENCE <replaceable class="parameter">sequence_name</replaceable> [ * ] [, ... ]
ALL TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+ ALL SEQUENCES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
</synopsis>
</refsynopsisdiv>
@@ -56,7 +58,18 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
</para>
<para>
- The fourth variant of this command listed in the synopsis can change
+ The next three variants change which sequences are part of the publication.
+ The <literal>SET SEQUENCE</literal> clause will replace the list of sequences
+ in the publication with the specified one. The <literal>ADD SEQUENCE</literal>
+ and <literal>DROP SEQUENCE</literal> clauses will add and remove one or more
+ sequences from the publication. Note that adding sequences to a publication
+ that is already subscribed to will require an <literal>ALTER SUBSCRIPTION
+ ... REFRESH PUBLICATION</literal> action on the subscribing side in order
+ to become effective.
+ </para>
+
+ <para>
+ The seventh variant of this command listed in the synopsis can change
all of the publication properties specified in
<xref linkend="sql-createpublication"/>. Properties not mentioned in the
command retain their previous settings.
@@ -123,6 +136,15 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><replaceable class="parameter">sequence_name</replaceable></term>
+ <listitem>
+ <para>
+ Name of an existing sequence.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><literal>SET ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] )</literal></term>
<listitem>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 0b027cc3462..8f28cf03f40 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -147,7 +147,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
<listitem>
<para>
Fetch missing table information from publisher. This will start
- replication of tables that were added to the subscribed-to publications
+ replication of tables and sequences that were added to the subscribed-to publications
since <command>CREATE SUBSCRIPTION</command> or
the last invocation of <command>REFRESH PUBLICATION</command>.
</para>
@@ -164,7 +164,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
Specifies whether to copy pre-existing data in the publications
that are being subscribed to when the replication starts.
The default is <literal>true</literal>. (Previously-subscribed
- tables are not copied.)
+ tables and sequences are not copied.)
</para>
</listitem>
</varlistentry>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index e14ca2f5630..1a9e05ba98b 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -54,7 +54,8 @@ check_publication_add_relation(Relation targetrel)
{
/* Must be a regular or partitioned table */
if (RelationGetForm(targetrel)->relkind != RELKIND_RELATION &&
- RelationGetForm(targetrel)->relkind != RELKIND_PARTITIONED_TABLE)
+ RelationGetForm(targetrel)->relkind != RELKIND_PARTITIONED_TABLE &&
+ RelationGetForm(targetrel)->relkind != RELKIND_SEQUENCE)
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
errmsg("cannot add relation \"%s\" to publication",
@@ -131,7 +132,8 @@ static bool
is_publishable_class(Oid relid, Form_pg_class reltuple)
{
return (reltuple->relkind == RELKIND_RELATION ||
- reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+ reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+ reltuple->relkind == RELKIND_SEQUENCE) &&
!IsCatalogRelationOid(relid) &&
reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
relid >= FirstNormalObjectId;
@@ -503,6 +505,11 @@ GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
Form_pg_publication_rel pubrel;
pubrel = (Form_pg_publication_rel) GETSTRUCT(tup);
+
+ /* skip sequences here */
+ if (get_rel_relkind(pubrel->prrelid) == RELKIND_SEQUENCE)
+ continue;
+
result = GetPubPartitionOptionRelations(result, pub_partopt,
pubrel->prrelid);
}
@@ -517,6 +524,49 @@ GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
return result;
}
+/*
+ * Gets list of relation oids for a publication (sequences only).
+ *
+ * This should only be used for normal publications; FOR ALL SEQUENCES
+ * publications should use GetAllSequencesPublicationRelations().
+ */
+List *
+GetPublicationSequenceRelations(Oid pubid)
+{
+ List *result;
+ Relation pubrelsrel;
+ ScanKeyData scankey;
+ SysScanDesc scan;
+ HeapTuple tup;
+
+ /* Find all sequences associated with the publication. */
+ pubrelsrel = table_open(PublicationRelRelationId, AccessShareLock);
+
+ ScanKeyInit(&scankey,
+ Anum_pg_publication_rel_prpubid,
+ BTEqualStrategyNumber, F_OIDEQ,
+ ObjectIdGetDatum(pubid));
+
+ scan = systable_beginscan(pubrelsrel, PublicationRelPrrelidPrpubidIndexId,
+ true, NULL, 1, &scankey);
+
+ result = NIL;
+ while (HeapTupleIsValid(tup = systable_getnext(scan)))
+ {
+ Form_pg_publication_rel pubrel;
+
+ pubrel = (Form_pg_publication_rel) GETSTRUCT(tup);
+
+ if (get_rel_relkind(pubrel->prrelid) == RELKIND_SEQUENCE)
+ result = lappend_oid(result, pubrel->prrelid);
+ }
+
+ systable_endscan(scan);
+ table_close(pubrelsrel, AccessShareLock);
+
+ return result;
+}
+
/*
* Gets list of publication oids for publications marked as FOR ALL TABLES.
*/
@@ -762,6 +812,46 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
return result;
}
+/*
+ * Gets a list of all sequences that may be published by a FOR ALL SEQUENCES
+ * publication.
+ *
+ * Only permanent user sequences are included, as determined by
+ * is_publishable_class().
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+ Relation classRel;
+ ScanKeyData key[1];
+ TableScanDesc scan;
+ HeapTuple tuple;
+ List *result = NIL;
+
+ classRel = table_open(RelationRelationId, AccessShareLock);
+
+ ScanKeyInit(&key[0],
+ Anum_pg_class_relkind,
+ BTEqualStrategyNumber, F_CHAREQ,
+ CharGetDatum(RELKIND_SEQUENCE));
+
+ scan = table_beginscan_catalog(classRel, 1, key);
+
+ while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+ {
+ Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+ Oid relid = relForm->oid;
+
+ if (is_publishable_class(relid, relForm))
+ result = lappend_oid(result, relid);
+ }
+
+ table_endscan(scan);
+
+ table_close(classRel, AccessShareLock);
+ return result;
+}
+
/*
* Get publication using oid
*
@@ -784,10 +874,12 @@ GetPublication(Oid pubid)
pub->oid = pubid;
pub->name = pstrdup(NameStr(pubform->pubname));
pub->alltables = pubform->puballtables;
+ pub->allsequences = pubform->puballsequences;
pub->pubactions.pubinsert = pubform->pubinsert;
pub->pubactions.pubupdate = pubform->pubupdate;
pub->pubactions.pubdelete = pubform->pubdelete;
pub->pubactions.pubtruncate = pubform->pubtruncate;
+ pub->pubactions.pubsequence = pubform->pubsequence;
pub->pubviaroot = pubform->pubviaroot;
ReleaseSysCache(tup);
@@ -937,3 +1029,56 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
SRF_RETURN_DONE(funcctx);
}
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+ FuncCallContext *funcctx;
+ char *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+ Publication *publication;
+ List *sequences;
+
+ /* stuff done only on the first call of the function */
+ if (SRF_IS_FIRSTCALL())
+ {
+ MemoryContext oldcontext;
+
+ /* create a function context for cross-call persistence */
+ funcctx = SRF_FIRSTCALL_INIT();
+
+ /* switch to memory context appropriate for multiple function calls */
+ oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+ publication = GetPublicationByName(pubname, false);
+
+ /*
+ * For a FOR ALL SEQUENCES publication, return all publishable sequences
+ * in the database; otherwise return just the sequences explicitly added
+ * to the publication.
+ */
+ if (publication->allsequences)
+ sequences = GetAllSequencesPublicationRelations();
+ else
+ sequences = GetPublicationSequenceRelations(publication->oid);
+
+ funcctx->user_fctx = (void *) sequences;
+
+ MemoryContextSwitchTo(oldcontext);
+ }
+
+ /* stuff done on every call of the function */
+ funcctx = SRF_PERCALL_SETUP();
+ sequences = (List *) funcctx->user_fctx;
+
+ if (funcctx->call_cntr < list_length(sequences))
+ {
+ Oid relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+ SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+ }
+
+ SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 3cb69b1f87b..b5cc33aca34 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -374,6 +374,16 @@ CREATE VIEW pg_publication_tables AS
pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
WHERE C.oid = GPT.relid;
+CREATE VIEW pg_publication_sequences AS
+ SELECT
+ P.pubname AS pubname,
+ N.nspname AS schemaname,
+ C.relname AS sequencename
+ FROM pg_publication P,
+ LATERAL pg_get_publication_sequences(P.pubname) GPT,
+ pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+ WHERE C.oid = GPT.relid;
+
CREATE VIEW pg_locks AS
SELECT * FROM pg_lock_status() AS L;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 0e4bb97fb73..3bc2e8ccb66 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -16,6 +16,7 @@
#include "access/genam.h"
#include "access/htup_details.h"
+#include "access/relation.h"
#include "access/table.h"
#include "access/xact.h"
#include "catalog/catalog.h"
@@ -59,6 +60,12 @@ static void PublicationAddSchemas(Oid pubid, List *schemas, bool if_not_exists,
AlterPublicationStmt *stmt);
static void PublicationDropSchemas(Oid pubid, List *schemas, bool missing_ok);
+static List *OpenSequenceList(List *sequences);
+static void CloseSequenceList(List *rels);
+static void PublicationAddSequences(Oid pubid, List *rels, bool if_not_exists,
+ AlterPublicationStmt *stmt);
+static void PublicationDropSequences(Oid pubid, List *rels, bool missing_ok);
+
static void
parse_publication_options(ParseState *pstate,
List *options,
@@ -77,6 +84,7 @@ parse_publication_options(ParseState *pstate,
pubactions->pubupdate = true;
pubactions->pubdelete = true;
pubactions->pubtruncate = true;
+ pubactions->pubsequence = true;
*publish_via_partition_root = false;
/* Parse options */
@@ -101,6 +109,7 @@ parse_publication_options(ParseState *pstate,
pubactions->pubupdate = false;
pubactions->pubdelete = false;
pubactions->pubtruncate = false;
+ pubactions->pubsequence = false;
*publish_given = true;
publish = defGetString(defel);
@@ -123,6 +132,8 @@ parse_publication_options(ParseState *pstate,
pubactions->pubdelete = true;
else if (strcmp(publish_opt, "truncate") == 0)
pubactions->pubtruncate = true;
+ else if (strcmp(publish_opt, "sequence") == 0)
+ pubactions->pubsequence = true;
else
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
@@ -149,7 +160,9 @@ parse_publication_options(ParseState *pstate,
*/
static void
ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
- List **rels, List **schemas)
+ List **tables, List **sequences,
+ List **tables_schemas, List **sequences_schemas,
+ List **schemas)
{
ListCell *cell;
PublicationObjSpec *pubobj;
@@ -167,12 +180,23 @@ ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
switch (pubobj->pubobjtype)
{
case PUBLICATIONOBJ_TABLE:
- *rels = lappend(*rels, pubobj->pubtable);
+ *tables = lappend(*tables, pubobj->pubtable);
+ break;
+ case PUBLICATIONOBJ_SEQUENCE:
+ *sequences = lappend(*sequences, pubobj->pubtable);
break;
case PUBLICATIONOBJ_TABLES_IN_SCHEMA:
schemaid = get_namespace_oid(pubobj->name, false);
/* Filter out duplicates if user specifies "sch1, sch1" */
+ *tables_schemas = list_append_unique_oid(*tables_schemas, schemaid);
+ *schemas = list_append_unique_oid(*schemas, schemaid);
+ break;
+ case PUBLICATIONOBJ_SEQUENCES_IN_SCHEMA:
+ schemaid = get_namespace_oid(pubobj->name, false);
+
+ /* Filter out duplicates if user specifies "sch1, sch1" */
+ *sequences_schemas = list_append_unique_oid(*sequences_schemas, schemaid);
*schemas = list_append_unique_oid(*schemas, schemaid);
break;
case PUBLICATIONOBJ_TABLES_IN_CUR_SCHEMA:
@@ -186,6 +210,21 @@ ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
list_free(search_path);
/* Filter out duplicates if user specifies "sch1, sch1" */
+ *tables_schemas = list_append_unique_oid(*tables_schemas, schemaid);
+ *schemas = list_append_unique_oid(*schemas, schemaid);
+ break;
+ case PUBLICATIONOBJ_SEQUENCES_IN_CUR_SCHEMA:
+ search_path = fetch_search_path(false);
+ if (search_path == NIL) /* nothing valid in search_path? */
+ ereport(ERROR,
+ errcode(ERRCODE_UNDEFINED_SCHEMA),
+ errmsg("no schema has been selected for CURRENT_SCHEMA"));
+
+ schemaid = linitial_oid(search_path);
+ list_free(search_path);
+
+ /* Filter out duplicates if user specifies "sch1, sch1" */
+ *sequences_schemas = list_append_unique_oid(*sequences_schemas, schemaid);
*schemas = list_append_unique_oid(*schemas, schemaid);
break;
default:
@@ -251,7 +290,10 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
bool publish_via_partition_root_given;
bool publish_via_partition_root;
AclResult aclresult;
- List *relations = NIL;
+ List *tables = NIL;
+ List *sequences = NIL;
+ List *tables_schemaidlist = NIL;
+ List *sequences_schemaidlist = NIL;
List *schemaidlist = NIL;
/* must have CREATE privilege on database */
@@ -306,6 +348,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
BoolGetDatum(pubactions.pubdelete);
values[Anum_pg_publication_pubtruncate - 1] =
BoolGetDatum(pubactions.pubtruncate);
+ values[Anum_pg_publication_pubsequence - 1] =
+ BoolGetDatum(pubactions.pubsequence);
values[Anum_pg_publication_pubviaroot - 1] =
BoolGetDatum(publish_via_partition_root);
@@ -330,26 +374,40 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
}
else
{
- ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
+ ObjectsInPublicationToOids(stmt->pubobjects, pstate,
+ &tables, &sequences,
+ &tables_schemaidlist,
+ &sequences_schemaidlist,
&schemaidlist);
/* FOR ALL TABLES IN SCHEMA requires superuser */
- if (list_length(schemaidlist) > 0 && !superuser())
+ if (list_length(tables_schemaidlist) > 0 && !superuser())
ereport(ERROR,
errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
errmsg("must be superuser to create FOR ALL TABLES IN SCHEMA publication"));
- if (list_length(relations) > 0)
+ if (list_length(tables) > 0)
{
List *rels;
- rels = OpenTableList(relations);
- CheckObjSchemaNotAlreadyInPublication(rels, schemaidlist,
+ rels = OpenTableList(tables);
+ CheckObjSchemaNotAlreadyInPublication(rels, tables_schemaidlist,
PUBLICATIONOBJ_TABLE);
PublicationAddTables(puboid, rels, true, NULL);
CloseTableList(rels);
}
+ if (list_length(sequences) > 0)
+ {
+ List *rels;
+
+ rels = OpenSequenceList(sequences);
+ CheckObjSchemaNotAlreadyInPublication(rels, sequences_schemaidlist,
+ PUBLICATIONOBJ_SEQUENCE);
+ PublicationAddTables(puboid, rels, true, NULL);
+ CloseSequenceList(rels);
+ }
+
if (list_length(schemaidlist) > 0)
{
/*
@@ -653,12 +711,13 @@ AlterPublicationSchemas(AlterPublicationStmt *stmt,
*/
static void
CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
- List *tables, List *schemaidlist)
+ List *tables, List *tables_schemaidlist,
+ List *sequences, List *sequences_schemaidlist)
{
Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
if ((stmt->action == AP_AddObjects || stmt->action == AP_SetObjects) &&
- schemaidlist && !superuser())
+ (tables_schemaidlist || sequences_schemaidlist) && !superuser())
ereport(ERROR,
(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
errmsg("must be superuser to add or set schemas")));
@@ -667,13 +726,24 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
* Check that user is allowed to manipulate the publication tables in
* schema
*/
- if (schemaidlist && pubform->puballtables)
+ if (tables_schemaidlist && pubform->puballtables)
ereport(ERROR,
(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
errmsg("publication \"%s\" is defined as FOR ALL TABLES",
NameStr(pubform->pubname)),
errdetail("Tables from schema cannot be added to, dropped from, or set on FOR ALL TABLES publications.")));
+ /*
+ * Check that user is allowed to manipulate the publication sequences in
+ * schema
+ */
+ if (sequences_schemaidlist && pubform->puballsequences)
+ ereport(ERROR,
+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("publication \"%s\" is defined as FOR ALL SEQUENCES",
+ NameStr(pubform->pubname)),
+ errdetail("Sequences from schema cannot be added to, dropped from, or set on FOR ALL SEQUENCES publications.")));
+
/* Check that user is allowed to manipulate the publication tables. */
if (tables && pubform->puballtables)
ereport(ERROR,
@@ -681,6 +751,108 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
errmsg("publication \"%s\" is defined as FOR ALL TABLES",
NameStr(pubform->pubname)),
errdetail("Tables cannot be added to or dropped from FOR ALL TABLES publications.")));
+
+ /* Check that user is allowed to manipulate the publication sequences. */
+ if (sequences && pubform->puballsequences)
+ ereport(ERROR,
+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("publication \"%s\" is defined as FOR ALL SEQUENCES",
+ NameStr(pubform->pubname)),
+ errdetail("Sequences cannot be added to or dropped from FOR ALL SEQUENCES publications.")));
+}
+
+/*
+ * Add or remove sequence to/from publication.
+ */
+static void
+AlterPublicationSequences(AlterPublicationStmt *stmt, HeapTuple tup,
+ List *sequences, List *schemaidlist)
+{
+ List *rels = NIL;
+ Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
+ Oid pubid = pubform->oid;
+
+ /*
+ * It is quite possible that for the SET case the user has not specified
+ * any sequences, in which case we need to remove all the existing ones.
+ */
+ if (!sequences && stmt->action != AP_SetObjects)
+ return;
+
+ rels = OpenSequenceList(sequences);
+
+ if (stmt->action == AP_AddObjects)
+ {
+ List *schemas = NIL;
+
+ /*
+ * Check if the relation is member of the existing schema in the
+ * publication or member of the schema list specified.
+ */
+ schemas = list_concat_copy(schemaidlist, GetPublicationSchemas(pubid));
+ CheckObjSchemaNotAlreadyInPublication(rels, schemas,
+ PUBLICATIONOBJ_SEQUENCE);
+ PublicationAddSequences(pubid, rels, false, stmt);
+ }
+ else if (stmt->action == AP_DropObjects)
+ PublicationDropSequences(pubid, rels, false);
+ else /* DEFELEM_SET */
+ {
+ List *oldrelids = GetPublicationRelations(pubid,
+ PUBLICATION_PART_ROOT);
+ List *delrels = NIL;
+ ListCell *oldlc;
+
+ CheckObjSchemaNotAlreadyInPublication(rels, schemaidlist,
+ PUBLICATIONOBJ_SEQUENCE);
+
+ /* Calculate which relations to drop. */
+ foreach(oldlc, oldrelids)
+ {
+ Oid oldrelid = lfirst_oid(oldlc);
+ ListCell *newlc;
+ bool found = false;
+
+ foreach(newlc, rels)
+ {
+ PublicationRelInfo *newpubrel;
+
+ newpubrel = (PublicationRelInfo *) lfirst(newlc);
+ if (RelationGetRelid(newpubrel->relation) == oldrelid)
+ {
+ found = true;
+ break;
+ }
+ }
+ /* Not yet in the list, open it and add to the list */
+ if (!found)
+ {
+ Relation oldrel;
+ PublicationRelInfo *pubrel;
+
+ /* Wrap relation into PublicationRelInfo */
+ oldrel = table_open(oldrelid, ShareUpdateExclusiveLock);
+
+ pubrel = palloc(sizeof(PublicationRelInfo));
+ pubrel->relation = oldrel;
+
+ delrels = lappend(delrels, pubrel);
+ }
+ }
+
+ /* And drop them. */
+ PublicationDropSequences(pubid, delrels, true);
+
+ /*
+ * Don't bother calculating the difference for adding, we'll catch and
+ * skip existing ones when doing catalog update.
+ */
+ PublicationAddSequences(pubid, rels, true, stmt);
+
+ CloseSequenceList(delrels);
+ }
+
+ CloseSequenceList(rels);
}
/*
@@ -718,13 +890,21 @@ AlterPublication(ParseState *pstate, AlterPublicationStmt *stmt)
AlterPublicationOptions(pstate, stmt, rel, tup);
else
{
- List *relations = NIL;
+ List *tables = NIL;
+ List *sequences = NIL;
+ List *tables_schemaidlist = NIL;
+ List *sequences_schemaidlist = NIL;
List *schemaidlist = NIL;
- ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
+ ObjectsInPublicationToOids(stmt->pubobjects, pstate,
+ &tables, &sequences,
+ &tables_schemaidlist,
+ &sequences_schemaidlist,
&schemaidlist);
- CheckAlterPublication(stmt, tup, relations, schemaidlist);
+ CheckAlterPublication(stmt, tup,
+ tables, tables_schemaidlist,
+ sequences, sequences_schemaidlist);
/*
* Lock the publication so nobody else can do anything with it. This
@@ -749,7 +929,9 @@ AlterPublication(ParseState *pstate, AlterPublicationStmt *stmt)
errmsg("publication \"%s\" does not exist",
stmt->pubname));
- AlterPublicationTables(stmt, tup, relations, schemaidlist);
+ AlterPublicationTables(stmt, tup, tables, tables_schemaidlist);
+ AlterPublicationSequences(stmt, tup, sequences, sequences_schemaidlist);
+
AlterPublicationSchemas(stmt, tup, schemaidlist);
}
@@ -1157,6 +1339,144 @@ PublicationDropSchemas(Oid pubid, List *schemas, bool missing_ok)
}
}
+/*
+ * Open sequences specified by a PublicationTable list.
+ * In the returned list of PublicationRelInfo, sequences are locked
+ * in ShareUpdateExclusiveLock mode in order to add them to a publication.
+ */
+static List *
+OpenSequenceList(List *sequences)
+{
+ List *relids = NIL;
+ List *rels = NIL;
+ ListCell *lc;
+
+ /*
+ * Open, share-lock, and check all the explicitly-specified relations
+ */
+ foreach(lc, sequences)
+ {
+ PublicationTable *s = lfirst_node(PublicationTable, lc);
+ Relation rel;
+ Oid myrelid;
+ PublicationRelInfo *pub_rel;
+
+ /* Allow query cancel in case this takes a long time */
+ CHECK_FOR_INTERRUPTS();
+
+ rel = table_openrv(s->relation, ShareUpdateExclusiveLock);
+ myrelid = RelationGetRelid(rel);
+
+ /*
+ * Filter out duplicates if user specifies "foo, foo".
+ *
+ * Note that this algorithm is known to not be very efficient (O(N^2))
+ * but given that it only works on list of tables given to us by user
+ * it's deemed acceptable.
+ */
+ if (list_member_oid(relids, myrelid))
+ {
+ table_close(rel, ShareUpdateExclusiveLock);
+ continue;
+ }
+
+ pub_rel = palloc(sizeof(PublicationRelInfo));
+ pub_rel->relation = rel;
+ rels = lappend(rels, pub_rel);
+ relids = lappend_oid(relids, myrelid);
+ }
+
+ list_free(relids);
+
+ return rels;
+}
+
+/*
+ * Close all relations in the list.
+ */
+static void
+CloseSequenceList(List *rels)
+{
+ ListCell *lc;
+
+ foreach(lc, rels)
+ {
+ PublicationRelInfo *pub_rel;
+
+ pub_rel = (PublicationRelInfo *) lfirst(lc);
+ table_close(pub_rel->relation, NoLock);
+ }
+}
+
+/*
+ * Add listed sequences to the publication.
+ */
+static void
+PublicationAddSequences(Oid pubid, List *rels, bool if_not_exists,
+ AlterPublicationStmt *stmt)
+{
+ ListCell *lc;
+
+ Assert(!stmt || !stmt->for_all_sequences);
+
+ foreach(lc, rels)
+ {
+ PublicationRelInfo *pub_rel = (PublicationRelInfo *) lfirst(lc);
+ Relation rel = pub_rel->relation;
+ ObjectAddress obj;
+
+ /* Must be owner of the sequence or superuser. */
+ if (!pg_class_ownercheck(RelationGetRelid(rel), GetUserId()))
+ aclcheck_error(ACLCHECK_NOT_OWNER, get_relkind_objtype(rel->rd_rel->relkind),
+ RelationGetRelationName(rel));
+
+ obj = publication_add_relation(pubid, pub_rel, if_not_exists);
+ if (stmt)
+ {
+ EventTriggerCollectSimpleCommand(obj, InvalidObjectAddress,
+ (Node *) stmt);
+
+ InvokeObjectPostCreateHook(PublicationRelRelationId,
+ obj.objectId, 0);
+ }
+ }
+}
+
+/*
+ * Remove listed sequences from the publication.
+ */
+static void
+PublicationDropSequences(Oid pubid, List *rels, bool missing_ok)
+{
+ ObjectAddress obj;
+ ListCell *lc;
+ Oid prid;
+
+ foreach(lc, rels)
+ {
+ Relation rel = (Relation) lfirst(lc);
+ Oid relid = RelationGetRelid(rel);
+
+ prid = GetSysCacheOid2(PUBLICATIONRELMAP, Anum_pg_publication_rel_oid,
+ ObjectIdGetDatum(relid),
+ ObjectIdGetDatum(pubid));
+ if (!OidIsValid(prid))
+ {
+ if (missing_ok)
+ continue;
+
+ ereport(ERROR,
+ (errcode(ERRCODE_UNDEFINED_OBJECT),
+ errmsg("relation \"%s\" is not part of the publication",
+ RelationGetRelationName(rel))));
+ }
+
+ ObjectAddressSet(obj, PublicationRelRelationId, prid);
+ performDeletion(&obj, DROP_CASCADE, 0);
+ }
+}
+
+
/*
* Internal workhorse for changing a publication owner
*/
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index ab592ce2f15..fe4f21ec438 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -336,6 +336,85 @@ ResetSequence(Oid seq_relid)
relation_close(seq_rel, NoLock);
}
+/*
+ * Set a sequence to the specified state (last_value, log_cnt, is_called).
+ *
+ * The change is made transactionally, so that on failure of the current
+ * transaction, the sequence will be restored to its previous state.
+ * We do that by creating a whole new relfilenode for the sequence; so this
+ * works much like the rewriting forms of ALTER TABLE.
+ *
+ * Caller is assumed to have acquired AccessExclusiveLock on the sequence,
+ * which must not be released until end of transaction. Caller is also
+ * responsible for permissions checking.
+ */
+void
+ResetSequence2(Oid seq_relid, int64 last_value, int64 log_cnt, bool is_called)
+{
+ Relation seq_rel;
+ SeqTable elm;
+ Form_pg_sequence_data seq;
+ Buffer buf;
+ HeapTupleData seqdatatuple;
+ HeapTuple tuple;
+
+ /*
+ * Read the old sequence. This does a bit more work than really
+ * necessary, but it's simple, and we do want to double-check that it's
+ * indeed a sequence.
+ */
+ init_sequence(seq_relid, &elm, &seq_rel);
+ (void) read_seq_tuple(seq_rel, &buf, &seqdatatuple);
+
+ /*
+ * Copy the existing sequence tuple.
+ */
+ tuple = heap_copytuple(&seqdatatuple);
+
+ /* Now we're done with the old page */
+ UnlockReleaseBuffer(buf);
+
+ /*
+ * Modify the copied tuple to execute the restart (compare the RESTART
+ * action in AlterSequence)
+ */
+ seq = (Form_pg_sequence_data) GETSTRUCT(tuple);
+ seq->last_value = last_value;
+ seq->is_called = is_called;
+ seq->log_cnt = log_cnt;
+
+ /*
+ * Create a new storage file for the sequence.
+ */
+ RelationSetNewRelfilenode(seq_rel, seq_rel->rd_rel->relpersistence);
+
+ /*
+ * Ensure sequence's relfrozenxid is at 0, since it won't contain any
+ * unfrozen XIDs. Same with relminmxid, since a sequence will never
+ * contain multixacts.
+ */
+ Assert(seq_rel->rd_rel->relfrozenxid == InvalidTransactionId);
+ Assert(seq_rel->rd_rel->relminmxid == InvalidMultiXactId);
+
+ /*
+ * Insert the modified tuple into the new storage file.
+ *
+ * XXX Maybe this should also use created=true, just like the other places
+ * calling fill_seq_with_data. That's probably needed for correct cascading
+ * replication.
+ *
+ * XXX That'd mean all fill_seq_with_data callers use created=true, making
+ * the parameter unnecessary.
+ */
+ fill_seq_with_data(seq_rel, tuple);
+
+ /* Clear local cache so that we don't think we have cached numbers */
+ /* Note that we do not change the currval() state */
+ elm->cached = elm->last;
+
+ relation_close(seq_rel, NoLock);
+}
+
/*
* Initialize a sequence's relation with the specified tuple as content
*/
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 3ef6607d246..5beb67e7652 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -85,6 +85,7 @@ typedef struct SubOpts
} SubOpts;
static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
static void check_duplicates_in_publist(List *publist, Datum *datums);
static List *merge_publications(List *oldpublist, List *newpublist, bool addpub, const char *subname);
static void ReportSlotConnectionError(List *rstates, Oid subid, char *slotname, char *err);
@@ -496,6 +497,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
char *err;
WalReceiverConn *wrconn;
List *tables;
+ List *sequences;
ListCell *lc;
char table_state;
@@ -534,6 +536,26 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
InvalidXLogRecPtr);
}
+ /*
+ * Get the sequence list from publisher and build local sequence
+ * status info.
+ */
+ sequences = fetch_sequence_list(wrconn, publications);
+ foreach(lc, sequences)
+ {
+ RangeVar *rv = (RangeVar *) lfirst(lc);
+ Oid relid;
+
+ relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+ /* Check for supported relkind. */
+ CheckSubscriptionRelkind(get_rel_relkind(relid),
+ rv->schemaname, rv->relname);
+
+ AddSubscriptionRelState(subid, relid, table_state,
+ InvalidXLogRecPtr);
+ }
+
/*
* If requested, create permanent slot for the subscription. We
* won't use the initial snapshot for anything, so no need to
@@ -706,6 +728,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data)
{
Oid relid = subrel_local_oids[off];
+ /* XXX ignore sequences - maybe do this in GetSubscriptionRelations? */
+ if (get_rel_relkind(relid) == RELKIND_SEQUENCE)
+ continue;
+
if (!bsearch(&relid, pubrel_local_oids,
list_length(pubrel_names), sizeof(Oid), oid_cmp))
{
@@ -797,6 +823,183 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data)
ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
}
}
+
+ /*
+ * XXX now do the same thing for sequences, maybe before the preceding
+ * block, or earlier?
+ */
+
+ /* Get the sequence list from publisher. */
+ pubrel_names = fetch_sequence_list(wrconn, sub->publications);
+
+ /* Get local table list. */
+ subrel_states = GetSubscriptionRelations(sub->oid);
+
+ /*
+ * Build qsorted array of local table oids for faster lookup. This can
+ * potentially contain all tables in the database so speed of lookup
+ * is important.
+ */
+ subrel_local_oids = palloc(list_length(subrel_states) * sizeof(Oid));
+ off = 0;
+ foreach(lc, subrel_states)
+ {
+ SubscriptionRelState *relstate = (SubscriptionRelState *) lfirst(lc);
+
+ subrel_local_oids[off++] = relstate->relid;
+ }
+ qsort(subrel_local_oids, list_length(subrel_states),
+ sizeof(Oid), oid_cmp);
+
+ /*
+ * Rels that we want to remove from subscription and drop any slots
+ * and origins corresponding to them.
+ */
+ sub_remove_rels = palloc(list_length(subrel_states) * sizeof(SubRemoveRels));
+
+ /*
+ * Walk over the remote sequences and try to match them to locally known
+ * sequences. If the sequence is not known locally, create a new state
+ * for it.
+ *
+ * Also builds array of local oids of remote sequences for the next step.
+ */
+ off = 0;
+ pubrel_local_oids = palloc(list_length(pubrel_names) * sizeof(Oid));
+
+ foreach(lc, pubrel_names)
+ {
+ RangeVar *rv = (RangeVar *) lfirst(lc);
+ Oid relid;
+
+ relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+ /* Check for supported relkind. */
+ CheckSubscriptionRelkind(get_rel_relkind(relid),
+ rv->schemaname, rv->relname);
+
+ pubrel_local_oids[off++] = relid;
+
+ if (!bsearch(&relid, subrel_local_oids,
+ list_length(subrel_states), sizeof(Oid), oid_cmp))
+ {
+ AddSubscriptionRelState(sub->oid, relid,
+ copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
+ InvalidXLogRecPtr);
+ ereport(DEBUG1,
+ (errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
+ rv->schemaname, rv->relname, sub->name)));
+ }
+ }
+
+ /*
+ * Next remove state for sequences we should not care about anymore using
+ * the data we collected above
+ */
+ qsort(pubrel_local_oids, list_length(pubrel_names),
+ sizeof(Oid), oid_cmp);
+
+ remove_rel_len = 0;
+ for (off = 0; off < list_length(subrel_states); off++)
+ {
+ Oid relid = subrel_local_oids[off];
+
+ /* XXX ignore non-sequences - maybe do this in GetSubscriptionRelations? */
+ if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+ continue;
+
+ if (!bsearch(&relid, pubrel_local_oids,
+ list_length(pubrel_names), sizeof(Oid), oid_cmp))
+ {
+ char state;
+ XLogRecPtr statelsn;
+
+ /*
+ * Lock pg_subscription_rel with AccessExclusiveLock to
+ * prevent any race conditions with the apply worker
+ * re-launching workers at the same time this code is trying
+ * to remove those tables.
+ *
+ * Even if new worker for this particular rel is restarted it
+ * won't be able to make any progress as we hold exclusive
+ * lock on subscription_rel till the transaction end. It will
+ * simply exit as there is no corresponding rel entry.
+ *
+ * This locking also ensures that the state of rels won't
+ * change till we are done with this refresh operation.
+ */
+ if (!rel)
+ rel = table_open(SubscriptionRelRelationId, AccessExclusiveLock);
+
+ /* Last known rel state. */
+ state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
+
+ sub_remove_rels[remove_rel_len].relid = relid;
+ sub_remove_rels[remove_rel_len++].state = state;
+
+ RemoveSubscriptionRel(sub->oid, relid);
+
+ logicalrep_worker_stop(sub->oid, relid);
+
+ /*
+ * For READY state, we would have already dropped the
+ * tablesync origin.
+ */
+ if (state != SUBREL_STATE_READY)
+ {
+ char originname[NAMEDATALEN];
+
+ /*
+ * Drop the tablesync's origin tracking if exists.
+ *
+ * It is possible that the origin is not yet created for
+ * tablesync worker, this can happen for the states before
+ * SUBREL_STATE_FINISHEDCOPY. The apply worker can also
+ * concurrently try to drop the origin and by this time
+ * the origin might be already removed. For these reasons,
+ * passing missing_ok = true.
+ */
+ ReplicationOriginNameForTablesync(sub->oid, relid, originname,
+ sizeof(originname));
+ replorigin_drop_by_name(originname, true, false);
+ }
+
+ ereport(DEBUG1,
+ (errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
+ get_namespace_name(get_rel_namespace(relid)),
+ get_rel_name(relid),
+ sub->name)));
+ }
+ }
+
+ /*
+ * Drop the tablesync slots associated with removed tables. This has
+ * to be at the end because otherwise if there is an error while doing
+ * the database operations we won't be able to rollback dropped slots.
+ */
+ for (off = 0; off < remove_rel_len; off++)
+ {
+ if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
+ sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+ {
+ char syncslotname[NAMEDATALEN] = {0};
+
+ /*
+ * For READY/SYNCDONE states we know the tablesync slot has
+ * already been dropped by the tablesync worker.
+ *
+ * For other states, there is no certainty, maybe the slot
+ * does not exist yet. Also, if we fail after removing some of
+ * the slots, next time, it will again try to drop already
+ * dropped slots and fail. For these reasons, we allow
+ * missing_ok = true for the drop.
+ */
+ ReplicationSlotNameForTablesync(sub->oid, sub_remove_rels[off].relid,
+ syncslotname, sizeof(syncslotname));
+ ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
+ }
+ }
+
}
PG_FINALLY();
{
@@ -1616,6 +1819,75 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
return tablelist;
}
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+ WalRcvExecResult *res;
+ StringInfoData cmd;
+ TupleTableSlot *slot;
+ Oid tableRow[2] = {TEXTOID, TEXTOID};
+ ListCell *lc;
+ bool first;
+ List *tablelist = NIL;
+
+ Assert(list_length(publications) > 0);
+
+ initStringInfo(&cmd);
+ appendStringInfoString(&cmd, "SELECT DISTINCT s.schemaname, s.sequencename\n"
+ " FROM pg_catalog.pg_publication_sequences s\n"
+ " WHERE s.pubname IN (");
+ first = true;
+ foreach(lc, publications)
+ {
+ char *pubname = strVal(lfirst(lc));
+
+ if (first)
+ first = false;
+ else
+ appendStringInfoString(&cmd, ", ");
+
+ appendStringInfoString(&cmd, quote_literal_cstr(pubname));
+ }
+ appendStringInfoChar(&cmd, ')');
+
+ res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+ pfree(cmd.data);
+
+ if (res->status != WALRCV_OK_TUPLES)
+ ereport(ERROR,
+ (errmsg("could not receive list of replicated tables from the publisher: %s",
+ res->err)));
+
+ /* Process sequences. */
+ slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+ while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+ {
+ char *nspname;
+ char *relname;
+ bool isnull;
+ RangeVar *rv;
+
+ nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+ Assert(!isnull);
+ relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+ Assert(!isnull);
+
+ rv = makeRangeVar(nspname, relname, -1);
+ tablelist = lappend(tablelist, rv);
+
+ ExecClearTuple(slot);
+ }
+ ExecDropSingleTupleTableSlot(slot);
+
+ walrcv_clear_result(res);
+
+ return tablelist;
+}
+
/*
* This is to report the connection failure while dropping replication slots.
* Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index 313c87398b2..78f14119d98 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -608,7 +608,7 @@ void
CheckSubscriptionRelkind(char relkind, const char *nspname,
const char *relname)
{
- if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+ if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE && relkind != RELKIND_SEQUENCE)
ereport(ERROR,
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 6bd95bbce24..8b7e9710401 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -4852,6 +4852,7 @@ _copyCreatePublicationStmt(const CreatePublicationStmt *from)
COPY_NODE_FIELD(options);
COPY_NODE_FIELD(pubobjects);
COPY_SCALAR_FIELD(for_all_tables);
+ COPY_SCALAR_FIELD(for_all_sequences);
return newnode;
}
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index 4126516222b..55a7dbbddf3 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -2337,6 +2337,7 @@ _equalAlterPublicationStmt(const AlterPublicationStmt *a,
COMPARE_NODE_FIELD(options);
COMPARE_NODE_FIELD(pubobjects);
COMPARE_SCALAR_FIELD(for_all_tables);
+ COMPARE_SCALAR_FIELD(for_all_sequences);
COMPARE_SCALAR_FIELD(action);
return true;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index c4f32425060..bbde765cf56 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -9771,6 +9771,26 @@ PublicationObjSpec:
$$->pubobjtype = PUBLICATIONOBJ_TABLES_IN_CUR_SCHEMA;
$$->location = @5;
}
+ | SEQUENCE relation_expr
+ {
+ $$ = makeNode(PublicationObjSpec);
+ $$->pubobjtype = PUBLICATIONOBJ_SEQUENCE;
+ $$->pubtable = makeNode(PublicationTable);
+ $$->pubtable->relation = $2;
+ }
+ | ALL SEQUENCES IN_P SCHEMA ColId
+ {
+ $$ = makeNode(PublicationObjSpec);
+ $$->pubobjtype = PUBLICATIONOBJ_SEQUENCES_IN_SCHEMA;
+ $$->name = $5;
+ $$->location = @5;
+ }
+ | ALL SEQUENCES IN_P SCHEMA CURRENT_SCHEMA
+ {
+ $$ = makeNode(PublicationObjSpec);
+ $$->pubobjtype = PUBLICATIONOBJ_SEQUENCES_IN_CUR_SCHEMA;
+ $$->location = @5;
+ }
| ColId
{
$$ = makeNode(PublicationObjSpec);
@@ -10106,6 +10126,12 @@ UnlistenStmt:
}
;
+/*
+ * FIXME
+ *
+ * opt_publication_for_sequences and publication_for_sequences should be
+ * copies for sequences
+ */
/*****************************************************************************
*
@@ -10114,6 +10140,12 @@ UnlistenStmt:
* BEGIN / COMMIT / ROLLBACK
* (also older versions END / ABORT)
*
+ * ALTER PUBLICATION name ADD SEQUENCE sequence [, sequence2]
+ *
+ * ALTER PUBLICATION name DROP SEQUENCE sequence [, sequence2]
+ *
+ * ALTER PUBLICATION name SET SEQUENCE sequence [, sequence2]
+ *
*****************************************************************************/
TransactionStmt:
diff --git a/src/backend/replication/logical/proto.c b/src/backend/replication/logical/proto.c
index 953942692ce..e8ead1387ae 100644
--- a/src/backend/replication/logical/proto.c
+++ b/src/backend/replication/logical/proto.c
@@ -647,6 +647,56 @@ logicalrep_write_message(StringInfo out, TransactionId xid, XLogRecPtr lsn,
pq_sendbytes(out, message, sz);
}
+/*
+ * Write SEQUENCE to stream
+ */
+void
+logicalrep_write_sequence(StringInfo out, Relation rel, TransactionId xid,
+ XLogRecPtr lsn, bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ uint8 flags = 0;
+ char *relname;
+
+ pq_sendbyte(out, LOGICAL_REP_MSG_SEQUENCE);
+
+ /* transaction ID (if not valid, we're not streaming) */
+ if (TransactionIdIsValid(xid))
+ pq_sendint32(out, xid);
+
+ pq_sendint8(out, flags);
+ pq_sendint64(out, lsn);
+
+ logicalrep_write_namespace(out, RelationGetNamespace(rel));
+ relname = RelationGetRelationName(rel);
+ pq_sendstring(out, relname);
+
+ pq_sendint8(out, transactional);
+ pq_sendint64(out, last_value);
+ pq_sendint64(out, log_cnt);
+ pq_sendint8(out, is_called);
+}
+
+/*
+ * Read SEQUENCE from the stream.
+ */
+void
+logicalrep_read_sequence(StringInfo in, LogicalRepSequence *seqdata)
+{
+ /* XXX skipping flags and lsn */
+ pq_getmsgint(in, 1);
+ pq_getmsgint64(in);
+
+ /* Read relation name from stream */
+ seqdata->nspname = pstrdup(logicalrep_read_namespace(in));
+ seqdata->seqname = pstrdup(pq_getmsgstring(in));
+
+ seqdata->transactional = pq_getmsgint(in, 1);
+ seqdata->last_value = pq_getmsgint64(in);
+ seqdata->log_cnt = pq_getmsgint64(in);
+ seqdata->is_called = pq_getmsgint(in, 1);
+}
+
/*
* Write relation description to the output stream.
*/
@@ -1203,6 +1253,8 @@ logicalrep_message_type(LogicalRepMsgType action)
return "STREAM ABORT";
case LOGICAL_REP_MSG_STREAM_PREPARE:
return "STREAM PREPARE";
+ case LOGICAL_REP_MSG_SEQUENCE:
+ return "SEQUENCE";
}
elog(ERROR, "invalid logical replication message type \"%c\"", action);
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index e596b69d466..f746c13ca44 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -100,6 +100,7 @@
#include "catalog/pg_subscription_rel.h"
#include "catalog/pg_type.h"
#include "commands/copy.h"
+#include "commands/sequence.h"
#include "miscadmin.h"
#include "parser/parse_relation.h"
#include "pgstat.h"
@@ -359,6 +360,12 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
*
* If the synchronization position is reached (SYNCDONE), then the table can
* be marked as READY and is no longer tracked.
+ *
+ * XXX This needs to handle sequences too - after AlterSubscription_refresh
+ * starts caring about sequences, GetSubscriptionNotReadyRelations won't
+ * return just tables, and we'll have to sync them here. Not sure it's worth
+ * creating a new "sync" worker per sequence, maybe we should just sync them
+ * in the current process (it's pretty light-weight).
*/
static void
process_syncing_tables_for_apply(XLogRecPtr current_lsn)
@@ -873,6 +880,95 @@ copy_table(Relation rel)
logicalrep_rel_close(relmapentry, NoLock);
}
+
+
+/*
+ * FIXME add comment
+ */
+static void
+fetch_sequence_data(char *nspname, char *relname,
+ int64 *last_value, int64 *log_cnt, bool *is_called)
+{
+ WalRcvExecResult *res;
+ StringInfoData cmd;
+ TupleTableSlot *slot;
+ Oid tableRow[3] = {INT8OID, INT8OID, BOOLOID};
+
+ initStringInfo(&cmd);
+ appendStringInfo(&cmd, "SELECT last_value, log_cnt, is_called\n"
+ " FROM %s", quote_qualified_identifier(nspname, relname));
+
+ res = walrcv_exec(LogRepWorkerWalRcvConn, cmd.data, 3, tableRow);
+ pfree(cmd.data);
+
+ if (res->status != WALRCV_OK_TUPLES)
+ ereport(ERROR,
+ (errmsg("could not receive list of replicated tables from the publisher: %s",
+ res->err)));
+
+ /* Process the sequence. */
+ slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+ while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+ {
+ bool isnull;
+
+ *last_value = DatumGetInt64(slot_getattr(slot, 1, &isnull));
+ Assert(!isnull);
+
+ *log_cnt = DatumGetInt64(slot_getattr(slot, 2, &isnull));
+ Assert(!isnull);
+
+ *is_called = DatumGetBool(slot_getattr(slot, 3, &isnull));
+ Assert(!isnull);
+
+ ExecClearTuple(slot);
+ }
+ ExecDropSingleTupleTableSlot(slot);
+
+ walrcv_clear_result(res);
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Caller is responsible for locking the local relation.
+ */
+static void
+copy_sequence(Relation rel)
+{
+ LogicalRepRelMapEntry *relmapentry;
+ LogicalRepRelation lrel;
+ StringInfoData cmd;
+ int64 last_value = 0,
+ log_cnt = 0;
+ bool is_called = 0;
+
+ /* Get the publisher relation info. */
+ fetch_remote_table_info(get_namespace_name(RelationGetNamespace(rel)),
+ RelationGetRelationName(rel), &lrel);
+
+ /* Put the relation into relmap. */
+ logicalrep_relmap_update(&lrel);
+
+ /* Map the publisher relation to local one. */
+ relmapentry = logicalrep_rel_open(lrel.remoteid, NoLock);
+ Assert(rel == relmapentry->localrel);
+
+ /* Start copy on the publisher. */
+ initStringInfo(&cmd);
+
+ Assert(lrel.relkind == RELKIND_SEQUENCE);
+
+ fetch_sequence_data(lrel.nspname, lrel.relname, &last_value, &log_cnt, &is_called);
+
+ ResetSequence2(RelationGetRelid(rel), last_value, log_cnt, is_called);
+
+ logicalrep_rel_close(relmapentry, NoLock);
+}
+
+
+
+
/*
* Determine the tablesync slot name.
*
@@ -1134,10 +1230,20 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
originname)));
}
- /* Now do the initial data copy */
- PushActiveSnapshot(GetTransactionSnapshot());
- copy_table(rel);
- PopActiveSnapshot();
+ if (get_rel_relkind(RelationGetRelid(rel)) == RELKIND_SEQUENCE)
+ {
+ /* Now do the initial sequence copy */
+ PushActiveSnapshot(GetTransactionSnapshot());
+ copy_sequence(rel);
+ PopActiveSnapshot();
+ }
+ else
+ {
+ /* Now do the initial data copy */
+ PushActiveSnapshot(GetTransactionSnapshot());
+ copy_table(rel);
+ PopActiveSnapshot();
+ }
res = walrcv_exec(LogRepWorkerWalRcvConn, "COMMIT", 0, NULL);
if (res->status != WALRCV_OK_COMMAND)
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index d77bb32bb9e..cd04c686812 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -144,6 +144,7 @@
#include "catalog/pg_tablespace.h"
#include "commands/tablecmds.h"
#include "commands/tablespace.h"
+#include "commands/sequence.h"
#include "commands/trigger.h"
#include "executor/executor.h"
#include "executor/execPartition.h"
@@ -1093,6 +1094,57 @@ apply_handle_origin(StringInfo s)
errmsg_internal("ORIGIN message sent out of order")));
}
+/*
+ * Handle SEQUENCE message.
+ */
+static void
+apply_handle_sequence(StringInfo s)
+{
+ LogicalRepSequence seq;
+ Oid relid;
+
+ if (handle_streamed_transaction(LOGICAL_REP_MSG_SEQUENCE, s))
+ return;
+
+ logicalrep_read_sequence(s, &seq);
+
+ /*
+ * Non-transactional sequence updates should not be part of a remote
+ * transaction. There should not be any running transaction.
+ */
+ Assert((!seq.transactional) || in_remote_transaction);
+ Assert(!(!seq.transactional && in_remote_transaction));
+ Assert(!(!seq.transactional && IsTransactionState()));
+
+ /*
+ * Make sure we're in a transaction (needed by ResetSequence2). For
+ * non-transactional updates we're guaranteed to start a new one,
+ * and we'll commit it at the end.
+ */
+ if (!IsTransactionState())
+ {
+ StartTransactionCommand();
+ maybe_reread_subscription();
+ }
+
+ relid = RangeVarGetRelid(makeRangeVar(seq.nspname,
+ seq.seqname, -1),
+ RowExclusiveLock, false);
+
+ /* lock the sequence in AccessExclusiveLock, as expected by ResetSequence2 */
+ LockRelationOid(relid, AccessExclusiveLock);
+
+ /* apply the sequence change */
+ ResetSequence2(relid, seq.last_value, seq.log_cnt, seq.is_called);
+
+ /*
+ * Commit the per-stream transaction (we only do this when not in a
+ * remote transaction, i.e. for non-transactional sequence updates).
+ */
+ if (!in_remote_transaction)
+ CommitTransactionCommand();
+}
+
/*
* Handle STREAM START message.
*/
@@ -2421,6 +2473,10 @@ apply_dispatch(StringInfo s)
*/
break;
+ case LOGICAL_REP_MSG_SEQUENCE:
+ apply_handle_sequence(s);
+ return;
+
case LOGICAL_REP_MSG_STREAM_START:
apply_handle_stream_start(s);
break;
diff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c
index 6df705f90ff..b82c6b10305 100644
--- a/src/backend/replication/pgoutput/pgoutput.c
+++ b/src/backend/replication/pgoutput/pgoutput.c
@@ -49,6 +49,10 @@ static void pgoutput_message(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn, XLogRecPtr message_lsn,
bool transactional, const char *prefix,
Size sz, const char *message);
+static void pgoutput_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation rel, bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called);
static bool pgoutput_origin_filter(LogicalDecodingContext *ctx,
RepOriginId origin_id);
static void pgoutput_begin_prepare_txn(LogicalDecodingContext *ctx,
@@ -161,6 +165,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->change_cb = pgoutput_change;
cb->truncate_cb = pgoutput_truncate;
cb->message_cb = pgoutput_message;
+ cb->sequence_cb = pgoutput_sequence;
cb->commit_cb = pgoutput_commit_txn;
cb->begin_prepare_cb = pgoutput_begin_prepare_txn;
@@ -177,6 +182,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->stream_commit_cb = pgoutput_stream_commit;
cb->stream_change_cb = pgoutput_change;
cb->stream_message_cb = pgoutput_message;
+ cb->stream_sequence_cb = pgoutput_sequence;
cb->stream_truncate_cb = pgoutput_truncate;
/* transaction streaming - two-phase commit */
cb->stream_prepare_cb = pgoutput_stream_prepare_txn;
@@ -190,6 +196,7 @@ parse_output_parameters(List *options, PGOutputData *data)
bool publication_names_given = false;
bool binary_option_given = false;
bool messages_option_given = false;
+ bool sequences_option_given = false;
bool streaming_given = false;
bool two_phase_option_given = false;
@@ -197,6 +204,7 @@ parse_output_parameters(List *options, PGOutputData *data)
data->streaming = false;
data->messages = false;
data->two_phase = false;
+ data->sequences = true;
foreach(lc, options)
{
@@ -262,6 +270,16 @@ parse_output_parameters(List *options, PGOutputData *data)
data->messages = defGetBoolean(defel);
}
+ else if (strcmp(defel->defname, "sequences") == 0)
+ {
+ if (sequences_option_given)
+ ereport(ERROR,
+ (errcode(ERRCODE_SYNTAX_ERROR),
+ errmsg("conflicting or redundant options")));
+ sequences_option_given = true;
+
+ data->sequences = defGetBoolean(defel);
+ }
else if (strcmp(defel->defname, "streaming") == 0)
{
if (streaming_given)
@@ -858,6 +876,51 @@ pgoutput_message(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
OutputPluginWrite(ctx, true);
}
+static void
+pgoutput_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation rel, bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ PGOutputData *data = (PGOutputData *) ctx->output_plugin_private;
+ TransactionId xid = InvalidTransactionId;
+ RelationSyncEntry *relentry;
+
+ if (!data->sequences)
+ return;
+
+ if (!is_publishable_relation(rel))
+ return;
+
+ /*
+ * Remember the xid for the message in streaming mode. See
+ * pgoutput_change.
+ */
+ if (in_streaming)
+ xid = txn->xid;
+
+ relentry = get_rel_sync_entry(data, RelationGetRelid(rel));
+
+ /*
+ * First check the sequence filter.
+ *
+ * We handle just REORDER_BUFFER_CHANGE_SEQUENCE here.
+ */
+ if (!relentry->pubactions.pubsequence)
+ return;
+
+ OutputPluginPrepareWrite(ctx, true);
+ logicalrep_write_sequence(ctx->out,
+ rel,
+ xid,
+ sequence_lsn,
+ transactional,
+ last_value,
+ log_cnt,
+ is_called);
+ OutputPluginWrite(ctx, true);
+}
+
/*
* Currently we always forward.
*/
@@ -1141,7 +1204,8 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
entry->schema_sent = false;
entry->streamed_txns = NIL;
entry->pubactions.pubinsert = entry->pubactions.pubupdate =
- entry->pubactions.pubdelete = entry->pubactions.pubtruncate = false;
+ entry->pubactions.pubdelete = entry->pubactions.pubtruncate =
+ entry->pubactions.pubsequence = false;
entry->publish_as_relid = InvalidOid;
entry->map = NULL; /* will be set by maybe_send_schema() if
* needed */
@@ -1163,6 +1227,7 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
Oid publish_as_relid = relid;
bool am_partition = get_rel_relispartition(relid);
char relkind = get_rel_relkind(relid);
+ bool is_sequence = (get_rel_relkind(relid) == RELKIND_SEQUENCE);
/* Reload publications if needed before use. */
if (!publications_valid)
@@ -1191,6 +1256,7 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
entry->pubactions.pubupdate = false;
entry->pubactions.pubdelete = false;
entry->pubactions.pubtruncate = false;
+ entry->pubactions.pubsequence = false;
if (entry->map)
{
/*
@@ -1213,12 +1279,23 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
Publication *pub = lfirst(lc);
bool publish = false;
- if (pub->alltables)
+ if (pub->alltables && (!is_sequence))
{
publish = true;
if (pub->pubviaroot && am_partition)
publish_as_relid = llast_oid(get_partition_ancestors(relid));
}
+ else if (pub->allsequences && is_sequence)
+ {
+ publish = true;
+ }
+
+ /* if a sequence, just cross-check the list of publications */
+ if (!publish && is_sequence)
+ {
+ if (list_member_oid(pubids, pub->oid))
+ publish = true;
+ }
if (!publish)
{
@@ -1275,10 +1352,12 @@ get_rel_sync_entry(PGOutputData *data, Oid relid)
entry->pubactions.pubupdate |= pub->pubactions.pubupdate;
entry->pubactions.pubdelete |= pub->pubactions.pubdelete;
entry->pubactions.pubtruncate |= pub->pubactions.pubtruncate;
+ entry->pubactions.pubsequence |= pub->pubactions.pubsequence;
}
if (entry->pubactions.pubinsert && entry->pubactions.pubupdate &&
- entry->pubactions.pubdelete && entry->pubactions.pubtruncate)
+ entry->pubactions.pubdelete && entry->pubactions.pubtruncate &&
+ entry->pubactions.pubsequence)
break;
}
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index 2707fed12f4..45a8b3e490a 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -5586,6 +5586,7 @@ GetRelationPublicationActions(Relation relation)
pubactions->pubupdate |= pubform->pubupdate;
pubactions->pubdelete |= pubform->pubdelete;
pubactions->pubtruncate |= pubform->pubtruncate;
+ pubactions->pubsequence |= pubform->pubsequence;
ReleaseSysCache(tup);
@@ -5594,7 +5595,8 @@ GetRelationPublicationActions(Relation relation)
* other publications.
*/
if (pubactions->pubinsert && pubactions->pubupdate &&
- pubactions->pubdelete && pubactions->pubtruncate)
+ pubactions->pubdelete && pubactions->pubtruncate &&
+ pubactions->pubsequence)
break;
}
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 98882272130..4900c48ff19 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1782,20 +1782,20 @@ psql_completion(const char *text, int start, int end)
COMPLETE_WITH("ADD", "DROP", "OWNER TO", "RENAME TO", "SET");
/* ALTER PUBLICATION <name> ADD */
else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD"))
- COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE");
- else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD|SET", "TABLE") ||
- (HeadMatches("ALTER", "PUBLICATION", MatchAny, "ADD|SET", "TABLE") &&
+ COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE|SEQUENCE");
+ else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD|SET", "TABLE|SEQUENCE") ||
+ (HeadMatches("ALTER", "PUBLICATION", MatchAny, "ADD|SET", "TABLE|SEQUENCE") &&
ends_with(prev_wd, ',')))
COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_tables);
- else if (HeadMatches("ALTER", "PUBLICATION", MatchAny, "ADD|SET", "TABLE"))
+ else if (HeadMatches("ALTER", "PUBLICATION", MatchAny, "ADD|SET", "TABLE|SEQUENCE"))
COMPLETE_WITH(",");
/* ALTER PUBLICATION <name> DROP */
else if (Matches("ALTER", "PUBLICATION", MatchAny, "DROP"))
- COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE");
+ COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE|SEQUENCE");
/* ALTER PUBLICATION <name> SET */
else if (Matches("ALTER", "PUBLICATION", MatchAny, "SET"))
- COMPLETE_WITH("(", "ALL TABLES IN SCHEMA", "TABLE");
- else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD|DROP|SET", "ALL", "TABLES", "IN", "SCHEMA"))
+ COMPLETE_WITH("(", "ALL TABLES IN SCHEMA", "TABLE|SEQUENCE");
+ else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD|DROP|SET", "ALL", "TABLES|SEQUENCES", "IN", "SCHEMA"))
COMPLETE_WITH_QUERY_PLUS(Query_for_list_of_schemas
" AND nspname NOT LIKE E'pg\\\\_%'",
"CURRENT_SCHEMA");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 62f36daa981..a66eb8109d0 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -11546,6 +11546,11 @@
provolatile => 's', prorettype => 'oid', proargtypes => 'text',
proallargtypes => '{text,oid}', proargmodes => '{i,o}',
proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_tables' },
+{ oid => '8000', descr => 'get OIDs of sequences in a publication',
+ proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+ provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+ proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+ proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
{ oid => '6121',
descr => 'returns whether a relation can be part of a publication',
proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 841b9b6c253..e56286772f4 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
*/
bool puballtables;
+ /*
+ * indicates that this is special publication which should encompass all
+ * sequences in the database (except for the unlogged and temp ones)
+ */
+ bool puballsequences;
+
/* true if inserts are published */
bool pubinsert;
@@ -52,6 +58,9 @@ CATALOG(pg_publication,6104,PublicationRelationId)
/* true if truncates are published */
bool pubtruncate;
+ /* true if sequences are published */
+ bool pubsequence;
+
/* true if partition changes are published using root schema */
bool pubviaroot;
} FormData_pg_publication;
@@ -72,6 +81,7 @@ typedef struct PublicationActions
bool pubupdate;
bool pubdelete;
bool pubtruncate;
+ bool pubsequence;
} PublicationActions;
typedef struct Publication
@@ -79,6 +89,7 @@ typedef struct Publication
Oid oid;
char *name;
bool alltables;
+ bool allsequences;
bool pubviaroot;
PublicationActions pubactions;
} Publication;
@@ -121,6 +132,9 @@ extern List *GetPubPartitionOptionRelations(List *result,
PublicationPartOpt pub_partopt,
Oid relid);
+extern List *GetAllSequencesPublicationRelations(void);
+extern List *GetPublicationSequenceRelations(Oid pubid);
+
extern bool is_publishable_relation(Relation rel);
extern bool is_schema_publication(Oid pubid);
extern ObjectAddress publication_add_relation(Oid pubid, PublicationRelInfo *targetrel,
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9fecc41954e..d8c255a7af5 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -60,6 +60,7 @@ extern ObjectAddress DefineSequence(ParseState *pstate, CreateSeqStmt *stmt);
extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
extern void DeleteSequenceTuple(Oid relid);
extern void ResetSequence(Oid seq_relid);
+extern void ResetSequence2(Oid seq_relid, int64 last_value, int64 log_cnt, bool is_called);
extern void ResetSequenceCaches(void);
extern void seq_redo(XLogReaderState *rptr);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 37fcc4c9b5a..4a990364e4a 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -3656,6 +3656,10 @@ typedef enum PublicationObjSpecType
PUBLICATIONOBJ_TABLES_IN_SCHEMA, /* All tables in schema */
PUBLICATIONOBJ_TABLES_IN_CUR_SCHEMA, /* All tables in first element of
* search_path */
+ PUBLICATIONOBJ_SEQUENCE, /* Sequence type */
+ PUBLICATIONOBJ_SEQUENCES_IN_SCHEMA, /* Sequences in schema type */
+ PUBLICATIONOBJ_SEQUENCES_IN_CUR_SCHEMA, /* Get the first element of
+ * search_path */
PUBLICATIONOBJ_CONTINUATION /* Continuation of previous type */
} PublicationObjSpecType;
@@ -3675,6 +3679,7 @@ typedef struct CreatePublicationStmt
List *options; /* List of DefElem nodes */
List *pubobjects; /* Optional list of publication objects */
bool for_all_tables; /* Special publication for all tables in db */
+ bool for_all_sequences; /* Special publication for all sequences in db */
} CreatePublicationStmt;
typedef enum AlterPublicationAction
@@ -3698,6 +3703,7 @@ typedef struct AlterPublicationStmt
*/
List *pubobjects; /* Optional list of publication objects */
bool for_all_tables; /* Special publication for all tables in db */
+ bool for_all_sequences; /* Special publication for all sequences in db */
AlterPublicationAction action; /* What action to perform with the given
* objects */
} AlterPublicationStmt;
diff --git a/src/include/replication/logicalproto.h b/src/include/replication/logicalproto.h
index 22fffaca62d..8f8c325522d 100644
--- a/src/include/replication/logicalproto.h
+++ b/src/include/replication/logicalproto.h
@@ -60,6 +60,7 @@ typedef enum LogicalRepMsgType
LOGICAL_REP_MSG_RELATION = 'R',
LOGICAL_REP_MSG_TYPE = 'Y',
LOGICAL_REP_MSG_MESSAGE = 'M',
+ LOGICAL_REP_MSG_SEQUENCE = 'X', /* FIXME change */
LOGICAL_REP_MSG_BEGIN_PREPARE = 'b',
LOGICAL_REP_MSG_PREPARE = 'P',
LOGICAL_REP_MSG_COMMIT_PREPARED = 'K',
@@ -117,6 +118,18 @@ typedef struct LogicalRepTyp
char *typname; /* name of the remote type */
} LogicalRepTyp;
+/* Sequence info */
+typedef struct LogicalRepSequence
+{
+ Oid remoteid; /* unique id of the remote sequence */
+ char *nspname; /* schema name of remote sequence */
+ char *seqname; /* name of the remote sequence */
+ bool transactional;
+ int64 last_value;
+ int64 log_cnt;
+ bool is_called;
+} LogicalRepSequence;
+
/* Transaction info */
typedef struct LogicalRepBeginData
{
@@ -227,6 +240,12 @@ extern List *logicalrep_read_truncate(StringInfo in,
bool *cascade, bool *restart_seqs);
extern void logicalrep_write_message(StringInfo out, TransactionId xid, XLogRecPtr lsn,
bool transactional, const char *prefix, Size sz, const char *message);
+extern void logicalrep_write_sequence(StringInfo out, Relation rel,
+ TransactionId xid, XLogRecPtr lsn,
+ bool transactional,
+ int64 last_value, int64 log_cnt,
+ bool is_called);
+extern void logicalrep_read_sequence(StringInfo in, LogicalRepSequence *seqdata);
extern void logicalrep_write_rel(StringInfo out, TransactionId xid,
Relation rel);
extern LogicalRepRelation *logicalrep_read_rel(StringInfo in);
diff --git a/src/include/replication/pgoutput.h b/src/include/replication/pgoutput.h
index 78aa9151ef5..a6f6843ada6 100644
--- a/src/include/replication/pgoutput.h
+++ b/src/include/replication/pgoutput.h
@@ -28,6 +28,7 @@ typedef struct PGOutputData
bool streaming;
bool messages;
bool two_phase;
+ bool sequences;
} PGOutputData;
#endif /* PGOUTPUT_H */
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 1420288d67b..3f50e100f8d 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1429,6 +1429,14 @@ pg_prepared_xacts| SELECT p.transaction,
FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+ n.nspname AS schemaname,
+ c.relname AS sequencename
+ FROM pg_publication p,
+ LATERAL pg_get_publication_sequences((p.pubname)::text) gpt(relid),
+ (pg_class c
+ JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+ WHERE (c.oid = gpt.relid);
pg_publication_tables| SELECT p.pubname,
n.nspname AS schemaname,
c.relname AS tablename
diff --git a/src/test/subscription/t/028_sequences.pl b/src/test/subscription/t/028_sequences.pl
new file mode 100644
index 00000000000..06d05640589
--- /dev/null
+++ b/src/test/subscription/t/028_sequences.pl
@@ -0,0 +1,201 @@
+
+# Copyright (c) 2021, PostgreSQL Global Development Group
+
+# This tests that sequences are replicated correctly by logical replication
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More tests => 6;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->start;
+
+# Create subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Create some preexisting content on publisher
+my $ddl = qq(
+ CREATE SEQUENCE s;
+);
+
+# Setup structure on the publisher
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Create the same structure on the subscriber, and an extra sequence that
+# we'll create on the publisher later
+$ddl = qq(
+ CREATE SEQUENCE s;
+ CREATE SEQUENCE s2;
+);
+
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Setup logical replication
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+ "CREATE PUBLICATION seq_pub");
+
+$node_publisher->safe_psql('postgres',
+ "ALTER PUBLICATION seq_pub ADD SEQUENCE s");
+
+$node_subscriber->safe_psql('postgres',
+ "CREATE SUBSCRIPTION seq_sub CONNECTION '$publisher_connstr' PUBLICATION seq_pub WITH (slot_name = seq_sub_slot)"
+);
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Wait for initial sync to finish as well
+my $synced_query =
+ "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('s', 'r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+ or die "Timed out while waiting for subscriber to synchronize data";
+
+# Insert initial test data
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ -- generate a number of values using the sequence
+ SELECT nextval('s') FROM generate_series(1,100);
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+my $result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s;
+));
+
+is( $result, '132|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+# advance the sequence in a rolled-back transaction - the rollback
+# does not wait for the replication, so we could see any intermediate state;
+# do something else after the test to ensure we wait for everything
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ SELECT nextval('s') FROM generate_series(1,100);
+ ROLLBACK;
+ INSERT INTO seq_test VALUES (-1);
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s;
+));
+
+is( $result, '231|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+# create a new sequence and roll it back - should not be replicated, due to
+# the transactional behavior
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ CREATE SEQUENCE s2;
+ ALTER PUBLICATION seq_pub ADD SEQUENCE s2;
+ SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '1|0|f',
+ 'check replicated sequence values on subscriber');
+
+
+# create a new sequence, advance it in a rolled-back transaction, but commit
+# the create - the advance should be replicated nevertheless
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ CREATE SEQUENCE s2;
+ ALTER PUBLICATION seq_pub ADD SEQUENCE s2;
+ SAVEPOINT sp1;
+ SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK TO sp1;
+ COMMIT;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Wait for sync of the second sequence we just added to finish
+$synced_query =
+ "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('s', 'r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+ or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '132|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+# advance the new sequence in a transaction, and roll it back - the rollback
+# does not wait for the replication, so we could see any intermediate state;
+# do something else after the test to ensure we wait for everything
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK;
+ INSERT INTO seq_test VALUES (-1);
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '231|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+# advance the sequence in a subtransaction - the subtransaction gets rolled
+# back, but commit the main one - the changes should still be replicated
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ SAVEPOINT s1;
+ SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK TO s1;
+ COMMIT;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '330|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+$node_subscriber->stop('fast');
+$node_publisher->stop('fast');
--
2.34.1
Attachment: 0002-tweak-test-20220213.patch
From ef5b0959fe97ece5d657ce0aa3a75f85fc5daddf Mon Sep 17 00:00:00 2001
From: Tomas Vondra <tomas@2ndquadrant.com>
Date: Sat, 12 Feb 2022 01:24:47 +0100
Subject: [PATCH 2/2] tweak test
---
src/test/subscription/t/028_sequences.pl | 14 ++++++++------
1 file changed, 8 insertions(+), 6 deletions(-)
diff --git a/src/test/subscription/t/028_sequences.pl b/src/test/subscription/t/028_sequences.pl
index 06d05640589..38434409463 100644
--- a/src/test/subscription/t/028_sequences.pl
+++ b/src/test/subscription/t/028_sequences.pl
@@ -20,6 +20,7 @@ $node_subscriber->start;
# Create some preexisting content on publisher
my $ddl = qq(
+ CREATE TABLE seq_test (v BIGINT);
CREATE SEQUENCE s;
);
@@ -29,6 +30,7 @@ $node_publisher->safe_psql('postgres', $ddl);
# Create the same structure on the subscriber, and an extra sequence that
# we'll create on the publisher later
$ddl = qq(
+ CREATE TABLE seq_test (v BIGINT);
CREATE SEQUENCE s;
CREATE SEQUENCE s2;
);
@@ -59,7 +61,7 @@ $node_subscriber->poll_query_until('postgres', $synced_query)
$node_publisher->safe_psql(
'postgres', qq(
-- generate a number of values using the sequence
- SELECT nextval('s') FROM generate_series(1,100);
+ INSERT INTO seq_test SELECT nextval('s') FROM generate_series(1,100);
));
$node_publisher->wait_for_catchup('seq_sub');
@@ -80,7 +82,7 @@ is( $result, '132|0|t',
$node_publisher->safe_psql(
'postgres', qq(
BEGIN;
- SELECT nextval('s') FROM generate_series(1,100);
+ INSERT INTO seq_test SELECT nextval('s') FROM generate_series(1,100);
ROLLBACK;
INSERT INTO seq_test VALUES (-1);
));
@@ -104,7 +106,7 @@ $node_publisher->safe_psql(
BEGIN;
CREATE SEQUENCE s2;
ALTER PUBLICATION seq_pub ADD SEQUENCE s2;
- SELECT nextval('s2') FROM generate_series(1,100);
+ INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);
ROLLBACK;
));
@@ -128,7 +130,7 @@ $node_publisher->safe_psql(
CREATE SEQUENCE s2;
ALTER PUBLICATION seq_pub ADD SEQUENCE s2;
SAVEPOINT sp1;
- SELECT nextval('s2') FROM generate_series(1,100);
+ INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);
ROLLBACK TO sp1;
COMMIT;
));
@@ -157,7 +159,7 @@ is( $result, '132|0|t',
$node_publisher->safe_psql(
'postgres', qq(
BEGIN;
- SELECT nextval('s2') FROM generate_series(1,100);
+ INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);
ROLLBACK;
INSERT INTO seq_test VALUES (-1);
));
@@ -180,7 +182,7 @@ $node_publisher->safe_psql(
'postgres', qq(
BEGIN;
SAVEPOINT s1;
- SELECT nextval('s2') FROM generate_series(1,100);
+ INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);
ROLLBACK TO s1;
COMMIT;
));
--
2.34.1
On 13.02.22 14:10, Tomas Vondra wrote:
Here's the remaining part, rebased, with a small tweak in the TAP test
to eliminate the issue with not waiting for sequence increments. I've
kept the tweak in a separate patch, so that we can throw it away easily
if we happen to resolve the issue.
This basically looks fine to me. You have identified a few XXX and
FIXME spots that should be addressed.
Here are a few more comments:
* general
Handling of owned sequences, as previously discussed. This would
probably be a localized change somewhere in get_rel_sync_entry(), so it
doesn't affect the overall structure of the patch.
pg_dump support is missing.
Some psql \dxxx support should probably be there. Check where existing
publication-table relationships are displayed.
* src/backend/catalog/system_views.sql
+ LATERAL pg_get_publication_sequences(P.pubname) GPT,
The GPT presumably stood for "get publication tables", so should be changed.
(There might be a few more copy-and-paste occasions like this around. I
have not written down all of them.)
* src/backend/commands/publicationcmds.c
This adds a new publication option called "sequence". I would have
expected it to be named "sequences", but that's just my opinion.
But in any case, the option is not documented at all.
Some of the new functions added in this file are almost exact duplicates
of the analogous functions for tables. For example,
PublicationAddSequences()/PublicationDropSequences() are almost
exactly the same as PublicationAddTables()/PublicationDropTables().
This could be handled with less duplication by just adding an ObjectType
argument to the existing functions.
* src/backend/commands/sequence.c
Could use some refactoring of ResetSequence()/ResetSequence2(). Maybe
call the latter ResetSequenceData() and have the former call it internally.
* src/backend/commands/subscriptioncmds.c
Also lots of duplication between tables and sequences in this file.
* src/backend/replication/logical/tablesync.c
The comment says it needs sequence support, but there appears to be
support for initial sequence syncing. What exactly is missing here?
* src/test/subscription/t/028_sequences.pl
Change to use done_testing() (see 549ec201d6132b7c7ee11ee90a4e02119259ba5b).
On 2/15/22 10:00, Peter Eisentraut wrote:
On 13.02.22 14:10, Tomas Vondra wrote:
Here's the remaining part, rebased, with a small tweak in the TAP test
to eliminate the issue with not waiting for sequence increments. I've
kept the tweak in a separate patch, so that we can throw it away easily
if we happen to resolve the issue.
This basically looks fine to me. You have identified a few XXX and
FIXME spots that should be addressed.
Here are a few more comments:
* general
Handling of owned sequences, as previously discussed. This would
probably be a localized change somewhere in get_rel_sync_entry(), so it
doesn't affect the overall structure of the patch.
So you're suggesting not to track owned sequences in pg_publication_rel
explicitly, and handle them dynamically in the output plugin? So when
calling get_rel_sync_entry on the sequence, we'd check if it's owned by
a table that is replicated.
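For instance, I'd imagine roughly something like this in
get_rel_sync_entry() - just a sketch, not part of the patch;
sequenceIsOwned() and GetRelationPublications() exist already, the
owned_by_published_table local is made up:

    if (is_sequence)
    {
        Oid     owning_table;
        int32   owning_col;

        /* serial sequences use DEPENDENCY_AUTO, identity ones DEPENDENCY_INTERNAL */
        if (sequenceIsOwned(relid, DEPENDENCY_AUTO, &owning_table, &owning_col) ||
            sequenceIsOwned(relid, DEPENDENCY_INTERNAL, &owning_table, &owning_col))
            owned_by_published_table =
                list_member_oid(GetRelationPublications(owning_table), pub->oid);
    }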
We'd want a way to enable/disable this for each publication, but that
makes the lookups more complicated. Also, we'd probably need the same
logic elsewhere (e.g. in psql, when listing sequences in a publication).
I'm not sure we want this complexity, maybe we should simply deal with
this in the ALTER PUBLICATION and track all sequences (owned or not) in
pg_publication_rel.
pg_dump support is missing.
Will fix.
Some psql \dxxx support should probably be there. Check where existing
publication-table relationships are displayed.
Yeah, I noticed this while adding regression tests. Currently, \dRp+
shows something like this:
Publication testpub_fortbl
Owner | All tables | Inserts | Updates ...
--------------------------+------------+---------+--------- ...
regress_publication_user | f | t | t ...
Tables:
"pub_test.testpub_nopk"
"public.testpub_tbl1"
or
Publication testpub5_forschema
Owner | All tables | Inserts | Updates | ...
--------------------------+------------+---------+---------+- ...
regress_publication_user | f | t | t | ...
Tables from schemas:
"CURRENT_SCHEMA"
"public"
I wonder if we should copy the same block for sequences, so
Publication testpub_fortbl
Owner | All tables | Inserts | Updates ...
--------------------------+------------+---------+--------- ...
regress_publication_user | f | t | t ...
Tables:
"pub_test.testpub_nopk"
"public.testpub_tbl1"
Sequences:
"pub_test.sequence_s1"
"public.sequence_s2"
And same for "tables from schemas" etc.
* src/backend/catalog/system_views.sql
+ LATERAL pg_get_publication_sequences(P.pubname) GPT,
The GPT presumably stood for "get publication tables", so should be
changed.
(There might be a few more copy-and-paste occasions like this around. I
have not written down all of them.)
Will fix.
* src/backend/commands/publicationcmds.c
This adds a new publication option called "sequence". I would have
expected it to be named "sequences", but that's just my opinion.But in any case, the option is not documented at all.
Some of the new functions added in this file are almost exact duplicates
of the analogous functions for tables. For example,
PublicationAddSequences()/PublicationDropSequences() are almost
exactly the same as PublicationAddTables()/PublicationDropTables(). This
could be handled with less duplication by just adding an ObjectType
argument to the existing functions.
Yes, I noticed that too, and I'll review this later, after making sure
the behavior is correct.
* src/backend/commands/sequence.c
Could use some refactoring of ResetSequence()/ResetSequence2(). Maybe
call the latter ResetSequenceData() and have the former call it internally.
Will check.
* src/backend/commands/subscriptioncmds.c
Also lots of duplication between tables and sequences in this file.
Same as the case above.
* src/backend/replication/logical/tablesync.c
The comment says it needs sequence support, but there appears to be
support for initial sequence syncing. What exactly is missing here?
I think that's just obsolete comment.
* src/test/subscription/t/028_sequences.pl
Change to use done_testing() (see
549ec201d6132b7c7ee11ee90a4e02119259ba5b).
Will fix.
Do we need to handle both FOR ALL TABLES and FOR ALL SEQUENCES at the
same time? That seems like a reasonable thing people might want.
The patch probably also needs to modify pg_publication_namespace to
track whether the schema is FOR TABLES IN SCHEMA or FOR SEQUENCES.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Sat, Feb 19, 2022 at 7:48 AM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:
Do we need to handle both FOR ALL TABLES and FOR ALL SEQUENCES at the
same time? That seems like a reasonable thing people might want.
+1. That seems reasonable to me as well. I think the same will apply
to FOR ALL TABLES IN SCHEMA and FOR ALL SEQUENCES IN SCHEMA.
--
With Regards,
Amit Kapila.
Hi,
Here's an updated version of the patch, fixing most of the issues from
reviews so far. There are still a couple of FIXME comments, but I think those
are minor and/or straightforward to deal with.
The main improvements:
1) Elimination of a lot of redundant code - one function handling
tables, and an almost exact copy handling sequences. Now a single
function handles both, possibly with a "sequences" flag to tweak the behavior.
2) A couple of other functions gained a "sequences" flag, determining which
objects are "interesting". For example PublicationAddSchemas needs to
know whether it's FOR ALL SEQUENCES or FOR ALL TABLES IN SCHEMA. I don't
think we can just use relkind here easily, because for tables we need to
handle various types of tables (regular, partitioned, ...).
3) I also renamed a couple functions with "tables" in the name, which
are now used for sequences too. So for example OpenTablesList() is
renamed to OpenRelationList() and so on.
4) Addition of a number of regression tests to "publication.sql", which
showed a lot of issues, mostly related to not distinguishing tables and
sequences when handling "FOR ALL TABLES [IN SCHEMA]" and "FOR ALL
SEQUENCES [IN SCHEMA]".
5) Proper tracking of FOR ALL [TABLES|SEQUENCES] IN SCHEMA in a catalog.
The pg_publication_namespace gained a pnsequences flag, which determines
which case it is. So for example if you do
ALTER PUBLICATION p ADD ALL TABLES IN SCHEMA s;
ALTER PUBLICATION p ADD ALL SEQUENCES IN SCHEMA s;
there will be two rows in the catalog, one with 't' and one with 'f' in
the new column. I'm not sure this is the best way to track this - maybe
it'd be better to have two flags, and keep a single row. Or maybe we
should have an array of relkinds (but that has the issue with tables
having multiple relkinds mentioned before). Multiple rows make it more
convenient to add/remove publication schemas - with a single row it'd be
necessary to either insert a new row or update an existing one when
adding the schema, and similarly for dropping it.
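To illustrate, after the two ALTER PUBLICATION commands above the
catalog ends up with something like this (OIDs obviously made up):

    SELECT pnpubid, pnnspid::regnamespace AS schema, pnsequences
      FROM pg_publication_namespace;

     pnpubid | schema | pnsequences
    ---------+--------+-------------
       16402 | s      | f
       16402 | s      | t
    (2 rows)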
But maybe there are reasons / precedent to design this differently ...
6) I'm not entirely sure the object_address changes (handling of the
pnsequences flag) are correct.
7) This updates psql to do roughly the same thing as for tables, so \dRp
now list sequences added either directly or through schema, so you might
get footer like this:
\dRp+ testpub_mix
...
Tables:
"public.testpub_tbl1"
Tables from schemas:
"pub_test"
Sequences:
"public.testpub_seq1"
Sequences from schemas:
"pub_test"
Maybe it's a bit too verbose, though. It also adds "All sequences" and
"Sequences" columns into the publication description, but I don't think
that can be done much differently.
FWIW I had to switch the describeOneTableDetails() chunk dealing with
sequences from printQuery() to printTable() in order to handle dynamic
footers.
There's also a change in \dn+ because a schema may be included in one
publication as "FOR ALL SEQUENCES IN SCHEMA" and in another publication
with "FOR ALL TABLES IN SCHEMA". So I modified the output to
\dn+ pub_test1
...
Publications:
"testpub_schemas" (sequences)
"testpub_schemas" (tables)
But maybe it'd be better to aggregate this into a single line like
\dn+ pub_test1
...
Publications:
"testpub_schemas" (tables, sequences)
Opinions?
8) There's a shortcoming in the current grammar, because you can specify
either
CREATE PUBLICATION p FOR ALL TABLES;
or
CREATE PUBLICATION p FOR ALL SEQUENCES;
but it's not possible to do
CREATE PUBLICATION p FOR ALL TABLES AND FOR ALL SEQUENCES;
which seems like a fairly reasonable thing users might want to do.
The problem is that "FOR ALL TABLES" (and same for sequences) is
hard-coded in the grammar, not defined as PublicationObjSpec. This also
means you can't do
ALTER PUBLICATION p ADD ALL TABLES;
AFAICS there are two ways to fix this - adding the combinations into the
definition of CreatePublicationStmt, or adding FOR ALL TABLES (and
sequences) to PublicationObjSpec.
9) Another grammar limitation is that we don't cross-check the relkind,
so for example
ALTER PUBLICATION p ADD TABLE sequence;
might actually work. Should be easy to fix, though.
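Presumably just an extra relkind check when opening the relation,
something like this (hypothetical - the expect_sequence flag would need
to be passed down from the grammar):

    if (expect_sequence &&
        RelationGetForm(targetrel)->relkind != RELKIND_SEQUENCE)
        ereport(ERROR,
                (errcode(ERRCODE_WRONG_OBJECT_TYPE),
                 errmsg("\"%s\" is not a sequence",
                        RelationGetRelationName(targetrel))));

and the same check in reverse for ADD TABLE etc.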
10) Added pg_dump support (including tests). I'll add more tests to
check more grammar combinations.
11) I need to test more grammar combinations in the TAP test too, to
verify the output plugin interprets the stuff correctly.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Attachments:
0001-Add-support-for-decoding-sequences-to-built-20220222.patch
From 66e38c7ef81b3a25db4f93d8d87f0e78b4d1bab7 Mon Sep 17 00:00:00 2001
From: Tomas Vondra <tomas.vondra@postgresql.org>
Date: Thu, 10 Feb 2022 15:18:59 +0100
Subject: [PATCH] Add support for decoding sequences to built-in replication
---
doc/src/sgml/catalogs.sgml | 81 ++
doc/src/sgml/ref/alter_publication.sgml | 24 +-
doc/src/sgml/ref/alter_subscription.sgml | 3 +-
doc/src/sgml/ref/create_publication.sgml | 3 +
src/backend/catalog/objectaddress.c | 5 +-
src/backend/catalog/pg_publication.c | 236 ++++-
src/backend/catalog/system_views.sql | 10 +
src/backend/commands/publicationcmds.c | 380 +++++--
src/backend/commands/sequence.c | 79 ++
src/backend/commands/subscriptioncmds.c | 272 +++++
src/backend/executor/execReplication.c | 2 +-
src/backend/nodes/copyfuncs.c | 1 +
src/backend/nodes/equalfuncs.c | 1 +
src/backend/parser/gram.y | 40 +
src/backend/replication/logical/proto.c | 52 +
src/backend/replication/logical/tablesync.c | 118 ++-
src/backend/replication/logical/worker.c | 56 +
src/backend/replication/pgoutput/pgoutput.c | 82 +-
src/backend/utils/cache/relcache.c | 13 +-
src/backend/utils/cache/syscache.c | 6 +-
src/bin/pg_dump/pg_dump.c | 65 +-
src/bin/pg_dump/pg_dump.h | 3 +
src/bin/pg_dump/t/002_pg_dump.pl | 36 +-
src/bin/psql/describe.c | 292 +++--
src/bin/psql/tab-complete.c | 12 +-
src/include/catalog/pg_proc.dat | 5 +
src/include/catalog/pg_publication.h | 25 +-
.../catalog/pg_publication_namespace.h | 3 +-
src/include/commands/sequence.h | 1 +
src/include/nodes/parsenodes.h | 6 +
src/include/replication/logicalproto.h | 19 +
src/include/replication/pgoutput.h | 1 +
src/interfaces/libpq/fe-exec.c | 9 +
src/test/regress/expected/publication.out | 999 ++++++++++++++----
src/test/regress/expected/rules.out | 8 +
src/test/regress/sql/object_address.sql | 1 -
src/test/regress/sql/publication.sql | 220 +++-
src/test/subscription/t/028_sequences.pl | 203 ++++
38 files changed, 2984 insertions(+), 388 deletions(-)
create mode 100644 src/test/subscription/t/028_sequences.pl
diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 83987a99045..324fc1050d3 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -6263,6 +6263,16 @@ SCRAM-SHA-256$<replaceable><iteration count></replaceable>:<replaceable>&l
Reference to schema
</para></entry>
</row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>pnsequences</structfield> <type>bool</type>
+ </para>
+ <para>
+ Determines whether the schema is included for sequences
+ (<literal>true</literal>) or for tables (<literal>false</literal>).
+ </para></entry>
+ </row>
</tbody>
</tgroup>
</table>
@@ -9560,6 +9570,11 @@ SCRAM-SHA-256$<replaceable><iteration count></replaceable>:<replaceable>&l
<entry>prepared transactions</entry>
</row>
+ <row>
+ <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+ <entry>publications and their associated sequences</entry>
+ </row>
+
<row>
<entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
<entry>publications and their associated tables</entry>
@@ -11395,6 +11410,72 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
</sect1>
+ <sect1 id="view-pg-publication-sequences">
+ <title><structname>pg_publication_sequences</structname></title>
+
+ <indexterm zone="view-pg-publication-sequences">
+ <primary>pg_publication_sequences</primary>
+ </indexterm>
+
+ <para>
+ The view <structname>pg_publication_sequences</structname> provides
+ information about the mapping between publications and the sequences they
+ contain. Unlike the underlying catalog
+ <link linkend="catalog-pg-publication-rel"><structname>pg_publication_rel</structname></link>,
+ this view expands
+ publications defined as <literal>FOR ALL SEQUENCES</literal>, so for such
+ publications there will be a row for each eligible sequence.
+ </para>
+
+ <table>
+ <title><structname>pg_publication_sequences</structname> Columns</title>
+ <tgroup cols="1">
+ <thead>
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ Column Type
+ </para>
+ <para>
+ Description
+ </para></entry>
+ </row>
+ </thead>
+
+ <tbody>
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>pubname</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+ </para>
+ <para>
+ Name of publication
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>schemaname</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+ </para>
+ <para>
+ Name of schema containing sequence
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>sequencename</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+ </para>
+ <para>
+ Name of sequence
+ </para></entry>
+ </row>
+ </tbody>
+ </tgroup>
+ </table>
+ </sect1>
+
<sect1 id="view-pg-publication-tables">
<title><structname>pg_publication_tables</structname></title>
diff --git a/doc/src/sgml/ref/alter_publication.sgml b/doc/src/sgml/ref/alter_publication.sgml
index 32b75f6c78e..36c9a5f438a 100644
--- a/doc/src/sgml/ref/alter_publication.sgml
+++ b/doc/src/sgml/ref/alter_publication.sgml
@@ -31,7 +31,9 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
<phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
+ SEQUENCE <replaceable class="parameter">sequence_name</replaceable> [ * ] [, ... ]
ALL TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+ ALL SEQUENCES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
</synopsis>
</refsynopsisdiv>
@@ -58,7 +60,18 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
</para>
<para>
- The fourth variant of this command listed in the synopsis can change
+ The next three variants change which sequences are part of the publication.
+ The <literal>SET SEQUENCE</literal> clause will replace the list of sequences
+ in the publication with the specified one. The <literal>ADD SEQUENCE</literal>
+ and <literal>DROP SEQUENCE</literal> clauses will add and remove one or more
+ sequences from the publication. Note that adding sequences to a publication
+ that is already subscribed to will require a <literal>ALTER SUBSCRIPTION
+ ... REFRESH PUBLICATION</literal> action on the subscribing side in order
+ to become effective.
+ </para>
+
+ <para>
+ The seventh variant of this command listed in the synopsis can change
all of the publication properties specified in
<xref linkend="sql-createpublication"/>. Properties not mentioned in the
command retain their previous settings.
@@ -122,6 +135,15 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><replaceable class="parameter">sequence_name</replaceable></term>
+ <listitem>
+ <para>
+ Name of an existing sequence.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><replaceable class="parameter">schema_name</replaceable></term>
<listitem>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 0d6f064f58d..264c9c5a10f 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -147,7 +147,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
<listitem>
<para>
Fetch missing table information from publisher. This will start
- replication of tables that were added to the subscribed-to publications
+ replication of tables and sequences that were added to the subscribed-to publications
since <command>CREATE SUBSCRIPTION</command> or
the last invocation of <command>REFRESH PUBLICATION</command>.
</para>
@@ -168,6 +168,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
<para>
Previously subscribed tables are not copied, even if a table's row
filter <literal>WHERE</literal> clause has since been modified.
+ Previously-subscribed sequences are not copied either.
</para>
</listitem>
</varlistentry>
diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 4979b9b646d..f72318e97dd 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -23,13 +23,16 @@ PostgreSQL documentation
<synopsis>
CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
[ FOR ALL TABLES
+ | FOR ALL SEQUENCES
| FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
[ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
<phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
+ SEQUENCE <replaceable class="parameter">sequence_name</replaceable> [ * ] [, ... ]
ALL TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+ ALL SEQUENCES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
</synopsis>
</refsynopsisdiv>
diff --git a/src/backend/catalog/objectaddress.c b/src/backend/catalog/objectaddress.c
index f30c742d48f..301eeb7caca 100644
--- a/src/backend/catalog/objectaddress.c
+++ b/src/backend/catalog/objectaddress.c
@@ -1976,10 +1976,11 @@ get_object_address_publication_schema(List *object, bool missing_ok)
/* Find the publication schema mapping in syscache */
address.objectId =
- GetSysCacheOid2(PUBLICATIONNAMESPACEMAP,
+ GetSysCacheOid3(PUBLICATIONNAMESPACEMAP,
Anum_pg_publication_namespace_oid,
ObjectIdGetDatum(schemaid),
- ObjectIdGetDatum(pub->oid));
+ ObjectIdGetDatum(pub->oid),
+ BoolGetDatum(false)); /* FIXME */
if (!OidIsValid(address.objectId) && !missing_ok)
ereport(ERROR,
(errcode(ERRCODE_UNDEFINED_OBJECT),
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 25998fbb39b..d866e8a9b22 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -54,7 +54,8 @@ check_publication_add_relation(Relation targetrel)
{
/* Must be a regular or partitioned table */
if (RelationGetForm(targetrel)->relkind != RELKIND_RELATION &&
- RelationGetForm(targetrel)->relkind != RELKIND_PARTITIONED_TABLE)
+ RelationGetForm(targetrel)->relkind != RELKIND_PARTITIONED_TABLE &&
+ RelationGetForm(targetrel)->relkind != RELKIND_SEQUENCE)
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
errmsg("cannot add relation \"%s\" to publication",
@@ -131,7 +132,8 @@ static bool
is_publishable_class(Oid relid, Form_pg_class reltuple)
{
return (reltuple->relkind == RELKIND_RELATION ||
- reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+ reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+ reltuple->relkind == RELKIND_SEQUENCE) &&
!IsCatalogRelationOid(relid) &&
reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
relid >= FirstNormalObjectId;
@@ -205,7 +207,7 @@ is_schema_publication(Oid pubid)
ObjectIdGetDatum(pubid));
scan = systable_beginscan(pubschsrel,
- PublicationNamespacePnnspidPnpubidIndexId,
+ PublicationNamespacePnnspidPnpubidSeqIndexId,
true, NULL, 1, &scankey);
tup = systable_getnext(scan);
result = HeapTupleIsValid(tup);
@@ -419,7 +421,7 @@ publication_add_relation(Oid pubid, PublicationRelInfo *pri,
* Insert new publication / schema mapping.
*/
ObjectAddress
-publication_add_schema(Oid pubid, Oid schemaid, bool if_not_exists)
+publication_add_schema(Oid pubid, Oid schemaid, bool sequences, bool if_not_exists)
{
Relation rel;
HeapTuple tup;
@@ -438,9 +440,10 @@ publication_add_schema(Oid pubid, Oid schemaid, bool if_not_exists)
* duplicates, it's here just to provide nicer error message in common
* case. The real protection is the unique key on the catalog.
*/
- if (SearchSysCacheExists2(PUBLICATIONNAMESPACEMAP,
+ if (SearchSysCacheExists3(PUBLICATIONNAMESPACEMAP,
ObjectIdGetDatum(schemaid),
- ObjectIdGetDatum(pubid)))
+ ObjectIdGetDatum(pubid),
+ BoolGetDatum(sequences)))
{
table_close(rel, RowExclusiveLock);
@@ -466,6 +469,8 @@ publication_add_schema(Oid pubid, Oid schemaid, bool if_not_exists)
ObjectIdGetDatum(pubid);
values[Anum_pg_publication_namespace_pnnspid - 1] =
ObjectIdGetDatum(schemaid);
+ values[Anum_pg_publication_namespace_pnsequences - 1] =
+ BoolGetDatum(sequences);
tup = heap_form_tuple(RelationGetDescr(rel), values, nulls);
@@ -492,6 +497,7 @@ publication_add_schema(Oid pubid, Oid schemaid, bool if_not_exists)
* partitions.
*/
schemaRels = GetSchemaPublicationRelations(schemaid,
+ sequences,
PUBLICATION_PART_ALL);
InvalidatePublicationRels(schemaRels);
@@ -525,11 +531,14 @@ GetRelationPublications(Oid relid)
/*
* Gets list of relation oids for a publication.
*
- * This should only be used FOR TABLE publications, the FOR ALL TABLES
- * should use GetAllTablesPublicationRelations().
+ * This should only be used FOR TABLE / FOR SEQUENCE publications, the FOR
+ * ALL TABLES / SEQUENCES should use GetAllTablesPublicationRelations()
+ * and GetAllSequencesPublicationRelations().
+ *
+ * XXX It's a bit weird the pub_partopt does not really matter for sequences.
*/
List *
-GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
+GetPublicationRelations(Oid pubid, bool sequences, PublicationPartOpt pub_partopt)
{
List *result;
Relation pubrelsrel;
@@ -551,11 +560,21 @@ GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
result = NIL;
while (HeapTupleIsValid(tup = systable_getnext(scan)))
{
+ char relkind;
Form_pg_publication_rel pubrel;
pubrel = (Form_pg_publication_rel) GETSTRUCT(tup);
- result = GetPubPartitionOptionRelations(result, pub_partopt,
- pubrel->prrelid);
+ relkind = get_rel_relkind(pubrel->prrelid);
+
+ /*
+ * Handle tables and sequences, depending on the combination of relkind
+ * and sequences flag. If there's a mismatch, we ignore the relation.
+ */
+ if (sequences && (relkind == RELKIND_SEQUENCE))
+ result = lappend_oid(result, pubrel->prrelid);
+ else if (!sequences && (relkind != RELKIND_SEQUENCE))
+ result = GetPubPartitionOptionRelations(result, pub_partopt,
+ pubrel->prrelid);
}
systable_endscan(scan);
@@ -605,6 +624,43 @@ GetAllTablesPublications(void)
return result;
}
+/*
+ * Gets list of publication oids for publications marked as FOR ALL SEQUENCES.
+ */
+List *
+GetAllSequencesPublications(void)
+{
+ List *result;
+ Relation rel;
+ ScanKeyData scankey;
+ SysScanDesc scan;
+ HeapTuple tup;
+
+ /* Find all publications that are marked as for all sequences. */
+ rel = table_open(PublicationRelationId, AccessShareLock);
+
+ ScanKeyInit(&scankey,
+ Anum_pg_publication_puballsequences,
+ BTEqualStrategyNumber, F_BOOLEQ,
+ BoolGetDatum(true));
+
+ scan = systable_beginscan(rel, InvalidOid, false,
+ NULL, 1, &scankey);
+
+ result = NIL;
+ while (HeapTupleIsValid(tup = systable_getnext(scan)))
+ {
+ Oid oid = ((Form_pg_publication) GETSTRUCT(tup))->oid;
+
+ result = lappend_oid(result, oid);
+ }
+
+ systable_endscan(scan);
+ table_close(rel, AccessShareLock);
+
+ return result;
+}
+
/*
* Gets list of all relation published by FOR ALL TABLES publication(s).
*
@@ -671,28 +727,36 @@ GetAllTablesPublicationRelations(bool pubviaroot)
/*
* Gets the list of schema oids for a publication.
*
- * This should only be used FOR ALL TABLES IN SCHEMA publications.
+ * This should only be used FOR ALL TABLES IN SCHEMA and FOR ALL SEQUENCES
+ * IN SCHEMA publications.
+ *
+ * 'sequences' determines whether to return the FOR ALL TABLES IN SCHEMA
+ * or the FOR ALL SEQUENCES IN SCHEMA entries.
*/
List *
-GetPublicationSchemas(Oid pubid)
+GetPublicationSchemas(Oid pubid, bool sequences)
{
List *result = NIL;
Relation pubschsrel;
- ScanKeyData scankey;
+ ScanKeyData scankey[2];
SysScanDesc scan;
HeapTuple tup;
/* Find all schemas associated with the publication */
pubschsrel = table_open(PublicationNamespaceRelationId, AccessShareLock);
- ScanKeyInit(&scankey,
+ ScanKeyInit(&scankey[0],
Anum_pg_publication_namespace_pnpubid,
BTEqualStrategyNumber, F_OIDEQ,
ObjectIdGetDatum(pubid));
+ ScanKeyInit(&scankey[1],
+ Anum_pg_publication_namespace_pnsequences,
+ BTEqualStrategyNumber, F_BOOLEQ,
+ BoolGetDatum(sequences));
+
scan = systable_beginscan(pubschsrel,
- PublicationNamespacePnnspidPnpubidIndexId,
- true, NULL, 1, &scankey);
+ PublicationNamespacePnnspidPnpubidSeqIndexId,
+ true, NULL, 2, scankey);
while (HeapTupleIsValid(tup = systable_getnext(scan)))
{
Form_pg_publication_namespace pubsch;
@@ -710,6 +774,9 @@ GetPublicationSchemas(Oid pubid)
/*
* Gets the list of publication oids associated with a specified schema.
+ *
+ * XXX probably should handle pnsequences too, either by taking a
+ * parameter or by returning the flag.
*/
List *
GetSchemaPublications(Oid schemaid)
@@ -738,7 +805,8 @@ GetSchemaPublications(Oid schemaid)
* Get the list of publishable relation oids for a specified schema.
*/
List *
-GetSchemaPublicationRelations(Oid schemaid, PublicationPartOpt pub_partopt)
+GetSchemaPublicationRelations(Oid schemaid, bool sequences,
+ PublicationPartOpt pub_partopt)
{
Relation classRel;
ScanKeyData key[1];
@@ -763,11 +831,19 @@ GetSchemaPublicationRelations(Oid schemaid, PublicationPartOpt pub_partopt)
Oid relid = relForm->oid;
char relkind;
+ /* FIXME need to differentiate tables and sequences */
if (!is_publishable_class(relid, relForm))
continue;
relkind = get_rel_relkind(relid);
- if (relkind == RELKIND_RELATION)
+
+ /* Filter by relkind depending on FOR ALL TABLES / SEQUENCES */
+ if ((sequences && relkind != RELKIND_SEQUENCE) ||
+ (!sequences && relkind == RELKIND_SEQUENCE))
+ continue;
+
+ if ((relkind == RELKIND_RELATION) ||
+ (relkind == RELKIND_SEQUENCE))
result = lappend_oid(result, relid);
else if (relkind == RELKIND_PARTITIONED_TABLE)
{
@@ -792,13 +868,14 @@ GetSchemaPublicationRelations(Oid schemaid, PublicationPartOpt pub_partopt)
/*
* Gets the list of all relations published by FOR ALL TABLES IN SCHEMA
- * publication.
+ * or FOR ALL SEQUENCES IN SCHEMA publication.
*/
List *
-GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
+GetAllSchemaPublicationRelations(Oid pubid, bool sequences,
+ PublicationPartOpt pub_partopt)
{
List *result = NIL;
- List *pubschemalist = GetPublicationSchemas(pubid);
+ List *pubschemalist = GetPublicationSchemas(pubid, sequences);
ListCell *cell;
foreach(cell, pubschemalist)
@@ -806,13 +883,54 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
Oid schemaid = lfirst_oid(cell);
List *schemaRels = NIL;
- schemaRels = GetSchemaPublicationRelations(schemaid, pub_partopt);
+ schemaRels = GetSchemaPublicationRelations(schemaid, sequences,
+ pub_partopt);
result = list_concat(result, schemaRels);
}
return result;
}
+/*
+ * Gets the list of all sequences published by FOR ALL SEQUENCES
+ * publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+ Relation classRel;
+ ScanKeyData key[1];
+ TableScanDesc scan;
+ HeapTuple tuple;
+ List *result = NIL;
+
+ classRel = table_open(RelationRelationId, AccessShareLock);
+
+ ScanKeyInit(&key[0],
+ Anum_pg_class_relkind,
+ BTEqualStrategyNumber, F_CHAREQ,
+ CharGetDatum(RELKIND_SEQUENCE));
+
+ scan = table_beginscan_catalog(classRel, 1, key);
+
+ while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+ {
+ Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+ Oid relid = relForm->oid;
+
+ if (is_publishable_class(relid, relForm))
+ result = lappend_oid(result, relid);
+ }
+
+ table_endscan(scan);
+
+ table_close(classRel, AccessShareLock);
+ return result;
+}
+
/*
* Get publication using oid
*
@@ -835,10 +953,12 @@ GetPublication(Oid pubid)
pub->oid = pubid;
pub->name = pstrdup(NameStr(pubform->pubname));
pub->alltables = pubform->puballtables;
+ pub->allsequences = pubform->puballsequences;
pub->pubactions.pubinsert = pubform->pubinsert;
pub->pubactions.pubupdate = pubform->pubupdate;
pub->pubactions.pubdelete = pubform->pubdelete;
pub->pubactions.pubtruncate = pubform->pubtruncate;
+ pub->pubactions.pubsequence = pubform->pubsequence;
pub->pubviaroot = pubform->pubviaroot;
ReleaseSysCache(tup);
@@ -949,10 +1069,12 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
*schemarelids;
relids = GetPublicationRelations(publication->oid,
+ false, /* tables only */
publication->pubviaroot ?
PUBLICATION_PART_ROOT :
PUBLICATION_PART_LEAF);
schemarelids = GetAllSchemaPublicationRelations(publication->oid,
+ false, /* tables only */
publication->pubviaroot ?
PUBLICATION_PART_ROOT :
PUBLICATION_PART_LEAF);
@@ -988,3 +1110,71 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
SRF_RETURN_DONE(funcctx);
}
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+ FuncCallContext *funcctx;
+ char *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+ Publication *publication;
+ List *sequences;
+
+ /* stuff done only on the first call of the function */
+ if (SRF_IS_FIRSTCALL())
+ {
+ MemoryContext oldcontext;
+
+ /* create a function context for cross-call persistence */
+ funcctx = SRF_FIRSTCALL_INIT();
+
+ /* switch to memory context appropriate for multiple function calls */
+ oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+ publication = GetPublicationByName(pubname, false);
+
+ /*
+ * For a FOR ALL SEQUENCES publication we return all sequences in the
+ * database, otherwise we collect the sequences added to the publication
+ * either directly or through ALL SEQUENCES IN SCHEMA.
+ */
+ if (publication->allsequences)
+ sequences = GetAllSequencesPublicationRelations();
+ else
+ {
+ List *relids,
+ *schemarelids;
+
+ relids = GetPublicationRelations(publication->oid,
+ true, /* sequences */
+ publication->pubviaroot ?
+ PUBLICATION_PART_ROOT :
+ PUBLICATION_PART_LEAF);
+ schemarelids = GetAllSchemaPublicationRelations(publication->oid,
+ true, /* sequences only */
+ publication->pubviaroot ?
+ PUBLICATION_PART_ROOT :
+ PUBLICATION_PART_LEAF);
+ sequences = list_concat_unique_oid(relids, schemarelids);
+ }
+
+ funcctx->user_fctx = (void *) sequences;
+
+ MemoryContextSwitchTo(oldcontext);
+ }
+
+ /* stuff done on every call of the function */
+ funcctx = SRF_PERCALL_SETUP();
+ sequences = (List *) funcctx->user_fctx;
+
+ if (funcctx->call_cntr < list_length(sequences))
+ {
+ Oid relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+ SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+ }
+
+ SRF_RETURN_DONE(funcctx);
+}
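
For illustration, assuming a publication seq_pub exists, the new SRF can be
called directly and returns the relid (OID) of each published sequence:

SELECT * FROM pg_get_publication_sequences('seq_pub');

The pg_publication_sequences view added below is just a friendlier wrapper
around this, resolving the OIDs to schema and sequence names.
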
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 3cb69b1f87b..072cfb2655a 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -374,6 +374,16 @@ CREATE VIEW pg_publication_tables AS
pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
WHERE C.oid = GPT.relid;
+CREATE VIEW pg_publication_sequences AS
+ SELECT
+ P.pubname AS pubname,
+ N.nspname AS schemaname,
+ C.relname AS sequencename
+ FROM pg_publication P,
+ LATERAL pg_get_publication_sequences(P.pubname) GPS,
+ pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+ WHERE C.oid = GPS.relid;
+
CREATE VIEW pg_locks AS
SELECT * FROM pg_lock_status() AS L;
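
The view is meant to be queried the same way as pg_publication_tables, e.g.
(publication name invented for the example):

SELECT * FROM pg_publication_sequences WHERE pubname = 'seq_pub';

which returns pubname, schemaname and sequencename for each published
sequence.
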
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 16b8661a1b7..db78dc86646 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -16,6 +16,7 @@
#include "access/genam.h"
#include "access/htup_details.h"
+#include "access/relation.h"
#include "access/table.h"
#include "access/xact.h"
#include "catalog/catalog.h"
@@ -67,15 +68,16 @@ typedef struct rf_context
} rf_context;
static List *OpenRelIdList(List *relids);
-static List *OpenTableList(List *tables);
-static void CloseTableList(List *rels);
+static List *OpenRelationList(List *rels);
+static void CloseRelationList(List *rels);
static void LockSchemaList(List *schemalist);
-static void PublicationAddTables(Oid pubid, List *rels, bool if_not_exists,
+static void PublicationAddRelations(Oid pubid, List *rels, bool if_not_exists,
AlterPublicationStmt *stmt);
-static void PublicationDropTables(Oid pubid, List *rels, bool missing_ok);
-static void PublicationAddSchemas(Oid pubid, List *schemas, bool if_not_exists,
- AlterPublicationStmt *stmt);
-static void PublicationDropSchemas(Oid pubid, List *schemas, bool missing_ok);
+static void PublicationDropRelations(Oid pubid, List *rels, bool missing_ok);
+static void PublicationAddSchemas(Oid pubid, List *schemas, bool sequences,
+ bool if_not_exists, AlterPublicationStmt *stmt);
+static void PublicationDropSchemas(Oid pubid, List *schemas, bool sequences, bool missing_ok);
+
static void
parse_publication_options(ParseState *pstate,
@@ -95,6 +97,7 @@ parse_publication_options(ParseState *pstate,
pubactions->pubupdate = true;
pubactions->pubdelete = true;
pubactions->pubtruncate = true;
+ pubactions->pubsequence = true;
*publish_via_partition_root = false;
/* Parse options */
@@ -119,6 +122,7 @@ parse_publication_options(ParseState *pstate,
pubactions->pubupdate = false;
pubactions->pubdelete = false;
pubactions->pubtruncate = false;
+ pubactions->pubsequence = false;
*publish_given = true;
publish = defGetString(defel);
@@ -141,6 +145,8 @@ parse_publication_options(ParseState *pstate,
pubactions->pubdelete = true;
else if (strcmp(publish_opt, "truncate") == 0)
pubactions->pubtruncate = true;
+ else if (strcmp(publish_opt, "sequence") == 0)
+ pubactions->pubsequence = true;
else
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
@@ -167,7 +173,9 @@ parse_publication_options(ParseState *pstate,
*/
static void
ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
- List **rels, List **schemas)
+ List **tables, List **sequences,
+ List **tables_schemas, List **sequences_schemas,
+ List **schemas)
{
ListCell *cell;
PublicationObjSpec *pubobj;
@@ -185,12 +193,23 @@ ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
switch (pubobj->pubobjtype)
{
case PUBLICATIONOBJ_TABLE:
- *rels = lappend(*rels, pubobj->pubtable);
+ *tables = lappend(*tables, pubobj->pubtable);
+ break;
+ case PUBLICATIONOBJ_SEQUENCE:
+ *sequences = lappend(*sequences, pubobj->pubtable);
break;
case PUBLICATIONOBJ_TABLES_IN_SCHEMA:
schemaid = get_namespace_oid(pubobj->name, false);
/* Filter out duplicates if user specifies "sch1, sch1" */
+ *tables_schemas = list_append_unique_oid(*tables_schemas, schemaid);
+ *schemas = list_append_unique_oid(*schemas, schemaid);
+ break;
+ case PUBLICATIONOBJ_SEQUENCES_IN_SCHEMA:
+ schemaid = get_namespace_oid(pubobj->name, false);
+
+ /* Filter out duplicates if user specifies "sch1, sch1" */
+ *sequences_schemas = list_append_unique_oid(*sequences_schemas, schemaid);
*schemas = list_append_unique_oid(*schemas, schemaid);
break;
case PUBLICATIONOBJ_TABLES_IN_CUR_SCHEMA:
@@ -204,6 +223,21 @@ ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
list_free(search_path);
/* Filter out duplicates if user specifies "sch1, sch1" */
+ *tables_schemas = list_append_unique_oid(*tables_schemas, schemaid);
+ *schemas = list_append_unique_oid(*schemas, schemaid);
+ break;
+ case PUBLICATIONOBJ_SEQUENCES_IN_CUR_SCHEMA:
+ search_path = fetch_search_path(false);
+ if (search_path == NIL) /* nothing valid in search_path? */
+ ereport(ERROR,
+ errcode(ERRCODE_UNDEFINED_SCHEMA),
+ errmsg("no schema has been selected for CURRENT_SCHEMA"));
+
+ schemaid = linitial_oid(search_path);
+ list_free(search_path);
+
+ /* Filter out duplicates if user specifies "sch1, sch1" */
+ *sequences_schemas = list_append_unique_oid(*sequences_schemas, schemaid);
*schemas = list_append_unique_oid(*schemas, schemaid);
break;
default:
@@ -240,6 +274,14 @@ CheckObjSchemaNotAlreadyInPublication(List *rels, List *schemaidlist,
errdetail("Table \"%s\" in schema \"%s\" is already part of the publication, adding the same schema is not supported.",
RelationGetRelationName(rel),
get_namespace_name(relSchemaId)));
+ else if (checkobjtype == PUBLICATIONOBJ_SEQUENCES_IN_SCHEMA)
+ ereport(ERROR,
+ errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ errmsg("cannot add schema \"%s\" to publication",
+ get_namespace_name(relSchemaId)),
+ errdetail("SEquence \"%s\" in schema \"%s\" is already part of the publication, adding the same schema is not supported.",
+ RelationGetRelationName(rel),
+ get_namespace_name(relSchemaId)));
else if (checkobjtype == PUBLICATIONOBJ_TABLE)
ereport(ERROR,
errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -248,6 +290,14 @@ CheckObjSchemaNotAlreadyInPublication(List *rels, List *schemaidlist,
RelationGetRelationName(rel)),
errdetail("Table's schema \"%s\" is already part of the publication or part of the specified schema list.",
get_namespace_name(relSchemaId)));
+ else if (checkobjtype == PUBLICATIONOBJ_SEQUENCE)
+ ereport(ERROR,
+ errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ errmsg("cannot add relation \"%s.%s\" to publication",
+ get_namespace_name(relSchemaId),
+ RelationGetRelationName(rel)),
+ errdetail("Sequence's schema \"%s\" is already part of the publication or part of the specified schema list.",
+ get_namespace_name(relSchemaId)));
}
}
}
@@ -625,7 +675,10 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
bool publish_via_partition_root_given;
bool publish_via_partition_root;
AclResult aclresult;
- List *relations = NIL;
+ List *tables = NIL;
+ List *sequences = NIL;
+ List *tables_schemaidlist = NIL;
+ List *sequences_schemaidlist = NIL;
List *schemaidlist = NIL;
/* must have CREATE privilege on database */
@@ -672,6 +725,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
values[Anum_pg_publication_puballtables - 1] =
BoolGetDatum(stmt->for_all_tables);
+ values[Anum_pg_publication_puballsequences - 1] =
+ BoolGetDatum(stmt->for_all_sequences);
values[Anum_pg_publication_pubinsert - 1] =
BoolGetDatum(pubactions.pubinsert);
values[Anum_pg_publication_pubupdate - 1] =
@@ -680,6 +735,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
BoolGetDatum(pubactions.pubdelete);
values[Anum_pg_publication_pubtruncate - 1] =
BoolGetDatum(pubactions.pubtruncate);
+ values[Anum_pg_publication_pubsequence - 1] =
+ BoolGetDatum(pubactions.pubsequence);
values[Anum_pg_publication_pubviaroot - 1] =
BoolGetDatum(publish_via_partition_root);
@@ -704,38 +761,62 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
}
else
{
- ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
+ ObjectsInPublicationToOids(stmt->pubobjects, pstate,
+ &tables, &sequences,
+ &tables_schemaidlist,
+ &sequences_schemaidlist,
&schemaidlist);
/* FOR ALL TABLES IN SCHEMA requires superuser */
- if (list_length(schemaidlist) > 0 && !superuser())
+ if (list_length(tables_schemaidlist) > 0 && !superuser())
ereport(ERROR,
errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
errmsg("must be superuser to create FOR ALL TABLES IN SCHEMA publication"));
- if (list_length(relations) > 0)
+ if (list_length(tables) > 0)
{
List *rels;
- rels = OpenTableList(relations);
- CheckObjSchemaNotAlreadyInPublication(rels, schemaidlist,
+ rels = OpenRelationList(tables);
+ CheckObjSchemaNotAlreadyInPublication(rels, tables_schemaidlist,
PUBLICATIONOBJ_TABLE);
TransformPubWhereClauses(rels, pstate->p_sourcetext,
publish_via_partition_root);
- PublicationAddTables(puboid, rels, true, NULL);
- CloseTableList(rels);
+ PublicationAddRelations(puboid, rels, true, NULL);
+ CloseRelationList(rels);
}
- if (list_length(schemaidlist) > 0)
+ if (list_length(sequences) > 0)
+ {
+ List *rels;
+
+ rels = OpenRelationList(sequences);
+ CheckObjSchemaNotAlreadyInPublication(rels, sequences_schemaidlist,
+ PUBLICATIONOBJ_SEQUENCE);
+ PublicationAddRelations(puboid, rels, true, NULL);
+ CloseRelationList(rels);
+ }
+
+ if (list_length(tables_schemaidlist) > 0)
{
/*
* Schema lock is held until the publication is created to prevent
* concurrent schema deletion.
*/
- LockSchemaList(schemaidlist);
- PublicationAddSchemas(puboid, schemaidlist, true, NULL);
+ LockSchemaList(tables_schemaidlist);
+ PublicationAddSchemas(puboid, tables_schemaidlist, false, true, NULL);
+ }
+
+ if (list_length(sequences_schemaidlist) > 0)
+ {
+ /*
+ * Schema lock is held until the publication is created to prevent
+ * concurrent schema deletion.
+ */
+ LockSchemaList(sequences_schemaidlist);
+ PublicationAddSchemas(puboid, sequences_schemaidlist, true, true, NULL);
}
}
@@ -797,7 +878,7 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
LockDatabaseObject(PublicationRelationId, pubform->oid, 0,
AccessShareLock);
- root_relids = GetPublicationRelations(pubform->oid,
+ root_relids = GetPublicationRelations(pubform->oid, false,
PUBLICATION_PART_ROOT);
foreach(lc, root_relids)
@@ -856,6 +937,9 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
values[Anum_pg_publication_pubtruncate - 1] = BoolGetDatum(pubactions.pubtruncate);
replaces[Anum_pg_publication_pubtruncate - 1] = true;
+
+ values[Anum_pg_publication_pubsequence - 1] = BoolGetDatum(pubactions.pubsequence);
+ replaces[Anum_pg_publication_pubsequence - 1] = true;
}
if (publish_via_partition_root_given)
@@ -875,7 +959,7 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
pubform = (Form_pg_publication) GETSTRUCT(tup);
/* Invalidate the relcache. */
- if (pubform->puballtables)
+ if (pubform->puballtables) /* FIXME probably needs to handle puballsequences too? */
{
CacheInvalidateRelcacheAll();
}
@@ -890,7 +974,7 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
* trees, not just those explicitly mentioned in the publication.
*/
if (root_relids == NIL)
- relids = GetPublicationRelations(pubform->oid,
+ relids = GetPublicationRelations(pubform->oid, false,
PUBLICATION_PART_ALL);
else
{
@@ -904,7 +988,17 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
lfirst_oid(lc));
}
- schemarelids = GetAllSchemaPublicationRelations(pubform->oid,
+ /* tables */
+ schemarelids = GetAllSchemaPublicationRelations(pubform->oid, false,
+ PUBLICATION_PART_ALL);
+ relids = list_concat_unique_oid(relids, schemarelids);
+
+ /* sequences */
+ relids = list_concat_unique_oid(relids,
+ GetPublicationRelations(pubform->oid,
+ true, PUBLICATION_PART_ALL));
+
+ schemarelids = GetAllSchemaPublicationRelations(pubform->oid, true,
PUBLICATION_PART_ALL);
relids = list_concat_unique_oid(relids, schemarelids);
@@ -951,6 +1045,11 @@ AlterPublicationTables(AlterPublicationStmt *stmt, HeapTuple tup,
Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
Oid pubid = pubform->oid;
+ /*
+ * FIXME Do we need to test relkind of the relations? Otherwise we
+ * can do "ADD TABLE sequence" and it'll "work".
+ */
+
/*
* Nothing to do if no objects, except in SET: for that it is quite
* possible that user has not specified any tables in which case we need
@@ -959,7 +1058,7 @@ AlterPublicationTables(AlterPublicationStmt *stmt, HeapTuple tup,
if (!tables && stmt->action != AP_SetObjects)
return;
- rels = OpenTableList(tables);
+ rels = OpenRelationList(tables);
if (stmt->action == AP_AddObjects)
{
@@ -969,19 +1068,19 @@ AlterPublicationTables(AlterPublicationStmt *stmt, HeapTuple tup,
* Check if the relation is member of the existing schema in the
* publication or member of the schema list specified.
*/
- schemas = list_concat_copy(schemaidlist, GetPublicationSchemas(pubid));
+ schemas = list_concat_copy(schemaidlist, GetPublicationSchemas(pubid, false));
CheckObjSchemaNotAlreadyInPublication(rels, schemas,
PUBLICATIONOBJ_TABLE);
TransformPubWhereClauses(rels, queryString, pubform->pubviaroot);
- PublicationAddTables(pubid, rels, false, stmt);
+ PublicationAddRelations(pubid, rels, false, stmt);
}
else if (stmt->action == AP_DropObjects)
- PublicationDropTables(pubid, rels, false);
+ PublicationDropRelations(pubid, rels, false);
else /* AP_SetObjects */
{
- List *oldrelids = GetPublicationRelations(pubid,
+ List *oldrelids = GetPublicationRelations(pubid, false,
PUBLICATION_PART_ROOT);
List *delrels = NIL;
ListCell *oldlc;
@@ -1054,6 +1153,10 @@ AlterPublicationTables(AlterPublicationStmt *stmt, HeapTuple tup,
*/
if (!found)
{
+ /* don't drop sequences (not in list of tables) */
+ if (get_rel_relkind(oldrelid) == RELKIND_SEQUENCE)
+ continue;
+
oldrel = palloc(sizeof(PublicationRelInfo));
oldrel->whereClause = NULL;
oldrel->relation = table_open(oldrelid,
@@ -1063,18 +1166,18 @@ AlterPublicationTables(AlterPublicationStmt *stmt, HeapTuple tup,
}
/* And drop them. */
- PublicationDropTables(pubid, delrels, true);
+ PublicationDropRelations(pubid, delrels, true);
/*
* Don't bother calculating the difference for adding, we'll catch and
* skip existing ones when doing catalog update.
*/
- PublicationAddTables(pubid, rels, true, stmt);
+ PublicationAddRelations(pubid, rels, true, stmt);
- CloseTableList(delrels);
+ CloseRelationList(delrels);
}
- CloseTableList(rels);
+ CloseRelationList(rels);
}
/*
@@ -1084,7 +1187,8 @@ AlterPublicationTables(AlterPublicationStmt *stmt, HeapTuple tup,
*/
static void
AlterPublicationSchemas(AlterPublicationStmt *stmt,
- HeapTuple tup, List *schemaidlist)
+ HeapTuple tup, List *schemaidlist,
+ bool sequences)
{
Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
@@ -1106,20 +1210,20 @@ AlterPublicationSchemas(AlterPublicationStmt *stmt,
List *rels;
List *reloids;
- reloids = GetPublicationRelations(pubform->oid, PUBLICATION_PART_ROOT);
+ reloids = GetPublicationRelations(pubform->oid, sequences, PUBLICATION_PART_ROOT);
rels = OpenRelIdList(reloids);
CheckObjSchemaNotAlreadyInPublication(rels, schemaidlist,
PUBLICATIONOBJ_TABLES_IN_SCHEMA);
- CloseTableList(rels);
- PublicationAddSchemas(pubform->oid, schemaidlist, false, stmt);
+ CloseRelationList(rels);
+ PublicationAddSchemas(pubform->oid, schemaidlist, sequences, false, stmt);
}
else if (stmt->action == AP_DropObjects)
- PublicationDropSchemas(pubform->oid, schemaidlist, false);
+ PublicationDropSchemas(pubform->oid, schemaidlist, sequences, false);
else /* AP_SetObjects */
{
- List *oldschemaids = GetPublicationSchemas(pubform->oid);
+ List *oldschemaids = GetPublicationSchemas(pubform->oid, sequences);
List *delschemas = NIL;
/* Identify which schemas should be dropped */
@@ -1132,13 +1236,13 @@ AlterPublicationSchemas(AlterPublicationStmt *stmt,
LockSchemaList(delschemas);
/* And drop them */
- PublicationDropSchemas(pubform->oid, delschemas, true);
+ PublicationDropSchemas(pubform->oid, delschemas, sequences, true);
/*
* Don't bother calculating the difference for adding, we'll catch and
* skip existing ones when doing catalog update.
*/
- PublicationAddSchemas(pubform->oid, schemaidlist, true, stmt);
+ PublicationAddSchemas(pubform->oid, schemaidlist, sequences, true, stmt);
}
}
@@ -1148,12 +1252,13 @@ AlterPublicationSchemas(AlterPublicationStmt *stmt,
*/
static void
CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
- List *tables, List *schemaidlist)
+ List *tables, List *tables_schemaidlist,
+ List *sequences, List *sequences_schemaidlist)
{
Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
if ((stmt->action == AP_AddObjects || stmt->action == AP_SetObjects) &&
- schemaidlist && !superuser())
+ (tables_schemaidlist || sequences_schemaidlist) && !superuser())
ereport(ERROR,
(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
errmsg("must be superuser to add or set schemas")));
@@ -1162,13 +1267,24 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
* Check that user is allowed to manipulate the publication tables in
* schema
*/
- if (schemaidlist && pubform->puballtables)
+ if (tables_schemaidlist && pubform->puballtables)
ereport(ERROR,
(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
errmsg("publication \"%s\" is defined as FOR ALL TABLES",
NameStr(pubform->pubname)),
errdetail("Tables from schema cannot be added to, dropped from, or set on FOR ALL TABLES publications.")));
+ /*
+ * Check that user is allowed to manipulate the publication sequences in
+ * schema
+ */
+ if (sequences_schemaidlist && pubform->puballsequences)
+ ereport(ERROR,
+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("publication \"%s\" is defined as FOR ALL SEQUENCES",
+ NameStr(pubform->pubname)),
+ errdetail("Sequences from schema cannot be added to, dropped from, or set on FOR ALL SEQUENCES publications.")));
+
/* Check that user is allowed to manipulate the publication tables. */
if (tables && pubform->puballtables)
ereport(ERROR,
@@ -1176,6 +1292,118 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
errmsg("publication \"%s\" is defined as FOR ALL TABLES",
NameStr(pubform->pubname)),
errdetail("Tables cannot be added to or dropped from FOR ALL TABLES publications.")));
+
+ /* Check that user is allowed to manipulate the publication sequences. */
+ if (sequences && pubform->puballsequences)
+ ereport(ERROR,
+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("publication \"%s\" is defined as FOR ALL SEQUENCES",
+ NameStr(pubform->pubname)),
+ errdetail("Sequences cannot be added to or dropped from FOR ALL SEQUENCES publications.")));
+}
+
+/*
+ * Add or remove sequence to/from publication.
+ */
+static void
+AlterPublicationSequences(AlterPublicationStmt *stmt, HeapTuple tup,
+ List *sequences, List *schemaidlist)
+{
+ List *rels = NIL;
+ Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
+ Oid pubid = pubform->oid;
+
+ /*
+ * FIXME Do we need to test relkind of the relations? Otherwise we
+ * can do "ADD SEQUENCE table" and it'll "work".
+ */
+
+ /*
+ * It is quite possible that for the SET case user has not specified any
+ * sequences in which case we need to remove all the existing sequences.
+ */
+ if (!sequences && stmt->action != AP_SetObjects)
+ return;
+
+ rels = OpenRelationList(sequences);
+
+ if (stmt->action == AP_AddObjects)
+ {
+ List *schemas = NIL;
+
+ /*
+ * Check if the relation is member of the existing schema in the
+ * publication or member of the schema list specified.
+ */
+ schemas = list_concat_copy(schemaidlist, GetPublicationSchemas(pubid, true));
+ CheckObjSchemaNotAlreadyInPublication(rels, schemas,
+ PUBLICATIONOBJ_SEQUENCE);
+ PublicationAddRelations(pubid, rels, false, stmt);
+ }
+ else if (stmt->action == AP_DropObjects)
+ PublicationDropRelations(pubid, rels, false);
+ else /* AP_SetObjects */
+ {
+ List *oldrelids = GetPublicationRelations(pubid, true,
+ PUBLICATION_PART_ROOT);
+ List *delrels = NIL;
+ ListCell *oldlc;
+
+ CheckObjSchemaNotAlreadyInPublication(rels, schemaidlist,
+ PUBLICATIONOBJ_SEQUENCE);
+
+ /* Calculate which relations to drop. */
+ foreach(oldlc, oldrelids)
+ {
+ Oid oldrelid = lfirst_oid(oldlc);
+ ListCell *newlc;
+ bool found = false;
+
+ foreach(newlc, rels)
+ {
+ PublicationRelInfo *newpubrel;
+
+ newpubrel = (PublicationRelInfo *) lfirst(newlc);
+ if (RelationGetRelid(newpubrel->relation) == oldrelid)
+ {
+ found = true;
+ break;
+ }
+ }
+ /* Not yet in the list, open it and add to the list */
+ if (!found)
+ {
+ Relation oldrel;
+ PublicationRelInfo *pubrel;
+
+ /* don't drop non-sequences (not in list of sequences) */
+ if (get_rel_relkind(oldrelid) != RELKIND_SEQUENCE)
+ continue;
+
+ /* Wrap relation into PublicationRelInfo */
+ oldrel = table_open(oldrelid, ShareUpdateExclusiveLock);
+
+ pubrel = palloc(sizeof(PublicationRelInfo));
+ pubrel->relation = oldrel;
+ pubrel->whereClause = NULL;
+
+ delrels = lappend(delrels, pubrel);
+ }
+ }
+
+ /* And drop them. */
+ PublicationDropRelations(pubid, delrels, true);
+
+ /*
+ * Don't bother calculating the difference for adding, we'll catch and
+ * skip existing ones when doing catalog update.
+ */
+ PublicationAddRelations(pubid, rels, true, stmt);
+
+ CloseRelationList(delrels);
+ }
+
+ CloseRelationList(rels);
}
/*
@@ -1213,14 +1441,22 @@ AlterPublication(ParseState *pstate, AlterPublicationStmt *stmt)
AlterPublicationOptions(pstate, stmt, rel, tup);
else
{
- List *relations = NIL;
+ List *tables = NIL;
+ List *sequences = NIL;
+ List *tables_schemaidlist = NIL;
+ List *sequences_schemaidlist = NIL;
List *schemaidlist = NIL;
Oid pubid = pubform->oid;
- ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
+ ObjectsInPublicationToOids(stmt->pubobjects, pstate,
+ &tables, &sequences,
+ &tables_schemaidlist,
+ &sequences_schemaidlist,
&schemaidlist);
- CheckAlterPublication(stmt, tup, relations, schemaidlist);
+ CheckAlterPublication(stmt, tup,
+ tables, tables_schemaidlist,
+ sequences, sequences_schemaidlist);
heap_freetuple(tup);
@@ -1248,9 +1484,14 @@ AlterPublication(ParseState *pstate, AlterPublicationStmt *stmt)
errmsg("publication \"%s\" does not exist",
stmt->pubname));
- AlterPublicationTables(stmt, tup, relations, schemaidlist,
+ AlterPublicationTables(stmt, tup, tables, tables_schemaidlist,
pstate->p_sourcetext);
- AlterPublicationSchemas(stmt, tup, schemaidlist);
+
+ AlterPublicationSequences(stmt, tup, sequences, sequences_schemaidlist);
+
+ AlterPublicationSchemas(stmt, tup, tables_schemaidlist, false);
+
+ AlterPublicationSchemas(stmt, tup, sequences_schemaidlist, true);
}
/* Cleanup. */
@@ -1318,7 +1559,7 @@ RemovePublicationById(Oid pubid)
pubform = (Form_pg_publication) GETSTRUCT(tup);
/* Invalidate relcache so that publication info is rebuilt. */
- if (pubform->puballtables)
+ if (pubform->puballtables) /* FIXME handle puballsequences too? */
CacheInvalidateRelcacheAll();
CatalogTupleDelete(rel, &tup->t_self);
@@ -1354,6 +1595,7 @@ RemovePublicationSchemaById(Oid psoid)
* partitions.
*/
schemaRels = GetSchemaPublicationRelations(pubsch->pnnspid,
+ pubsch->pnsequences,
PUBLICATION_PART_ALL);
InvalidatePublicationRels(schemaRels);
@@ -1396,17 +1638,17 @@ OpenRelIdList(List *relids)
* add them to a publication.
*/
static List *
-OpenTableList(List *tables)
+OpenRelationList(List *rels)
{
List *relids = NIL;
- List *rels = NIL;
+ List *result = NIL;
ListCell *lc;
List *relids_with_rf = NIL;
/*
* Open, share-lock, and check all the explicitly-specified relations
*/
- foreach(lc, tables)
+ foreach(lc, rels)
{
PublicationTable *t = lfirst_node(PublicationTable, lc);
bool recurse = t->relation->inh;
@@ -1443,7 +1685,7 @@ OpenTableList(List *tables)
pub_rel = palloc(sizeof(PublicationRelInfo));
pub_rel->relation = rel;
pub_rel->whereClause = t->whereClause;
- rels = lappend(rels, pub_rel);
+ result = lappend(result, pub_rel);
relids = lappend_oid(relids, myrelid);
if (t->whereClause)
@@ -1497,7 +1739,7 @@ OpenTableList(List *tables)
pub_rel->relation = rel;
/* child inherits WHERE clause from parent */
pub_rel->whereClause = t->whereClause;
- rels = lappend(rels, pub_rel);
+ result = lappend(result, pub_rel);
relids = lappend_oid(relids, childrelid);
if (t->whereClause)
@@ -1509,14 +1751,14 @@ OpenTableList(List *tables)
list_free(relids);
list_free(relids_with_rf);
- return rels;
+ return result;
}
/*
* Close all relations in the list.
*/
static void
-CloseTableList(List *rels)
+CloseRelationList(List *rels)
{
ListCell *lc;
@@ -1564,7 +1806,7 @@ LockSchemaList(List *schemalist)
* Add listed tables to the publication.
*/
static void
-PublicationAddTables(Oid pubid, List *rels, bool if_not_exists,
+PublicationAddRelations(Oid pubid, List *rels, bool if_not_exists,
AlterPublicationStmt *stmt)
{
ListCell *lc;
@@ -1598,7 +1840,7 @@ PublicationAddTables(Oid pubid, List *rels, bool if_not_exists,
* Remove listed tables from the publication.
*/
static void
-PublicationDropTables(Oid pubid, List *rels, bool missing_ok)
+PublicationDropRelations(Oid pubid, List *rels, bool missing_ok)
{
ObjectAddress obj;
ListCell *lc;
@@ -1638,8 +1880,8 @@ PublicationDropTables(Oid pubid, List *rels, bool missing_ok)
* Add listed schemas to the publication.
*/
static void
-PublicationAddSchemas(Oid pubid, List *schemas, bool if_not_exists,
- AlterPublicationStmt *stmt)
+PublicationAddSchemas(Oid pubid, List *schemas, bool sequences,
+ bool if_not_exists, AlterPublicationStmt *stmt)
{
ListCell *lc;
@@ -1650,7 +1892,7 @@ PublicationAddSchemas(Oid pubid, List *schemas, bool if_not_exists,
Oid schemaid = lfirst_oid(lc);
ObjectAddress obj;
- obj = publication_add_schema(pubid, schemaid, if_not_exists);
+ obj = publication_add_schema(pubid, schemaid, sequences, if_not_exists);
if (stmt)
{
EventTriggerCollectSimpleCommand(obj, InvalidObjectAddress,
@@ -1666,7 +1908,7 @@ PublicationAddSchemas(Oid pubid, List *schemas, bool if_not_exists,
* Remove listed schemas from the publication.
*/
static void
-PublicationDropSchemas(Oid pubid, List *schemas, bool missing_ok)
+PublicationDropSchemas(Oid pubid, List *schemas, bool sequences, bool missing_ok)
{
ObjectAddress obj;
ListCell *lc;
@@ -1676,10 +1918,11 @@ PublicationDropSchemas(Oid pubid, List *schemas, bool missing_ok)
{
Oid schemaid = lfirst_oid(lc);
- psid = GetSysCacheOid2(PUBLICATIONNAMESPACEMAP,
+ psid = GetSysCacheOid3(PUBLICATIONNAMESPACEMAP,
Anum_pg_publication_namespace_oid,
ObjectIdGetDatum(schemaid),
- ObjectIdGetDatum(pubid));
+ ObjectIdGetDatum(pubid),
+ BoolGetDatum(sequences));
if (!OidIsValid(psid))
{
if (missing_ok)
@@ -1734,6 +1977,13 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
NameStr(form->pubname)),
errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
+ if (form->puballsequences && !superuser_arg(newOwnerId))
+ ereport(ERROR,
+ (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+ errmsg("permission denied to change owner of publication \"%s\"",
+ NameStr(form->pubname)),
+ errhint("The owner of a FOR ALL SEQUENCES publication must be a superuser.")));
+
if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
ereport(ERROR,
(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
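
Again, just to illustrate what the ALTER PUBLICATION changes are meant to
allow (names are made up for the example):

ALTER PUBLICATION mixed_pub ADD SEQUENCE s2;
ALTER PUBLICATION mixed_pub DROP SEQUENCE s1;
ALTER PUBLICATION mixed_pub SET (publish = 'insert, update, delete, truncate, sequence');

with 'sequence' being the new publish action added above.
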
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index ab592ce2f15..fe4f21ec438 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -336,6 +336,85 @@ ResetSequence(Oid seq_relid)
relation_close(seq_rel, NoLock);
}
+/*
+ * Set a sequence to the given state (last_value, log_cnt, is_called).
+ *
+ * The change is made transactionally, so that on failure of the current
+ * transaction, the sequence will be restored to its previous state.
+ * We do that by creating a whole new relfilenode for the sequence; so this
+ * works much like the rewriting forms of ALTER TABLE.
+ *
+ * Caller is assumed to have acquired AccessExclusiveLock on the sequence,
+ * which must not be released until end of transaction. Caller is also
+ * responsible for permissions checking.
+ */
+void
+ResetSequence2(Oid seq_relid, int64 last_value, int64 log_cnt, bool is_called)
+{
+ Relation seq_rel;
+ SeqTable elm;
+ Form_pg_sequence_data seq;
+ Buffer buf;
+ HeapTupleData seqdatatuple;
+ HeapTuple tuple;
+
+ /*
+ * Read the old sequence. This does a bit more work than really
+ * necessary, but it's simple, and we do want to double-check that it's
+ * indeed a sequence.
+ */
+ init_sequence(seq_relid, &elm, &seq_rel);
+ (void) read_seq_tuple(seq_rel, &buf, &seqdatatuple);
+
+ /*
+ * Copy the existing sequence tuple.
+ */
+ tuple = heap_copytuple(&seqdatatuple);
+
+ /* Now we're done with the old page */
+ UnlockReleaseBuffer(buf);
+
+ /*
+ * Modify the copied tuple to execute the restart (compare the RESTART
+ * action in AlterSequence)
+ */
+ seq = (Form_pg_sequence_data) GETSTRUCT(tuple);
+ seq->last_value = last_value;
+ seq->is_called = is_called;
+ seq->log_cnt = log_cnt;
+
+ /*
+ * Create a new storage file for the sequence.
+ */
+ RelationSetNewRelfilenode(seq_rel, seq_rel->rd_rel->relpersistence);
+
+ /*
+ * Ensure sequence's relfrozenxid is at 0, since it won't contain any
+ * unfrozen XIDs. Same with relminmxid, since a sequence will never
+ * contain multixacts.
+ */
+ Assert(seq_rel->rd_rel->relfrozenxid == InvalidTransactionId);
+ Assert(seq_rel->rd_rel->relminmxid == InvalidMultiXactId);
+
+ /*
+ * Insert the modified tuple into the new storage file.
+ *
+ * XXX Maybe this should also use created=true, just like the other places
+ * calling fill_seq_with_data. That's probably needed for correct cascading
+ * replication.
+ *
+ * XXX That'd mean all fill_seq_with_data callers use created=true, making
+ * the parameter unnecessary.
+ */
+ fill_seq_with_data(seq_rel, tuple);
+
+ /* Clear local cache so that we don't think we have cached numbers */
+ /* Note that we do not change the currval() state */
+ elm->cached = elm->last;
+
+ relation_close(seq_rel, NoLock);
+}
+
/*
* Initialize a sequence's relation with the specified tuple as content
*/
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 3ef6607d246..5beb67e7652 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -85,6 +85,7 @@ typedef struct SubOpts
} SubOpts;
static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
static void check_duplicates_in_publist(List *publist, Datum *datums);
static List *merge_publications(List *oldpublist, List *newpublist, bool addpub, const char *subname);
static void ReportSlotConnectionError(List *rstates, Oid subid, char *slotname, char *err);
@@ -496,6 +497,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
char *err;
WalReceiverConn *wrconn;
List *tables;
+ List *sequences;
ListCell *lc;
char table_state;
@@ -534,6 +536,26 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
InvalidXLogRecPtr);
}
+ /*
+ * Get the sequence list from publisher and build local sequence
+ * status info.
+ */
+ sequences = fetch_sequence_list(wrconn, publications);
+ foreach(lc, sequences)
+ {
+ RangeVar *rv = (RangeVar *) lfirst(lc);
+ Oid relid;
+
+ relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+ /* Check for supported relkind. */
+ CheckSubscriptionRelkind(get_rel_relkind(relid),
+ rv->schemaname, rv->relname);
+
+ AddSubscriptionRelState(subid, relid, table_state,
+ InvalidXLogRecPtr);
+ }
+
/*
* If requested, create permanent slot for the subscription. We
* won't use the initial snapshot for anything, so no need to
@@ -706,6 +728,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data)
{
Oid relid = subrel_local_oids[off];
+ /* XXX ignore sequences - maybe do this in GetSubscriptionRelations? */
+ if (get_rel_relkind(relid) == RELKIND_SEQUENCE)
+ continue;
+
if (!bsearch(&relid, pubrel_local_oids,
list_length(pubrel_names), sizeof(Oid), oid_cmp))
{
@@ -797,6 +823,183 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data)
ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
}
}
+
+ /*
+ * XXX now do the same thing for sequences, maybe before the preceding
+ * block, or earlier?
+ */
+
+ /* Get the sequence list from publisher. */
+ pubrel_names = fetch_sequence_list(wrconn, sub->publications);
+
+ /* Get local table list. */
+ subrel_states = GetSubscriptionRelations(sub->oid);
+
+ /*
+ * Build qsorted array of local table oids for faster lookup. This can
+ * potentially contain all tables in the database so speed of lookup
+ * is important.
+ */
+ subrel_local_oids = palloc(list_length(subrel_states) * sizeof(Oid));
+ off = 0;
+ foreach(lc, subrel_states)
+ {
+ SubscriptionRelState *relstate = (SubscriptionRelState *) lfirst(lc);
+
+ subrel_local_oids[off++] = relstate->relid;
+ }
+ qsort(subrel_local_oids, list_length(subrel_states),
+ sizeof(Oid), oid_cmp);
+
+ /*
+ * Rels that we want to remove from subscription and drop any slots
+ * and origins corresponding to them.
+ */
+ sub_remove_rels = palloc(list_length(subrel_states) * sizeof(SubRemoveRels));
+
+ /*
+ * Walk over the remote sequences and try to match them to locally known
+ * sequences. If the sequence is not known locally, create a new state
+ * for it.
+ *
+ * Also builds array of local oids of remote sequences for the next step.
+ */
+ off = 0;
+ pubrel_local_oids = palloc(list_length(pubrel_names) * sizeof(Oid));
+
+ foreach(lc, pubrel_names)
+ {
+ RangeVar *rv = (RangeVar *) lfirst(lc);
+ Oid relid;
+
+ relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+ /* Check for supported relkind. */
+ CheckSubscriptionRelkind(get_rel_relkind(relid),
+ rv->schemaname, rv->relname);
+
+ pubrel_local_oids[off++] = relid;
+
+ if (!bsearch(&relid, subrel_local_oids,
+ list_length(subrel_states), sizeof(Oid), oid_cmp))
+ {
+ AddSubscriptionRelState(sub->oid, relid,
+ copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
+ InvalidXLogRecPtr);
+ ereport(DEBUG1,
+ (errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
+ rv->schemaname, rv->relname, sub->name)));
+ }
+ }
+
+ /*
+ * Next remove state for sequences we should not care about anymore using
+ * the data we collected above
+ */
+ qsort(pubrel_local_oids, list_length(pubrel_names),
+ sizeof(Oid), oid_cmp);
+
+ remove_rel_len = 0;
+ for (off = 0; off < list_length(subrel_states); off++)
+ {
+ Oid relid = subrel_local_oids[off];
+
+ /* XXX ignore non-sequences - maybe do this in GetSubscriptionRelations? */
+ if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+ continue;
+
+ if (!bsearch(&relid, pubrel_local_oids,
+ list_length(pubrel_names), sizeof(Oid), oid_cmp))
+ {
+ char state;
+ XLogRecPtr statelsn;
+
+ /*
+ * Lock pg_subscription_rel with AccessExclusiveLock to
+ * prevent any race conditions with the apply worker
+ * re-launching workers at the same time this code is trying
+ * to remove those tables.
+ *
+ * Even if new worker for this particular rel is restarted it
+ * won't be able to make any progress as we hold exclusive
+ * lock on subscription_rel till the transaction end. It will
+ * simply exit as there is no corresponding rel entry.
+ *
+ * This locking also ensures that the state of rels won't
+ * change till we are done with this refresh operation.
+ */
+ if (!rel)
+ rel = table_open(SubscriptionRelRelationId, AccessExclusiveLock);
+
+ /* Last known rel state. */
+ state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
+
+ sub_remove_rels[remove_rel_len].relid = relid;
+ sub_remove_rels[remove_rel_len++].state = state;
+
+ RemoveSubscriptionRel(sub->oid, relid);
+
+ logicalrep_worker_stop(sub->oid, relid);
+
+ /*
+ * For READY state, we would have already dropped the
+ * tablesync origin.
+ */
+ if (state != SUBREL_STATE_READY)
+ {
+ char originname[NAMEDATALEN];
+
+ /*
+ * Drop the tablesync's origin tracking if exists.
+ *
+ * It is possible that the origin is not yet created for
+ * tablesync worker, this can happen for the states before
+ * SUBREL_STATE_FINISHEDCOPY. The apply worker can also
+ * concurrently try to drop the origin and by this time
+ * the origin might be already removed. For these reasons,
+ * passing missing_ok = true.
+ */
+ ReplicationOriginNameForTablesync(sub->oid, relid, originname,
+ sizeof(originname));
+ replorigin_drop_by_name(originname, true, false);
+ }
+
+ ereport(DEBUG1,
+ (errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
+ get_namespace_name(get_rel_namespace(relid)),
+ get_rel_name(relid),
+ sub->name)));
+ }
+ }
+
+ /*
+ * Drop the tablesync slots associated with removed tables. This has
+ * to be at the end because otherwise if there is an error while doing
+ * the database operations we won't be able to rollback dropped slots.
+ */
+ for (off = 0; off < remove_rel_len; off++)
+ {
+ if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
+ sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+ {
+ char syncslotname[NAMEDATALEN] = {0};
+
+ /*
+ * For READY/SYNCDONE states we know the tablesync slot has
+ * already been dropped by the tablesync worker.
+ *
+ * For other states, there is no certainty, maybe the slot
+ * does not exist yet. Also, if we fail after removing some of
+ * the slots, next time, it will again try to drop already
+ * dropped slots and fail. For these reasons, we allow
+ * missing_ok = true for the drop.
+ */
+ ReplicationSlotNameForTablesync(sub->oid, sub_remove_rels[off].relid,
+ syncslotname, sizeof(syncslotname));
+ ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
+ }
+ }
+
}
PG_FINALLY();
{
@@ -1616,6 +1819,75 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
return tablelist;
}
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+ WalRcvExecResult *res;
+ StringInfoData cmd;
+ TupleTableSlot *slot;
+ Oid tableRow[2] = {TEXTOID, TEXTOID};
+ ListCell *lc;
+ bool first;
+ List *tablelist = NIL;
+
+ Assert(list_length(publications) > 0);
+
+ initStringInfo(&cmd);
+ appendStringInfoString(&cmd, "SELECT DISTINCT s.schemaname, s.sequencename\n"
+ " FROM pg_catalog.pg_publication_sequences s\n"
+ " WHERE s.pubname IN (");
+ first = true;
+ foreach(lc, publications)
+ {
+ char *pubname = strVal(lfirst(lc));
+
+ if (first)
+ first = false;
+ else
+ appendStringInfoString(&cmd, ", ");
+
+ appendStringInfoString(&cmd, quote_literal_cstr(pubname));
+ }
+ appendStringInfoChar(&cmd, ')');
+
+ res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+ pfree(cmd.data);
+
+ if (res->status != WALRCV_OK_TUPLES)
+ ereport(ERROR,
+ (errmsg("could not receive list of replicated tables from the publisher: %s",
+ res->err)));
+
+ /* Process sequences. */
+ slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+ while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+ {
+ char *nspname;
+ char *relname;
+ bool isnull;
+ RangeVar *rv;
+
+ nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+ Assert(!isnull);
+ relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+ Assert(!isnull);
+
+ rv = makeRangeVar(nspname, relname, -1);
+ tablelist = lappend(tablelist, rv);
+
+ ExecClearTuple(slot);
+ }
+ ExecDropSingleTupleTableSlot(slot);
+
+ walrcv_clear_result(res);
+
+ return tablelist;
+}
+
/*
* This is to report the connection failure while dropping replication slots.
* Here, we report the WARNING for all tablesync slots so that user can drop
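
For reference, the query fetch_sequence_list() ends up sending to the
publisher looks like this (the publication names are just an example):

SELECT DISTINCT s.schemaname, s.sequencename
  FROM pg_catalog.pg_publication_sequences s
 WHERE s.pubname IN ('seq_pub', 'mixed_pub');

mirroring what fetch_table_list() does with pg_publication_tables.
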
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index de106d767d1..f8f8ade180c 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -635,7 +635,7 @@ void
CheckSubscriptionRelkind(char relkind, const char *nspname,
const char *relname)
{
- if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+ if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE && relkind != RELKIND_SEQUENCE)
ereport(ERROR,
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index d4f8455a2bd..cc936328a32 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -4863,6 +4863,7 @@ _copyCreatePublicationStmt(const CreatePublicationStmt *from)
COPY_NODE_FIELD(options);
COPY_NODE_FIELD(pubobjects);
COPY_SCALAR_FIELD(for_all_tables);
+ COPY_SCALAR_FIELD(for_all_sequences);
return newnode;
}
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index f1002afe7a0..692cf0e1352 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -2346,6 +2346,7 @@ _equalAlterPublicationStmt(const AlterPublicationStmt *a,
COMPARE_NODE_FIELD(options);
COMPARE_NODE_FIELD(pubobjects);
COMPARE_SCALAR_FIELD(for_all_tables);
+ COMPARE_SCALAR_FIELD(for_all_sequences);
COMPARE_SCALAR_FIELD(action);
return true;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index a03b33b53bd..9097ac3fabd 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -9701,12 +9701,16 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
*
* CREATE PUBLICATION FOR ALL TABLES [WITH options]
*
+ * CREATE PUBLICATION FOR ALL SEQUENCES [WITH options]
+ *
* CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
*
* pub_obj is one of:
*
* TABLE table [, ...]
+ * SEQUENCE sequence [, ...]
* ALL TABLES IN SCHEMA schema [, ...]
+ * ALL SEQUENCES IN SCHEMA schema [, ...]
*
*****************************************************************************/
@@ -9726,6 +9730,14 @@ CreatePublicationStmt:
n->for_all_tables = true;
$$ = (Node *)n;
}
+ | CREATE PUBLICATION name FOR ALL SEQUENCES opt_definition
+ {
+ CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
+ n->pubname = $3;
+ n->options = $7;
+ n->for_all_sequences = true;
+ $$ = (Node *)n;
+ }
| CREATE PUBLICATION name FOR pub_obj_list opt_definition
{
CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
@@ -9772,6 +9784,26 @@ PublicationObjSpec:
$$->pubobjtype = PUBLICATIONOBJ_TABLES_IN_CUR_SCHEMA;
$$->location = @5;
}
+ | SEQUENCE relation_expr
+ {
+ $$ = makeNode(PublicationObjSpec);
+ $$->pubobjtype = PUBLICATIONOBJ_SEQUENCE;
+ $$->pubtable = makeNode(PublicationTable);
+ $$->pubtable->relation = $2;
+ }
+ | ALL SEQUENCES IN_P SCHEMA ColId
+ {
+ $$ = makeNode(PublicationObjSpec);
+ $$->pubobjtype = PUBLICATIONOBJ_SEQUENCES_IN_SCHEMA;
+ $$->name = $5;
+ $$->location = @5;
+ }
+ | ALL SEQUENCES IN_P SCHEMA CURRENT_SCHEMA
+ {
+ $$ = makeNode(PublicationObjSpec);
+ $$->pubobjtype = PUBLICATIONOBJ_SEQUENCES_IN_CUR_SCHEMA;
+ $$->location = @5;
+ }
| ColId OptWhereClause
{
$$ = makeNode(PublicationObjSpec);
@@ -9839,7 +9871,9 @@ pub_obj_list: PublicationObjSpec
* pub_obj is one of:
*
* TABLE table_name [, ...]
+ * SEQUENCE sequence_name [, ...]
* ALL TABLES IN SCHEMA schema_name [, ...]
+ * ALL SEQUENCES IN SCHEMA schema_name [, ...]
*
*****************************************************************************/
@@ -10124,6 +10158,12 @@ UnlistenStmt:
}
;
+/*
+ * FIXME
+ *
+ * opt_publication_for_sequences and publication_for_sequences should be
+ * copies for sequences
+ */
/*****************************************************************************
*
diff --git a/src/backend/replication/logical/proto.c b/src/backend/replication/logical/proto.c
index c9b0eeefd7e..3dbe85d61a2 100644
--- a/src/backend/replication/logical/proto.c
+++ b/src/backend/replication/logical/proto.c
@@ -648,6 +648,56 @@ logicalrep_write_message(StringInfo out, TransactionId xid, XLogRecPtr lsn,
pq_sendbytes(out, message, sz);
}
+/*
+ * Write SEQUENCE to stream
+ */
+void
+logicalrep_write_sequence(StringInfo out, Relation rel, TransactionId xid,
+ XLogRecPtr lsn, bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ uint8 flags = 0;
+ char *relname;
+
+ pq_sendbyte(out, LOGICAL_REP_MSG_SEQUENCE);
+
+ /* transaction ID (if not valid, we're not streaming) */
+ if (TransactionIdIsValid(xid))
+ pq_sendint32(out, xid);
+
+ pq_sendint8(out, flags);
+ pq_sendint64(out, lsn);
+
+ logicalrep_write_namespace(out, RelationGetNamespace(rel));
+ relname = RelationGetRelationName(rel);
+ pq_sendstring(out, relname);
+
+ pq_sendint8(out, transactional);
+ pq_sendint64(out, last_value);
+ pq_sendint64(out, log_cnt);
+ pq_sendint8(out, is_called);
+}
+
+/*
+ * Read SEQUENCE from the stream.
+ */
+void
+logicalrep_read_sequence(StringInfo in, LogicalRepSequence *seqdata)
+{
+ /* XXX skipping flags and lsn */
+ pq_getmsgint(in, 1);
+ pq_getmsgint64(in);
+
+ /* Read relation name from stream */
+ seqdata->nspname = pstrdup(logicalrep_read_namespace(in));
+ seqdata->seqname = pstrdup(pq_getmsgstring(in));
+
+ seqdata->transactional = pq_getmsgint(in, 1);
+ seqdata->last_value = pq_getmsgint64(in);
+ seqdata->log_cnt = pq_getmsgint64(in);
+ seqdata->is_called = pq_getmsgint(in, 1);
+}
+
/*
* Write relation description to the output stream.
*/
@@ -1203,6 +1253,8 @@ logicalrep_message_type(LogicalRepMsgType action)
return "STREAM ABORT";
case LOGICAL_REP_MSG_STREAM_PREPARE:
return "STREAM PREPARE";
+ case LOGICAL_REP_MSG_SEQUENCE:
+ return "SEQUENCE";
}
elog(ERROR, "invalid logical replication message type \"%c\"", action);
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 1659964571c..315697069dc 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -100,6 +100,7 @@
#include "catalog/pg_subscription_rel.h"
#include "catalog/pg_type.h"
#include "commands/copy.h"
+#include "commands/sequence.h"
#include "miscadmin.h"
#include "parser/parse_relation.h"
#include "pgstat.h"
@@ -359,6 +360,12 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
*
* If the synchronization position is reached (SYNCDONE), then the table can
* be marked as READY and is no longer tracked.
+ *
+ * XXX This needs to handle sequences too - after AlterSubscription_refresh
+ * starts caring about sequences, GetSubscriptionNotReadyRelations won't
+ * return just tables, and we'll have to sync them here. Not sure it's worth
+ * creating a new "sync" worker per sequence; maybe we should just sync them
+ * in the current process (it's pretty light-weight).
*/
static void
process_syncing_tables_for_apply(XLogRecPtr current_lsn)
@@ -999,6 +1006,99 @@ copy_table(Relation rel)
logicalrep_rel_close(relmapentry, NoLock);
}
+
+
+/*
+ * Fetch the current state of a sequence (last_value, log_cnt, is_called)
+ * from the publisher by querying the sequence relation directly.
+ */
+static void
+fetch_sequence_data(char *nspname, char *relname,
+ int64 *last_value, int64 *log_cnt, bool *is_called)
+{
+ WalRcvExecResult *res;
+ StringInfoData cmd;
+ TupleTableSlot *slot;
+ Oid tableRow[3] = {INT8OID, INT8OID, BOOLOID};
+
+ initStringInfo(&cmd);
+ appendStringInfo(&cmd, "SELECT last_value, log_cnt, is_called\n"
+ " FROM %s", quote_qualified_identifier(nspname, relname));
+
+ res = walrcv_exec(LogRepWorkerWalRcvConn, cmd.data, 3, tableRow);
+ pfree(cmd.data);
+
+ if (res->status != WALRCV_OK_TUPLES)
+ ereport(ERROR,
+ (errmsg("could not receive list of replicated tables from the publisher: %s",
+ res->err)));
+
+ /* Process the sequence. */
+ slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+ while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+ {
+ bool isnull;
+
+ *last_value = DatumGetInt64(slot_getattr(slot, 1, &isnull));
+ Assert(!isnull);
+
+ *log_cnt = DatumGetInt64(slot_getattr(slot, 2, &isnull));
+ Assert(!isnull);
+
+ *is_called = DatumGetBool(slot_getattr(slot, 3, &isnull));
+ Assert(!isnull);
+
+ ExecClearTuple(slot);
+ }
+ ExecDropSingleTupleTableSlot(slot);
+
+ walrcv_clear_result(res);
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Caller is responsible for locking the local relation.
+ */
+static void
+copy_sequence(Relation rel)
+{
+ LogicalRepRelMapEntry *relmapentry;
+ LogicalRepRelation lrel;
+ List *qual = NIL;
+ int64 last_value = 0,
+ log_cnt = 0;
+ bool is_called = false;
+
+ /* Get the publisher relation info. */
+ fetch_remote_table_info(get_namespace_name(RelationGetNamespace(rel)),
+ RelationGetRelationName(rel), &lrel, &qual);
+
+ /* sequences don't have row filters */
+ Assert(!qual);
+
+ /* Put the relation into relmap. */
+ logicalrep_relmap_update(&lrel);
+
+ /* Map the publisher relation to local one. */
+ relmapentry = logicalrep_rel_open(lrel.remoteid, NoLock);
+ Assert(rel == relmapentry->localrel);
+
+ /* Fetch the current sequence state from the publisher. */
+
+ Assert(lrel.relkind == RELKIND_SEQUENCE);
+
+ fetch_sequence_data(lrel.nspname, lrel.relname, &last_value, &log_cnt, &is_called);
+
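+ /* Set the local sequence to the state received from the publisher. */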
+ ResetSequence2(RelationGetRelid(rel), last_value, log_cnt, is_called);
+
+ logicalrep_rel_close(relmapentry, NoLock);
+}
+
+
+
+
/*
* Determine the tablesync slot name.
*
@@ -1260,10 +1360,20 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
originname)));
}
- /* Now do the initial data copy */
- PushActiveSnapshot(GetTransactionSnapshot());
- copy_table(rel);
- PopActiveSnapshot();
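+ /*
+ * The initial sync depends on the relation kind: a sequence is synced by
+ * copying its current state, a table by copying its data.
+ */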
+ if (get_rel_relkind(RelationGetRelid(rel)) == RELKIND_SEQUENCE)
+ {
+ /* Now do the initial sequence copy */
+ PushActiveSnapshot(GetTransactionSnapshot());
+ copy_sequence(rel);
+ PopActiveSnapshot();
+ }
+ else
+ {
+ /* Now do the initial data copy */
+ PushActiveSnapshot(GetTransactionSnapshot());
+ copy_table(rel);
+ PopActiveSnapshot();
+ }
res = walrcv_exec(LogRepWorkerWalRcvConn, "COMMIT", 0, NULL);
if (res->status != WALRCV_OK_COMMAND)
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 5d9acc61733..f4d45d83152 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -144,6 +144,7 @@
#include "catalog/pg_tablespace.h"
#include "commands/tablecmds.h"
#include "commands/tablespace.h"
+#include "commands/sequence.h"
#include "commands/trigger.h"
#include "executor/executor.h"
#include "executor/execPartition.h"
@@ -1093,6 +1094,57 @@ apply_handle_origin(StringInfo s)
errmsg_internal("ORIGIN message sent out of order")));
}
+/*
+ * Handle SEQUENCE message.
+ */
+static void
+apply_handle_sequence(StringInfo s)
+{
+ LogicalRepSequence seq;
+ Oid relid;
+
+ if (handle_streamed_transaction(LOGICAL_REP_MSG_SEQUENCE, s))
+ return;
+
+ logicalrep_read_sequence(s, &seq);
+
+ /*
+ * Transactional sequence updates must arrive within a remote transaction,
+ * while non-transactional updates must not be part of one (and there must
+ * be no open local transaction either).
+ */
+ Assert((!seq.transactional) || in_remote_transaction);
+ Assert(!(!seq.transactional && in_remote_transaction));
+ Assert(!(!seq.transactional && IsTransactionState()));
+
+ /*
+ * Make sure we're in a transaction (needed by ResetSequence2). For
+ * non-transactional updates we're guaranteed to start a new one,
+ * and we'll commit it at the end.
+ */
+ if (!IsTransactionState())
+ {
+ StartTransactionCommand();
+ maybe_reread_subscription();
+ }
+
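+ /* Locate the local sequence by the name received from the publisher. */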
+ relid = RangeVarGetRelid(makeRangeVar(seq.nspname,
+ seq.seqname, -1),
+ RowExclusiveLock, false);
+
+ /* lock the sequence in AccessExclusiveLock mode, as expected by ResetSequence2 */
+ LockRelationOid(relid, AccessExclusiveLock);
+
+ /* apply the sequence change */
+ ResetSequence2(relid, seq.last_value, seq.log_cnt, seq.is_called);
+
+ /*
+ * Commit the transaction we started above (we only do this when not in a
+ * remote transaction, i.e. for non-transactional sequence updates).
+ */
+ if (!in_remote_transaction)
+ CommitTransactionCommand();
+}
+
/*
* Handle STREAM START message.
*/
@@ -2421,6 +2473,10 @@ apply_dispatch(StringInfo s)
*/
break;
+ case LOGICAL_REP_MSG_SEQUENCE:
+ apply_handle_sequence(s);
+ break;
+
case LOGICAL_REP_MSG_STREAM_START:
apply_handle_stream_start(s);
break;
diff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c
index ea57a0477f0..60bbc524b19 100644
--- a/src/backend/replication/pgoutput/pgoutput.c
+++ b/src/backend/replication/pgoutput/pgoutput.c
@@ -53,6 +53,10 @@ static void pgoutput_message(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn, XLogRecPtr message_lsn,
bool transactional, const char *prefix,
Size sz, const char *message);
+static void pgoutput_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation relation, bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called);
static bool pgoutput_origin_filter(LogicalDecodingContext *ctx,
RepOriginId origin_id);
static void pgoutput_begin_prepare_txn(LogicalDecodingContext *ctx,
@@ -208,6 +212,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->change_cb = pgoutput_change;
cb->truncate_cb = pgoutput_truncate;
cb->message_cb = pgoutput_message;
+ cb->sequence_cb = pgoutput_sequence;
cb->commit_cb = pgoutput_commit_txn;
cb->begin_prepare_cb = pgoutput_begin_prepare_txn;
@@ -224,6 +229,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->stream_commit_cb = pgoutput_stream_commit;
cb->stream_change_cb = pgoutput_change;
cb->stream_message_cb = pgoutput_message;
+ cb->stream_sequence_cb = pgoutput_sequence;
cb->stream_truncate_cb = pgoutput_truncate;
/* transaction streaming - two-phase commit */
cb->stream_prepare_cb = pgoutput_stream_prepare_txn;
@@ -237,6 +243,7 @@ parse_output_parameters(List *options, PGOutputData *data)
bool publication_names_given = false;
bool binary_option_given = false;
bool messages_option_given = false;
+ bool sequences_option_given = false;
bool streaming_given = false;
bool two_phase_option_given = false;
@@ -244,6 +251,7 @@ parse_output_parameters(List *options, PGOutputData *data)
data->streaming = false;
data->messages = false;
data->two_phase = false;
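+ /* publish sequence changes by default */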
+ data->sequences = true;
foreach(lc, options)
{
@@ -312,6 +320,16 @@ parse_output_parameters(List *options, PGOutputData *data)
data->messages = defGetBoolean(defel);
}
+ else if (strcmp(defel->defname, "sequences") == 0)
+ {
+ if (sequences_option_given)
+ ereport(ERROR,
+ (errcode(ERRCODE_SYNTAX_ERROR),
+ errmsg("conflicting or redundant options")));
+ sequences_option_given = true;
+
+ data->sequences = defGetBoolean(defel);
+ }
else if (strcmp(defel->defname, "streaming") == 0)
{
if (streaming_given)
@@ -1440,6 +1458,51 @@ pgoutput_message(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
OutputPluginWrite(ctx, true);
}
+static void
+pgoutput_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation relation, bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ PGOutputData *data = (PGOutputData *) ctx->output_plugin_private;
+ TransactionId xid = InvalidTransactionId;
+ RelationSyncEntry *relentry;
+
+ if (!data->sequences)
+ return;
+
+ if (!is_publishable_relation(relation))
+ return;
+
+ /*
+ * Remember the xid for the message in streaming mode. See
+ * pgoutput_change.
+ */
+ if (in_streaming)
+ xid = txn->xid;
+
+ relentry = get_rel_sync_entry(data, relation);
+
+ /*
+ * First check the sequence filter.
+ *
+ * We handle just REORDER_BUFFER_CHANGE_SEQUENCE here.
+ */
+ if (!relentry->pubactions.pubsequence)
+ return;
+
+ OutputPluginPrepareWrite(ctx, true);
+ logicalrep_write_sequence(ctx->out,
+ relation,
+ xid,
+ sequence_lsn,
+ transactional,
+ last_value,
+ log_cnt,
+ is_called);
+ OutputPluginWrite(ctx, true);
+}
+
/*
* Currently we always forward.
*/
@@ -1725,7 +1788,8 @@ get_rel_sync_entry(PGOutputData *data, Relation relation)
entry->schema_sent = false;
entry->streamed_txns = NIL;
entry->pubactions.pubinsert = entry->pubactions.pubupdate =
- entry->pubactions.pubdelete = entry->pubactions.pubtruncate = false;
+ entry->pubactions.pubdelete = entry->pubactions.pubtruncate =
+ entry->pubactions.pubsequence = false;
entry->new_slot = NULL;
entry->old_slot = NULL;
memset(entry->exprstate, 0, sizeof(entry->exprstate));
@@ -1751,6 +1815,7 @@ get_rel_sync_entry(PGOutputData *data, Relation relation)
bool am_partition = get_rel_relispartition(relid);
char relkind = get_rel_relkind(relid);
List *rel_publications = NIL;
+ bool is_sequence = (relkind == RELKIND_SEQUENCE);
/* Reload publications if needed before use. */
if (!publications_valid)
@@ -1779,6 +1844,7 @@ get_rel_sync_entry(PGOutputData *data, Relation relation)
entry->pubactions.pubupdate = false;
entry->pubactions.pubdelete = false;
entry->pubactions.pubtruncate = false;
+ entry->pubactions.pubsequence = false;
/*
* Tuple slots cleanups. (Will be rebuilt later if needed).
@@ -1815,12 +1881,23 @@ get_rel_sync_entry(PGOutputData *data, Relation relation)
Publication *pub = lfirst(lc);
bool publish = false;
- if (pub->alltables)
+ if (pub->alltables && (!is_sequence))
{
publish = true;
if (pub->pubviaroot && am_partition)
publish_as_relid = llast_oid(get_partition_ancestors(relid));
}
+ else if (pub->allsequences && is_sequence)
+ {
+ publish = true;
+ }
+
+ /* if a sequence, just cross-check the list of publications */
+ if (!publish && is_sequence)
+ {
+ if (list_member_oid(pubids, pub->oid))
+ publish = true;
+ }
if (!publish)
{
@@ -1866,6 +1943,7 @@ get_rel_sync_entry(PGOutputData *data, Relation relation)
entry->pubactions.pubupdate |= pub->pubactions.pubupdate;
entry->pubactions.pubdelete |= pub->pubactions.pubdelete;
entry->pubactions.pubtruncate |= pub->pubactions.pubtruncate;
+ entry->pubactions.pubsequence |= pub->pubactions.pubsequence;
rel_publications = lappend(rel_publications, pub);
}
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index fccffce5729..c29d822313d 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -5587,7 +5587,15 @@ RelationBuildPublicationDesc(Relation relation, PublicationDesc *pubdesc)
GetSchemaPublications(schemaid));
}
}
- puboids = list_concat_unique_oid(puboids, GetAllTablesPublications());
+
+ /*
+ * Consider also FOR ALL TABLES and FOR ALL SEQUENCES publications,
+ * depending on the relkind of the relation.
+ */
+ if (relation->rd_rel->relkind == RELKIND_SEQUENCE)
+ puboids = list_concat_unique_oid(puboids, GetAllSequencesPublications());
+ else
+ puboids = list_concat_unique_oid(puboids, GetAllTablesPublications());
foreach(lc, puboids)
{
@@ -5606,6 +5614,7 @@ RelationBuildPublicationDesc(Relation relation, PublicationDesc *pubdesc)
pubdesc->pubactions.pubupdate |= pubform->pubupdate;
pubdesc->pubactions.pubdelete |= pubform->pubdelete;
pubdesc->pubactions.pubtruncate |= pubform->pubtruncate;
+ pubdesc->pubactions.pubsequence |= pubform->pubsequence;
/*
* Check if all columns referenced in the filter expression are part of
@@ -5631,6 +5640,8 @@ RelationBuildPublicationDesc(Relation relation, PublicationDesc *pubdesc)
* If we know everything is replicated and the row filter is invalid
* for update and delete, there is no point to check for other
* publications.
+ *
+ * XXX We ignore sequences here, because those don't use row filters.
*/
if (pubdesc->pubactions.pubinsert && pubdesc->pubactions.pubupdate &&
pubdesc->pubactions.pubdelete && pubdesc->pubactions.pubtruncate &&
diff --git a/src/backend/utils/cache/syscache.c b/src/backend/utils/cache/syscache.c
index f4e7819f1e2..763aac81596 100644
--- a/src/backend/utils/cache/syscache.c
+++ b/src/backend/utils/cache/syscache.c
@@ -630,12 +630,12 @@ static const struct cachedesc cacheinfo[] = {
64
},
{PublicationNamespaceRelationId, /* PUBLICATIONNAMESPACEMAP */
- PublicationNamespacePnnspidPnpubidIndexId,
- 2,
+ PublicationNamespacePnnspidPnpubidSeqIndexId,
+ 3,
{
Anum_pg_publication_namespace_pnnspid,
Anum_pg_publication_namespace_pnpubid,
- 0,
+ Anum_pg_publication_namespace_pnsequences,
0
},
64
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index e69dcf8a484..ef8c6e43c61 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -3788,10 +3788,12 @@ getPublications(Archive *fout, int *numPublications)
int i_pubname;
int i_pubowner;
int i_puballtables;
+ int i_puballsequences;
int i_pubinsert;
int i_pubupdate;
int i_pubdelete;
int i_pubtruncate;
+ int i_pubsequence;
int i_pubviaroot;
int i,
ntups;
@@ -3807,23 +3809,29 @@ getPublications(Archive *fout, int *numPublications)
resetPQExpBuffer(query);
/* Get the publications. */
- if (fout->remoteVersion >= 130000)
+ if (fout->remoteVersion >= 150000)
+ appendPQExpBuffer(query,
+ "SELECT p.tableoid, p.oid, p.pubname, "
+ "p.pubowner, "
+ "p.puballtables, p.puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubsequence, p.pubviaroot "
+ "FROM pg_publication p");
+ else if (fout->remoteVersion >= 130000)
appendPQExpBuffer(query,
"SELECT p.tableoid, p.oid, p.pubname, "
"p.pubowner, "
- "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+ "p.puballtables, false AS p.puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS p.pubsequence, p.pubviaroot "
"FROM pg_publication p");
else if (fout->remoteVersion >= 110000)
appendPQExpBuffer(query,
"SELECT p.tableoid, p.oid, p.pubname, "
"p.pubowner, "
- "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
+ "p.puballtables, false AS p.puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS p.pubsequence, false AS pubviaroot "
"FROM pg_publication p");
else
appendPQExpBuffer(query,
"SELECT p.tableoid, p.oid, p.pubname, "
"p.pubowner, "
- "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
+ "p.puballtables, false AS p.puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS p.pubsequence, false AS pubviaroot "
"FROM pg_publication p");
res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
@@ -3835,10 +3843,12 @@ getPublications(Archive *fout, int *numPublications)
i_pubname = PQfnumber(res, "pubname");
i_pubowner = PQfnumber(res, "pubowner");
i_puballtables = PQfnumber(res, "puballtables");
+ i_puballsequences = PQfnumber(res, "puballsequences");
i_pubinsert = PQfnumber(res, "pubinsert");
i_pubupdate = PQfnumber(res, "pubupdate");
i_pubdelete = PQfnumber(res, "pubdelete");
i_pubtruncate = PQfnumber(res, "pubtruncate");
+ i_pubsequence = PQfnumber(res, "pubsequence");
i_pubviaroot = PQfnumber(res, "pubviaroot");
pubinfo = pg_malloc(ntups * sizeof(PublicationInfo));
@@ -3854,6 +3864,8 @@ getPublications(Archive *fout, int *numPublications)
pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
pubinfo[i].puballtables =
(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+ pubinfo[i].puballsequences =
+ (strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
pubinfo[i].pubinsert =
(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
pubinfo[i].pubupdate =
@@ -3862,6 +3874,8 @@ getPublications(Archive *fout, int *numPublications)
(strcmp(PQgetvalue(res, i, i_pubdelete), "t") == 0);
pubinfo[i].pubtruncate =
(strcmp(PQgetvalue(res, i, i_pubtruncate), "t") == 0);
+ pubinfo[i].pubsequence =
+ (strcmp(PQgetvalue(res, i, i_pubsequence), "t") == 0);
pubinfo[i].pubviaroot =
(strcmp(PQgetvalue(res, i, i_pubviaroot), "t") == 0);
@@ -3907,6 +3921,9 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
if (pubinfo->puballtables)
appendPQExpBufferStr(query, " FOR ALL TABLES");
+ if (pubinfo->puballsequences)
+ appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
+
appendPQExpBufferStr(query, " WITH (publish = '");
if (pubinfo->pubinsert)
{
@@ -3941,6 +3958,15 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
first = false;
}
+ if (pubinfo->pubsequence)
+ {
+ if (!first)
+ appendPQExpBufferStr(query, ", ");
+
+ appendPQExpBufferStr(query, "sequence");
+ first = false;
+ }
+
appendPQExpBufferStr(query, "'");
if (pubinfo->pubviaroot)
@@ -3987,6 +4013,7 @@ getPublicationNamespaces(Archive *fout)
int i_oid;
int i_pnpubid;
int i_pnnspid;
+ int i_pnsequences;
int i,
j,
ntups;
@@ -3998,7 +4025,7 @@ getPublicationNamespaces(Archive *fout)
/* Collect all publication membership info. */
appendPQExpBufferStr(query,
- "SELECT tableoid, oid, pnpubid, pnnspid "
+ "SELECT tableoid, oid, pnpubid, pnnspid, pnsequences "
"FROM pg_catalog.pg_publication_namespace");
res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
@@ -4008,6 +4035,7 @@ getPublicationNamespaces(Archive *fout)
i_oid = PQfnumber(res, "oid");
i_pnpubid = PQfnumber(res, "pnpubid");
i_pnnspid = PQfnumber(res, "pnnspid");
+ i_pnsequences = PQfnumber(res, "pnsequences");
/* this allocation may be more than we need */
pubsinfo = pg_malloc(ntups * sizeof(PublicationSchemaInfo));
@@ -4017,6 +4045,7 @@ getPublicationNamespaces(Archive *fout)
{
Oid pnpubid = atooid(PQgetvalue(res, i, i_pnpubid));
Oid pnnspid = atooid(PQgetvalue(res, i, i_pnnspid));
+ bool pnsequences = (strcmp(PQgetvalue(res, i, i_pnsequences), "t") == 0);
PublicationInfo *pubinfo;
NamespaceInfo *nspinfo;
@@ -4048,6 +4077,7 @@ getPublicationNamespaces(Archive *fout)
pubsinfo[j].dobj.name = nspinfo->dobj.name;
pubsinfo[j].publication = pubinfo;
pubsinfo[j].pubschema = nspinfo;
+ pubsinfo[j].pubsequences = pnsequences;
/* Decide whether we want to dump it */
selectDumpablePublicationObject(&(pubsinfo[j].dobj), fout);
@@ -4181,7 +4211,11 @@ dumpPublicationNamespace(Archive *fout, const PublicationSchemaInfo *pubsinfo)
query = createPQExpBuffer();
appendPQExpBuffer(query, "ALTER PUBLICATION %s ", fmtId(pubinfo->dobj.name));
- appendPQExpBuffer(query, "ADD ALL TABLES IN SCHEMA %s;\n", fmtId(schemainfo->dobj.name));
+
+ if (pubsinfo->pubsequences)
+ appendPQExpBuffer(query, "ADD ALL SEQUENCES IN SCHEMA %s;\n", fmtId(schemainfo->dobj.name));
+ else
+ appendPQExpBuffer(query, "ADD ALL TABLES IN SCHEMA %s;\n", fmtId(schemainfo->dobj.name));
/*
* There is no point in creating drop query as the drop is done by schema
@@ -4214,6 +4248,7 @@ dumpPublicationTable(Archive *fout, const PublicationRelInfo *pubrinfo)
TableInfo *tbinfo = pubrinfo->pubtable;
PQExpBuffer query;
char *tag;
+ char *description;
/* Do nothing in data-only dump */
if (dopt->dataOnly)
@@ -4223,10 +4258,22 @@ dumpPublicationTable(Archive *fout, const PublicationRelInfo *pubrinfo)
query = createPQExpBuffer();
- appendPQExpBuffer(query, "ALTER PUBLICATION %s ADD TABLE ONLY",
- fmtId(pubinfo->dobj.name));
+ if (tbinfo->relkind == RELKIND_SEQUENCE)
+ {
+ appendPQExpBuffer(query, "ALTER PUBLICATION %s ADD SEQUENCE",
+ fmtId(pubinfo->dobj.name));
+ description = "PUBLICATION SEQUENCE";
+ }
+ else
+ {
+ appendPQExpBuffer(query, "ALTER PUBLICATION %s ADD TABLE ONLY",
+ fmtId(pubinfo->dobj.name));
+ description = "PUBLICATION TABLE";
+ }
+
appendPQExpBuffer(query, " %s",
fmtQualifiedDumpable(tbinfo));
+
if (pubrinfo->pubrelqual)
{
/*
@@ -4249,7 +4296,7 @@ dumpPublicationTable(Archive *fout, const PublicationRelInfo *pubrinfo)
ARCHIVE_OPTS(.tag = tag,
.namespace = tbinfo->dobj.namespace->dobj.name,
.owner = pubinfo->rolname,
- .description = "PUBLICATION TABLE",
+ .description = description,
.section = SECTION_POST_DATA,
.createStmt = query->data));
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 997a3b60719..8739af69122 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -615,10 +615,12 @@ typedef struct _PublicationInfo
DumpableObject dobj;
const char *rolname;
bool puballtables;
+ bool puballsequences;
bool pubinsert;
bool pubupdate;
bool pubdelete;
bool pubtruncate;
+ bool pubsequence;
bool pubviaroot;
} PublicationInfo;
@@ -643,6 +645,7 @@ typedef struct _PublicationSchemaInfo
DumpableObject dobj;
PublicationInfo *publication;
NamespaceInfo *pubschema;
+ bool pubsequences;
} PublicationSchemaInfo;
/*
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index dd065c758fa..380c929f815 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -2357,7 +2357,7 @@ my %tests = (
create_order => 50,
create_sql => 'CREATE PUBLICATION pub1;',
regexp => qr/^
- \QCREATE PUBLICATION pub1 WITH (publish = 'insert, update, delete, truncate');\E
+ \QCREATE PUBLICATION pub1 WITH (publish = 'insert, update, delete, truncate, sequence');\E
/xm,
like => { %full_runs, section_post_data => 1, },
},
@@ -2377,7 +2377,18 @@ my %tests = (
create_order => 50,
create_sql => 'CREATE PUBLICATION pub3;',
regexp => qr/^
- \QCREATE PUBLICATION pub3 WITH (publish = 'insert, update, delete, truncate');\E
+ \QCREATE PUBLICATION pub3 WITH (publish = 'insert, update, delete, truncate, sequence');\E
+ /xm,
+ like => { %full_runs, section_post_data => 1, },
+ },
+
+ 'CREATE PUBLICATION pub4' => {
+ create_order => 50,
+ create_sql => 'CREATE PUBLICATION pub4
+ FOR ALL SEQUENCES
+ WITH (publish = \'\');',
+ regexp => qr/^
+ \QCREATE PUBLICATION pub4 FOR ALL SEQUENCES WITH (publish = '');\E
/xm,
like => { %full_runs, section_post_data => 1, },
},
@@ -2439,6 +2450,27 @@ my %tests = (
like => { %full_runs, section_post_data => 1, },
},
+ 'ALTER PUBLICATION pub3 ADD ALL SEQUENCES IN SCHEMA dump_test' => {
+ create_order => 51,
+ create_sql =>
+ 'ALTER PUBLICATION pub3 ADD ALL SEQUENCES IN SCHEMA dump_test;',
+ regexp => qr/^
+ \QALTER PUBLICATION pub3 ADD ALL SEQUENCES IN SCHEMA dump_test;\E
+ /xm,
+ like => { %full_runs, section_post_data => 1, },
+ unlike => { exclude_dump_test_schema => 1, },
+ },
+
+ 'ALTER PUBLICATION pub3 ADD ALL SEQUENCES IN SCHEMA public' => {
+ create_order => 52,
+ create_sql =>
+ 'ALTER PUBLICATION pub3 ADD ALL SEQUENCES IN SCHEMA public;',
+ regexp => qr/^
+ \QALTER PUBLICATION pub3 ADD ALL SEQUENCES IN SCHEMA public;\E
+ /xm,
+ like => { %full_runs, section_post_data => 1, },
+ },
+
'CREATE SCHEMA public' => {
regexp => qr/^CREATE SCHEMA public;/m,
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index e3382933d98..c0df70f0f9a 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1621,28 +1621,19 @@ describeOneTableDetails(const char *schemaname,
if (tableinfo.relkind == RELKIND_SEQUENCE)
{
PGresult *result = NULL;
- printQueryOpt myopt = pset.popt;
- char *footers[2] = {NULL, NULL};
if (pset.sversion >= 100000)
{
printfPQExpBuffer(&buf,
- "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
- " seqstart AS \"%s\",\n"
- " seqmin AS \"%s\",\n"
- " seqmax AS \"%s\",\n"
- " seqincrement AS \"%s\",\n"
- " CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
- " seqcache AS \"%s\"\n",
- gettext_noop("Type"),
- gettext_noop("Start"),
- gettext_noop("Minimum"),
- gettext_noop("Maximum"),
- gettext_noop("Increment"),
+ "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+ " seqstart,\n"
+ " seqmin,\n"
+ " seqmax,\n"
+ " seqincrement,\n"
+ " CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+ " seqcache\n",
gettext_noop("yes"),
- gettext_noop("no"),
- gettext_noop("Cycles?"),
- gettext_noop("Cache"));
+ gettext_noop("no"));
appendPQExpBuffer(&buf,
"FROM pg_catalog.pg_sequence\n"
"WHERE seqrelid = '%s';",
@@ -1651,22 +1642,15 @@ describeOneTableDetails(const char *schemaname,
else
{
printfPQExpBuffer(&buf,
- "SELECT 'bigint' AS \"%s\",\n"
- " start_value AS \"%s\",\n"
- " min_value AS \"%s\",\n"
- " max_value AS \"%s\",\n"
- " increment_by AS \"%s\",\n"
- " CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
- " cache_value AS \"%s\"\n",
- gettext_noop("Type"),
- gettext_noop("Start"),
- gettext_noop("Minimum"),
- gettext_noop("Maximum"),
- gettext_noop("Increment"),
+ "SELECT 'bigint',\n"
+ " start_value,\n"
+ " min_value,\n"
+ " max_value,\n"
+ " increment_by,\n"
+ " CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+ " cache_value\n",
gettext_noop("yes"),
- gettext_noop("no"),
- gettext_noop("Cycles?"),
- gettext_noop("Cache"));
+ gettext_noop("no"));
appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
/* must be separate because fmtId isn't reentrant */
appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1676,6 +1660,51 @@ describeOneTableDetails(const char *schemaname,
if (!res)
goto error_return;
+ numrows = PQntuples(res);
+
+ /*
+ * XXX reset to use expanded output for sequences (maybe we should keep
+ * this disabled, just like for tables?)
+ */
+ myopt.expanded = pset.popt.topt.expanded;
+
+ printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+ schemaname, relationname);
+
+ printTableInit(&cont, &myopt, title.data, 7, numrows);
+ printTableInitialized = true;
+
+ printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+ printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+ printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+ printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+ printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+ printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+ printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+ /* Generate table cells to be printed */
+ for (i = 0; i < numrows; i++)
+ {
+ /* Type */
+ printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+ /* Start */
+ printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+ /* Minimum */
+ printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+ /* Maximum */
+ printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+ /* Increment */
+ printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+ /* Cycles? */
+ printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+ /* Cache */
+ printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+ }
+
/* Footer information about a sequence */
/* Get the column that owns this sequence */
@@ -1709,29 +1738,63 @@ describeOneTableDetails(const char *schemaname,
switch (PQgetvalue(result, 0, 1)[0])
{
case 'a':
- footers[0] = psprintf(_("Owned by: %s"),
- PQgetvalue(result, 0, 0));
+ printTableAddFooter(&cont,
+ psprintf(_("Owned by: %s"),
+ PQgetvalue(result, 0, 0)));
break;
case 'i':
- footers[0] = psprintf(_("Sequence for identity column: %s"),
- PQgetvalue(result, 0, 0));
+ printTableAddFooter(&cont,
+ psprintf(_("Sequence for identity column: %s"),
+ PQgetvalue(result, 0, 0)));
break;
}
}
PQclear(result);
- printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
- schemaname, relationname);
+ /* print any publications */
+ if (pset.sversion >= 150000)
+ {
+ int tuples = 0;
- myopt.footers = footers;
- myopt.topt.default_footer = false;
- myopt.title = title.data;
- myopt.translate_header = true;
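+ /*
+ * A sequence can be published through ALL SEQUENCES IN SCHEMA, by being
+ * added to a publication directly, or via a FOR ALL SEQUENCES publication,
+ * hence the three-branch UNION.
+ */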
+ printfPQExpBuffer(&buf,
+ "SELECT pubname\n"
+ "FROM pg_catalog.pg_publication p\n"
+ " JOIN pg_catalog.pg_publication_namespace pn ON p.oid = pn.pnpubid\n"
+ " JOIN pg_catalog.pg_class pc ON pc.relnamespace = pn.pnnspid\n"
+ "WHERE pc.oid ='%s' and pn.pnsequences and pg_catalog.pg_relation_is_publishable('%s')\n"
+ "UNION\n"
+ "SELECT pubname\n"
+ "FROM pg_catalog.pg_publication p\n"
+ " JOIN pg_catalog.pg_publication_rel pr ON p.oid = pr.prpubid\n"
+ "WHERE pr.prrelid = '%s'\n"
+ "UNION\n"
+ "SELECT pubname\n"
+ "FROM pg_catalog.pg_publication p\n"
+ "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+ "ORDER BY 1;",
+ oid, oid, oid, oid);
- printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+ result = PSQLexec(buf.data);
+ if (!result)
+ goto error_return;
+ else
+ tuples = PQntuples(result);
+
+ if (tuples > 0)
+ printTableAddFooter(&cont, _("Publications:"));
+
+ /* Might be an empty set - that's ok */
+ for (i = 0; i < tuples; i++)
+ {
+ printfPQExpBuffer(&buf, " \"%s\"",
+ PQgetvalue(result, i, 0));
+
+ printTableAddFooter(&cont, buf.data);
+ }
+ PQclear(result);
+ }
- if (footers[0])
- free(footers[0]);
+ printTable(&cont, pset.queryFout, false, pset.logfile);
retval = true;
goto error_return; /* not an error, just return early */
@@ -1958,6 +2021,11 @@ describeOneTableDetails(const char *schemaname,
for (i = 0; i < cols; i++)
printTableAddHeader(&cont, headers[i], true, 'l');
+ res = PSQLexec(buf.data);
+ if (!res)
+ goto error_return;
+ numrows = PQntuples(res);
+
/* Generate table cells to be printed */
for (i = 0; i < numrows; i++)
{
@@ -2883,7 +2951,7 @@ describeOneTableDetails(const char *schemaname,
"FROM pg_catalog.pg_publication p\n"
" JOIN pg_catalog.pg_publication_namespace pn ON p.oid = pn.pnpubid\n"
" JOIN pg_catalog.pg_class pc ON pc.relnamespace = pn.pnnspid\n"
- "WHERE pc.oid ='%s' and pg_catalog.pg_relation_is_publishable('%s')\n"
+ "WHERE pc.oid ='%s' and (not pn.pnsequences) and pg_catalog.pg_relation_is_publishable('%s')\n"
"UNION\n"
"SELECT pubname\n"
" , pg_get_expr(pr.prqual, c.oid)\n"
@@ -4764,7 +4832,7 @@ listSchemas(const char *pattern, bool verbose, bool showSystem)
int i;
printfPQExpBuffer(&buf,
- "SELECT pubname \n"
+ "SELECT pubname, (CASE WHEN pnsequences THEN 'sequences' ELSE 'tables' END) AS pubtype\n"
"FROM pg_catalog.pg_publication p\n"
" JOIN pg_catalog.pg_publication_namespace pn ON p.oid = pn.pnpubid\n"
" JOIN pg_catalog.pg_namespace n ON n.oid = pn.pnnspid \n"
@@ -4793,8 +4861,9 @@ listSchemas(const char *pattern, bool verbose, bool showSystem)
/* Might be an empty set - that's ok */
for (i = 0; i < pub_schema_tuples; i++)
{
- printfPQExpBuffer(&buf, " \"%s\"",
- PQgetvalue(result, i, 0));
+ printfPQExpBuffer(&buf, " \"%s\" (%s)",
+ PQgetvalue(result, i, 0),
+ PQgetvalue(result, i, 1));
footers[i + 1] = pg_strdup(buf.data);
}
@@ -5799,7 +5868,7 @@ listPublications(const char *pattern)
PQExpBufferData buf;
PGresult *res;
printQueryOpt myopt = pset.popt;
- static const bool translate_columns[] = {false, false, false, false, false, false, false, false};
+ static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
if (pset.sversion < 100000)
{
@@ -5813,23 +5882,45 @@ listPublications(const char *pattern)
initPQExpBuffer(&buf);
- printfPQExpBuffer(&buf,
- "SELECT pubname AS \"%s\",\n"
- " pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
- " puballtables AS \"%s\",\n"
- " pubinsert AS \"%s\",\n"
- " pubupdate AS \"%s\",\n"
- " pubdelete AS \"%s\"",
- gettext_noop("Name"),
- gettext_noop("Owner"),
- gettext_noop("All tables"),
- gettext_noop("Inserts"),
- gettext_noop("Updates"),
- gettext_noop("Deletes"));
+ if (pset.sversion >= 150000)
+ printfPQExpBuffer(&buf,
+ "SELECT pubname AS \"%s\",\n"
+ " pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
+ " puballtables AS \"%s\",\n"
+ " puballsequences AS \"%s\",\n"
+ " pubinsert AS \"%s\",\n"
+ " pubupdate AS \"%s\",\n"
+ " pubdelete AS \"%s\"",
+ gettext_noop("Name"),
+ gettext_noop("Owner"),
+ gettext_noop("All tables"),
+ gettext_noop("All sequences"),
+ gettext_noop("Inserts"),
+ gettext_noop("Updates"),
+ gettext_noop("Deletes"));
+ else
+ printfPQExpBuffer(&buf,
+ "SELECT pubname AS \"%s\",\n"
+ " pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
+ " puballtables AS \"%s\",\n"
+ " pubinsert AS \"%s\",\n"
+ " pubupdate AS \"%s\",\n"
+ " pubdelete AS \"%s\"",
+ gettext_noop("Name"),
+ gettext_noop("Owner"),
+ gettext_noop("All tables"),
+ gettext_noop("Inserts"),
+ gettext_noop("Updates"),
+ gettext_noop("Deletes"));
+
if (pset.sversion >= 110000)
appendPQExpBuffer(&buf,
",\n pubtruncate AS \"%s\"",
gettext_noop("Truncates"));
+ if (pset.sversion >= 150000)
+ appendPQExpBuffer(&buf,
+ ",\n pubsequence AS \"%s\"",
+ gettext_noop("Sequences"));
if (pset.sversion >= 130000)
appendPQExpBuffer(&buf,
",\n pubviaroot AS \"%s\"",
@@ -5915,6 +6006,7 @@ describePublications(const char *pattern)
PGresult *res;
bool has_pubtruncate;
bool has_pubviaroot;
+ bool has_pubsequence;
PQExpBufferData title;
printTableContent cont;
@@ -5931,13 +6023,21 @@ describePublications(const char *pattern)
has_pubtruncate = (pset.sversion >= 110000);
has_pubviaroot = (pset.sversion >= 130000);
+ has_pubsequence = (pset.sversion >= 150000);
initPQExpBuffer(&buf);
- printfPQExpBuffer(&buf,
- "SELECT oid, pubname,\n"
- " pg_catalog.pg_get_userbyid(pubowner) AS owner,\n"
- " puballtables, pubinsert, pubupdate, pubdelete");
+ if (pset.sversion >= 150000)
+ printfPQExpBuffer(&buf,
+ "SELECT oid, pubname,\n"
+ " pg_catalog.pg_get_userbyid(pubowner) AS owner,\n"
+ " puballtables, puballsequences, pubinsert, pubupdate, pubdelete, pubsequence");
+ else
+ printfPQExpBuffer(&buf,
+ "SELECT oid, pubname,\n"
+ " pg_catalog.pg_get_userbyid(pubowner) AS owner,\n"
+ " puballtables, false as puballsequences, pubinsert, pubupdate, pubdelete, false as pubsequence");
+
if (has_pubtruncate)
appendPQExpBufferStr(&buf,
", pubtruncate");
@@ -5984,12 +6084,15 @@ describePublications(const char *pattern)
char *pubid = PQgetvalue(res, i, 0);
char *pubname = PQgetvalue(res, i, 1);
bool puballtables = strcmp(PQgetvalue(res, i, 3), "t") == 0;
+ bool puballsequences = strcmp(PQgetvalue(res, i, 4), "t") == 0;
printTableOpt myopt = pset.popt.topt;
if (has_pubtruncate)
ncols++;
if (has_pubviaroot)
ncols++;
+ if (has_pubsequence)
+ ncols += 2; /* adds "All sequences" and "Sequences" */
initPQExpBuffer(&title);
printfPQExpBuffer(&title, _("Publication %s"), pubname);
@@ -5997,23 +6100,30 @@ describePublications(const char *pattern)
printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+ if (has_pubsequence)
+ printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
+ if (has_pubsequence)
+ printTableAddHeader(&cont, gettext_noop("Sequences"), true, align);
if (has_pubtruncate)
printTableAddHeader(&cont, gettext_noop("Truncates"), true, align);
if (has_pubviaroot)
printTableAddHeader(&cont, gettext_noop("Via root"), true, align);
- printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
- printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
- printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
- printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
- printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+ printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false); /* owner */
+ printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false); /* all tables */
+ if (has_pubsequence)
+ printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false); /* all sequences */
+ printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false); /* insert */
+ printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false); /* update */
+ printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false); /* delete */
+ if (has_pubsequence)
+ printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false); /* sequence */
if (has_pubtruncate)
- printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
+ printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false); /* truncate */
if (has_pubviaroot)
- printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
+ printTableAddCell(&cont, PQgetvalue(res, i, 10), false, false); /* via root */
if (!puballtables)
{
@@ -6033,6 +6143,7 @@ describePublications(const char *pattern)
"WHERE c.relnamespace = n.oid\n"
" AND c.oid = pr.prrelid\n"
" AND pr.prpubid = '%s'\n"
+ " AND c.relkind != 'S'\n" /* exclude sequences */
"ORDER BY 1,2", pubid);
if (!addFooterToPublicationDesc(&buf, "Tables:", false, &cont))
goto error_return;
@@ -6044,7 +6155,7 @@ describePublications(const char *pattern)
"SELECT n.nspname\n"
"FROM pg_catalog.pg_namespace n\n"
" JOIN pg_catalog.pg_publication_namespace pn ON n.oid = pn.pnnspid\n"
- "WHERE pn.pnpubid = '%s'\n"
+ "WHERE pn.pnpubid = '%s' AND NOT pn.pnsequences\n"
"ORDER BY 1", pubid);
if (!addFooterToPublicationDesc(&buf, "Tables from schemas:",
true, &cont))
@@ -6052,6 +6163,37 @@ describePublications(const char *pattern)
}
}
+ if (!puballsequences)
+ {
+ /* Get the sequences for the specified publication */
+ printfPQExpBuffer(&buf,
+ "SELECT n.nspname, c.relname, NULL\n"
+ "FROM pg_catalog.pg_class c,\n"
+ " pg_catalog.pg_namespace n,\n"
+ " pg_catalog.pg_publication_rel pr\n"
+ "WHERE c.relnamespace = n.oid\n"
+ " AND c.oid = pr.prrelid\n"
+ " AND pr.prpubid = '%s'\n"
+ " AND c.relkind = 'S'\n" /* only sequences */
+ "ORDER BY 1,2", pubid);
+ if (!addFooterToPublicationDesc(&buf, "Sequences:", false, &cont))
+ goto error_return;
+
+ if (pset.sversion >= 150000)
+ {
+ /* Get the schemas for the specified publication */
+ printfPQExpBuffer(&buf,
+ "SELECT n.nspname\n"
+ "FROM pg_catalog.pg_namespace n\n"
+ " JOIN pg_catalog.pg_publication_namespace pn ON n.oid = pn.pnnspid\n"
+ "WHERE pn.pnpubid = '%s' AND pn.pnsequences\n"
+ "ORDER BY 1", pubid);
+ if (!addFooterToPublicationDesc(&buf, "Sequences from schemas:",
+ true, &cont))
+ goto error_return;
+ }
+ }
+
printTable(&cont, pset.queryFout, false, pset.logfile);
printTableCleanup(&cont);
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 6957567264a..a8cdfc4cfba 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1782,11 +1782,15 @@ psql_completion(const char *text, int start, int end)
COMPLETE_WITH("ADD", "DROP", "OWNER TO", "RENAME TO", "SET");
/* ALTER PUBLICATION <name> ADD */
else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD"))
- COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE");
+ COMPLETE_WITH("ALL TABLES IN SCHEMA", "ALL SEQUENCES IN SCHEMA", "TABLE", "SEQUENCE");
else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD|SET", "TABLE") ||
(HeadMatches("ALTER", "PUBLICATION", MatchAny, "ADD|SET", "TABLE") &&
ends_with(prev_wd, ',')))
COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_tables);
+ else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD|SET", "SEQUENCE") ||
+ (HeadMatches("ALTER", "PUBLICATION", MatchAny, "ADD|SET", "SEQUENCE") &&
+ ends_with(prev_wd, ',')))
+ COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_sequences);
/*
* "ALTER PUBLICATION <name> SET TABLE <name> WHERE (" - complete with
* table attributes
@@ -1805,11 +1809,11 @@ psql_completion(const char *text, int start, int end)
COMPLETE_WITH(",");
/* ALTER PUBLICATION <name> DROP */
else if (Matches("ALTER", "PUBLICATION", MatchAny, "DROP"))
- COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE");
+ COMPLETE_WITH("ALL TABLES IN SCHEMA", "ALL SEQUENCES IN SCHEMA", "TABLE", "SEQUENCE");
/* ALTER PUBLICATION <name> SET */
else if (Matches("ALTER", "PUBLICATION", MatchAny, "SET"))
- COMPLETE_WITH("(", "ALL TABLES IN SCHEMA", "TABLE");
- else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD|DROP|SET", "ALL", "TABLES", "IN", "SCHEMA"))
+ COMPLETE_WITH("(", "ALL TABLES IN SCHEMA", "ALL SEQUENCES IN SCHEMA", "TABLE", "SEQUENCE");
+ else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD|DROP|SET", "ALL", "TABLES|SEQUENCES", "IN", "SCHEMA"))
COMPLETE_WITH_QUERY_PLUS(Query_for_list_of_schemas
" AND nspname NOT LIKE E'pg\\\\_%'",
"CURRENT_SCHEMA");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 7f1ee97f55c..29b84af04b6 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -11546,6 +11546,11 @@
provolatile => 's', prorettype => 'oid', proargtypes => 'text',
proallargtypes => '{text,oid}', proargmodes => '{i,o}',
proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_tables' },
+{ oid => '8000', descr => 'get OIDs of sequences in a publication',
+ proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+ provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+ proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+ proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
{ oid => '6121',
descr => 'returns whether a relation can be part of a publication',
proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index ba72e62e614..f2b4e838c77 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
*/
bool puballtables;
+ /*
+ * indicates that this is a special publication which should encompass
+ * all sequences in the database (except unlogged and temporary ones)
+ */
+ bool puballsequences;
+
/* true if inserts are published */
bool pubinsert;
@@ -52,6 +58,9 @@ CATALOG(pg_publication,6104,PublicationRelationId)
/* true if truncates are published */
bool pubtruncate;
+ /* true if sequences are published */
+ bool pubsequence;
+
/* true if partition changes are published using root schema */
bool pubviaroot;
} FormData_pg_publication;
@@ -72,6 +81,7 @@ typedef struct PublicationActions
bool pubupdate;
bool pubdelete;
bool pubtruncate;
+ bool pubsequence;
} PublicationActions;
typedef struct PublicationDesc
@@ -92,6 +102,7 @@ typedef struct Publication
Oid oid;
char *name;
bool alltables;
+ bool allsequences;
bool pubviaroot;
PublicationActions pubactions;
} Publication;
@@ -122,26 +133,30 @@ typedef enum PublicationPartOpt
PUBLICATION_PART_ALL,
} PublicationPartOpt;
-extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
+extern List *GetPublicationRelations(Oid pubid, bool sequences,
+ PublicationPartOpt pub_partopt);
extern List *GetAllTablesPublications(void);
extern List *GetAllTablesPublicationRelations(bool pubviaroot);
-extern List *GetPublicationSchemas(Oid pubid);
+extern List *GetPublicationSchemas(Oid pubid, bool sequence);
extern List *GetSchemaPublications(Oid schemaid);
-extern List *GetSchemaPublicationRelations(Oid schemaid,
+extern List *GetSchemaPublicationRelations(Oid schemaid, bool sequences,
PublicationPartOpt pub_partopt);
-extern List *GetAllSchemaPublicationRelations(Oid puboid,
+extern List *GetAllSchemaPublicationRelations(Oid puboid, bool sequences,
PublicationPartOpt pub_partopt);
extern List *GetPubPartitionOptionRelations(List *result,
PublicationPartOpt pub_partopt,
Oid relid);
extern Oid GetTopMostAncestorInPublication(Oid puboid, List *ancestors);
+extern List *GetAllSequencesPublications(void);
+extern List *GetAllSequencesPublicationRelations(void);
+
extern bool is_publishable_relation(Relation rel);
extern bool is_schema_publication(Oid pubid);
extern ObjectAddress publication_add_relation(Oid pubid, PublicationRelInfo *pri,
bool if_not_exists);
extern ObjectAddress publication_add_schema(Oid pubid, Oid schemaid,
- bool if_not_exists);
+ bool sequences, bool if_not_exists);
extern Oid get_publication_oid(const char *pubname, bool missing_ok);
extern char *get_publication_name(Oid pubid, bool missing_ok);
diff --git a/src/include/catalog/pg_publication_namespace.h b/src/include/catalog/pg_publication_namespace.h
index e4306da02e7..bbc22dec9f8 100644
--- a/src/include/catalog/pg_publication_namespace.h
+++ b/src/include/catalog/pg_publication_namespace.h
@@ -32,6 +32,7 @@ CATALOG(pg_publication_namespace,8901,PublicationNamespaceRelationId)
Oid oid; /* oid */
Oid pnpubid BKI_LOOKUP(pg_publication); /* Oid of the publication */
Oid pnnspid BKI_LOOKUP(pg_namespace); /* Oid of the schema */
+ bool pnsequences; /* tables or sequences from the schema? */
} FormData_pg_publication_namespace;
/* ----------------
@@ -42,6 +43,6 @@ CATALOG(pg_publication_namespace,8901,PublicationNamespaceRelationId)
typedef FormData_pg_publication_namespace *Form_pg_publication_namespace;
DECLARE_UNIQUE_INDEX_PKEY(pg_publication_namespace_oid_index, 8902, PublicationNamespaceObjectIndexId, on pg_publication_namespace using btree(oid oid_ops));
-DECLARE_UNIQUE_INDEX(pg_publication_namespace_pnnspid_pnpubid_index, 8903, PublicationNamespacePnnspidPnpubidIndexId, on pg_publication_namespace using btree(pnnspid oid_ops, pnpubid oid_ops));
+DECLARE_UNIQUE_INDEX(pg_publication_namespace_pnnspid_pnpubid_pnseq_index, 8903, PublicationNamespacePnnspidPnpubidSeqIndexId, on pg_publication_namespace using btree(pnnspid oid_ops, pnpubid oid_ops, pnsequences bool_ops));
#endif /* PG_PUBLICATION_NAMESPACE_H */
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9fecc41954e..d8c255a7af5 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -60,6 +60,7 @@ extern ObjectAddress DefineSequence(ParseState *pstate, CreateSeqStmt *stmt);
extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
extern void DeleteSequenceTuple(Oid relid);
extern void ResetSequence(Oid seq_relid);
+extern void ResetSequence2(Oid seq_relid, int64 last_value, int64 log_cnt, bool is_called);
extern void ResetSequenceCaches(void);
extern void seq_redo(XLogReaderState *rptr);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 1617702d9d6..7adc3710408 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -3663,6 +3663,10 @@ typedef enum PublicationObjSpecType
PUBLICATIONOBJ_TABLES_IN_SCHEMA, /* All tables in schema */
PUBLICATIONOBJ_TABLES_IN_CUR_SCHEMA, /* All tables in first element of
* search_path */
+ PUBLICATIONOBJ_SEQUENCE, /* A sequence */
+ PUBLICATIONOBJ_SEQUENCES_IN_SCHEMA, /* All sequences in schema */
+ PUBLICATIONOBJ_SEQUENCES_IN_CUR_SCHEMA, /* All sequences in first element
+ * of search_path */
PUBLICATIONOBJ_CONTINUATION /* Continuation of previous type */
} PublicationObjSpecType;
@@ -3682,6 +3686,7 @@ typedef struct CreatePublicationStmt
List *options; /* List of DefElem nodes */
List *pubobjects; /* Optional list of publication objects */
bool for_all_tables; /* Special publication for all tables in db */
+ bool for_all_sequences; /* Special publication for all sequences in db */
} CreatePublicationStmt;
typedef enum AlterPublicationAction
@@ -3705,6 +3710,7 @@ typedef struct AlterPublicationStmt
*/
List *pubobjects; /* Optional list of publication objects */
bool for_all_tables; /* Special publication for all tables in db */
+ bool for_all_sequences; /* Special publication for all sequences in db */
AlterPublicationAction action; /* What action to perform with the given
* objects */
} AlterPublicationStmt;
diff --git a/src/include/replication/logicalproto.h b/src/include/replication/logicalproto.h
index 4d2c881644a..415757d8a2d 100644
--- a/src/include/replication/logicalproto.h
+++ b/src/include/replication/logicalproto.h
@@ -61,6 +61,7 @@ typedef enum LogicalRepMsgType
LOGICAL_REP_MSG_RELATION = 'R',
LOGICAL_REP_MSG_TYPE = 'Y',
LOGICAL_REP_MSG_MESSAGE = 'M',
+ LOGICAL_REP_MSG_SEQUENCE = 'X', /* FIXME change */
LOGICAL_REP_MSG_BEGIN_PREPARE = 'b',
LOGICAL_REP_MSG_PREPARE = 'P',
LOGICAL_REP_MSG_COMMIT_PREPARED = 'K',
@@ -118,6 +119,18 @@ typedef struct LogicalRepTyp
char *typname; /* name of the remote type */
} LogicalRepTyp;
+/* Sequence info */
+typedef struct LogicalRepSequence
+{
+ Oid remoteid; /* unique id of the remote sequence */
+ char *nspname; /* schema name of remote sequence */
+ char *seqname; /* name of the remote sequence */
+ bool transactional;
+ int64 last_value;
+ int64 log_cnt;
+ bool is_called;
+} LogicalRepSequence;
+
/* Transaction info */
typedef struct LogicalRepBeginData
{
@@ -230,6 +243,12 @@ extern List *logicalrep_read_truncate(StringInfo in,
bool *cascade, bool *restart_seqs);
extern void logicalrep_write_message(StringInfo out, TransactionId xid, XLogRecPtr lsn,
bool transactional, const char *prefix, Size sz, const char *message);
+extern void logicalrep_write_sequence(StringInfo out, Relation rel,
+ TransactionId xid, XLogRecPtr lsn,
+ bool transactional,
+ int64 last_value, int64 log_cnt,
+ bool is_called);
+extern void logicalrep_read_sequence(StringInfo in, LogicalRepSequence *seqdata);
extern void logicalrep_write_rel(StringInfo out, TransactionId xid,
Relation rel);
extern LogicalRepRelation *logicalrep_read_rel(StringInfo in);
diff --git a/src/include/replication/pgoutput.h b/src/include/replication/pgoutput.h
index eafedd610a5..f4e9f35d09d 100644
--- a/src/include/replication/pgoutput.h
+++ b/src/include/replication/pgoutput.h
@@ -29,6 +29,7 @@ typedef struct PGOutputData
bool streaming;
bool messages;
bool two_phase;
+ bool sequences;
} PGOutputData;
#endif /* PGOUTPUT_H */
diff --git a/src/interfaces/libpq/fe-exec.c b/src/interfaces/libpq/fe-exec.c
index 45dddaf5562..6e87d6f5c36 100644
--- a/src/interfaces/libpq/fe-exec.c
+++ b/src/interfaces/libpq/fe-exec.c
@@ -3725,7 +3725,10 @@ char *
PQgetvalue(const PGresult *res, int tup_num, int field_num)
{
if (!check_tuple_field_number(res, tup_num, field_num))
+ {
+ printf("getvalue\n");
return NULL;
+ }
return res->tuples[tup_num][field_num].value;
}
@@ -3736,7 +3739,10 @@ int
PQgetlength(const PGresult *res, int tup_num, int field_num)
{
if (!check_tuple_field_number(res, tup_num, field_num))
+ {
+ printf("getlength\n");
return 0;
+ }
if (res->tuples[tup_num][field_num].len != NULL_LEN)
return res->tuples[tup_num][field_num].len;
else
@@ -3750,7 +3756,10 @@ int
PQgetisnull(const PGresult *res, int tup_num, int field_num)
{
if (!check_tuple_field_number(res, tup_num, field_num))
+ {
+ printf("getisnull\n");
return 1; /* pretend it is null */
+ }
if (res->tuples[tup_num][field_num].len == NULL_LEN)
return 1;
else
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 3c382e520e4..bc8ba71925e 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -30,20 +30,20 @@ ERROR: conflicting or redundant options
LINE 1: ...ub_xxx WITH (publish_via_partition_root = 'true', publish_vi...
^
\dRp
- List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f | t | f | f | f | f
- testpub_default | regress_publication_user | f | f | t | f | f | f
+ List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f | f | t | f | f | f | f | f
+ testpub_default | regress_publication_user | f | f | f | t | f | f | f | f
(2 rows)
-ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
+ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete, sequence');
\dRp
- List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f | t | f | f | f | f
- testpub_default | regress_publication_user | f | t | t | t | f | f
+ List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f | f | t | f | f | f | f | f
+ testpub_default | regress_publication_user | f | f | t | t | t | f | t | f
(2 rows)
--- adding tables
@@ -87,10 +87,10 @@ RESET client_min_messages;
-- should be able to add schema to 'FOR TABLE' publication
ALTER PUBLICATION testpub_fortable ADD ALL TABLES IN SCHEMA pub_test;
\dRp+ testpub_fortable
- Publication testpub_fortable
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_fortable
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Sequences | Truncates | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"public.testpub_tbl1"
Tables from schemas:
@@ -99,20 +99,20 @@ Tables from schemas:
-- should be able to drop schema from 'FOR TABLE' publication
ALTER PUBLICATION testpub_fortable DROP ALL TABLES IN SCHEMA pub_test;
\dRp+ testpub_fortable
- Publication testpub_fortable
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_fortable
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Sequences | Truncates | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"public.testpub_tbl1"
-- should be able to set schema to 'FOR TABLE' publication
ALTER PUBLICATION testpub_fortable SET ALL TABLES IN SCHEMA pub_test;
\dRp+ testpub_fortable
- Publication testpub_fortable
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_fortable
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Sequences | Truncates | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test"
@@ -134,10 +134,10 @@ ERROR: relation "testpub_nopk" is not part of the publication
-- should be able to set table to schema publication
ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
\dRp+ testpub_forschema
- Publication testpub_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Sequences | Truncates | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"pub_test.testpub_nopk"
@@ -159,10 +159,10 @@ Publications:
"testpub_foralltables"
\dRp+ testpub_foralltables
- Publication testpub_foralltables
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | t | t | t | f | f | f
+ Publication testpub_foralltables
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Sequences | Truncates | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | t | f | t | t | f | f | f | f
(1 row)
DROP TABLE testpub_tbl2;
@@ -174,24 +174,520 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
RESET client_min_messages;
\dRp+ testpub3
- Publication testpub3
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub3
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Sequences | Truncates | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"public.testpub_tbl3"
"public.testpub_tbl3a"
\dRp+ testpub4
- Publication testpub4
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub4
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Sequences | Truncates | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"public.testpub_tbl3"
DROP TABLE testpub_tbl3, testpub_tbl3a;
DROP PUBLICATION testpub3, testpub4;
+--- adding sequences
+CREATE SEQUENCE testpub_seq0;
+CREATE SEQUENCE pub_test.testpub_seq1;
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_forallsequences FOR ALL SEQUENCES WITH (publish = 'sequence');
+RESET client_min_messages;
+ALTER PUBLICATION testpub_forallsequences SET (publish = 'insert, sequence');
+CREATE SEQUENCE testpub_seq2;
+-- fail - can't add to for all sequences publication
+ALTER PUBLICATION testpub_forallsequences ADD SEQUENCE testpub_seq2;
+ERROR: publication "testpub_forallsequences" is defined as FOR ALL SEQUENCES
+DETAIL: Sequences cannot be added to or dropped from FOR ALL SEQUENCES publications.
+-- fail - can't drop from all sequences publication
+ALTER PUBLICATION testpub_forallsequences DROP SEQUENCE testpub_seq2;
+ERROR: publication "testpub_forallsequences" is defined as FOR ALL SEQUENCES
+DETAIL: Sequences cannot be added to or dropped from FOR ALL SEQUENCES publications.
+-- fail - can't set sequence to for all sequences publication
+ALTER PUBLICATION testpub_forallsequences SET SEQUENCE pub_test.testpub_seq1;
+ERROR: publication "testpub_forallsequences" is defined as FOR ALL SEQUENCES
+DETAIL: Sequences cannot be added to or dropped from FOR ALL SEQUENCES publications.
+-- fail - can't add schema to 'FOR ALL SEQUENCES' publication
+ALTER PUBLICATION testpub_forallsequences ADD ALL SEQUENCES IN SCHEMA pub_test;
+ERROR: publication "testpub_forallsequences" is defined as FOR ALL SEQUENCES
+DETAIL: Sequences from schema cannot be added to, dropped from, or set on FOR ALL SEQUENCES publications.
+-- fail - can't drop schema from 'FOR ALL SEQUENCES' publication
+ALTER PUBLICATION testpub_forallsequences DROP ALL SEQUENCES IN SCHEMA pub_test;
+ERROR: publication "testpub_forallsequences" is defined as FOR ALL SEQUENCES
+DETAIL: Sequences from schema cannot be added to, dropped from, or set on FOR ALL SEQUENCES publications.
+-- fail - can't set schema to 'FOR ALL SEQUENCES' publication
+ALTER PUBLICATION testpub_forallsequences SET ALL SEQUENCES IN SCHEMA pub_test;
+ERROR: publication "testpub_forallsequences" is defined as FOR ALL SEQUENCES
+DETAIL: Sequences from schema cannot be added to, dropped from, or set on FOR ALL SEQUENCES publications.
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_forsequence FOR SEQUENCE testpub_seq0;
+RESET client_min_messages;
+-- should be able to add schema to 'FOR SEQUENCE' publication
+ALTER PUBLICATION testpub_forsequence ADD ALL SEQUENCES IN SCHEMA pub_test;
+\dRp+ testpub_forsequence
+ Publication testpub_forsequence
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Sequences | Truncates | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Sequences:
+ "public.testpub_seq0"
+Sequences from schemas:
+ "pub_test"
+
+-- should be able to drop schema from 'FOR SEQUENCE' publication
+ALTER PUBLICATION testpub_forsequence DROP ALL SEQUENCES IN SCHEMA pub_test;
+\dRp+ testpub_forsequence
+ Publication testpub_forsequence
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Sequences | Truncates | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Sequences:
+ "public.testpub_seq0"
+
+-- should be able to set schema to 'FOR SEQUENCE' publication
+ALTER PUBLICATION testpub_forsequence SET ALL SEQUENCES IN SCHEMA pub_test;
+\dRp+ testpub_forsequence
+ Publication testpub_forsequence
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Sequences | Truncates | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Sequences from schemas:
+ "pub_test"
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_forschema FOR ALL SEQUENCES IN SCHEMA pub_test;
+RESET client_min_messages;
+-- fail - can't create publication with schema and sequence of the same schema
+CREATE PUBLICATION testpub_for_seq_schema FOR ALL SEQUENCES IN SCHEMA pub_test, SEQUENCE pub_test.testpub_seq1;
+ERROR: cannot add relation "pub_test.testpub_seq1" to publication
+DETAIL: Sequence's schema "pub_test" is already part of the publication or part of the specified schema list.
+-- fail - can't add a sequence of the same schema to the schema publication
+ALTER PUBLICATION testpub_forschema ADD SEQUENCE pub_test.testpub_seq1;
+ERROR: cannot add relation "pub_test.testpub_seq1" to publication
+DETAIL: Sequence's schema "pub_test" is already part of the publication or part of the specified schema list.
+-- fail - can't drop a sequence from the schema publication which isn't in the
+-- publication
+ALTER PUBLICATION testpub_forschema DROP SEQUENCE pub_test.testpub_seq1;
+ERROR: relation "testpub_seq1" is not part of the publication
+-- should be able to set sequence to schema publication
+ALTER PUBLICATION testpub_forschema SET SEQUENCE pub_test.testpub_seq1;
+\dRp+ testpub_forschema
+ Publication testpub_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Sequences | Truncates | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Sequences:
+ "pub_test.testpub_seq1"
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'testpub_forallsequences';
+ pubname | puballtables | puballsequences
+-------------------------+--------------+-----------------
+ testpub_forallsequences | f | t
+(1 row)
+
+\d+ pub_test.testpub_seq1
+ Sequence "pub_test.testpub_seq1"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+Publications:
+ "testpub_forallsequences"
+ "testpub_forschema"
+ "testpub_forsequence"
+
+\dRp+ testpub_forallsequences
+ Publication testpub_forallsequences
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Sequences | Truncates | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | t | t | f | f | t | f | f
+(1 row)
+
+DROP SEQUENCE testpub_seq0, pub_test.testpub_seq1, testpub_seq2;
+DROP PUBLICATION testpub_forallsequences, testpub_forsequence, testpub_forschema;
+-- Publication mixing tables and sequences
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_mix;
+RESET client_min_messages;
+CREATE SEQUENCE testpub_seq1;
+CREATE SEQUENCE pub_test.testpub_seq2;
+ALTER PUBLICATION testpub_mix ADD SEQUENCE testpub_seq1, TABLE testpub_tbl1;
+\dRp+ testpub_mix
+ Publication testpub_mix
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Sequences | Truncates | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Tables:
+ "public.testpub_tbl1"
+Sequences:
+ "public.testpub_seq1"
+
+ALTER PUBLICATION testpub_mix ADD ALL SEQUENCES IN SCHEMA pub_test, ALL TABLES IN SCHEMA pub_test;
+\dRp+ testpub_mix
+ Publication testpub_mix
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Sequences | Truncates | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Tables:
+ "public.testpub_tbl1"
+Tables from schemas:
+ "pub_test"
+Sequences:
+ "public.testpub_seq1"
+Sequences from schemas:
+ "pub_test"
+
+ALTER PUBLICATION testpub_mix DROP ALL SEQUENCES IN SCHEMA pub_test;
+\dRp+ testpub_mix
+ Publication testpub_mix
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Sequences | Truncates | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Tables:
+ "public.testpub_tbl1"
+Tables from schemas:
+ "pub_test"
+Sequences:
+ "public.testpub_seq1"
+
+ALTER PUBLICATION testpub_mix DROP ALL TABLES IN SCHEMA pub_test;
+\dRp+ testpub_mix
+ Publication testpub_mix
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Sequences | Truncates | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Tables:
+ "public.testpub_tbl1"
+Sequences:
+ "public.testpub_seq1"
+
+DROP PUBLICATION testpub_mix;
+DROP SEQUENCE testpub_seq1;
+DROP SEQUENCE pub_test.testpub_seq2;
+-- make sure we replicate only the correct relation type
+CREATE SCHEMA pub_test1;
+CREATE SEQUENCE pub_test1.test_seq1;
+CREATE TABLE pub_test1.test_tbl1 (a int primary key, b int);
+CREATE SCHEMA pub_test2;
+CREATE SEQUENCE pub_test2.test_seq2;
+CREATE TABLE pub_test2.test_tbl2 (a int primary key, b int);
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_schemas;
+RESET client_min_messages;
+-- add tables from one schema, sequences from the other
+ALTER PUBLICATION testpub_schemas ADD ALL TABLES IN SCHEMA pub_test2;
+ALTER PUBLICATION testpub_schemas ADD ALL SEQUENCES IN SCHEMA pub_test1;
+\dRp+ testpub_schemas
+ Publication testpub_schemas
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Sequences | Truncates | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Tables from schemas:
+ "pub_test2"
+Sequences from schemas:
+ "pub_test1"
+
+\dn+ pub_test1
+ List of schemas
+ Name | Owner | Access privileges | Description
+-----------+--------------------------+-------------------+-------------
+ pub_test1 | regress_publication_user | |
+Publications:
+ "testpub_schemas" (sequences)
+
+\dn+ pub_test2
+ List of schemas
+ Name | Owner | Access privileges | Description
+-----------+--------------------------+-------------------+-------------
+ pub_test2 | regress_publication_user | |
+Publications:
+ "testpub_schemas" (tables)
+
+\d+ pub_test1.test_seq1;
+ Sequence "pub_test1.test_seq1"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test1.test_tbl1;
+ Table "pub_test1.test_tbl1"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a | integer | | not null | | plain | |
+ b | integer | | | | plain | |
+Indexes:
+ "test_tbl1_pkey" PRIMARY KEY, btree (a)
+
+\d+ pub_test2.test_seq2;
+ Sequence "pub_test2.test_seq2"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+
+\d+ pub_test2.test_tbl2;
+ Table "pub_test2.test_tbl2"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a | integer | | not null | | plain | |
+ b | integer | | | | plain | |
+Indexes:
+ "test_tbl2_pkey" PRIMARY KEY, btree (a)
+Publications:
+ "testpub_schemas"
+
+-- add the other object type from each schema
+ALTER PUBLICATION testpub_schemas ADD ALL TABLES IN SCHEMA pub_test1;
+ALTER PUBLICATION testpub_schemas ADD ALL SEQUENCES IN SCHEMA pub_test2;
+\dRp+ testpub_schemas
+ Publication testpub_schemas
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Sequences | Truncates | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Tables from schemas:
+ "pub_test1"
+ "pub_test2"
+Sequences from schemas:
+ "pub_test1"
+ "pub_test2"
+
+\dn+ pub_test1
+ List of schemas
+ Name | Owner | Access privileges | Description
+-----------+--------------------------+-------------------+-------------
+ pub_test1 | regress_publication_user | |
+Publications:
+ "testpub_schemas" (sequences)
+ "testpub_schemas" (tables)
+
+\dn+ pub_test2
+ List of schemas
+ Name | Owner | Access privileges | Description
+-----------+--------------------------+-------------------+-------------
+ pub_test2 | regress_publication_user | |
+Publications:
+ "testpub_schemas" (tables)
+ "testpub_schemas" (sequences)
+
+\d+ pub_test1.test_seq1;
+ Sequence "pub_test1.test_seq1"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test1.test_tbl1;
+ Table "pub_test1.test_tbl1"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a | integer | | not null | | plain | |
+ b | integer | | | | plain | |
+Indexes:
+ "test_tbl1_pkey" PRIMARY KEY, btree (a)
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test2.test_seq2;
+ Sequence "pub_test2.test_seq2"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test2.test_tbl2;
+ Table "pub_test2.test_tbl2"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a | integer | | not null | | plain | |
+ b | integer | | | | plain | |
+Indexes:
+ "test_tbl2_pkey" PRIMARY KEY, btree (a)
+Publications:
+ "testpub_schemas"
+
+-- now drop the object type added first
+ALTER PUBLICATION testpub_schemas DROP ALL TABLES IN SCHEMA pub_test2;
+ALTER PUBLICATION testpub_schemas DROP ALL SEQUENCES IN SCHEMA pub_test1;
+\dRp+ testpub_schemas
+ Publication testpub_schemas
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Sequences | Truncates | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Tables from schemas:
+ "pub_test1"
+Sequences from schemas:
+ "pub_test2"
+
+\dn+ pub_test1
+ List of schemas
+ Name | Owner | Access privileges | Description
+-----------+--------------------------+-------------------+-------------
+ pub_test1 | regress_publication_user | |
+Publications:
+ "testpub_schemas" (tables)
+
+\dn+ pub_test2
+ List of schemas
+ Name | Owner | Access privileges | Description
+-----------+--------------------------+-------------------+-------------
+ pub_test2 | regress_publication_user | |
+Publications:
+ "testpub_schemas" (sequences)
+
+\d+ pub_test1.test_seq1;
+ Sequence "pub_test1.test_seq1"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+
+\d+ pub_test1.test_tbl1;
+ Table "pub_test1.test_tbl1"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a | integer | | not null | | plain | |
+ b | integer | | | | plain | |
+Indexes:
+ "test_tbl1_pkey" PRIMARY KEY, btree (a)
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test2.test_seq2;
+ Sequence "pub_test2.test_seq2"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test2.test_tbl2;
+ Table "pub_test2.test_tbl2"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a | integer | | not null | | plain | |
+ b | integer | | | | plain | |
+Indexes:
+ "test_tbl2_pkey" PRIMARY KEY, btree (a)
+
+-- should fail (publication contains the whole schema)
+ALTER PUBLICATION testpub_schemas ADD TABLE pub_test1.test_tbl1;
+ERROR: cannot add relation "pub_test1.test_tbl1" to publication
+DETAIL: Table's schema "pub_test1" is already part of the publication or part of the specified schema list.
+ALTER PUBLICATION testpub_schemas ADD SEQUENCE pub_test2.test_seq2;
+ERROR: cannot add relation "pub_test2.test_seq2" to publication
+DETAIL: Sequence's schema "pub_test2" is already part of the publication or part of the specified schema list.
+-- should work (different schema)
+ALTER PUBLICATION testpub_schemas ADD TABLE pub_test2.test_tbl2;
+ALTER PUBLICATION testpub_schemas ADD SEQUENCE pub_test1.test_seq1;
+\dRp+ testpub_schemas
+ Publication testpub_schemas
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Sequences | Truncates | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Tables:
+ "pub_test2.test_tbl2"
+Tables from schemas:
+ "pub_test1"
+Sequences:
+ "pub_test1.test_seq1"
+Sequences from schemas:
+ "pub_test2"
+
+\d+ pub_test1.test_seq1;
+ Sequence "pub_test1.test_seq1"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test1.test_tbl1;
+ Table "pub_test1.test_tbl1"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a | integer | | not null | | plain | |
+ b | integer | | | | plain | |
+Indexes:
+ "test_tbl1_pkey" PRIMARY KEY, btree (a)
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test2.test_seq2;
+ Sequence "pub_test2.test_seq2"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test2.test_tbl2;
+ Table "pub_test2.test_tbl2"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a | integer | | not null | | plain | |
+ b | integer | | | | plain | |
+Indexes:
+ "test_tbl2_pkey" PRIMARY KEY, btree (a)
+Publications:
+ "testpub_schemas"
+
+-- now drop the explicitly added objects again
+ALTER PUBLICATION testpub_schemas DROP TABLE pub_test2.test_tbl2;
+ALTER PUBLICATION testpub_schemas DROP SEQUENCE pub_test1.test_seq1;
+\dRp+ testpub_schemas
+ Publication testpub_schemas
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Sequences | Truncates | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Tables from schemas:
+ "pub_test1"
+Sequences from schemas:
+ "pub_test2"
+
+\d+ pub_test1.test_seq1;
+ Sequence "pub_test1.test_seq1"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+
+\d+ pub_test1.test_tbl1;
+ Table "pub_test1.test_tbl1"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a | integer | | not null | | plain | |
+ b | integer | | | | plain | |
+Indexes:
+ "test_tbl1_pkey" PRIMARY KEY, btree (a)
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test2.test_seq2;
+ Sequence "pub_test2.test_seq2"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test2.test_tbl2;
+ Table "pub_test2.test_tbl2"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a | integer | | not null | | plain | |
+ b | integer | | | | plain | |
+Indexes:
+ "test_tbl2_pkey" PRIMARY KEY, btree (a)
+
+DROP PUBLICATION testpub_schemas;
+DROP TABLE pub_test1.test_tbl1, pub_test2.test_tbl2;
+DROP SEQUENCE pub_test1.test_seq1, pub_test2.test_seq2;
+DROP SCHEMA pub_test1, pub_test2;
-- Tests for partitioned tables
SET client_min_messages = 'ERROR';
CREATE PUBLICATION testpub_forparted;
@@ -207,10 +703,10 @@ UPDATE testpub_parted1 SET a = 1;
-- only parent is listed as being in publication, not the partition
ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
\dRp+ testpub_forparted
- Publication testpub_forparted
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_forparted
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Sequences | Truncates | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"public.testpub_parted"
@@ -223,10 +719,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
UPDATE testpub_parted1 SET a = 1;
ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
\dRp+ testpub_forparted
- Publication testpub_forparted
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | t
+ Publication testpub_forparted
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Sequences | Truncates | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | t
Tables:
"public.testpub_parted"
@@ -255,20 +751,20 @@ SET client_min_messages = 'ERROR';
CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
RESET client_min_messages;
\dRp+ testpub5
- Publication testpub5
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | f | f | f | f
+ Publication testpub5
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Sequences | Truncates | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | f | f | f | f | f
Tables:
"public.testpub_rf_tbl1"
"public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
\dRp+ testpub5
- Publication testpub5
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | f | f | f | f
+ Publication testpub5
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Sequences | Truncates | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | f | f | f | f | f
Tables:
"public.testpub_rf_tbl1"
"public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -276,10 +772,10 @@ Tables:
ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
\dRp+ testpub5
- Publication testpub5
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | f | f | f | f
+ Publication testpub5
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Sequences | Truncates | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | f | f | f | f | f
Tables:
"public.testpub_rf_tbl1"
"public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -287,10 +783,10 @@ Tables:
-- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
\dRp+ testpub5
- Publication testpub5
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | f | f | f | f
+ Publication testpub5
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Sequences | Truncates | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | f | f | f | f | f
Tables:
"public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
@@ -315,10 +811,10 @@ SET client_min_messages = 'ERROR';
CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
RESET client_min_messages;
\dRp+ testpub_syntax1
- Publication testpub_syntax1
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | f | f | f | f
+ Publication testpub_syntax1
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Sequences | Truncates | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | f | f | f | f | f
Tables:
"public.testpub_rf_tbl1"
"public.testpub_rf_tbl3" WHERE (e < 999)
@@ -328,10 +824,10 @@ SET client_min_messages = 'ERROR';
CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
RESET client_min_messages;
\dRp+ testpub_syntax2
- Publication testpub_syntax2
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | f | f | f | f
+ Publication testpub_syntax2
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Sequences | Truncates | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | f | f | f | f | f
Tables:
"public.testpub_rf_tbl1"
"testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -636,10 +1132,10 @@ ERROR: relation "testpub_tbl1" is already member of publication "testpub_fortbl
CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
ERROR: publication "testpub_fortbl" already exists
\dRp+ testpub_fortbl
- Publication testpub_fortbl
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_fortbl
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Sequences | Truncates | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"pub_test.testpub_nopk"
"public.testpub_tbl1"
@@ -677,10 +1173,10 @@ Publications:
"testpub_fortbl"
\dRp+ testpub_default
- Publication testpub_default
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | f | f
+ Publication testpub_default
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Sequences | Truncates | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | f | f
Tables:
"pub_test.testpub_nopk"
"public.testpub_tbl1"
@@ -758,10 +1254,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
DROP TABLE testpub_parted;
DROP TABLE testpub_tbl1;
\dRp+ testpub_default
- Publication testpub_default
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | f | f
+ Publication testpub_default
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Sequences | Truncates | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | f | f
(1 row)
-- fail - must be owner of publication
@@ -771,20 +1267,20 @@ ERROR: must be owner of publication testpub_default
RESET ROLE;
ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
\dRp testpub_foo
- List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
--------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpub_foo | regress_publication_user | f | t | t | t | f | f
+ List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ testpub_foo | regress_publication_user | f | f | t | t | t | f | t | f
(1 row)
-- rename back to keep the rest simple
ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
\dRp testpub_default
- List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
------------------+---------------------------+------------+---------+---------+---------+-----------+----------
- testpub_default | regress_publication_user2 | f | t | t | t | f | f
+ List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ testpub_default | regress_publication_user2 | f | f | t | t | t | f | t | f
(1 row)
-- adding schemas and tables
@@ -800,19 +1296,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
SET client_min_messages = 'ERROR';
CREATE PUBLICATION testpub1_forschema FOR ALL TABLES IN SCHEMA pub_test1;
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Sequences | Truncates | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
CREATE PUBLICATION testpub2_forschema FOR ALL TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
\dRp+ testpub2_forschema
- Publication testpub2_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub2_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Sequences | Truncates | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
"pub_test2"
@@ -826,44 +1322,44 @@ CREATE PUBLICATION testpub6_forschema FOR ALL TABLES IN SCHEMA "CURRENT_SCHEMA",
CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
RESET client_min_messages;
\dRp+ testpub3_forschema
- Publication testpub3_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub3_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Sequences | Truncates | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"public"
\dRp+ testpub4_forschema
- Publication testpub4_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub4_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Sequences | Truncates | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"CURRENT_SCHEMA"
\dRp+ testpub5_forschema
- Publication testpub5_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub5_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Sequences | Truncates | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"CURRENT_SCHEMA"
"public"
\dRp+ testpub6_forschema
- Publication testpub6_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub6_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Sequences | Truncates | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"CURRENT_SCHEMA"
"public"
\dRp+ testpub_fortable
- Publication testpub_fortable
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_fortable
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Sequences | Truncates | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"CURRENT_SCHEMA.CURRENT_SCHEMA"
@@ -897,10 +1393,10 @@ ERROR: schema "testpub_view" does not exist
-- dropping the schema should reflect the change in publication
DROP SCHEMA pub_test3;
\dRp+ testpub2_forschema
- Publication testpub2_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub2_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Sequences | Truncates | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
"pub_test2"
@@ -908,20 +1404,20 @@ Tables from schemas:
-- renaming the schema should reflect the change in publication
ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
\dRp+ testpub2_forschema
- Publication testpub2_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub2_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Sequences | Truncates | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1_renamed"
"pub_test2"
ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
\dRp+ testpub2_forschema
- Publication testpub2_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub2_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Sequences | Truncates | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
"pub_test2"
@@ -929,10 +1425,10 @@ Tables from schemas:
-- alter publication add schema
ALTER PUBLICATION testpub1_forschema ADD ALL TABLES IN SCHEMA pub_test2;
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Sequences | Truncates | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
"pub_test2"
@@ -941,10 +1437,10 @@ Tables from schemas:
ALTER PUBLICATION testpub1_forschema ADD ALL TABLES IN SCHEMA non_existent_schema;
ERROR: schema "non_existent_schema" does not exist
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Sequences | Truncates | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
"pub_test2"
@@ -953,10 +1449,10 @@ Tables from schemas:
ALTER PUBLICATION testpub1_forschema ADD ALL TABLES IN SCHEMA pub_test1;
ERROR: schema "pub_test1" is already member of publication "testpub1_forschema"
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Sequences | Truncates | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
"pub_test2"
@@ -964,10 +1460,10 @@ Tables from schemas:
-- alter publication drop schema
ALTER PUBLICATION testpub1_forschema DROP ALL TABLES IN SCHEMA pub_test2;
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Sequences | Truncates | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
@@ -975,10 +1471,10 @@ Tables from schemas:
ALTER PUBLICATION testpub1_forschema DROP ALL TABLES IN SCHEMA pub_test2;
ERROR: tables from schema "pub_test2" are not part of the publication
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Sequences | Truncates | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
@@ -986,29 +1482,29 @@ Tables from schemas:
ALTER PUBLICATION testpub1_forschema DROP ALL TABLES IN SCHEMA non_existent_schema;
ERROR: schema "non_existent_schema" does not exist
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Sequences | Truncates | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
-- drop all schemas
ALTER PUBLICATION testpub1_forschema DROP ALL TABLES IN SCHEMA pub_test1;
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Sequences | Truncates | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
(1 row)
-- alter publication set multiple schema
ALTER PUBLICATION testpub1_forschema SET ALL TABLES IN SCHEMA pub_test1, pub_test2;
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Sequences | Truncates | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
"pub_test2"
@@ -1017,10 +1513,10 @@ Tables from schemas:
ALTER PUBLICATION testpub1_forschema SET ALL TABLES IN SCHEMA non_existent_schema;
ERROR: schema "non_existent_schema" does not exist
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Sequences | Truncates | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
"pub_test2"
@@ -1029,10 +1525,10 @@ Tables from schemas:
-- removing the duplicate schemas
ALTER PUBLICATION testpub1_forschema SET ALL TABLES IN SCHEMA pub_test1, pub_test1;
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Sequences | Truncates | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
@@ -1102,18 +1598,18 @@ SET client_min_messages = 'ERROR';
CREATE PUBLICATION testpub3_forschema;
RESET client_min_messages;
\dRp+ testpub3_forschema
- Publication testpub3_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub3_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Sequences | Truncates | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
(1 row)
ALTER PUBLICATION testpub3_forschema SET ALL TABLES IN SCHEMA pub_test1;
\dRp+ testpub3_forschema
- Publication testpub3_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub3_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Sequences | Truncates | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
@@ -1123,20 +1619,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR ALL TABLES IN SCHEMA pub_test1
CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, ALL TABLES IN SCHEMA pub_test1;
RESET client_min_messages;
\dRp+ testpub_forschema_fortable
- Publication testpub_forschema_fortable
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_forschema_fortable
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Sequences | Truncates | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"pub_test2.tbl1"
Tables from schemas:
"pub_test1"
\dRp+ testpub_fortable_forschema
- Publication testpub_fortable_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_fortable_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Sequences | Truncates | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"pub_test2.tbl1"
Tables from schemas:
@@ -1180,40 +1676,85 @@ CREATE SCHEMA sch1;
CREATE SCHEMA sch2;
CREATE TABLE sch1.tbl1 (a int) PARTITION BY RANGE(a);
CREATE TABLE sch2.tbl1_part1 PARTITION OF sch1.tbl1 FOR VALUES FROM (1) to (10);
+CREATE SEQUENCE sch1.seq1;
+CREATE SEQUENCE sch2.seq2;
-- Schema publication that does not include the schema that has the parent table
CREATE PUBLICATION pub FOR ALL TABLES IN SCHEMA sch2 WITH (PUBLISH_VIA_PARTITION_ROOT=1);
+ALTER PUBLICATION pub ADD ALL SEQUENCES IN SCHEMA sch2;
SELECT * FROM pg_publication_tables;
pubname | schemaname | tablename
---------+------------+------------
pub | sch2 | tbl1_part1
(1 row)
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch2 | seq2
+(1 row)
+
DROP PUBLICATION pub;
-- Table publication that does not include the parent table
CREATE PUBLICATION pub FOR TABLE sch2.tbl1_part1 WITH (PUBLISH_VIA_PARTITION_ROOT=1);
+ALTER PUBLICATION pub ADD SEQUENCE sch2.seq2;
SELECT * FROM pg_publication_tables;
pubname | schemaname | tablename
---------+------------+------------
pub | sch2 | tbl1_part1
(1 row)
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch2 | seq2
+(1 row)
+
-- Table publication that includes both the parent table and the child table
ALTER PUBLICATION pub ADD TABLE sch1.tbl1;
+ALTER PUBLICATION pub ADD SEQUENCE sch1.seq1;
SELECT * FROM pg_publication_tables;
pubname | schemaname | tablename
---------+------------+-----------
pub | sch1 | tbl1
(1 row)
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch1 | seq1
+ pub | sch2 | seq2
+(2 rows)
+
DROP PUBLICATION pub;
-- Schema publication that does not include the schema that has the parent table
CREATE PUBLICATION pub FOR ALL TABLES IN SCHEMA sch2 WITH (PUBLISH_VIA_PARTITION_ROOT=0);
+ALTER PUBLICATION pub ADD SEQUENCE sch1.seq1;
SELECT * FROM pg_publication_tables;
pubname | schemaname | tablename
---------+------------+------------
pub | sch2 | tbl1_part1
(1 row)
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch1 | seq1
+(1 row)
+
+DROP PUBLICATION pub;
+-- Sequence publication
+CREATE PUBLICATION pub FOR SEQUENCE sch2.seq2;
+SELECT * FROM pg_publication_tables;
+ pubname | schemaname | tablename
+---------+------------+-----------
+(0 rows)
+
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch2 | seq2
+(1 row)
+
DROP PUBLICATION pub;
-- Table publication that does not include the parent table
CREATE PUBLICATION pub FOR TABLE sch2.tbl1_part1 WITH (PUBLISH_VIA_PARTITION_ROOT=0);
@@ -1223,14 +1764,26 @@ SELECT * FROM pg_publication_tables;
pub | sch2 | tbl1_part1
(1 row)
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+(0 rows)
+
-- Table publication that includes both the parent table and the child table
ALTER PUBLICATION pub ADD TABLE sch1.tbl1;
+ALTER PUBLICATION pub ADD ALL SEQUENCES IN SCHEMA sch2;
SELECT * FROM pg_publication_tables;
pubname | schemaname | tablename
---------+------------+------------
pub | sch2 | tbl1_part1
(1 row)
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch2 | seq2
+(1 row)
+
DROP PUBLICATION pub;
DROP TABLE sch2.tbl1_part1;
DROP TABLE sch1.tbl1;
@@ -1246,9 +1799,81 @@ SELECT * FROM pg_publication_tables;
pub | sch1 | tbl1
(1 row)
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+(0 rows)
+
+DROP PUBLICATION pub;
+-- Sequence publication
+CREATE PUBLICATION pub FOR SEQUENCE sch2.seq2;
+SELECT * FROM pg_publication_tables;
+ pubname | schemaname | tablename
+---------+------------+-----------
+(0 rows)
+
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch2 | seq2
+(1 row)
+
+DROP PUBLICATION pub;
+-- Sequence schema publication
+CREATE PUBLICATION pub FOR ALL SEQUENCES IN SCHEMA sch2;
+SELECT * FROM pg_publication_tables;
+ pubname | schemaname | tablename
+---------+------------+-----------
+(0 rows)
+
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch2 | seq2
+(1 row)
+
+ALTER PUBLICATION pub ADD SEQUENCE sch1.seq1;
+SELECT * FROM pg_publication_tables;
+ pubname | schemaname | tablename
+---------+------------+-----------
+(0 rows)
+
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch1 | seq1
+ pub | sch2 | seq2
+(2 rows)
+
+ALTER PUBLICATION pub DROP SEQUENCE sch1.seq1;
+SELECT * FROM pg_publication_tables;
+ pubname | schemaname | tablename
+---------+------------+-----------
+(0 rows)
+
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch2 | seq2
+(1 row)
+
+ALTER PUBLICATION pub ADD ALL SEQUENCES IN SCHEMA sch1;
+SELECT * FROM pg_publication_tables;
+ pubname | schemaname | tablename
+---------+------------+-----------
+(0 rows)
+
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch1 | seq1
+ pub | sch2 | seq2
+(2 rows)
+
RESET client_min_messages;
DROP PUBLICATION pub;
DROP TABLE sch1.tbl1;
+DROP SEQUENCE sch1.seq1, sch2.seq2;
DROP SCHEMA sch1 cascade;
DROP SCHEMA sch2 cascade;
RESET SESSION AUTHORIZATION;
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 1420288d67b..8b95e1ffaaa 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1429,6 +1429,14 @@ pg_prepared_xacts| SELECT p.transaction,
FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+ n.nspname AS schemaname,
+ c.relname AS sequencename
+ FROM pg_publication p,
+ LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+ (pg_class c
+ JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+ WHERE (c.oid = gps.relid);
pg_publication_tables| SELECT p.pubname,
n.nspname AS schemaname,
c.relname AS tablename
diff --git a/src/test/regress/sql/object_address.sql b/src/test/regress/sql/object_address.sql
index 2f40156eb48..f90afad8045 100644
--- a/src/test/regress/sql/object_address.sql
+++ b/src/test/regress/sql/object_address.sql
@@ -143,7 +143,6 @@ SELECT pg_get_object_address('publication', '{one}', '{}');
SELECT pg_get_object_address('publication', '{one,two}', '{}');
SELECT pg_get_object_address('subscription', '{one}', '{}');
SELECT pg_get_object_address('subscription', '{one,two}', '{}');
-
-- test successful cases
WITH objects (type, name, args) AS (VALUES
('table', '{addr_nsp, gentable}'::text[], '{}'::text[]),
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index 3f04d34264a..5449bef8df9 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -27,7 +27,7 @@ CREATE PUBLICATION testpub_xxx WITH (publish_via_partition_root = 'true', publis
\dRp
-ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
+ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete, sequence');
\dRp
@@ -104,6 +104,179 @@ RESET client_min_messages;
DROP TABLE testpub_tbl3, testpub_tbl3a;
DROP PUBLICATION testpub3, testpub4;
+--- adding sequences
+CREATE SEQUENCE testpub_seq0;
+CREATE SEQUENCE pub_test.testpub_seq1;
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_forallsequences FOR ALL SEQUENCES WITH (publish = 'sequence');
+RESET client_min_messages;
+ALTER PUBLICATION testpub_forallsequences SET (publish = 'insert, sequence');
+
+CREATE SEQUENCE testpub_seq2;
+-- fail - can't add to for all sequences publication
+ALTER PUBLICATION testpub_forallsequences ADD SEQUENCE testpub_seq2;
+-- fail - can't drop from all sequences publication
+ALTER PUBLICATION testpub_forallsequences DROP SEQUENCE testpub_seq2;
+-- fail - can't add to for all sequences publication
+ALTER PUBLICATION testpub_forallsequences SET SEQUENCE pub_test.testpub_seq1;
+
+-- fail - can't add schema to 'FOR ALL SEQUENCES' publication
+ALTER PUBLICATION testpub_forallsequences ADD ALL SEQUENCES IN SCHEMA pub_test;
+-- fail - can't drop schema from 'FOR ALL SEQUENCES' publication
+ALTER PUBLICATION testpub_forallsequences DROP ALL SEQUENCES IN SCHEMA pub_test;
+-- fail - can't set schema to 'FOR ALL SEQUENCES' publication
+ALTER PUBLICATION testpub_forallsequences SET ALL SEQUENCES IN SCHEMA pub_test;
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_forsequence FOR SEQUENCE testpub_seq0;
+RESET client_min_messages;
+-- should be able to add schema to 'FOR SEQUENCE' publication
+ALTER PUBLICATION testpub_forsequence ADD ALL SEQUENCES IN SCHEMA pub_test;
+\dRp+ testpub_forsequence
+-- should be able to drop schema from 'FOR SEQUENCE' publication
+ALTER PUBLICATION testpub_forsequence DROP ALL SEQUENCES IN SCHEMA pub_test;
+\dRp+ testpub_forsequence
+-- should be able to set schema to 'FOR SEQUENCE' publication
+ALTER PUBLICATION testpub_forsequence SET ALL SEQUENCES IN SCHEMA pub_test;
+\dRp+ testpub_forsequence
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_forschema FOR ALL SEQUENCES IN SCHEMA pub_test;
+RESET client_min_messages;
+-- fail - can't create publication with schema and table of the same schema
+CREATE PUBLICATION testpub_for_seq_schema FOR ALL SEQUENCES IN SCHEMA pub_test, SEQUENCE pub_test.testpub_seq1;
+-- fail - can't add a sequence of the same schema to the schema publication
+ALTER PUBLICATION testpub_forschema ADD SEQUENCE pub_test.testpub_seq1;
+-- fail - can't drop a sequence from the schema publication which isn't in the
+-- publication
+ALTER PUBLICATION testpub_forschema DROP SEQUENCE pub_test.testpub_seq1;
+-- should be able to set sequence to schema publication
+ALTER PUBLICATION testpub_forschema SET SEQUENCE pub_test.testpub_seq1;
+\dRp+ testpub_forschema
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'testpub_forallsequences';
+\d+ pub_test.testpub_seq1
+\dRp+ testpub_forallsequences
+DROP SEQUENCE testpub_seq0, pub_test.testpub_seq1, testpub_seq2;
+DROP PUBLICATION testpub_forallsequences, testpub_forsequence, testpub_forschema;
+
+-- Publication mixing tables and sequences
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_mix;
+RESET client_min_messages;
+
+CREATE SEQUENCE testpub_seq1;
+CREATE SEQUENCE pub_test.testpub_seq2;
+
+ALTER PUBLICATION testpub_mix ADD SEQUENCE testpub_seq1, TABLE testpub_tbl1;
+\dRp+ testpub_mix
+
+ALTER PUBLICATION testpub_mix ADD ALL SEQUENCES IN SCHEMA pub_test, ALL TABLES IN SCHEMA pub_test;
+\dRp+ testpub_mix
+
+ALTER PUBLICATION testpub_mix DROP ALL SEQUENCES IN SCHEMA pub_test;
+\dRp+ testpub_mix
+
+ALTER PUBLICATION testpub_mix DROP ALL TABLES IN SCHEMA pub_test;
+\dRp+ testpub_mix
+
+DROP PUBLICATION testpub_mix;
+DROP SEQUENCE testpub_seq1;
+DROP SEQUENCE pub_test.testpub_seq2;
+
+
+-- make sure we replicate only the correct relation type
+CREATE SCHEMA pub_test1;
+CREATE SEQUENCE pub_test1.test_seq1;
+CREATE TABLE pub_test1.test_tbl1 (a int primary key, b int);
+
+CREATE SCHEMA pub_test2;
+CREATE SEQUENCE pub_test2.test_seq2;
+CREATE TABLE pub_test2.test_tbl2 (a int primary key, b int);
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_schemas;
+RESET client_min_messages;
+
+-- add tables from one schema, sequences from the other
+ALTER PUBLICATION testpub_schemas ADD ALL TABLES IN SCHEMA pub_test2;
+ALTER PUBLICATION testpub_schemas ADD ALL SEQUENCES IN SCHEMA pub_test1;
+
+\dRp+ testpub_schemas
+
+\dn+ pub_test1
+\dn+ pub_test2
+
+\d+ pub_test1.test_seq1;
+\d+ pub_test1.test_tbl1;
+
+\d+ pub_test2.test_seq2;
+\d+ pub_test2.test_tbl2;
+
+-- add the other object type from each schema
+ALTER PUBLICATION testpub_schemas ADD ALL TABLES IN SCHEMA pub_test1;
+ALTER PUBLICATION testpub_schemas ADD ALL SEQUENCES IN SCHEMA pub_test2;
+
+\dRp+ testpub_schemas
+
+\dn+ pub_test1
+\dn+ pub_test2
+
+\d+ pub_test1.test_seq1;
+\d+ pub_test1.test_tbl1;
+
+\d+ pub_test2.test_seq2;
+\d+ pub_test2.test_tbl2;
+
+-- now drop the object type added first
+ALTER PUBLICATION testpub_schemas DROP ALL TABLES IN SCHEMA pub_test2;
+ALTER PUBLICATION testpub_schemas DROP ALL SEQUENCES IN SCHEMA pub_test1;
+
+\dRp+ testpub_schemas
+
+\dn+ pub_test1
+\dn+ pub_test2
+
+\d+ pub_test1.test_seq1;
+\d+ pub_test1.test_tbl1;
+
+\d+ pub_test2.test_seq2;
+\d+ pub_test2.test_tbl2;
+
+-- should fail (publication contains the whole schema)
+ALTER PUBLICATION testpub_schemas ADD TABLE pub_test1.test_tbl1;
+ALTER PUBLICATION testpub_schemas ADD SEQUENCE pub_test2.test_seq2;
+
+-- should work (different schema)
+ALTER PUBLICATION testpub_schemas ADD TABLE pub_test2.test_tbl2;
+ALTER PUBLICATION testpub_schemas ADD SEQUENCE pub_test1.test_seq1;
+
+\dRp+ testpub_schemas
+
+\d+ pub_test1.test_seq1;
+\d+ pub_test1.test_tbl1;
+
+\d+ pub_test2.test_seq2;
+\d+ pub_test2.test_tbl2;
+
+-- now drop the explicitly added objects again
+ALTER PUBLICATION testpub_schemas DROP TABLE pub_test2.test_tbl2;
+ALTER PUBLICATION testpub_schemas DROP SEQUENCE pub_test1.test_seq1;
+
+\dRp+ testpub_schemas
+
+\d+ pub_test1.test_seq1;
+\d+ pub_test1.test_tbl1;
+
+\d+ pub_test2.test_seq2;
+\d+ pub_test2.test_tbl2;
+
+DROP PUBLICATION testpub_schemas;
+DROP TABLE pub_test1.test_tbl1, pub_test2.test_tbl2;
+DROP SEQUENCE pub_test1.test_seq1, pub_test2.test_seq2;
+DROP SCHEMA pub_test1, pub_test2;
+
-- Tests for partitioned tables
SET client_min_messages = 'ERROR';
CREATE PUBLICATION testpub_forparted;
@@ -714,32 +887,51 @@ CREATE SCHEMA sch1;
CREATE SCHEMA sch2;
CREATE TABLE sch1.tbl1 (a int) PARTITION BY RANGE(a);
CREATE TABLE sch2.tbl1_part1 PARTITION OF sch1.tbl1 FOR VALUES FROM (1) to (10);
+CREATE SEQUENCE sch1.seq1;
+CREATE SEQUENCE sch2.seq2;
-- Schema publication that does not include the schema that has the parent table
CREATE PUBLICATION pub FOR ALL TABLES IN SCHEMA sch2 WITH (PUBLISH_VIA_PARTITION_ROOT=1);
+ALTER PUBLICATION pub ADD ALL SEQUENCES IN SCHEMA sch2;
SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
DROP PUBLICATION pub;
-- Table publication that does not include the parent table
CREATE PUBLICATION pub FOR TABLE sch2.tbl1_part1 WITH (PUBLISH_VIA_PARTITION_ROOT=1);
+ALTER PUBLICATION pub ADD SEQUENCE sch2.seq2;
SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
-- Table publication that includes both the parent table and the child table
ALTER PUBLICATION pub ADD TABLE sch1.tbl1;
+ALTER PUBLICATION pub ADD SEQUENCE sch1.seq1;
SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
DROP PUBLICATION pub;
-- Schema publication that does not include the schema that has the parent table
CREATE PUBLICATION pub FOR ALL TABLES IN SCHEMA sch2 WITH (PUBLISH_VIA_PARTITION_ROOT=0);
+ALTER PUBLICATION pub ADD SEQUENCE sch1.seq1;
+SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
+
+DROP PUBLICATION pub;
+-- Sequence publication
+CREATE PUBLICATION pub FOR SEQUENCE sch2.seq2;
SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
DROP PUBLICATION pub;
-- Table publication that does not include the parent table
CREATE PUBLICATION pub FOR TABLE sch2.tbl1_part1 WITH (PUBLISH_VIA_PARTITION_ROOT=0);
SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
-- Table publication that includes both the parent table and the child table
ALTER PUBLICATION pub ADD TABLE sch1.tbl1;
+ALTER PUBLICATION pub ADD ALL SEQUENCES IN SCHEMA sch2;
SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
DROP PUBLICATION pub;
DROP TABLE sch2.tbl1_part1;
@@ -752,10 +944,36 @@ CREATE TABLE sch1.tbl1_part3 (a int) PARTITION BY RANGE(a);
ALTER TABLE sch1.tbl1 ATTACH PARTITION sch1.tbl1_part3 FOR VALUES FROM (20) to (30);
CREATE PUBLICATION pub FOR ALL TABLES IN SCHEMA sch1 WITH (PUBLISH_VIA_PARTITION_ROOT=1);
SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
+
+DROP PUBLICATION pub;
+-- Sequence publication
+CREATE PUBLICATION pub FOR SEQUENCE sch2.seq2;
+SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
+
+DROP PUBLICATION pub;
+-- Schema publication
+CREATE PUBLICATION pub FOR ALL SEQUENCES IN SCHEMA sch2;
+SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
+
+ALTER PUBLICATION pub ADD SEQUENCE sch1.seq1;
+SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
+
+ALTER PUBLICATION pub DROP SEQUENCE sch1.seq1;
+SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
+
+ALTER PUBLICATION pub ADD ALL SEQUENCES IN SCHEMA sch1;
+SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
RESET client_min_messages;
DROP PUBLICATION pub;
DROP TABLE sch1.tbl1;
+DROP SEQUENCE sch1.seq1, sch2.seq2;
DROP SCHEMA sch1 cascade;
DROP SCHEMA sch2 cascade;
diff --git a/src/test/subscription/t/028_sequences.pl b/src/test/subscription/t/028_sequences.pl
new file mode 100644
index 00000000000..38434409463
--- /dev/null
+++ b/src/test/subscription/t/028_sequences.pl
@@ -0,0 +1,203 @@
+
+# Copyright (c) 2021, PostgreSQL Global Development Group
+
+# This tests that sequences are replicated correctly by logical replication
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More tests => 6;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->start;
+
+# Create subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Create some preexisting content on publisher
+my $ddl = qq(
+ CREATE TABLE seq_test (v BIGINT);
+ CREATE SEQUENCE s;
+);
+
+# Setup structure on the publisher
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Create the same structure on subscriber, and an extra sequence that
+# we'll create on the publisher later
+$ddl = qq(
+ CREATE TABLE seq_test (v BIGINT);
+ CREATE SEQUENCE s;
+ CREATE SEQUENCE s2;
+);
+
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Setup logical replication
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+ "CREATE PUBLICATION seq_pub");
+
+$node_publisher->safe_psql('postgres',
+ "ALTER PUBLICATION seq_pub ADD SEQUENCE s");
+
+$node_subscriber->safe_psql('postgres',
+ "CREATE SUBSCRIPTION seq_sub CONNECTION '$publisher_connstr' PUBLICATION seq_pub WITH (slot_name = seq_sub_slot)"
+);
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Wait for initial sync to finish as well
+my $synced_query =
+ "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('s', 'r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+ or die "Timed out while waiting for subscriber to synchronize data";
+
+# Insert initial test data
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ -- generate a number of values using the sequence
+ INSERT INTO seq_test SELECT nextval('s') FROM generate_series(1,100);
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+my $result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s;
+));
+
+is( $result, '132|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+# advance the sequence in a rolled-back transaction - the rollback
+# does not wait for the replication, so we could see any intermediate state
+# so do something else after the test, to ensure we wait for everything
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ INSERT INTO seq_test SELECT nextval('s') FROM generate_series(1,100);
+ ROLLBACK;
+ INSERT INTO seq_test VALUES (-1);
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s;
+));
+
+is( $result, '231|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+# create a new sequence and roll it back - should not be replicated, due to
+# the transactional behavior
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ CREATE SEQUENCE s2;
+ ALTER PUBLICATION seq_pub ADD SEQUENCE s2;
+ INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '1|0|f',
+ 'check replicated sequence values on subscriber');
+
+
+# create a new sequence, advance it in a rolled-back transaction, but commit
+# the create - the advance should be replicated nevertheless
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ CREATE SEQUENCE s2;
+ ALTER PUBLICATION seq_pub ADD SEQUENCE s2;
+ SAVEPOINT sp1;
+ INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK TO sp1;
+ COMMIT;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Wait for sync of the second sequence we just added to finish
+$synced_query =
+ "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('s', 'r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+ or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '132|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+# advance the new sequence in a transaction, and roll it back - the rollback
+# does not wait for the replication, so we could see any intermediate state
+# so do something else after the test, to ensure we wait for everything
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK;
+ INSERT INTO seq_test VALUES (-1);
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '231|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+# advance the sequence in a subtransaction - the subtransaction gets rolled
+# back, but commit the main one - the changes should still be replicated
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ SAVEPOINT s1;
+ INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK TO s1;
+ COMMIT;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '330|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+$node_subscriber->stop('fast');
+$node_publisher->stop('fast');
--
2.34.1
On 2/19/22 06:33, Amit Kapila wrote:
On Sat, Feb 19, 2022 at 7:48 AM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:
Do we need to handle both FOR ALL TABLES and FOR ALL SEQUENCES at the
same time? That seems like a reasonable thing people might want.
+1. That seems reasonable to me as well. I think the same will apply
to FOR ALL TABLES IN SCHEMA and FOR ALL SEQUENCES IN SCHEMA.
It already works for "IN SCHEMA" because that's handled as a publication
object, but FOR ALL TABLES and FOR ALL SEQUENCES are defined directly in
CreatePublicationStmt.
Which also means you can't do ALTER PUBLICATION and change it to FOR ALL
TABLES. Which is a bit annoying, but OK. It's a bit weird FOR ALL TABLES
is mentioned in docs for ALTER PUBLICATION as if it was supported.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Wed, Feb 23, 2022 at 4:54 AM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:
7) This updates psql to do roughly the same thing as for tables, so \dRp
now lists sequences added either directly or through schema, so you might
get footer like this:

\dRp+ testpub_mix
...
Tables:
"public.testpub_tbl1"
Tables from schemas:
"pub_test"
Sequences:
"public.testpub_seq1"
Sequences from schemas:
"pub_test"

Maybe it's a bit too verbose, though. It also adds "All sequences" and
"Sequences" columns into the publication description, but I don't think
that can be done much differently.

FWIW I had to switch the describeOneTableDetails() chunk dealing with
sequences from printQuery() to printTable() in order to handle dynamic
footers.

There's also a change in \dn+ because a schema may be included in one
publication as "FOR ALL SEQUENCES IN SCHEMA" and in another publication
with "FOR ALL TABLES IN SCHEMA". So I modified the output to

\dn+ pub_test1
...
Publications:
"testpub_schemas" (sequences)
"testpub_schemas" (tables)

But maybe it'd be better to aggregate this into a single line like

\dn+ pub_test1
...
Publications:
"testpub_schemas" (tables, sequences)

Opinions?
I think the second one (aggregated) might be slightly better, as it
will lead to fewer lines when there are multiple such publications,
but it should be okay if you and others prefer the first.
8) There's a shortcoming in the current grammar, because you can specify
either
CREATE PUBLICATION p FOR ALL TABLES;
or
CREATE PUBLICATION p FOR ALL SEQUENCES;
but it's not possible to do
CREATE PUBLICATION p FOR ALL TABLES AND FOR ALL SEQUENCES;
which seems like a fairly reasonable thing users might want to do.
Isn't it better to support this with a syntax as indicated by Tom in
one of his earlier emails on this topic [1]/messages/by-id/877603.1629120678@sss.pgh.pa.us? IIUC, it would be as
follows:
CREATE PUBLICATION p FOR ALL TABLES, ALL SEQUENCES;
The problem is that "FOR ALL TABLES" (and same for sequences) is
hard-coded in the grammar, not defined as PublicationObjSpec. This also
means you can't do
ALTER PUBLICATION p ADD ALL TABLES;
AFAICS there are two ways to fix this - adding the combinations into the
definition of CreatePublicationStmt, or adding FOR ALL TABLES (and
sequences) to PublicationObjSpec.
I can imagine that adding to PublicationObjSpec will look compatible
with existing code but maybe another way will also be okay.
[1]: /messages/by-id/877603.1629120678@sss.pgh.pa.us
--
With Regards,
Amit Kapila.
On 2/23/22 12:10, Amit Kapila wrote:
On Wed, Feb 23, 2022 at 4:54 AM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:7) This updates psql to do roughly the same thing as for tables, so \dRp
now list sequences added either directly or through schema, so you might
get footer like this:\dRp+ testpub_mix
...
Tables:
"public.testpub_tbl1"
Tables from schemas:
"pub_test"
Sequences:
"public.testpub_seq1"
Sequences from schemas:
"pub_test"Maybe it's a bit too verbose, though. It also addes "All sequences" and
"Sequences" columns into the publication description, but I don't think
that can be done much differently.FWIW I had to switch the describeOneTableDetails() chunk dealing with
sequences from printQuery() to printTable() in order to handle dynamic
footers.There's also a change in \dn+ because a schema may be included in one
publication as "FOR ALL SEQUENCES IN SCHEMA" and in another publication
with "FOR ALL TABLES IN SCHEMA". So I modified the output to\dn+ pub_test1
...
Publications:
"testpub_schemas" (sequences)
"testpub_schemas" (tables)But maybe it'd be better to aggregate this into a single line like
\dn+ pub_test1
...
Publications:
"testpub_schemas" (tables, sequences)Opinions?
I think the second one (aggregated) might be slightly better as that
will lead to a lesser number of lines when there are multiple such
publications but it should be okay if you and others prefer first.
Maybe, but I don't think it's very common to have that many schemas
added to the same publication. And it probably does not make much
difference whether you have 1000 or 2000 items in the list - either both
are acceptable or unacceptable, I think.
But I plan to look at this a bit more.
8) There's a shortcoming in the current grammar, because you can specify
eitherCREATE PUBLICATION p FOR ALL TABLES;
or
CREATE PUBLICATION p FOR ALL SEQUENCES;
but it's not possible to do
CREATE PUBLICATION p FOR ALL TABLES AND FOR ALL SEQUENCES;
which seems like a fairly reasonable thing users might want to do.
Isn't it better to support this with a syntax as indicated by Tom in
one of his earlier emails on this topic [1]? IIUC, it would be as
follows:CREATE PUBLICATION p FOR ALL TABLES, ALL SEQUENCES;
Yes. That's mostly what I meant by adding this to PublicationObjSpec.
The problem is that "FOR ALL TABLES" (and same for sequences) is
hard-coded in the grammar, not defined as PublicationObjSpec. This also
means you can't doALTER PUBLICATION p ADD ALL TABLES;
AFAICS there are two ways to fix this - adding the combinations into the
definition of CreatePublicationStmt, or adding FOR ALL TABLES (and
sequences) to PublicationObjSpec.I can imagine that adding to PublicationObjSpec will look compatible
with existing code but maybe another way will also be okay.
I think just hard-coding this into CreatePublicationStmt would work, but
it'll be an issue once/if we start adding more options. I'm not sure if
it makes sense to replicate other relkinds, but maybe DDL?
I'll try tweaking PublicationObjSpec, and we'll see.
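To make that concrete, these are the statements a reworked grammar would
ideally accept - illustrative only, none of these forms are supported by
the patch as posted:
CREATE PUBLICATION p FOR ALL TABLES, ALL SEQUENCES;
ALTER PUBLICATION p ADD ALL TABLES;
ALTER PUBLICATION p ADD ALL SEQUENCES;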
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Wed, Feb 23, 2022, at 1:07 PM, Tomas Vondra wrote:
Maybe, but I don't think it's very common to have that many schemas
added to the same publication. And it probably does not make much
difference whether you have 1000 or 2000 items in the list - either both
are acceptable or unacceptable, I think.
Wouldn't it confuse users? Hey, duplicate publication. How? Wait. Doh.
I think just hard-coding this into CreatePublicationStmt would work, but
it'll be an issue once/if we start adding more options. I'm not sure if
it makes sense to replicate other relkinds, but maybe DDL?
Materialized view? As you mentioned DDL, maybe we can use the CREATE
PUBLICATION syntax to select which DDL commands we want to replicate.
--
Euler Taveira
EDB https://www.enterprisedb.com/
On 2/23/22 18:33, Euler Taveira wrote:
On Wed, Feb 23, 2022, at 1:07 PM, Tomas Vondra wrote:
Maybe, but I don't think it's very common to have that many
schemas added to the same publication. And it probably does not
make much difference whether you have 1000 or 2000 items in the
list - either both are acceptable or unacceptable, I think.Wouldn't it confuse users? Hey, duplicate publication. How? Wait.
Doh.
I don't follow. Duplicate publications? This talks about rows in
pg_publication_namespace, not pg_publication.
I think just hard-coding this into CreatePublicationStmt would
work, but it'll be an issue once/if we start adding more options.
I'm not sure if it makes sense to replicate other relkinds, but
maybe DDL?Materialized view? As you mentioned DDL, maybe we can use the CREATE
PUBLICATION syntax to select which DDL commands we want to
replicate.
Well, yeah. But that doesn't really say whether to hard-code it into the
CREATE PUBLICATION syntax or cover it by PublicationObjSpec. Hard-coding
it also means we can't ALTER the publication to be FOR ALL TABLES etc.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Wed, Feb 23, 2022, at 4:18 PM, Tomas Vondra wrote:
On 2/23/22 18:33, Euler Taveira wrote:
On Wed, Feb 23, 2022, at 1:07 PM, Tomas Vondra wrote:
Maybe, but I don't think it's very common to have that many
schemas added to the same publication. And it probably does not
make much difference whether you have 1000 or 2000 items in the
list - either both are acceptable or unacceptable, I think.Wouldn't it confuse users? Hey, duplicate publication. How? Wait.
Doh.I don't follow. Duplicate publications? This talks about rows in
pg_publication_namespace, not pg_publication.
I was referring to
Publications:
"testpub_schemas" (sequences)
"testpub_schemas" (tables)
versus
Publications:
"testpub_schemas" (tables, sequences)
I prefer the latter.
--
Euler Taveira
EDB https://www.enterprisedb.com/
On 23.02.22 12:10, Amit Kapila wrote:
Isn't it better to support this with a syntax as indicated by Tom in
one of his earlier emails on this topic [1]? IIUC, it would be as
follows:CREATE PUBLICATION p FOR ALL TABLES, ALL SEQUENCES;
I don't think there is any point in supporting this. What FOR ALL
TABLES was really supposed to mean was "everything you can get your
hands on". I think we should just redefine FOR ALL TABLES to mean that,
maybe replace it with a different syntax. If you want to exclude
sequences for some reason, there is already a publication option for
that. And FOR ALL SEQUENCES by itself doesn't make any sense in practice.
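(With this patch series that option is presumably the new 'sequence'
publish action, so excluding sequences would look roughly like this
illustrative sketch:
CREATE PUBLICATION p FOR ALL TABLES
  WITH (publish = 'insert, update, delete, truncate');
i.e. simply leave 'sequence' out of the publish list.)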
Are there any other object types besides tables and sequences that we
might want to logically-replicate in the future and whose possible
syntax we should think about? I can't think of anything.
On Sat, Feb 12, 2022 at 6:04 AM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:
On 2/10/22 19:17, Tomas Vondra wrote:
I've polished & pushed the first part adding sequence decoding
infrastructure etc. Attached are the two remaining parts.
I plan to wait a day or two and then push the test_decoding part. The
last part (for built-in replication) will need more work and maybe
rethinking the grammar etc.
I've pushed the second part, adding sequences to test_decoding.
The test_decoding tests have been failing randomly in the last few days.
I am not completely sure, but the failures might be related to this work.
Two of them appear to be due to the same reason:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2022-02-25%2018%3A50%3A09
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=locust&dt=2022-02-17%2015%3A17%3A07
TRAP: FailedAssertion("prev_first_lsn < cur_txn->first_lsn", File:
"reorderbuffer.c", Line: 1173, PID: 35013)
0 postgres 0x00593de0 ExceptionalCondition + 160\\0
Another:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mandrill&dt=2022-02-16%2006%3A21%3A48
--- /home/nm/farm/xlc32/HEAD/pgsql.build/contrib/test_decoding/expected/rewrite.out
2022-02-14 20:19:14.000000000 +0000
+++ /home/nm/farm/xlc32/HEAD/pgsql.build/contrib/test_decoding/results/rewrite.out
2022-02-16 07:42:18.000000000 +0000
@@ -126,6 +126,7 @@
table public.replication_example: INSERT: id[integer]:4
somedata[integer]:3 text[character varying]:null
testcolumn1[integer]:null
table public.replication_example: INSERT: id[integer]:5
somedata[integer]:4 text[character varying]:null
testcolumn1[integer]:2 testcolumn2[integer]:1
COMMIT
+ sequence public.replication_example_id_seq: transactional:0
last_value: 38 log_cnt: 0 is_called:1
BEGIN
table public.replication_example: INSERT: id[integer]:6
somedata[integer]:5 text[character varying]:null
testcolumn1[integer]:3 testcolumn2[integer]:null
COMMIT
@@ -133,7 +134,7 @@
table public.replication_example: INSERT: id[integer]:7
somedata[integer]:6 text[character varying]:null
testcolumn1[integer]:4 testcolumn2[integer]:null
table public.replication_example: INSERT: id[integer]:8
somedata[integer]:7 text[character varying]:null
testcolumn1[integer]:5 testcolumn2[integer]:null
testcolumn3[integer]:1
COMMIT
- (15 rows)
+ (16 rows)
--
With Regards,
Amit Kapila.
On Mon, Feb 28, 2022 at 5:16 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
On Sat, Feb 12, 2022 at 6:04 AM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:On 2/10/22 19:17, Tomas Vondra wrote:
I've polished & pushed the first part adding sequence decoding
infrastructure etc. Attached are the two remaining parts.I plan to wait a day or two and then push the test_decoding part. The
last part (for built-in replication) will need more work and maybe
rethinking the grammar etc.I've pushed the second part, adding sequences to test_decoding.
The test_decoding is failing randomly in the last few days. I am not
completely sure but they might be related to this work. The two of
these appears to be due to the same reason:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2022-02-25%2018%3A50%3A09
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=locust&dt=2022-02-17%2015%3A17%3A07TRAP: FailedAssertion("prev_first_lsn < cur_txn->first_lsn", File:
"reorderbuffer.c", Line: 1173, PID: 35013)
0 postgres 0x00593de0 ExceptionalCondition + 160\\0
While reviewing the code for this, I noticed that in
sequence_decode(), we don't call ReorderBufferProcessXid to register
the first known lsn in WAL for the current xid. The similar functions
logicalmsg_decode() or heap_decode() do call ReorderBufferProcessXid
even if they decide not to queue or send the change. Is there a reason
for not doing the same here? However, I am not able to deduce any
scenario where lack of this will lead to such an Assertion failure.
Any thoughts?
--
With Regards,
Amit Kapila.
On Tue, Mar 1, 2022 at 10:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
On Mon, Feb 28, 2022 at 5:16 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
On Sat, Feb 12, 2022 at 6:04 AM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:On 2/10/22 19:17, Tomas Vondra wrote:
I've polished & pushed the first part adding sequence decoding
infrastructure etc. Attached are the two remaining parts.I plan to wait a day or two and then push the test_decoding part. The
last part (for built-in replication) will need more work and maybe
rethinking the grammar etc.I've pushed the second part, adding sequences to test_decoding.
The test_decoding is failing randomly in the last few days. I am not
completely sure but they might be related to this work. The two of
these appears to be due to the same reason:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2022-02-25%2018%3A50%3A09
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=locust&dt=2022-02-17%2015%3A17%3A07TRAP: FailedAssertion("prev_first_lsn < cur_txn->first_lsn", File:
"reorderbuffer.c", Line: 1173, PID: 35013)
0 postgres 0x00593de0 ExceptionalCondition + 160\\0
FYI, it looks like the same assertion has failed again on the same
build-farm machine [1]https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2022-03-03%2023%3A14%3A26
While reviewing the code for this, I noticed that in
sequence_decode(), we don't call ReorderBufferProcessXid to register
the first known lsn in WAL for the current xid. The similar functions
logicalmsg_decode() or heap_decode() do call ReorderBufferProcessXid
even if they decide not to queue or send the change. Is there a reason
for not doing the same here? However, I am not able to deduce any
scenario where lack of this will lead to such an Assertion failure.
Any thoughts?
------
[1]: https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2022-03-03%2023%3A14%3A26
Kind Regards,
Peter Smith.
Fujitsu Australia
On 3/1/22 12:53, Amit Kapila wrote:
On Mon, Feb 28, 2022 at 5:16 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
On Sat, Feb 12, 2022 at 6:04 AM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:On 2/10/22 19:17, Tomas Vondra wrote:
I've polished & pushed the first part adding sequence decoding
infrastructure etc. Attached are the two remaining parts.I plan to wait a day or two and then push the test_decoding part. The
last part (for built-in replication) will need more work and maybe
rethinking the grammar etc.I've pushed the second part, adding sequences to test_decoding.
The test_decoding is failing randomly in the last few days. I am not
completely sure but they might be related to this work. The two of
these appears to be due to the same reason:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2022-02-25%2018%3A50%3A09
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=locust&dt=2022-02-17%2015%3A17%3A07TRAP: FailedAssertion("prev_first_lsn < cur_txn->first_lsn", File:
"reorderbuffer.c", Line: 1173, PID: 35013)
0 postgres 0x00593de0 ExceptionalCondition + 160\\0While reviewing the code for this, I noticed that in
sequence_decode(), we don't call ReorderBufferProcessXid to register
the first known lsn in WAL for the current xid. The similar functions
logicalmsg_decode() or heap_decode() do call ReorderBufferProcessXid
even if they decide not to queue or send the change. Is there a reason
for not doing the same here? However, I am not able to deduce any
scenario where lack of this will lead to such an Assertion failure.
Any thoughts?
Thanks, that seems like an omission. Will fix.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 2/28/22 12:46, Amit Kapila wrote:
On Sat, Feb 12, 2022 at 6:04 AM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:On 2/10/22 19:17, Tomas Vondra wrote:
I've polished & pushed the first part adding sequence decoding
infrastructure etc. Attached are the two remaining parts.I plan to wait a day or two and then push the test_decoding part. The
last part (for built-in replication) will need more work and maybe
rethinking the grammar etc.I've pushed the second part, adding sequences to test_decoding.
The test_decoding is failing randomly in the last few days. I am not
completely sure but they might be related to this work. The two of
these appears to be due to the same reason:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2022-02-25%2018%3A50%3A09
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=locust&dt=2022-02-17%2015%3A17%3A07TRAP: FailedAssertion("prev_first_lsn < cur_txn->first_lsn", File:
"reorderbuffer.c", Line: 1173, PID: 35013)
0 postgres 0x00593de0 ExceptionalCondition + 160\\0
This might be related to the issue reported by Amit, i.e. that
sequence_decode does not call ReorderBufferProcessXid(). If this keeps
failing, we'll have to add some extra debug info (logging LSN etc.), at
least temporarily. It'd be valuable to inspect the WAL too.
Another:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mandrill&dt=2022-02-16%2006%3A21%3A48
--- /home/nm/farm/xlc32/HEAD/pgsql.build/contrib/test_decoding/expected/rewrite.out 2022-02-14 20:19:14.000000000 +0000
+++ /home/nm/farm/xlc32/HEAD/pgsql.build/contrib/test_decoding/results/rewrite.out 2022-02-16 07:42:18.000000000 +0000
@@ -126,6 +126,7 @@
table public.replication_example: INSERT: id[integer]:4 somedata[integer]:3 text[character varying]:null testcolumn1[integer]:null
table public.replication_example: INSERT: id[integer]:5 somedata[integer]:4 text[character varying]:null testcolumn1[integer]:2 testcolumn2[integer]:1
COMMIT
+ sequence public.replication_example_id_seq: transactional:0 last_value: 38 log_cnt: 0 is_called:1
BEGIN
table public.replication_example: INSERT: id[integer]:6 somedata[integer]:5 text[character varying]:null testcolumn1[integer]:3 testcolumn2[integer]:null
COMMIT
@@ -133,7 +134,7 @@
table public.replication_example: INSERT: id[integer]:7 somedata[integer]:6 text[character varying]:null testcolumn1[integer]:4 testcolumn2[integer]:null
table public.replication_example: INSERT: id[integer]:8 somedata[integer]:7 text[character varying]:null testcolumn1[integer]:5 testcolumn2[integer]:null testcolumn3[integer]:1
COMMIT
- (15 rows)
+ (16 rows)
Interesting. I can think of one reason that might cause this - we log
the first sequence increment after a checkpoint. So if a checkpoint
happens in an unfortunate place, there'll be an extra WAL record. On
slow / busy machines that's quite possible, I guess.
I wonder if these two issues might be related ...
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 3/7/22 17:39, Tomas Vondra wrote:
On 3/1/22 12:53, Amit Kapila wrote:
On Mon, Feb 28, 2022 at 5:16 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
On Sat, Feb 12, 2022 at 6:04 AM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:On 2/10/22 19:17, Tomas Vondra wrote:
I've polished & pushed the first part adding sequence decoding
infrastructure etc. Attached are the two remaining parts.I plan to wait a day or two and then push the test_decoding part. The
last part (for built-in replication) will need more work and maybe
rethinking the grammar etc.I've pushed the second part, adding sequences to test_decoding.
The test_decoding is failing randomly in the last few days. I am not
completely sure but they might be related to this work. The two of
these appears to be due to the same reason:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2022-02-25%2018%3A50%3A09
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=locust&dt=2022-02-17%2015%3A17%3A07TRAP: FailedAssertion("prev_first_lsn < cur_txn->first_lsn", File:
"reorderbuffer.c", Line: 1173, PID: 35013)
0 postgres 0x00593de0 ExceptionalCondition + 160\\0While reviewing the code for this, I noticed that in
sequence_decode(), we don't call ReorderBufferProcessXid to register
the first known lsn in WAL for the current xid. The similar functions
logicalmsg_decode() or heap_decode() do call ReorderBufferProcessXid
even if they decide not to queue or send the change. Is there a reason
for not doing the same here? However, I am not able to deduce any
scenario where lack of this will lead to such an Assertion failure.
Any thoughts?Thanks, that seems like an omission. Will fix.
I've pushed this simple fix. Not sure it'll fix the assert failures on
skink/locust, though. Given the lack of information it'll be difficult
to verify. So let's wait a bit.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 3/7/22 17:53, Tomas Vondra wrote:
On 2/28/22 12:46, Amit Kapila wrote:
On Sat, Feb 12, 2022 at 6:04 AM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:On 2/10/22 19:17, Tomas Vondra wrote:
I've polished & pushed the first part adding sequence decoding
infrastructure etc. Attached are the two remaining parts.I plan to wait a day or two and then push the test_decoding part. The
last part (for built-in replication) will need more work and maybe
rethinking the grammar etc.I've pushed the second part, adding sequences to test_decoding.
The test_decoding is failing randomly in the last few days. I am not
completely sure but they might be related to this work. The two of
these appears to be due to the same reason:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2022-02-25%2018%3A50%3A09
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=locust&dt=2022-02-17%2015%3A17%3A07TRAP: FailedAssertion("prev_first_lsn < cur_txn->first_lsn", File:
"reorderbuffer.c", Line: 1173, PID: 35013)
0 postgres 0x00593de0 ExceptionalCondition + 160\\0This might be related to the issue reported by Amit, i.e. that
sequence_decode does not call ReorderBufferProcessXid(). If this keeps
failing, we'll have to add some extra debug info (logging LSN etc.), at
least temporarily. It'd be valuable to inspect the WAL too.Another:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mandrill&dt=2022-02-16%2006%3A21%3A48--- /home/nm/farm/xlc32/HEAD/pgsql.build/contrib/test_decoding/expected/rewrite.out 2022-02-14 20:19:14.000000000 +0000 +++ /home/nm/farm/xlc32/HEAD/pgsql.build/contrib/test_decoding/results/rewrite.out 2022-02-16 07:42:18.000000000 +0000 @@ -126,6 +126,7 @@ table public.replication_example: INSERT: id[integer]:4 somedata[integer]:3 text[character varying]:null testcolumn1[integer]:null table public.replication_example: INSERT: id[integer]:5 somedata[integer]:4 text[character varying]:null testcolumn1[integer]:2 testcolumn2[integer]:1 COMMIT + sequence public.replication_example_id_seq: transactional:0 last_value: 38 log_cnt: 0 is_called:1 BEGIN table public.replication_example: INSERT: id[integer]:6 somedata[integer]:5 text[character varying]:null testcolumn1[integer]:3 testcolumn2[integer]:null COMMIT @@ -133,7 +134,7 @@ table public.replication_example: INSERT: id[integer]:7 somedata[integer]:6 text[character varying]:null testcolumn1[integer]:4 testcolumn2[integer]:null table public.replication_example: INSERT: id[integer]:8 somedata[integer]:7 text[character varying]:null testcolumn1[integer]:5 testcolumn2[integer]:null testcolumn3[integer]:1 COMMIT - (15 rows) + (16 rows)Interesting. I can think of one reason that might cause this - we log
the first sequence increment after a checkpoint. So if a checkpoint
happens in an unfortunate place, there'll be an extra WAL record. On
slow / busy machines that's quite possible, I guess.
I've tweaked the checkpoint_interval to make checkpoints more aggressive
(set it to 1s), and it seems my hunch was correct - it produces failures
exactly like this one. The best fix probably is to just disable decoding
of sequences in those tests that are not aimed at testing sequence decoding.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 3/7/22 22:25, Tomas Vondra wrote:
On 3/7/22 17:53, Tomas Vondra wrote:
On 2/28/22 12:46, Amit Kapila wrote:
On Sat, Feb 12, 2022 at 6:04 AM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:On 2/10/22 19:17, Tomas Vondra wrote:
I've polished & pushed the first part adding sequence decoding
infrastructure etc. Attached are the two remaining parts.I plan to wait a day or two and then push the test_decoding part. The
last part (for built-in replication) will need more work and maybe
rethinking the grammar etc.I've pushed the second part, adding sequences to test_decoding.
The test_decoding is failing randomly in the last few days. I am not
completely sure but they might be related to this work. The two of
these appears to be due to the same reason:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2022-02-25%2018%3A50%3A09
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=locust&dt=2022-02-17%2015%3A17%3A07TRAP: FailedAssertion("prev_first_lsn < cur_txn->first_lsn", File:
"reorderbuffer.c", Line: 1173, PID: 35013)
0 postgres 0x00593de0 ExceptionalCondition + 160\\0This might be related to the issue reported by Amit, i.e. that
sequence_decode does not call ReorderBufferProcessXid(). If this keeps
failing, we'll have to add some extra debug info (logging LSN etc.), at
least temporarily. It'd be valuable to inspect the WAL too.Another:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mandrill&dt=2022-02-16%2006%3A21%3A48--- /home/nm/farm/xlc32/HEAD/pgsql.build/contrib/test_decoding/expected/rewrite.out 2022-02-14 20:19:14.000000000 +0000 +++ /home/nm/farm/xlc32/HEAD/pgsql.build/contrib/test_decoding/results/rewrite.out 2022-02-16 07:42:18.000000000 +0000 @@ -126,6 +126,7 @@ table public.replication_example: INSERT: id[integer]:4 somedata[integer]:3 text[character varying]:null testcolumn1[integer]:null table public.replication_example: INSERT: id[integer]:5 somedata[integer]:4 text[character varying]:null testcolumn1[integer]:2 testcolumn2[integer]:1 COMMIT + sequence public.replication_example_id_seq: transactional:0 last_value: 38 log_cnt: 0 is_called:1 BEGIN table public.replication_example: INSERT: id[integer]:6 somedata[integer]:5 text[character varying]:null testcolumn1[integer]:3 testcolumn2[integer]:null COMMIT @@ -133,7 +134,7 @@ table public.replication_example: INSERT: id[integer]:7 somedata[integer]:6 text[character varying]:null testcolumn1[integer]:4 testcolumn2[integer]:null table public.replication_example: INSERT: id[integer]:8 somedata[integer]:7 text[character varying]:null testcolumn1[integer]:5 testcolumn2[integer]:null testcolumn3[integer]:1 COMMIT - (15 rows) + (16 rows)Interesting. I can think of one reason that might cause this - we log
the first sequence increment after a checkpoint. So if a checkpoint
happens in an unfortunate place, there'll be an extra WAL record. On
slow / busy machines that's quite possible, I guess.I've tweaked the checkpoint_interval to make checkpoints more aggressive
(set it to 1s), and it seems my hunch was correct - it produces failures
exactly like this one. The best fix probably is to just disable decoding
of sequences in those tests that are not aimed at testing sequence decoding.
I've pushed a fix for this, adding "include-sequences=0" to a couple
test_decoding tests, which were failing with concurrent checkpoints.
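For illustration, the option is passed the same way as the other
test_decoding options - a sketch, not the exact change that was pushed:
SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL,
    'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');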
Unfortunately, I realized we have a similar issue in the "sequences"
tests too :-( Imagine you do a series of sequence increments, e.g.
SELECT nextval('s') FROM generate_series(1,100);
If there's a concurrent checkpoint, this may add an extra WAL record,
affecting the decoded output (and also the data stored in the sequence
relation itself). Not sure what to do about this ...
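To spell out the nondeterminism (a rough worked example, assuming the
usual 32-value WAL batching for sequences): without a checkpoint the 100
nextval() calls above decode to records roughly like
sequence public.s: transactional:0 last_value: 33 log_cnt: 0 is_called:1
sequence public.s: transactional:0 last_value: 66 log_cnt: 0 is_called:1
sequence public.s: transactional:0 last_value: 99 log_cnt: 0 is_called:1
sequence public.s: transactional:0 last_value: 132 log_cnt: 0 is_called:1
but a checkpoint in the middle forces the next nextval() to write an
extra WAL record, so an additional row shows up in the decoded output
and the logged values shift.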
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 3/7/22 22:11, Tomas Vondra wrote:
On 3/7/22 17:39, Tomas Vondra wrote:
On 3/1/22 12:53, Amit Kapila wrote:
On Mon, Feb 28, 2022 at 5:16 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
On Sat, Feb 12, 2022 at 6:04 AM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:On 2/10/22 19:17, Tomas Vondra wrote:
I've polished & pushed the first part adding sequence decoding
infrastructure etc. Attached are the two remaining parts.I plan to wait a day or two and then push the test_decoding part. The
last part (for built-in replication) will need more work and maybe
rethinking the grammar etc.I've pushed the second part, adding sequences to test_decoding.
The test_decoding is failing randomly in the last few days. I am not
completely sure but they might be related to this work. The two of
these appears to be due to the same reason:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2022-02-25%2018%3A50%3A09
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=locust&dt=2022-02-17%2015%3A17%3A07TRAP: FailedAssertion("prev_first_lsn < cur_txn->first_lsn", File:
"reorderbuffer.c", Line: 1173, PID: 35013)
0 postgres 0x00593de0 ExceptionalCondition + 160\\0While reviewing the code for this, I noticed that in
sequence_decode(), we don't call ReorderBufferProcessXid to register
the first known lsn in WAL for the current xid. The similar functions
logicalmsg_decode() or heap_decode() do call ReorderBufferProcessXid
even if they decide not to queue or send the change. Is there a reason
for not doing the same here? However, I am not able to deduce any
scenario where lack of this will lead to such an Assertion failure.
Any thoughts?Thanks, that seems like an omission. Will fix.
I've pushed this simple fix. Not sure it'll fix the assert failures on
skink/locust, though. Given the lack of information it'll be difficult
to verify. So let's wait a bit.
I've done about 5000 runs of 'make check' in test_decoding, on two rpi
machines (one armv7, one aarch64). Not a single assert failure :-(
How come skink/locust hit that in just a couple runs?
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Wed, Mar 9, 2022 at 4:14 AM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:
On 3/7/22 22:11, Tomas Vondra wrote:
I've pushed this simple fix. Not sure it'll fix the assert failures on
skink/locust, though. Given the lack of information it'll be difficult
to verify. So let's wait a bit.I've done about 5000 runs of 'make check' in test_decoding, on two rpi
machines (one armv7, one aarch64). Not a single assert failure :-(How come skink/locust hit that in just a couple runs?
Has it failed since you pushed the fix? I don't think so, or am I missing
something? Even if it doesn't occur again, it would have been better if
we had some theory on how it occurred in the first place, because that
would make us feel more confident that we won't have any related problem
left.
--
With Regards,
Amit Kapila.
On 3/9/22 12:41, Amit Kapila wrote:
On Wed, Mar 9, 2022 at 4:14 AM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:On 3/7/22 22:11, Tomas Vondra wrote:
I've pushed this simple fix. Not sure it'll fix the assert failures on
skink/locust, though. Given the lack of information it'll be difficult
to verify. So let's wait a bit.I've done about 5000 runs of 'make check' in test_decoding, on two rpi
machines (one armv7, one aarch64). Not a single assert failure :-(How come skink/locust hit that in just a couple runs?
Is it failed after you pushed a fix? I don't think so or am I missing
something? I feel even if doesn't occur again it would have been
better if we had some theory on how it occurred in the first place
because that would make us feel more confident that we won't have any
related problem left.
I don't think it failed yet - we have to wait a bit longer to make any
conclusions, though. On skink it failed only twice over 1 month. I agree
it'd be nice to have some theory, but I really don't have one.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 23.02.22 00:24, Tomas Vondra wrote:
Here's an updated version of the patch, fixing most of the issues from
reviews so far. There's still a couple FIXME comments, but I think those
are minor and/or straightforward to deal with.
This patch needs a rebase because of a conflict in
expected/publication.out. In addition, see the attached fixup patch to
get the pg_dump tests passing (and some other stuff).
028_sequences.pl should be renamed to 029, since there is now another 028.
In psql, the output of \dRp and \dRp+ is inconsistent. The former shows
All tables | All sequences | Inserts | Updates | Deletes | Truncates |
Sequences | Via root
the latter shows
All tables | All sequences | Inserts | Updates | Deletes | Sequences |
Truncates | Via root
I think the first order is the best one.
That's all for now, I'll come back with more reviewing later.
Attachments:
0001-fixup-Add-support-for-decoding-sequences-to-built-in.patch
From 43b0aa50d621f3b640e4285e4e329c361988a6c4 Mon Sep 17 00:00:00 2001
From: Peter Eisentraut <peter@eisentraut.org>
Date: Thu, 10 Mar 2022 12:02:56 +0100
Subject: [PATCH] fixup! Add support for decoding sequences to built-in
replication
---
src/bin/pg_dump/t/002_pg_dump.pl | 2 +-
src/interfaces/libpq/fe-exec.c | 9 ---------
src/test/subscription/t/028_sequences.pl | 5 ++---
3 files changed, 3 insertions(+), 13 deletions(-)
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index 14b89c62f7..1bcd845254 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -2398,7 +2398,7 @@
create_order => 50,
create_sql => 'CREATE PUBLICATION pub4;',
regexp => qr/^
- \QCREATE PUBLICATION pub4 WITH (publish = 'insert, update, delete, truncate');\E
+ \QCREATE PUBLICATION pub4 WITH (publish = 'insert, update, delete, truncate, sequence');\E
/xm,
like => { %full_runs, section_post_data => 1, },
},
diff --git a/src/interfaces/libpq/fe-exec.c b/src/interfaces/libpq/fe-exec.c
index 2b81065c47..0c39bc9abf 100644
--- a/src/interfaces/libpq/fe-exec.c
+++ b/src/interfaces/libpq/fe-exec.c
@@ -3733,10 +3733,7 @@ char *
PQgetvalue(const PGresult *res, int tup_num, int field_num)
{
if (!check_tuple_field_number(res, tup_num, field_num))
- {
- printf("getvalue\n");
return NULL;
- }
return res->tuples[tup_num][field_num].value;
}
@@ -3747,10 +3744,7 @@ int
PQgetlength(const PGresult *res, int tup_num, int field_num)
{
if (!check_tuple_field_number(res, tup_num, field_num))
- {
- printf("getlength\n");
return 0;
- }
if (res->tuples[tup_num][field_num].len != NULL_LEN)
return res->tuples[tup_num][field_num].len;
else
@@ -3764,10 +3758,7 @@ int
PQgetisnull(const PGresult *res, int tup_num, int field_num)
{
if (!check_tuple_field_number(res, tup_num, field_num))
- {
- printf("getisnull\n");
return 1; /* pretend it is null */
- }
if (res->tuples[tup_num][field_num].len == NULL_LEN)
return 1;
else
diff --git a/src/test/subscription/t/028_sequences.pl b/src/test/subscription/t/028_sequences.pl
index 3843440946..cdd7f7f344 100644
--- a/src/test/subscription/t/028_sequences.pl
+++ b/src/test/subscription/t/028_sequences.pl
@@ -6,7 +6,7 @@
use warnings;
use PostgreSQL::Test::Cluster;
use PostgreSQL::Test::Utils;
-use Test::More tests => 6;
+use Test::More;
# Initialize publisher node
my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
@@ -199,5 +199,4 @@
'check replicated sequence values on subscriber');
-$node_subscriber->stop('fast');
-$node_publisher->stop('fast');
+done_testing();
--
2.35.1
On 3/10/22 12:07, Peter Eisentraut wrote:
On 23.02.22 00:24, Tomas Vondra wrote:
Here's an updated version of the patch, fixing most of the issues from
reviews so far. There's still a couple FIXME comments, but I think those
are minor and/or straightforward to deal with.This patch needs a rebase because of a conflict in
expected/publication.out. In addition, see the attached fixup patch to
get the pg_dump tests passing (and some other stuff).
OK, rebased patch attached.
028_sequences.pl should be renamed to 029, since there is now another 028.
Renamed.
In psql, the output of \dRp and \dRp+ is inconsistent. The former shows
All tables | All sequences | Inserts | Updates | Deletes | Truncates |
Sequences | Via rootthe latter shows
All tables | All sequences | Inserts | Updates | Deletes | Sequences |
Truncates | Via rootI think the first order is the best one.
Good idea, I've tweaked the code to use the former order.
That's all for now, I'll come back with more reviewing later.
thanks
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Attachments:
0001-Add-support-for-decoding-sequences-to-built-20220310.patch (text/x-patch)
From 3f362f348f59aa828a8e2b7abe581b3baeb3c425 Mon Sep 17 00:00:00 2001
From: Tomas Vondra <tomas.vondra@postgresql.org>
Date: Thu, 10 Mar 2022 22:36:43 +0100
Subject: [PATCH] Add support for decoding sequences to built-in replication
---
doc/src/sgml/catalogs.sgml | 81 ++
doc/src/sgml/ref/alter_publication.sgml | 24 +-
doc/src/sgml/ref/alter_subscription.sgml | 3 +-
doc/src/sgml/ref/create_publication.sgml | 3 +
src/backend/catalog/objectaddress.c | 5 +-
src/backend/catalog/pg_publication.c | 236 ++++-
src/backend/catalog/system_views.sql | 10 +
src/backend/commands/publicationcmds.c | 380 +++++--
src/backend/commands/sequence.c | 79 ++
src/backend/commands/subscriptioncmds.c | 272 +++++
src/backend/executor/execReplication.c | 2 +-
src/backend/nodes/copyfuncs.c | 1 +
src/backend/nodes/equalfuncs.c | 1 +
src/backend/parser/gram.y | 40 +
src/backend/replication/logical/proto.c | 52 +
src/backend/replication/logical/tablesync.c | 118 ++-
src/backend/replication/logical/worker.c | 56 +
src/backend/replication/pgoutput/pgoutput.c | 82 +-
src/backend/utils/cache/relcache.c | 13 +-
src/backend/utils/cache/syscache.c | 6 +-
src/bin/pg_dump/pg_dump.c | 65 +-
src/bin/pg_dump/pg_dump.h | 3 +
src/bin/pg_dump/t/002_pg_dump.pl | 38 +-
src/bin/psql/describe.c | 287 +++--
src/bin/psql/tab-complete.c | 12 +-
src/include/catalog/pg_proc.dat | 5 +
src/include/catalog/pg_publication.h | 25 +-
.../catalog/pg_publication_namespace.h | 3 +-
src/include/commands/sequence.h | 1 +
src/include/nodes/parsenodes.h | 6 +
src/include/replication/logicalproto.h | 19 +
src/include/replication/pgoutput.h | 1 +
src/test/regress/expected/publication.out | 999 ++++++++++++++----
src/test/regress/expected/rules.out | 8 +
src/test/regress/sql/object_address.sql | 1 -
src/test/regress/sql/publication.sql | 220 +++-
src/test/subscription/t/029_sequences.pl | 202 ++++
37 files changed, 2974 insertions(+), 385 deletions(-)
create mode 100644 src/test/subscription/t/029_sequences.pl
diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 83987a99045..324fc1050d3 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -6263,6 +6263,16 @@ SCRAM-SHA-256$<replaceable><iteration count></replaceable>:<replaceable>&l
Reference to schema
</para></entry>
</row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>pnsequences</structfield> <type>bool</type>
+ </para>
+ <para>
+ Determines whether to include tables or sequences from this
+ schema.
+ </para></entry>
+ </row>
</tbody>
</tgroup>
</table>
@@ -9560,6 +9570,11 @@ SCRAM-SHA-256$<replaceable><iteration count></replaceable>:<replaceable>&l
<entry>prepared transactions</entry>
</row>
+ <row>
+ <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+ <entry>publications and their associated sequences</entry>
+ </row>
+
<row>
<entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
<entry>publications and their associated tables</entry>
@@ -11395,6 +11410,72 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
</sect1>
+ <sect1 id="view-pg-publication-sequences">
+ <title><structname>pg_publication_sequences</structname></title>
+
+ <indexterm zone="view-pg-publication-sequences">
+ <primary>pg_publication_sequences</primary>
+ </indexterm>
+
+ <para>
+ The view <structname>pg_publication_sequences</structname> provides
+ information about the mapping between publications and the sequences they
+ contain. Unlike the underlying catalog
+ <link linkend="catalog-pg-publication-rel"><structname>pg_publication_rel</structname></link>,
+ this view expands
+ publications defined as <literal>FOR ALL SEQUENCES</literal>, so for such
+ publications there will be a row for each eligible sequence.
+ </para>
+
+ <table>
+ <title><structname>pg_publication_sequences</structname> Columns</title>
+ <tgroup cols="1">
+ <thead>
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ Column Type
+ </para>
+ <para>
+ Description
+ </para></entry>
+ </row>
+ </thead>
+
+ <tbody>
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>pubname</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+ </para>
+ <para>
+ Name of publication
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>schemaname</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+ </para>
+ <para>
+ Name of schema containing sequence
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>sequencename</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+ </para>
+ <para>
+ Name of sequence
+ </para></entry>
+ </row>
+ </tbody>
+ </tgroup>
+ </table>
+ </sect1>
+
<sect1 id="view-pg-publication-tables">
<title><structname>pg_publication_tables</structname></title>
diff --git a/doc/src/sgml/ref/alter_publication.sgml b/doc/src/sgml/ref/alter_publication.sgml
index 32b75f6c78e..36c9a5f438a 100644
--- a/doc/src/sgml/ref/alter_publication.sgml
+++ b/doc/src/sgml/ref/alter_publication.sgml
@@ -31,7 +31,9 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
<phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
+ SEQUENCE <replaceable class="parameter">sequence_name</replaceable> [ * ] [, ... ]
ALL TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+ ALL SEQUENCES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
</synopsis>
</refsynopsisdiv>
@@ -58,7 +60,18 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
</para>
<para>
- The fourth variant of this command listed in the synopsis can change
+ The next three variants change which sequences are part of the publication.
+ The <literal>SET SEQUENCE</literal> clause will replace the list of sequences
+ in the publication with the specified one. The <literal>ADD SEQUENCE</literal>
+ and <literal>DROP SEQUENCE</literal> clauses will add and remove one or more
+ sequences from the publication. Note that adding sequences to a publication
+ that is already subscribed to will require an <literal>ALTER SUBSCRIPTION
+ ... REFRESH PUBLICATION</literal> action on the subscribing side in order
+ to become effective.
+ </para>
+
+ <para>
+ The seventh variant of this command listed in the synopsis can change
all of the publication properties specified in
<xref linkend="sql-createpublication"/>. Properties not mentioned in the
command retain their previous settings.
@@ -122,6 +135,15 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><replaceable class="parameter">sequence_name</replaceable></term>
+ <listitem>
+ <para>
+ Name of an existing sequence.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><replaceable class="parameter">schema_name</replaceable></term>
<listitem>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 0d6f064f58d..264c9c5a10f 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -147,7 +147,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
<listitem>
<para>
Fetch missing table information from publisher. This will start
- replication of tables that were added to the subscribed-to publications
+ replication of tables and sequences that were added to the subscribed-to publications
since <command>CREATE SUBSCRIPTION</command> or
the last invocation of <command>REFRESH PUBLICATION</command>.
</para>
@@ -168,6 +168,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
<para>
Previously subscribed tables are not copied, even if a table's row
filter <literal>WHERE</literal> clause has since been modified.
+ Previously subscribed sequences are not copied either.
</para>
</listitem>
</varlistentry>
diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 4979b9b646d..f72318e97dd 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -23,13 +23,16 @@ PostgreSQL documentation
<synopsis>
CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
[ FOR ALL TABLES
+ | FOR ALL SEQUENCES
| FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
[ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
<phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
+ SEQUENCE <replaceable class="parameter">sequence_name</replaceable> [ * ] [, ... ]
ALL TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+ ALL SEQUENCES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
</synopsis>
</refsynopsisdiv>
diff --git a/src/backend/catalog/objectaddress.c b/src/backend/catalog/objectaddress.c
index f30c742d48f..301eeb7caca 100644
--- a/src/backend/catalog/objectaddress.c
+++ b/src/backend/catalog/objectaddress.c
@@ -1976,10 +1976,11 @@ get_object_address_publication_schema(List *object, bool missing_ok)
/* Find the publication schema mapping in syscache */
address.objectId =
- GetSysCacheOid2(PUBLICATIONNAMESPACEMAP,
+ GetSysCacheOid3(PUBLICATIONNAMESPACEMAP,
Anum_pg_publication_namespace_oid,
ObjectIdGetDatum(schemaid),
- ObjectIdGetDatum(pub->oid));
+ ObjectIdGetDatum(pub->oid),
+ BoolGetDatum(false)); /* FIXME */
if (!OidIsValid(address.objectId) && !missing_ok)
ereport(ERROR,
(errcode(ERRCODE_UNDEFINED_OBJECT),
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 25998fbb39b..d866e8a9b22 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -54,7 +54,8 @@ check_publication_add_relation(Relation targetrel)
{
/* Must be a regular or partitioned table */
if (RelationGetForm(targetrel)->relkind != RELKIND_RELATION &&
- RelationGetForm(targetrel)->relkind != RELKIND_PARTITIONED_TABLE)
+ RelationGetForm(targetrel)->relkind != RELKIND_PARTITIONED_TABLE &&
+ RelationGetForm(targetrel)->relkind != RELKIND_SEQUENCE)
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
errmsg("cannot add relation \"%s\" to publication",
@@ -131,7 +132,8 @@ static bool
is_publishable_class(Oid relid, Form_pg_class reltuple)
{
return (reltuple->relkind == RELKIND_RELATION ||
- reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+ reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+ reltuple->relkind == RELKIND_SEQUENCE) &&
!IsCatalogRelationOid(relid) &&
reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
relid >= FirstNormalObjectId;
@@ -205,7 +207,7 @@ is_schema_publication(Oid pubid)
ObjectIdGetDatum(pubid));
scan = systable_beginscan(pubschsrel,
- PublicationNamespacePnnspidPnpubidIndexId,
+ PublicationNamespacePnnspidPnpubidSeqIndexId,
true, NULL, 1, &scankey);
tup = systable_getnext(scan);
result = HeapTupleIsValid(tup);
@@ -419,7 +421,7 @@ publication_add_relation(Oid pubid, PublicationRelInfo *pri,
* Insert new publication / schema mapping.
*/
ObjectAddress
-publication_add_schema(Oid pubid, Oid schemaid, bool if_not_exists)
+publication_add_schema(Oid pubid, Oid schemaid, bool sequences, bool if_not_exists)
{
Relation rel;
HeapTuple tup;
@@ -438,9 +440,10 @@ publication_add_schema(Oid pubid, Oid schemaid, bool if_not_exists)
* duplicates, it's here just to provide nicer error message in common
* case. The real protection is the unique key on the catalog.
*/
- if (SearchSysCacheExists2(PUBLICATIONNAMESPACEMAP,
+ if (SearchSysCacheExists3(PUBLICATIONNAMESPACEMAP,
ObjectIdGetDatum(schemaid),
- ObjectIdGetDatum(pubid)))
+ ObjectIdGetDatum(pubid),
+ BoolGetDatum(sequences)))
{
table_close(rel, RowExclusiveLock);
@@ -466,6 +469,8 @@ publication_add_schema(Oid pubid, Oid schemaid, bool if_not_exists)
ObjectIdGetDatum(pubid);
values[Anum_pg_publication_namespace_pnnspid - 1] =
ObjectIdGetDatum(schemaid);
+ values[Anum_pg_publication_namespace_pnsequences - 1] =
+ BoolGetDatum(sequences);
tup = heap_form_tuple(RelationGetDescr(rel), values, nulls);
@@ -492,6 +497,7 @@ publication_add_schema(Oid pubid, Oid schemaid, bool if_not_exists)
* partitions.
*/
schemaRels = GetSchemaPublicationRelations(schemaid,
+ sequences,
PUBLICATION_PART_ALL);
InvalidatePublicationRels(schemaRels);
@@ -525,11 +531,14 @@ GetRelationPublications(Oid relid)
/*
* Gets list of relation oids for a publication.
*
- * This should only be used FOR TABLE publications, the FOR ALL TABLES
- * should use GetAllTablesPublicationRelations().
+ * This should only be used FOR TABLE / FOR SEQUENCE publications, the FOR
+ * ALL TABLES / SEQUENCES should use GetAllTablesPublicationRelations()
+ * and GetAllSequencesPublicationRelations().
+ *
+ * XXX It's a bit weird that pub_partopt does not really matter for sequences.
*/
List *
-GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
+GetPublicationRelations(Oid pubid, bool sequences, PublicationPartOpt pub_partopt)
{
List *result;
Relation pubrelsrel;
@@ -551,11 +560,21 @@ GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
result = NIL;
while (HeapTupleIsValid(tup = systable_getnext(scan)))
{
+ char relkind;
Form_pg_publication_rel pubrel;
pubrel = (Form_pg_publication_rel) GETSTRUCT(tup);
- result = GetPubPartitionOptionRelations(result, pub_partopt,
- pubrel->prrelid);
+ relkind = get_rel_relkind(pubrel->prrelid);
+
+ /*
+ * Handle tables and sequences, depending on the combination of relkind
+ * and sequences flag. If there's a mismatch, we ignore the relation.
+ */
+ if (sequences && (relkind == RELKIND_SEQUENCE))
+ result = lappend_oid(result, pubrel->prrelid);
+ else if (!sequences && (relkind != RELKIND_SEQUENCE))
+ result = GetPubPartitionOptionRelations(result, pub_partopt,
+ pubrel->prrelid);
}
systable_endscan(scan);
@@ -605,6 +624,43 @@ GetAllTablesPublications(void)
return result;
}
+/*
+ * Gets list of publication oids for publications marked as FOR ALL SEQUENCES.
+ */
+List *
+GetAllSequencesPublications(void)
+{
+ List *result;
+ Relation rel;
+ ScanKeyData scankey;
+ SysScanDesc scan;
+ HeapTuple tup;
+
+ /* Find all publications that are marked as for all sequences. */
+ rel = table_open(PublicationRelationId, AccessShareLock);
+
+ ScanKeyInit(&scankey,
+ Anum_pg_publication_puballsequences,
+ BTEqualStrategyNumber, F_BOOLEQ,
+ BoolGetDatum(true));
+
+ scan = systable_beginscan(rel, InvalidOid, false,
+ NULL, 1, &scankey);
+
+ result = NIL;
+ while (HeapTupleIsValid(tup = systable_getnext(scan)))
+ {
+ Oid oid = ((Form_pg_publication) GETSTRUCT(tup))->oid;
+
+ result = lappend_oid(result, oid);
+ }
+
+ systable_endscan(scan);
+ table_close(rel, AccessShareLock);
+
+ return result;
+}
+
/*
* Gets list of all relation published by FOR ALL TABLES publication(s).
*
@@ -671,28 +727,36 @@ GetAllTablesPublicationRelations(bool pubviaroot)
/*
* Gets the list of schema oids for a publication.
*
- * This should only be used FOR ALL TABLES IN SCHEMA publications.
+ * This should only be used for FOR ALL TABLES IN SCHEMA and FOR ALL
+ * SEQUENCES IN SCHEMA publications.
+ *
+ * 'sequences' determines whether to get the table schemas or the sequence schemas.
*/
List *
-GetPublicationSchemas(Oid pubid)
+GetPublicationSchemas(Oid pubid, bool sequences)
{
List *result = NIL;
Relation pubschsrel;
- ScanKeyData scankey;
+ ScanKeyData scankey[2];
SysScanDesc scan;
HeapTuple tup;
/* Find all schemas associated with the publication */
pubschsrel = table_open(PublicationNamespaceRelationId, AccessShareLock);
- ScanKeyInit(&scankey,
+ ScanKeyInit(&scankey[0],
Anum_pg_publication_namespace_pnpubid,
BTEqualStrategyNumber, F_OIDEQ,
ObjectIdGetDatum(pubid));
+ ScanKeyInit(&scankey[1],
+ Anum_pg_publication_namespace_pnsequences,
+ BTEqualStrategyNumber, F_BOOLEQ,
+ BoolGetDatum(sequences));
+
scan = systable_beginscan(pubschsrel,
- PublicationNamespacePnnspidPnpubidIndexId,
- true, NULL, 1, &scankey);
+ PublicationNamespacePnnspidPnpubidSeqIndexId,
+ true, NULL, 2, scankey);
while (HeapTupleIsValid(tup = systable_getnext(scan)))
{
Form_pg_publication_namespace pubsch;
@@ -710,6 +774,9 @@ GetPublicationSchemas(Oid pubid)
/*
* Gets the list of publication oids associated with a specified schema.
+ *
+ * XXX probably should handle pnsequences somehow, either by taking a
+ * parameter or returning the flag, somehow
*/
List *
GetSchemaPublications(Oid schemaid)
@@ -738,7 +805,8 @@ GetSchemaPublications(Oid schemaid)
* Get the list of publishable relation oids for a specified schema.
*/
List *
-GetSchemaPublicationRelations(Oid schemaid, PublicationPartOpt pub_partopt)
+GetSchemaPublicationRelations(Oid schemaid, bool sequences,
+ PublicationPartOpt pub_partopt)
{
Relation classRel;
ScanKeyData key[1];
@@ -763,11 +831,19 @@ GetSchemaPublicationRelations(Oid schemaid, PublicationPartOpt pub_partopt)
Oid relid = relForm->oid;
char relkind;
+ /* FIXME need to differentiate tables and sequences */
if (!is_publishable_class(relid, relForm))
continue;
relkind = get_rel_relkind(relid);
- if (relkind == RELKIND_RELATION)
+
+ /* Filter by relkind depending on FOR ALL TABLES / SEQUENCES */
+ if ((sequences && relkind != RELKIND_SEQUENCE) ||
+ (!sequences && relkind == RELKIND_SEQUENCE))
+ continue;
+
+ if ((relkind == RELKIND_RELATION) ||
+ (relkind == RELKIND_SEQUENCE))
result = lappend_oid(result, relid);
else if (relkind == RELKIND_PARTITIONED_TABLE)
{
@@ -792,13 +868,14 @@ GetSchemaPublicationRelations(Oid schemaid, PublicationPartOpt pub_partopt)
/*
* Gets the list of all relations published by FOR ALL TABLES IN SCHEMA
- * publication.
+ * or FOR ALL SEQUENCES IN SCHEMA publication.
*/
List *
-GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
+GetAllSchemaPublicationRelations(Oid pubid, bool sequences,
+ PublicationPartOpt pub_partopt)
{
List *result = NIL;
- List *pubschemalist = GetPublicationSchemas(pubid);
+ List *pubschemalist = GetPublicationSchemas(pubid, sequences);
ListCell *cell;
foreach(cell, pubschemalist)
@@ -806,13 +883,54 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
Oid schemaid = lfirst_oid(cell);
List *schemaRels = NIL;
- schemaRels = GetSchemaPublicationRelations(schemaid, pub_partopt);
+ schemaRels = GetSchemaPublicationRelations(schemaid, sequences,
+ pub_partopt);
result = list_concat(result, schemaRels);
}
return result;
}
+/*
+ * Gets the list of all sequences published by FOR ALL SEQUENCES
+ * publication(s).
+ *
+ * Unlike tables, sequences are not partitioned, so there is no need to
+ * exclude partitions in favor of root relations here.
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+ Relation classRel;
+ ScanKeyData key[1];
+ TableScanDesc scan;
+ HeapTuple tuple;
+ List *result = NIL;
+
+ classRel = table_open(RelationRelationId, AccessShareLock);
+
+ ScanKeyInit(&key[0],
+ Anum_pg_class_relkind,
+ BTEqualStrategyNumber, F_CHAREQ,
+ CharGetDatum(RELKIND_SEQUENCE));
+
+ scan = table_beginscan_catalog(classRel, 1, key);
+
+ while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+ {
+ Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+ Oid relid = relForm->oid;
+
+ if (is_publishable_class(relid, relForm))
+ result = lappend_oid(result, relid);
+ }
+
+ table_endscan(scan);
+
+ table_close(classRel, AccessShareLock);
+ return result;
+}
+
/*
* Get publication using oid
*
@@ -835,10 +953,12 @@ GetPublication(Oid pubid)
pub->oid = pubid;
pub->name = pstrdup(NameStr(pubform->pubname));
pub->alltables = pubform->puballtables;
+ pub->allsequences = pubform->puballsequences;
pub->pubactions.pubinsert = pubform->pubinsert;
pub->pubactions.pubupdate = pubform->pubupdate;
pub->pubactions.pubdelete = pubform->pubdelete;
pub->pubactions.pubtruncate = pubform->pubtruncate;
+ pub->pubactions.pubsequence = pubform->pubsequence;
pub->pubviaroot = pubform->pubviaroot;
ReleaseSysCache(tup);
@@ -949,10 +1069,12 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
*schemarelids;
relids = GetPublicationRelations(publication->oid,
+ false, /* tables only */
publication->pubviaroot ?
PUBLICATION_PART_ROOT :
PUBLICATION_PART_LEAF);
schemarelids = GetAllSchemaPublicationRelations(publication->oid,
+ false, /* tables only */
publication->pubviaroot ?
PUBLICATION_PART_ROOT :
PUBLICATION_PART_LEAF);
@@ -988,3 +1110,71 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
SRF_RETURN_DONE(funcctx);
}
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+ FuncCallContext *funcctx;
+ char *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+ Publication *publication;
+ List *sequences;
+
+ /* stuff done only on the first call of the function */
+ if (SRF_IS_FIRSTCALL())
+ {
+ MemoryContext oldcontext;
+
+ /* create a function context for cross-call persistence */
+ funcctx = SRF_FIRSTCALL_INIT();
+
+ /* switch to memory context appropriate for multiple function calls */
+ oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+ publication = GetPublicationByName(pubname, false);
+
+ /*
+ * For FOR ALL SEQUENCES publications, return all sequences. Otherwise
+ * collect the sequences added to the publication explicitly or via
+ * schemas.
+ */
+ if (publication->allsequences)
+ sequences = GetAllSequencesPublicationRelations();
+ else
+ {
+ List *relids,
+ *schemarelids;
+
+ relids = GetPublicationRelations(publication->oid,
+ true, /* sequences */
+ publication->pubviaroot ?
+ PUBLICATION_PART_ROOT :
+ PUBLICATION_PART_LEAF);
+ schemarelids = GetAllSchemaPublicationRelations(publication->oid,
+ true, /* sequences only */
+ publication->pubviaroot ?
+ PUBLICATION_PART_ROOT :
+ PUBLICATION_PART_LEAF);
+ sequences = list_concat_unique_oid(relids, schemarelids);
+ }
+
+ funcctx->user_fctx = (void *) sequences;
+
+ MemoryContextSwitchTo(oldcontext);
+ }
+
+ /* stuff done on every call of the function */
+ funcctx = SRF_PERCALL_SETUP();
+ sequences = (List *) funcctx->user_fctx;
+
+ if (funcctx->call_cntr < list_length(sequences))
+ {
+ Oid relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+ SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+ }
+
+ SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 40b7bca5a96..ffe387be1d3 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -374,6 +374,16 @@ CREATE VIEW pg_publication_tables AS
pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
WHERE C.oid = GPT.relid;
+CREATE VIEW pg_publication_sequences AS
+ SELECT
+ P.pubname AS pubname,
+ N.nspname AS schemaname,
+ C.relname AS sequencename
+ FROM pg_publication P,
+ LATERAL pg_get_publication_sequences(P.pubname) GPS,
+ pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+ WHERE C.oid = GPS.relid;
+
CREATE VIEW pg_locks AS
SELECT * FROM pg_lock_status() AS L;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 16b8661a1b7..db78dc86646 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -16,6 +16,7 @@
#include "access/genam.h"
#include "access/htup_details.h"
+#include "access/relation.h"
#include "access/table.h"
#include "access/xact.h"
#include "catalog/catalog.h"
@@ -67,15 +68,16 @@ typedef struct rf_context
} rf_context;
static List *OpenRelIdList(List *relids);
-static List *OpenTableList(List *tables);
-static void CloseTableList(List *rels);
+static List *OpenRelationList(List *rels);
+static void CloseRelationList(List *rels);
static void LockSchemaList(List *schemalist);
-static void PublicationAddTables(Oid pubid, List *rels, bool if_not_exists,
+static void PublicationAddRelations(Oid pubid, List *rels, bool if_not_exists,
AlterPublicationStmt *stmt);
-static void PublicationDropTables(Oid pubid, List *rels, bool missing_ok);
-static void PublicationAddSchemas(Oid pubid, List *schemas, bool if_not_exists,
- AlterPublicationStmt *stmt);
-static void PublicationDropSchemas(Oid pubid, List *schemas, bool missing_ok);
+static void PublicationDropRelations(Oid pubid, List *rels, bool missing_ok);
+static void PublicationAddSchemas(Oid pubid, List *schemas, bool sequences,
+ bool if_not_exists, AlterPublicationStmt *stmt);
+static void PublicationDropSchemas(Oid pubid, List *schemas, bool sequences, bool missing_ok);
+
static void
parse_publication_options(ParseState *pstate,
@@ -95,6 +97,7 @@ parse_publication_options(ParseState *pstate,
pubactions->pubupdate = true;
pubactions->pubdelete = true;
pubactions->pubtruncate = true;
+ pubactions->pubsequence = true;
*publish_via_partition_root = false;
/* Parse options */
@@ -119,6 +122,7 @@ parse_publication_options(ParseState *pstate,
pubactions->pubupdate = false;
pubactions->pubdelete = false;
pubactions->pubtruncate = false;
+ pubactions->pubsequence = false;
*publish_given = true;
publish = defGetString(defel);
@@ -141,6 +145,8 @@ parse_publication_options(ParseState *pstate,
pubactions->pubdelete = true;
else if (strcmp(publish_opt, "truncate") == 0)
pubactions->pubtruncate = true;
+ else if (strcmp(publish_opt, "sequence") == 0)
+ pubactions->pubsequence = true;
else
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
@@ -167,7 +173,9 @@ parse_publication_options(ParseState *pstate,
*/
static void
ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
- List **rels, List **schemas)
+ List **tables, List **sequences,
+ List **tables_schemas, List **sequences_schemas,
+ List **schemas)
{
ListCell *cell;
PublicationObjSpec *pubobj;
@@ -185,12 +193,23 @@ ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
switch (pubobj->pubobjtype)
{
case PUBLICATIONOBJ_TABLE:
- *rels = lappend(*rels, pubobj->pubtable);
+ *tables = lappend(*tables, pubobj->pubtable);
+ break;
+ case PUBLICATIONOBJ_SEQUENCE:
+ *sequences = lappend(*sequences, pubobj->pubtable);
break;
case PUBLICATIONOBJ_TABLES_IN_SCHEMA:
schemaid = get_namespace_oid(pubobj->name, false);
/* Filter out duplicates if user specifies "sch1, sch1" */
+ *tables_schemas = list_append_unique_oid(*tables_schemas, schemaid);
+ *schemas = list_append_unique_oid(*schemas, schemaid);
+ break;
+ case PUBLICATIONOBJ_SEQUENCES_IN_SCHEMA:
+ schemaid = get_namespace_oid(pubobj->name, false);
+
+ /* Filter out duplicates if user specifies "sch1, sch1" */
+ *sequences_schemas = list_append_unique_oid(*sequences_schemas, schemaid);
*schemas = list_append_unique_oid(*schemas, schemaid);
break;
case PUBLICATIONOBJ_TABLES_IN_CUR_SCHEMA:
@@ -204,6 +223,21 @@ ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
list_free(search_path);
/* Filter out duplicates if user specifies "sch1, sch1" */
+ *tables_schemas = list_append_unique_oid(*tables_schemas, schemaid);
+ *schemas = list_append_unique_oid(*schemas, schemaid);
+ break;
+ case PUBLICATIONOBJ_SEQUENCES_IN_CUR_SCHEMA:
+ search_path = fetch_search_path(false);
+ if (search_path == NIL) /* nothing valid in search_path? */
+ ereport(ERROR,
+ errcode(ERRCODE_UNDEFINED_SCHEMA),
+ errmsg("no schema has been selected for CURRENT_SCHEMA"));
+
+ schemaid = linitial_oid(search_path);
+ list_free(search_path);
+
+ /* Filter out duplicates if user specifies "sch1, sch1" */
+ *sequences_schemas = list_append_unique_oid(*sequences_schemas, schemaid);
*schemas = list_append_unique_oid(*schemas, schemaid);
break;
default:
@@ -240,6 +274,14 @@ CheckObjSchemaNotAlreadyInPublication(List *rels, List *schemaidlist,
errdetail("Table \"%s\" in schema \"%s\" is already part of the publication, adding the same schema is not supported.",
RelationGetRelationName(rel),
get_namespace_name(relSchemaId)));
+ else if (checkobjtype == PUBLICATIONOBJ_SEQUENCES_IN_SCHEMA)
+ ereport(ERROR,
+ errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ errmsg("cannot add schema \"%s\" to publication",
+ get_namespace_name(relSchemaId)),
+ errdetail("SEquence \"%s\" in schema \"%s\" is already part of the publication, adding the same schema is not supported.",
+ RelationGetRelationName(rel),
+ get_namespace_name(relSchemaId)));
else if (checkobjtype == PUBLICATIONOBJ_TABLE)
ereport(ERROR,
errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -248,6 +290,14 @@ CheckObjSchemaNotAlreadyInPublication(List *rels, List *schemaidlist,
RelationGetRelationName(rel)),
errdetail("Table's schema \"%s\" is already part of the publication or part of the specified schema list.",
get_namespace_name(relSchemaId)));
+ else if (checkobjtype == PUBLICATIONOBJ_SEQUENCE)
+ ereport(ERROR,
+ errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ errmsg("cannot add relation \"%s.%s\" to publication",
+ get_namespace_name(relSchemaId),
+ RelationGetRelationName(rel)),
+ errdetail("Sequence's schema \"%s\" is already part of the publication or part of the specified schema list.",
+ get_namespace_name(relSchemaId)));
}
}
}
@@ -625,7 +675,10 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
bool publish_via_partition_root_given;
bool publish_via_partition_root;
AclResult aclresult;
- List *relations = NIL;
+ List *tables = NIL;
+ List *sequences = NIL;
+ List *tables_schemaidlist = NIL;
+ List *sequences_schemaidlist = NIL;
List *schemaidlist = NIL;
/* must have CREATE privilege on database */
@@ -672,6 +725,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
values[Anum_pg_publication_puballtables - 1] =
BoolGetDatum(stmt->for_all_tables);
+ values[Anum_pg_publication_puballsequences - 1] =
+ BoolGetDatum(stmt->for_all_sequences);
values[Anum_pg_publication_pubinsert - 1] =
BoolGetDatum(pubactions.pubinsert);
values[Anum_pg_publication_pubupdate - 1] =
@@ -680,6 +735,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
BoolGetDatum(pubactions.pubdelete);
values[Anum_pg_publication_pubtruncate - 1] =
BoolGetDatum(pubactions.pubtruncate);
+ values[Anum_pg_publication_pubsequence - 1] =
+ BoolGetDatum(pubactions.pubsequence);
values[Anum_pg_publication_pubviaroot - 1] =
BoolGetDatum(publish_via_partition_root);
@@ -704,38 +761,62 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
}
else
{
- ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
+ ObjectsInPublicationToOids(stmt->pubobjects, pstate,
+ &tables, &sequences,
+ &tables_schemaidlist,
+ &sequences_schemaidlist,
&schemaidlist);
/* FOR ALL TABLES IN SCHEMA requires superuser */
- if (list_length(schemaidlist) > 0 && !superuser())
+ if (list_length(tables_schemaidlist) > 0 && !superuser())
ereport(ERROR,
errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
errmsg("must be superuser to create FOR ALL TABLES IN SCHEMA publication"));
- if (list_length(relations) > 0)
+ if (list_length(tables) > 0)
{
List *rels;
- rels = OpenTableList(relations);
- CheckObjSchemaNotAlreadyInPublication(rels, schemaidlist,
+ rels = OpenRelationList(tables);
+ CheckObjSchemaNotAlreadyInPublication(rels, tables_schemaidlist,
PUBLICATIONOBJ_TABLE);
TransformPubWhereClauses(rels, pstate->p_sourcetext,
publish_via_partition_root);
- PublicationAddTables(puboid, rels, true, NULL);
- CloseTableList(rels);
+ PublicationAddRelations(puboid, rels, true, NULL);
+ CloseRelationList(rels);
}
- if (list_length(schemaidlist) > 0)
+ if (list_length(sequences) > 0)
+ {
+ List *rels;
+
+ rels = OpenRelationList(sequences);
+ CheckObjSchemaNotAlreadyInPublication(rels, sequences_schemaidlist,
+ PUBLICATIONOBJ_SEQUENCE);
+ PublicationAddRelations(puboid, rels, true, NULL);
+ CloseRelationList(rels);
+ }
+
+ if (list_length(tables_schemaidlist) > 0)
{
/*
* Schema lock is held until the publication is created to prevent
* concurrent schema deletion.
*/
- LockSchemaList(schemaidlist);
- PublicationAddSchemas(puboid, schemaidlist, true, NULL);
+ LockSchemaList(tables_schemaidlist);
+ PublicationAddSchemas(puboid, tables_schemaidlist, false, true, NULL);
+ }
+
+ if (list_length(sequences_schemaidlist) > 0)
+ {
+ /*
+ * Schema lock is held until the publication is created to prevent
+ * concurrent schema deletion.
+ */
+ LockSchemaList(sequences_schemaidlist);
+ PublicationAddSchemas(puboid, sequences_schemaidlist, true, true, NULL);
}
}
@@ -797,7 +878,7 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
LockDatabaseObject(PublicationRelationId, pubform->oid, 0,
AccessShareLock);
- root_relids = GetPublicationRelations(pubform->oid,
+ root_relids = GetPublicationRelations(pubform->oid, false,
PUBLICATION_PART_ROOT);
foreach(lc, root_relids)
@@ -856,6 +937,9 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
values[Anum_pg_publication_pubtruncate - 1] = BoolGetDatum(pubactions.pubtruncate);
replaces[Anum_pg_publication_pubtruncate - 1] = true;
+
+ values[Anum_pg_publication_pubsequence - 1] = BoolGetDatum(pubactions.pubsequence);
+ replaces[Anum_pg_publication_pubsequence - 1] = true;
}
if (publish_via_partition_root_given)
@@ -875,7 +959,7 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
pubform = (Form_pg_publication) GETSTRUCT(tup);
/* Invalidate the relcache. */
- if (pubform->puballtables)
+ if (pubform->puballtables) /* FIXME probably needs to handle puballsequences too? */
{
CacheInvalidateRelcacheAll();
}
@@ -890,7 +974,7 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
* trees, not just those explicitly mentioned in the publication.
*/
if (root_relids == NIL)
- relids = GetPublicationRelations(pubform->oid,
+ relids = GetPublicationRelations(pubform->oid, false,
PUBLICATION_PART_ALL);
else
{
@@ -904,7 +988,17 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
lfirst_oid(lc));
}
- schemarelids = GetAllSchemaPublicationRelations(pubform->oid,
+ /* tables */
+ schemarelids = GetAllSchemaPublicationRelations(pubform->oid, false,
+ PUBLICATION_PART_ALL);
+ relids = list_concat_unique_oid(relids, schemarelids);
+
+ /* sequences */
+ relids = list_concat_unique_oid(relids,
+ GetPublicationRelations(pubform->oid,
+ true, PUBLICATION_PART_ALL));
+
+ schemarelids = GetAllSchemaPublicationRelations(pubform->oid, true,
PUBLICATION_PART_ALL);
relids = list_concat_unique_oid(relids, schemarelids);
@@ -951,6 +1045,11 @@ AlterPublicationTables(AlterPublicationStmt *stmt, HeapTuple tup,
Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
Oid pubid = pubform->oid;
+ /*
+ * FIXME Do we need to test relkind of the relations? Otherwise we
+ * can do "ADD TABLE sequence" and it'll "work".
+ */
+
/*
* Nothing to do if no objects, except in SET: for that it is quite
* possible that user has not specified any tables in which case we need
@@ -959,7 +1058,7 @@ AlterPublicationTables(AlterPublicationStmt *stmt, HeapTuple tup,
if (!tables && stmt->action != AP_SetObjects)
return;
- rels = OpenTableList(tables);
+ rels = OpenRelationList(tables);
if (stmt->action == AP_AddObjects)
{
@@ -969,19 +1068,19 @@ AlterPublicationTables(AlterPublicationStmt *stmt, HeapTuple tup,
* Check if the relation is member of the existing schema in the
* publication or member of the schema list specified.
*/
- schemas = list_concat_copy(schemaidlist, GetPublicationSchemas(pubid));
+ schemas = list_concat_copy(schemaidlist, GetPublicationSchemas(pubid, false));
CheckObjSchemaNotAlreadyInPublication(rels, schemas,
PUBLICATIONOBJ_TABLE);
TransformPubWhereClauses(rels, queryString, pubform->pubviaroot);
- PublicationAddTables(pubid, rels, false, stmt);
+ PublicationAddRelations(pubid, rels, false, stmt);
}
else if (stmt->action == AP_DropObjects)
- PublicationDropTables(pubid, rels, false);
+ PublicationDropRelations(pubid, rels, false);
else /* AP_SetObjects */
{
- List *oldrelids = GetPublicationRelations(pubid,
+ List *oldrelids = GetPublicationRelations(pubid, false,
PUBLICATION_PART_ROOT);
List *delrels = NIL;
ListCell *oldlc;
@@ -1054,6 +1153,10 @@ AlterPublicationTables(AlterPublicationStmt *stmt, HeapTuple tup,
*/
if (!found)
{
+ /* don't drop sequences (not in list of tables) */
+ if (get_rel_relkind(oldrelid) == RELKIND_SEQUENCE)
+ continue;
+
oldrel = palloc(sizeof(PublicationRelInfo));
oldrel->whereClause = NULL;
oldrel->relation = table_open(oldrelid,
@@ -1063,18 +1166,18 @@ AlterPublicationTables(AlterPublicationStmt *stmt, HeapTuple tup,
}
/* And drop them. */
- PublicationDropTables(pubid, delrels, true);
+ PublicationDropRelations(pubid, delrels, true);
/*
* Don't bother calculating the difference for adding, we'll catch and
* skip existing ones when doing catalog update.
*/
- PublicationAddTables(pubid, rels, true, stmt);
+ PublicationAddRelations(pubid, rels, true, stmt);
- CloseTableList(delrels);
+ CloseRelationList(delrels);
}
- CloseTableList(rels);
+ CloseRelationList(rels);
}
/*
@@ -1084,7 +1187,8 @@ AlterPublicationTables(AlterPublicationStmt *stmt, HeapTuple tup,
*/
static void
AlterPublicationSchemas(AlterPublicationStmt *stmt,
- HeapTuple tup, List *schemaidlist)
+ HeapTuple tup, List *schemaidlist,
+ bool sequences)
{
Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
@@ -1106,20 +1210,20 @@ AlterPublicationSchemas(AlterPublicationStmt *stmt,
List *rels;
List *reloids;
- reloids = GetPublicationRelations(pubform->oid, PUBLICATION_PART_ROOT);
+ reloids = GetPublicationRelations(pubform->oid, sequences, PUBLICATION_PART_ROOT);
rels = OpenRelIdList(reloids);
CheckObjSchemaNotAlreadyInPublication(rels, schemaidlist,
PUBLICATIONOBJ_TABLES_IN_SCHEMA);
- CloseTableList(rels);
- PublicationAddSchemas(pubform->oid, schemaidlist, false, stmt);
+ CloseRelationList(rels);
+ PublicationAddSchemas(pubform->oid, schemaidlist, sequences, false, stmt);
}
else if (stmt->action == AP_DropObjects)
- PublicationDropSchemas(pubform->oid, schemaidlist, false);
+ PublicationDropSchemas(pubform->oid, schemaidlist, sequences, false);
else /* AP_SetObjects */
{
- List *oldschemaids = GetPublicationSchemas(pubform->oid);
+ List *oldschemaids = GetPublicationSchemas(pubform->oid, sequences);
List *delschemas = NIL;
/* Identify which schemas should be dropped */
@@ -1132,13 +1236,13 @@ AlterPublicationSchemas(AlterPublicationStmt *stmt,
LockSchemaList(delschemas);
/* And drop them */
- PublicationDropSchemas(pubform->oid, delschemas, true);
+ PublicationDropSchemas(pubform->oid, delschemas, sequences, true);
/*
* Don't bother calculating the difference for adding, we'll catch and
* skip existing ones when doing catalog update.
*/
- PublicationAddSchemas(pubform->oid, schemaidlist, true, stmt);
+ PublicationAddSchemas(pubform->oid, schemaidlist, sequences, true, stmt);
}
}
@@ -1148,12 +1252,13 @@ AlterPublicationSchemas(AlterPublicationStmt *stmt,
*/
static void
CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
- List *tables, List *schemaidlist)
+ List *tables, List *tables_schemaidlist,
+ List *sequences, List *sequences_schemaidlist)
{
Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
if ((stmt->action == AP_AddObjects || stmt->action == AP_SetObjects) &&
- schemaidlist && !superuser())
+ (tables_schemaidlist || sequences_schemaidlist) && !superuser())
ereport(ERROR,
(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
errmsg("must be superuser to add or set schemas")));
@@ -1162,13 +1267,24 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
* Check that user is allowed to manipulate the publication tables in
* schema
*/
- if (schemaidlist && pubform->puballtables)
+ if (tables_schemaidlist && pubform->puballtables)
ereport(ERROR,
(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
errmsg("publication \"%s\" is defined as FOR ALL TABLES",
NameStr(pubform->pubname)),
errdetail("Tables from schema cannot be added to, dropped from, or set on FOR ALL TABLES publications.")));
+ /*
+ * Check that user is allowed to manipulate the publication sequences in
+ * schema
+ */
+ if (sequences_schemaidlist && pubform->puballsequences)
+ ereport(ERROR,
+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("publication \"%s\" is defined as FOR ALL SEQUENCES",
+ NameStr(pubform->pubname)),
+ errdetail("Sequences from schema cannot be added to, dropped from, or set on FOR ALL SEQUENCES publications.")));
+
/* Check that user is allowed to manipulate the publication tables. */
if (tables && pubform->puballtables)
ereport(ERROR,
@@ -1176,6 +1292,118 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
errmsg("publication \"%s\" is defined as FOR ALL TABLES",
NameStr(pubform->pubname)),
errdetail("Tables cannot be added to or dropped from FOR ALL TABLES publications.")));
+
+ /* Check that user is allowed to manipulate the publication sequences. */
+ if (sequences && pubform->puballsequences)
+ ereport(ERROR,
+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("publication \"%s\" is defined as FOR ALL SEQUENCES",
+ NameStr(pubform->pubname)),
+ errdetail("Sequences cannot be added to or dropped from FOR ALL SEQUENCES publications.")));
+}
+
+/*
+ * Add or remove sequence to/from publication.
+ */
+static void
+AlterPublicationSequences(AlterPublicationStmt *stmt, HeapTuple tup,
+ List *sequences, List *schemaidlist)
+{
+ List *rels = NIL;
+ Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
+ Oid pubid = pubform->oid;
+
+ /*
+ * FIXME Do we need to test relkind of the relations? Otherwise we
+ * can do "ADD SEQUENCE table" and it'll "work".
+ */
+
+ /*
+ * It is quite possible that for the SET case the user has not specified
+ * any sequences, in which case we need to remove all the existing ones.
+ */
+ if (!sequences && stmt->action != AP_SetObjects)
+ return;
+
+ rels = OpenRelationList(sequences);
+
+ if (stmt->action == AP_AddObjects)
+ {
+ List *schemas = NIL;
+
+ /*
+ * Check if the relation is member of the existing schema in the
+ * publication or member of the schema list specified.
+ */
+ schemas = list_concat_copy(schemaidlist, GetPublicationSchemas(pubid, true));
+ CheckObjSchemaNotAlreadyInPublication(rels, schemas,
+ PUBLICATIONOBJ_SEQUENCE);
+ PublicationAddRelations(pubid, rels, false, stmt);
+ }
+ else if (stmt->action == AP_DropObjects)
+ PublicationDropRelations(pubid, rels, false);
+ else /* AP_SetObjects */
+ {
+ List *oldrelids = GetPublicationRelations(pubid, true,
+ PUBLICATION_PART_ROOT);
+ List *delrels = NIL;
+ ListCell *oldlc;
+
+ CheckObjSchemaNotAlreadyInPublication(rels, schemaidlist,
+ PUBLICATIONOBJ_SEQUENCE);
+
+ /* Calculate which relations to drop. */
+ foreach(oldlc, oldrelids)
+ {
+ Oid oldrelid = lfirst_oid(oldlc);
+ ListCell *newlc;
+ bool found = false;
+
+ foreach(newlc, rels)
+ {
+ PublicationRelInfo *newpubrel;
+
+ newpubrel = (PublicationRelInfo *) lfirst(newlc);
+ if (RelationGetRelid(newpubrel->relation) == oldrelid)
+ {
+ found = true;
+ break;
+ }
+ }
+ /* Not yet in the list, open it and add to the list */
+ if (!found)
+ {
+ Relation oldrel;
+ PublicationRelInfo *pubrel;
+
+ /* don't drop non-sequences (not in list of sequences) */
+ if (get_rel_relkind(oldrelid) != RELKIND_SEQUENCE)
+ continue;
+
+ /* Wrap relation into PublicationRelInfo */
+ oldrel = table_open(oldrelid, ShareUpdateExclusiveLock);
+
+ pubrel = palloc(sizeof(PublicationRelInfo));
+ pubrel->relation = oldrel;
+ pubrel->whereClause = NULL;
+
+ delrels = lappend(delrels, pubrel);
+ }
+ }
+
+ /* And drop them. */
+ PublicationDropRelations(pubid, delrels, true);
+
+ /*
+ * Don't bother calculating the difference for adding, we'll catch and
+ * skip existing ones when doing catalog update.
+ */
+ PublicationAddRelations(pubid, rels, true, stmt);
+
+ CloseRelationList(delrels);
+ }
+
+ CloseRelationList(rels);
}
/*
@@ -1213,14 +1441,22 @@ AlterPublication(ParseState *pstate, AlterPublicationStmt *stmt)
AlterPublicationOptions(pstate, stmt, rel, tup);
else
{
- List *relations = NIL;
+ List *tables = NIL;
+ List *sequences = NIL;
+ List *tables_schemaidlist = NIL;
+ List *sequences_schemaidlist = NIL;
List *schemaidlist = NIL;
Oid pubid = pubform->oid;
- ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
+ ObjectsInPublicationToOids(stmt->pubobjects, pstate,
+ &tables, &sequences,
+ &tables_schemaidlist,
+ &sequences_schemaidlist,
&schemaidlist);
- CheckAlterPublication(stmt, tup, relations, schemaidlist);
+ CheckAlterPublication(stmt, tup,
+ tables, tables_schemaidlist,
+ sequences, sequences_schemaidlist);
heap_freetuple(tup);
@@ -1248,9 +1484,14 @@ AlterPublication(ParseState *pstate, AlterPublicationStmt *stmt)
errmsg("publication \"%s\" does not exist",
stmt->pubname));
- AlterPublicationTables(stmt, tup, relations, schemaidlist,
+ AlterPublicationTables(stmt, tup, tables, tables_schemaidlist,
pstate->p_sourcetext);
- AlterPublicationSchemas(stmt, tup, schemaidlist);
+
+ AlterPublicationSequences(stmt, tup, sequences, sequences_schemaidlist);
+
+ AlterPublicationSchemas(stmt, tup, tables_schemaidlist, false);
+
+ AlterPublicationSchemas(stmt, tup, sequences_schemaidlist, true);
}
/* Cleanup. */
@@ -1318,7 +1559,7 @@ RemovePublicationById(Oid pubid)
pubform = (Form_pg_publication) GETSTRUCT(tup);
/* Invalidate relcache so that publication info is rebuilt. */
- if (pubform->puballtables)
+ if (pubform->puballtables) /* FIXME handle puballsequences too? */
CacheInvalidateRelcacheAll();
CatalogTupleDelete(rel, &tup->t_self);
@@ -1354,6 +1595,7 @@ RemovePublicationSchemaById(Oid psoid)
* partitions.
*/
schemaRels = GetSchemaPublicationRelations(pubsch->pnnspid,
+ pubsch->pnsequences,
PUBLICATION_PART_ALL);
InvalidatePublicationRels(schemaRels);
@@ -1396,17 +1638,17 @@ OpenRelIdList(List *relids)
* add them to a publication.
*/
static List *
-OpenTableList(List *tables)
+OpenRelationList(List *rels)
{
List *relids = NIL;
- List *rels = NIL;
+ List *result = NIL;
ListCell *lc;
List *relids_with_rf = NIL;
/*
* Open, share-lock, and check all the explicitly-specified relations
*/
- foreach(lc, tables)
+ foreach(lc, rels)
{
PublicationTable *t = lfirst_node(PublicationTable, lc);
bool recurse = t->relation->inh;
@@ -1443,7 +1685,7 @@ OpenTableList(List *tables)
pub_rel = palloc(sizeof(PublicationRelInfo));
pub_rel->relation = rel;
pub_rel->whereClause = t->whereClause;
- rels = lappend(rels, pub_rel);
+ result = lappend(result, pub_rel);
relids = lappend_oid(relids, myrelid);
if (t->whereClause)
@@ -1497,7 +1739,7 @@ OpenTableList(List *tables)
pub_rel->relation = rel;
/* child inherits WHERE clause from parent */
pub_rel->whereClause = t->whereClause;
- rels = lappend(rels, pub_rel);
+ result = lappend(result, pub_rel);
relids = lappend_oid(relids, childrelid);
if (t->whereClause)
@@ -1509,14 +1751,14 @@ OpenTableList(List *tables)
list_free(relids);
list_free(relids_with_rf);
- return rels;
+ return result;
}
/*
* Close all relations in the list.
*/
static void
-CloseTableList(List *rels)
+CloseRelationList(List *rels)
{
ListCell *lc;
@@ -1564,7 +1806,7 @@ LockSchemaList(List *schemalist)
* Add listed tables to the publication.
*/
static void
-PublicationAddTables(Oid pubid, List *rels, bool if_not_exists,
+PublicationAddRelations(Oid pubid, List *rels, bool if_not_exists,
AlterPublicationStmt *stmt)
{
ListCell *lc;
@@ -1598,7 +1840,7 @@ PublicationAddTables(Oid pubid, List *rels, bool if_not_exists,
* Remove listed tables from the publication.
*/
static void
-PublicationDropTables(Oid pubid, List *rels, bool missing_ok)
+PublicationDropRelations(Oid pubid, List *rels, bool missing_ok)
{
ObjectAddress obj;
ListCell *lc;
@@ -1638,8 +1880,8 @@ PublicationDropTables(Oid pubid, List *rels, bool missing_ok)
* Add listed schemas to the publication.
*/
static void
-PublicationAddSchemas(Oid pubid, List *schemas, bool if_not_exists,
- AlterPublicationStmt *stmt)
+PublicationAddSchemas(Oid pubid, List *schemas, bool sequences,
+ bool if_not_exists, AlterPublicationStmt *stmt)
{
ListCell *lc;
@@ -1650,7 +1892,7 @@ PublicationAddSchemas(Oid pubid, List *schemas, bool if_not_exists,
Oid schemaid = lfirst_oid(lc);
ObjectAddress obj;
- obj = publication_add_schema(pubid, schemaid, if_not_exists);
+ obj = publication_add_schema(pubid, schemaid, sequences, if_not_exists);
if (stmt)
{
EventTriggerCollectSimpleCommand(obj, InvalidObjectAddress,
@@ -1666,7 +1908,7 @@ PublicationAddSchemas(Oid pubid, List *schemas, bool if_not_exists,
* Remove listed schemas from the publication.
*/
static void
-PublicationDropSchemas(Oid pubid, List *schemas, bool missing_ok)
+PublicationDropSchemas(Oid pubid, List *schemas, bool sequences, bool missing_ok)
{
ObjectAddress obj;
ListCell *lc;
@@ -1676,10 +1918,11 @@ PublicationDropSchemas(Oid pubid, List *schemas, bool missing_ok)
{
Oid schemaid = lfirst_oid(lc);
- psid = GetSysCacheOid2(PUBLICATIONNAMESPACEMAP,
+ psid = GetSysCacheOid3(PUBLICATIONNAMESPACEMAP,
Anum_pg_publication_namespace_oid,
ObjectIdGetDatum(schemaid),
- ObjectIdGetDatum(pubid));
+ ObjectIdGetDatum(pubid),
+ BoolGetDatum(sequences));
if (!OidIsValid(psid))
{
if (missing_ok)
@@ -1734,6 +1977,13 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
NameStr(form->pubname)),
errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
+ if (form->puballsequences && !superuser_arg(newOwnerId))
+ ereport(ERROR,
+ (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+ errmsg("permission denied to change owner of publication \"%s\"",
+ NameStr(form->pubname)),
+ errhint("The owner of a FOR ALL SEQUENCES publication must be a superuser.")));
+
if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
ereport(ERROR,
(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index ab592ce2f15..fe4f21ec438 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -336,6 +336,85 @@ ResetSequence(Oid seq_relid)
relation_close(seq_rel, NoLock);
}
+/*
+ * Reset a sequence to the given state (last_value, log_cnt, is_called).
+ *
+ * The change is made transactionally, so that on failure of the current
+ * transaction, the sequence will be restored to its previous state.
+ * We do that by creating a whole new relfilenode for the sequence; so this
+ * works much like the rewriting forms of ALTER TABLE.
+ *
+ * Caller is assumed to have acquired AccessExclusiveLock on the sequence,
+ * which must not be released until end of transaction. Caller is also
+ * responsible for permissions checking.
+ */
+void
+ResetSequence2(Oid seq_relid, int64 last_value, int64 log_cnt, bool is_called)
+{
+ Relation seq_rel;
+ SeqTable elm;
+ Form_pg_sequence_data seq;
+ Buffer buf;
+ HeapTupleData seqdatatuple;
+ HeapTuple tuple;
+
+ /*
+ * Read the old sequence. This does a bit more work than really
+ * necessary, but it's simple, and we do want to double-check that it's
+ * indeed a sequence.
+ */
+ init_sequence(seq_relid, &elm, &seq_rel);
+ (void) read_seq_tuple(seq_rel, &buf, &seqdatatuple);
+
+ /*
+ * Copy the existing sequence tuple.
+ */
+ tuple = heap_copytuple(&seqdatatuple);
+
+ /* Now we're done with the old page */
+ UnlockReleaseBuffer(buf);
+
+ /*
+ * Modify the copied tuple to execute the restart (compare the RESTART
+ * action in AlterSequence)
+ */
+ seq = (Form_pg_sequence_data) GETSTRUCT(tuple);
+ seq->last_value = last_value;
+ seq->is_called = is_called;
+ seq->log_cnt = log_cnt;
+
+ /*
+ * Create a new storage file for the sequence.
+ */
+ RelationSetNewRelfilenode(seq_rel, seq_rel->rd_rel->relpersistence);
+
+ /*
+ * Ensure sequence's relfrozenxid is at 0, since it won't contain any
+ * unfrozen XIDs. Same with relminmxid, since a sequence will never
+ * contain multixacts.
+ */
+ Assert(seq_rel->rd_rel->relfrozenxid == InvalidTransactionId);
+ Assert(seq_rel->rd_rel->relminmxid == InvalidMultiXactId);
+
+ /*
+ * Insert the modified tuple into the new storage file.
+ *
+ * XXX Maybe this should also use created=true, just like the other places
+ * calling fill_seq_with_data. That's probably needed for correct cascading
+ * replication.
+ *
+ * XXX That'd mean all fill_seq_with_data callers use created=true, making
+ * the parameter unnecessary.
+ */
+ fill_seq_with_data(seq_rel, tuple);
+
+ /* Clear local cache so that we don't think we have cached numbers */
+ /* Note that we do not change the currval() state */
+ elm->cached = elm->last;
+
+ relation_close(seq_rel, NoLock);
+}
+
/*
* Initialize a sequence's relation with the specified tuple as content
*/
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 3ef6607d246..5beb67e7652 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -85,6 +85,7 @@ typedef struct SubOpts
} SubOpts;
static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
static void check_duplicates_in_publist(List *publist, Datum *datums);
static List *merge_publications(List *oldpublist, List *newpublist, bool addpub, const char *subname);
static void ReportSlotConnectionError(List *rstates, Oid subid, char *slotname, char *err);
@@ -496,6 +497,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
char *err;
WalReceiverConn *wrconn;
List *tables;
+ List *sequences;
ListCell *lc;
char table_state;
@@ -534,6 +536,26 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
InvalidXLogRecPtr);
}
+ /*
+ * Get the sequence list from publisher and build local sequence
+ * status info.
+ */
+ sequences = fetch_sequence_list(wrconn, publications);
+ foreach(lc, sequences)
+ {
+ RangeVar *rv = (RangeVar *) lfirst(lc);
+ Oid relid;
+
+ relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+ /* Check for supported relkind. */
+ CheckSubscriptionRelkind(get_rel_relkind(relid),
+ rv->schemaname, rv->relname);
+
+ AddSubscriptionRelState(subid, relid, table_state,
+ InvalidXLogRecPtr);
+ }
+
/*
* If requested, create permanent slot for the subscription. We
* won't use the initial snapshot for anything, so no need to
@@ -706,6 +728,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data)
{
Oid relid = subrel_local_oids[off];
+ /* XXX ignore sequences - maybe do this in GetSubscriptionRelations? */
+ if (get_rel_relkind(relid) == RELKIND_SEQUENCE)
+ continue;
+
if (!bsearch(&relid, pubrel_local_oids,
list_length(pubrel_names), sizeof(Oid), oid_cmp))
{
@@ -797,6 +823,183 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data)
ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
}
}
+
+ /*
+ * XXX now do the same thing for sequences, maybe before the preceding
+ * block, or earlier?
+ */
+
+ /* Get the sequence list from publisher. */
+ pubrel_names = fetch_sequence_list(wrconn, sub->publications);
+
+ /* Get the local relation list. */
+ subrel_states = GetSubscriptionRelations(sub->oid);
+
+ /*
+ * Build qsorted array of local table oids for faster lookup. This can
+ * potentially contain all tables in the database so speed of lookup
+ * is important.
+ */
+ subrel_local_oids = palloc(list_length(subrel_states) * sizeof(Oid));
+ off = 0;
+ foreach(lc, subrel_states)
+ {
+ SubscriptionRelState *relstate = (SubscriptionRelState *) lfirst(lc);
+
+ subrel_local_oids[off++] = relstate->relid;
+ }
+ qsort(subrel_local_oids, list_length(subrel_states),
+ sizeof(Oid), oid_cmp);
+
+ /*
+ * Rels that we want to remove from subscription and drop any slots
+ * and origins corresponding to them.
+ */
+ sub_remove_rels = palloc(list_length(subrel_states) * sizeof(SubRemoveRels));
+
+ /*
+ * Walk over the remote sequences and try to match them to locally known
+ * relations. If the sequence is not known locally, create a new state
+ * for it.
+ *
+ * Also builds an array of local oids of remote sequences for the next step.
+ */
+ off = 0;
+ pubrel_local_oids = palloc(list_length(pubrel_names) * sizeof(Oid));
+
+ foreach(lc, pubrel_names)
+ {
+ RangeVar *rv = (RangeVar *) lfirst(lc);
+ Oid relid;
+
+ relid = RangeVarGetRelid(rv, AccessShareLock, false);
+
+ /* Check for supported relkind. */
+ CheckSubscriptionRelkind(get_rel_relkind(relid),
+ rv->schemaname, rv->relname);
+
+ pubrel_local_oids[off++] = relid;
+
+ if (!bsearch(&relid, subrel_local_oids,
+ list_length(subrel_states), sizeof(Oid), oid_cmp))
+ {
+ AddSubscriptionRelState(sub->oid, relid,
+ copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
+ InvalidXLogRecPtr);
+ ereport(DEBUG1,
+ (errmsg_internal("table \"%s.%s\" added to subscription \"%s\"",
+ rv->schemaname, rv->relname, sub->name)));
+ }
+ }
+
+ /*
+ * Next remove state for sequences we should not care about anymore
+ * using the data we collected above
+ */
+ qsort(pubrel_local_oids, list_length(pubrel_names),
+ sizeof(Oid), oid_cmp);
+
+ remove_rel_len = 0;
+ for (off = 0; off < list_length(subrel_states); off++)
+ {
+ Oid relid = subrel_local_oids[off];
+
+ /* XXX ignore non-sequences - maybe do this in GetSubscriptionRelations? */
+ if (get_rel_relkind(relid) != RELKIND_SEQUENCE)
+ continue;
+
+ if (!bsearch(&relid, pubrel_local_oids,
+ list_length(pubrel_names), sizeof(Oid), oid_cmp))
+ {
+ char state;
+ XLogRecPtr statelsn;
+
+ /*
+ * Lock pg_subscription_rel with AccessExclusiveLock to
+ * prevent any race conditions with the apply worker
+ * re-launching workers at the same time this code is trying
+ * to remove those tables.
+ *
+ * Even if new worker for this particular rel is restarted it
+ * won't be able to make any progress as we hold exclusive
+ * lock on subscription_rel till the transaction end. It will
+ * simply exit as there is no corresponding rel entry.
+ *
+ * This locking also ensures that the state of rels won't
+ * change till we are done with this refresh operation.
+ */
+ if (!rel)
+ rel = table_open(SubscriptionRelRelationId, AccessExclusiveLock);
+
+ /* Last known rel state. */
+ state = GetSubscriptionRelState(sub->oid, relid, &statelsn);
+
+ sub_remove_rels[remove_rel_len].relid = relid;
+ sub_remove_rels[remove_rel_len++].state = state;
+
+ RemoveSubscriptionRel(sub->oid, relid);
+
+ logicalrep_worker_stop(sub->oid, relid);
+
+ /*
+ * For READY state, we would have already dropped the
+ * tablesync origin.
+ */
+ if (state != SUBREL_STATE_READY)
+ {
+ char originname[NAMEDATALEN];
+
+ /*
+ * Drop the tablesync's origin tracking if exists.
+ *
+ * It is possible that the origin is not yet created for
+ * tablesync worker, this can happen for the states before
+ * SUBREL_STATE_FINISHEDCOPY. The apply worker can also
+ * concurrently try to drop the origin and by this time
+ * the origin might be already removed. For these reasons,
+ * passing missing_ok = true.
+ */
+ ReplicationOriginNameForTablesync(sub->oid, relid, originname,
+ sizeof(originname));
+ replorigin_drop_by_name(originname, true, false);
+ }
+
+ ereport(DEBUG1,
+ (errmsg_internal("table \"%s.%s\" removed from subscription \"%s\"",
+ get_namespace_name(get_rel_namespace(relid)),
+ get_rel_name(relid),
+ sub->name)));
+ }
+ }
+
+ /*
+ * Drop the tablesync slots associated with the removed sequences. This
+ * has to be at the end because otherwise if there is an error while
+ * doing the database operations we won't be able to rollback dropped
+ * slots.
+ */
+ for (off = 0; off < remove_rel_len; off++)
+ {
+ if (sub_remove_rels[off].state != SUBREL_STATE_READY &&
+ sub_remove_rels[off].state != SUBREL_STATE_SYNCDONE)
+ {
+ char syncslotname[NAMEDATALEN] = {0};
+
+ /*
+ * For READY/SYNCDONE states we know the tablesync slot has
+ * already been dropped by the tablesync worker.
+ *
+ * For other states, there is no certainty, maybe the slot
+ * does not exist yet. Also, if we fail after removing some of
+ * the slots, next time, it will again try to drop already
+ * dropped slots and fail. For these reasons, we allow
+ * missing_ok = true for the drop.
+ */
+ ReplicationSlotNameForTablesync(sub->oid, sub_remove_rels[off].relid,
+ syncslotname, sizeof(syncslotname));
+ ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
+ }
+ }
+
}
PG_FINALLY();
{
@@ -1616,6 +1819,75 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
return tablelist;
}
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+ WalRcvExecResult *res;
+ StringInfoData cmd;
+ TupleTableSlot *slot;
+ Oid tableRow[2] = {TEXTOID, TEXTOID};
+ ListCell *lc;
+ bool first;
+ List *seqlist = NIL;
+
+ Assert(list_length(publications) > 0);
+
+ initStringInfo(&cmd);
+ appendStringInfoString(&cmd, "SELECT DISTINCT s.schemaname, s.sequencename\n"
+ " FROM pg_catalog.pg_publication_sequences s\n"
+ " WHERE s.pubname IN (");
+ first = true;
+ foreach(lc, publications)
+ {
+ char *pubname = strVal(lfirst(lc));
+
+ if (first)
+ first = false;
+ else
+ appendStringInfoString(&cmd, ", ");
+
+ appendStringInfoString(&cmd, quote_literal_cstr(pubname));
+ }
+ appendStringInfoChar(&cmd, ')');
+
+ res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+ pfree(cmd.data);
+
+ if (res->status != WALRCV_OK_TUPLES)
+ ereport(ERROR,
+ (errmsg("could not receive list of replicated tables from the publisher: %s",
+ res->err)));
+
+ /* Process sequences. */
+ slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+ while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+ {
+ char *nspname;
+ char *relname;
+ bool isnull;
+ RangeVar *rv;
+
+ nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+ Assert(!isnull);
+ relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+ Assert(!isnull);
+
+ rv = makeRangeVar(nspname, relname, -1);
+ seqlist = lappend(seqlist, rv);
+
+ ExecClearTuple(slot);
+ }
+ ExecDropSingleTupleTableSlot(slot);
+
+ walrcv_clear_result(res);
+
+ return seqlist;
+}
+
/*
* This is to report the connection failure while dropping replication slots.
* Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index 09f78f22441..cbcf19dd9b7 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -635,7 +635,7 @@ void
CheckSubscriptionRelkind(char relkind, const char *nspname,
const char *relname)
{
- if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+ if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE && relkind != RELKIND_SEQUENCE)
ereport(ERROR,
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index d4f8455a2bd..cc936328a32 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -4863,6 +4863,7 @@ _copyCreatePublicationStmt(const CreatePublicationStmt *from)
COPY_NODE_FIELD(options);
COPY_NODE_FIELD(pubobjects);
COPY_SCALAR_FIELD(for_all_tables);
+ COPY_SCALAR_FIELD(for_all_sequences);
return newnode;
}
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index f1002afe7a0..692cf0e1352 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -2346,6 +2346,7 @@ _equalAlterPublicationStmt(const AlterPublicationStmt *a,
COMPARE_NODE_FIELD(options);
COMPARE_NODE_FIELD(pubobjects);
COMPARE_SCALAR_FIELD(for_all_tables);
+ COMPARE_SCALAR_FIELD(for_all_sequences);
COMPARE_SCALAR_FIELD(action);
return true;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index a03b33b53bd..9097ac3fabd 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -9701,12 +9701,16 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
*
* CREATE PUBLICATION FOR ALL TABLES [WITH options]
*
+ * CREATE PUBLICATION FOR ALL SEQUENCES [WITH options]
+ *
* CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
*
* pub_obj is one of:
*
* TABLE table [, ...]
+ * SEQUENCE table [, ...]
* ALL TABLES IN SCHEMA schema [, ...]
+ * ALL SEQUENCES IN SCHEMA schema [, ...]
*
*****************************************************************************/
@@ -9726,6 +9730,14 @@ CreatePublicationStmt:
n->for_all_tables = true;
$$ = (Node *)n;
}
+ | CREATE PUBLICATION name FOR ALL SEQUENCES opt_definition
+ {
+ CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
+ n->pubname = $3;
+ n->options = $7;
+ n->for_all_sequences = true;
+ $$ = (Node *)n;
+ }
| CREATE PUBLICATION name FOR pub_obj_list opt_definition
{
CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
@@ -9772,6 +9784,26 @@ PublicationObjSpec:
$$->pubobjtype = PUBLICATIONOBJ_TABLES_IN_CUR_SCHEMA;
$$->location = @5;
}
+ | SEQUENCE relation_expr
+ {
+ $$ = makeNode(PublicationObjSpec);
+ $$->pubobjtype = PUBLICATIONOBJ_SEQUENCE;
+ $$->pubtable = makeNode(PublicationTable);
+ $$->pubtable->relation = $2;
+ }
+ | ALL SEQUENCES IN_P SCHEMA ColId
+ {
+ $$ = makeNode(PublicationObjSpec);
+ $$->pubobjtype = PUBLICATIONOBJ_SEQUENCES_IN_SCHEMA;
+ $$->name = $5;
+ $$->location = @5;
+ }
+ | ALL SEQUENCES IN_P SCHEMA CURRENT_SCHEMA
+ {
+ $$ = makeNode(PublicationObjSpec);
+ $$->pubobjtype = PUBLICATIONOBJ_SEQUENCES_IN_CUR_SCHEMA;
+ $$->location = @5;
+ }
| ColId OptWhereClause
{
$$ = makeNode(PublicationObjSpec);
@@ -9839,7 +9871,9 @@ pub_obj_list: PublicationObjSpec
* pub_obj is one of:
*
* TABLE table_name [, ...]
+ * SEQUENCE table_name [, ...]
* ALL TABLES IN SCHEMA schema_name [, ...]
+ * ALL SEQUENCES IN SCHEMA schema_name [, ...]
*
*****************************************************************************/
@@ -10124,6 +10158,12 @@ UnlistenStmt:
}
;
+/*
+ * FIXME: add opt_publication_for_sequences and publication_for_sequences,
+ * as sequence counterparts of the existing publication-for-tables rules.
+ */
/*****************************************************************************
*
diff --git a/src/backend/replication/logical/proto.c b/src/backend/replication/logical/proto.c
index c9b0eeefd7e..3dbe85d61a2 100644
--- a/src/backend/replication/logical/proto.c
+++ b/src/backend/replication/logical/proto.c
@@ -648,6 +648,56 @@ logicalrep_write_message(StringInfo out, TransactionId xid, XLogRecPtr lsn,
pq_sendbytes(out, message, sz);
}
+/*
+ * Write SEQUENCE to stream
+ */
+void
+logicalrep_write_sequence(StringInfo out, Relation rel, TransactionId xid,
+ XLogRecPtr lsn, bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ uint8 flags = 0;
+ char *relname;
+
+ pq_sendbyte(out, LOGICAL_REP_MSG_SEQUENCE);
+
+ /* transaction ID (if not valid, we're not streaming) */
+ if (TransactionIdIsValid(xid))
+ pq_sendint32(out, xid);
+
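+ /* flags are currently unused (reserved for future use) */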
+ pq_sendint8(out, flags);
+ pq_sendint64(out, lsn);
+
+ logicalrep_write_namespace(out, RelationGetNamespace(rel));
+ relname = RelationGetRelationName(rel);
+ pq_sendstring(out, relname);
+
+ pq_sendint8(out, transactional);
+ pq_sendint64(out, last_value);
+ pq_sendint64(out, log_cnt);
+ pq_sendint8(out, is_called);
+}
+
+/*
+ * Read SEQUENCE from the stream.
+ */
+void
+logicalrep_read_sequence(StringInfo in, LogicalRepSequence *seqdata)
+{
+ /* XXX skipping flags and lsn */
+ pq_getmsgint(in, 1);
+ pq_getmsgint64(in);
+
+ /* Read relation name from stream */
+ seqdata->nspname = pstrdup(logicalrep_read_namespace(in));
+ seqdata->seqname = pstrdup(pq_getmsgstring(in));
+
+ seqdata->transactional = pq_getmsgint(in, 1);
+ seqdata->last_value = pq_getmsgint64(in);
+ seqdata->log_cnt = pq_getmsgint64(in);
+ seqdata->is_called = pq_getmsgint(in, 1);
+}
+
/*
* Write relation description to the output stream.
*/
@@ -1203,6 +1253,8 @@ logicalrep_message_type(LogicalRepMsgType action)
return "STREAM ABORT";
case LOGICAL_REP_MSG_STREAM_PREPARE:
return "STREAM PREPARE";
+ case LOGICAL_REP_MSG_SEQUENCE:
+ return "SEQUENCE";
}
elog(ERROR, "invalid logical replication message type \"%c\"", action);
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 1659964571c..315697069dc 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -100,6 +100,7 @@
#include "catalog/pg_subscription_rel.h"
#include "catalog/pg_type.h"
#include "commands/copy.h"
+#include "commands/sequence.h"
#include "miscadmin.h"
#include "parser/parse_relation.h"
#include "pgstat.h"
@@ -359,6 +360,12 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
*
* If the synchronization position is reached (SYNCDONE), then the table can
* be marked as READY and is no longer tracked.
+ *
+ * XXX This needs to handle sequences too - after AlterSubscription_refresh
+ * starts caring about sequences, GetSubscriptionNotReadyRelations won't
+ * return just tables, and we'll have to sync them here. Not sure it's worth
+ * creating a new "sync" worker per sequence, maybe we should just sync them
+ * in the current process (it's pretty light-weight).
*/
static void
process_syncing_tables_for_apply(XLogRecPtr current_lsn)
@@ -999,6 +1006,99 @@ copy_table(Relation rel)
logicalrep_rel_close(relmapentry, NoLock);
}
+
+
+/*
+ * Fetch the current state (last_value, log_cnt, is_called) of a sequence
+ * from the publisher, by reading the sequence relation directly.
+ */
+static void
+fetch_sequence_data(char *nspname, char *relname,
+ int64 *last_value, int64 *log_cnt, bool *is_called)
+{
+ WalRcvExecResult *res;
+ StringInfoData cmd;
+ TupleTableSlot *slot;
+ Oid tableRow[3] = {INT8OID, INT8OID, BOOLOID};
+
+ initStringInfo(&cmd);
+ appendStringInfo(&cmd, "SELECT last_value, log_cnt, is_called\n"
+ " FROM %s", quote_qualified_identifier(nspname, relname));
+
+ res = walrcv_exec(LogRepWorkerWalRcvConn, cmd.data, 3, tableRow);
+ pfree(cmd.data);
+
+ if (res->status != WALRCV_OK_TUPLES)
+ ereport(ERROR,
+ (errmsg("could not receive list of replicated tables from the publisher: %s",
+ res->err)));
+
+ /* Process the sequence. */
+ slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+ while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+ {
+ bool isnull;
+
+ *last_value = DatumGetInt64(slot_getattr(slot, 1, &isnull));
+ Assert(!isnull);
+
+ *log_cnt = DatumGetInt64(slot_getattr(slot, 2, &isnull));
+ Assert(!isnull);
+
+ *is_called = DatumGetBool(slot_getattr(slot, 3, &isnull));
+ Assert(!isnull);
+
+ ExecClearTuple(slot);
+ }
+ ExecDropSingleTupleTableSlot(slot);
+
+ walrcv_clear_result(res);
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Caller is responsible for locking the local relation.
+ */
+static void
+copy_sequence(Relation rel)
+{
+ LogicalRepRelMapEntry *relmapentry;
+ LogicalRepRelation lrel;
+ List *qual = NIL;
+ int64 last_value = 0,
+ log_cnt = 0;
+ bool is_called = false;
+
+ /* Get the publisher relation info. */
+ fetch_remote_table_info(get_namespace_name(RelationGetNamespace(rel)),
+ RelationGetRelationName(rel), &lrel, &qual);
+
+ /* sequences don't have row filters */
+ Assert(!qual);
+
+ /* Put the relation into relmap. */
+ logicalrep_relmap_update(&lrel);
+
+ /* Map the publisher relation to local one. */
+ relmapentry = logicalrep_rel_open(lrel.remoteid, NoLock);
+ Assert(rel == relmapentry->localrel);
+
+ Assert(lrel.relkind == RELKIND_SEQUENCE);
+
+ fetch_sequence_data(lrel.nspname, lrel.relname, &last_value, &log_cnt, &is_called);
+
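+ /* Set the local sequence to the state fetched from the publisher. */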
+ ResetSequence2(RelationGetRelid(rel), last_value, log_cnt, is_called);
+
+ logicalrep_rel_close(relmapentry, NoLock);
+}
+
+
+
+
/*
* Determine the tablesync slot name.
*
@@ -1260,10 +1360,20 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
originname)));
}
- /* Now do the initial data copy */
- PushActiveSnapshot(GetTransactionSnapshot());
- copy_table(rel);
- PopActiveSnapshot();
+ if (get_rel_relkind(RelationGetRelid(rel)) == RELKIND_SEQUENCE)
+ {
+ /* Now do the initial sequence copy */
+ PushActiveSnapshot(GetTransactionSnapshot());
+ copy_sequence(rel);
+ PopActiveSnapshot();
+ }
+ else
+ {
+ /* Now do the initial data copy */
+ PushActiveSnapshot(GetTransactionSnapshot());
+ copy_table(rel);
+ PopActiveSnapshot();
+ }
res = walrcv_exec(LogRepWorkerWalRcvConn, "COMMIT", 0, NULL);
if (res->status != WALRCV_OK_COMMAND)
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 8653e1d8402..860c31fa05d 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -144,6 +144,7 @@
#include "catalog/pg_tablespace.h"
#include "commands/tablecmds.h"
#include "commands/tablespace.h"
+#include "commands/sequence.h"
#include "commands/trigger.h"
#include "executor/executor.h"
#include "executor/execPartition.h"
@@ -1095,6 +1096,57 @@ apply_handle_origin(StringInfo s)
errmsg_internal("ORIGIN message sent out of order")));
}
+/*
+ * Handle SEQUENCE message.
+ */
+static void
+apply_handle_sequence(StringInfo s)
+{
+ LogicalRepSequence seq;
+ Oid relid;
+
+ if (handle_streamed_transaction(LOGICAL_REP_MSG_SEQUENCE, s))
+ return;
+
+ logicalrep_read_sequence(s, &seq);
+
+ /*
+ * Transactional sequence updates must arrive as part of a remote
+ * transaction, while non-transactional updates must not - and for
+ * those there must be no local transaction open yet either.
+ */
+ Assert((!seq.transactional) || in_remote_transaction);
+ Assert(!(!seq.transactional && in_remote_transaction));
+ Assert(!(!seq.transactional && IsTransactionState()));
+
+ /*
+ * Make sure we're in a transaction (needed by ResetSequence2). For
+ * non-transactional updates we're guaranteed to start a new one,
+ * and we'll commit it at the end.
+ */
+ if (!IsTransactionState())
+ {
+ StartTransactionCommand();
+ maybe_reread_subscription();
+ }
+
+ relid = RangeVarGetRelid(makeRangeVar(seq.nspname,
+ seq.seqname, -1),
+ RowExclusiveLock, false);
+
+ /* lock the sequence in AccessExclusiveLock, as expected by ResetSequence2 */
+ LockRelationOid(relid, AccessExclusiveLock);
+
+ /* apply the sequence change */
+ ResetSequence2(relid, seq.last_value, seq.log_cnt, seq.is_called);
+
+ /*
+ * Commit the transaction we started above (we only do this when not in
+ * a remote transaction, i.e. for non-transactional sequence updates).
+ */
+ if (!in_remote_transaction)
+ CommitTransactionCommand();
+}
+
/*
* Handle STREAM START message.
*/
@@ -2423,6 +2475,10 @@ apply_dispatch(StringInfo s)
*/
break;
+ case LOGICAL_REP_MSG_SEQUENCE:
+ apply_handle_sequence(s);
+ return;
+
case LOGICAL_REP_MSG_STREAM_START:
apply_handle_stream_start(s);
break;
diff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c
index ea57a0477f0..60bbc524b19 100644
--- a/src/backend/replication/pgoutput/pgoutput.c
+++ b/src/backend/replication/pgoutput/pgoutput.c
@@ -53,6 +53,10 @@ static void pgoutput_message(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn, XLogRecPtr message_lsn,
bool transactional, const char *prefix,
Size sz, const char *message);
+static void pgoutput_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation relation, bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called);
static bool pgoutput_origin_filter(LogicalDecodingContext *ctx,
RepOriginId origin_id);
static void pgoutput_begin_prepare_txn(LogicalDecodingContext *ctx,
@@ -208,6 +212,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->change_cb = pgoutput_change;
cb->truncate_cb = pgoutput_truncate;
cb->message_cb = pgoutput_message;
+ cb->sequence_cb = pgoutput_sequence;
cb->commit_cb = pgoutput_commit_txn;
cb->begin_prepare_cb = pgoutput_begin_prepare_txn;
@@ -224,6 +229,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->stream_commit_cb = pgoutput_stream_commit;
cb->stream_change_cb = pgoutput_change;
cb->stream_message_cb = pgoutput_message;
+ cb->stream_sequence_cb = pgoutput_sequence;
cb->stream_truncate_cb = pgoutput_truncate;
/* transaction streaming - two-phase commit */
cb->stream_prepare_cb = pgoutput_stream_prepare_txn;
@@ -237,6 +243,7 @@ parse_output_parameters(List *options, PGOutputData *data)
bool publication_names_given = false;
bool binary_option_given = false;
bool messages_option_given = false;
+ bool sequences_option_given = false;
bool streaming_given = false;
bool two_phase_option_given = false;
@@ -244,6 +251,7 @@ parse_output_parameters(List *options, PGOutputData *data)
data->streaming = false;
data->messages = false;
data->two_phase = false;
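+ /* Unlike the other options, sequence replication is enabled by default. */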
+ data->sequences = true;
foreach(lc, options)
{
@@ -312,6 +320,16 @@ parse_output_parameters(List *options, PGOutputData *data)
data->messages = defGetBoolean(defel);
}
+ else if (strcmp(defel->defname, "sequences") == 0)
+ {
+ if (sequences_option_given)
+ ereport(ERROR,
+ (errcode(ERRCODE_SYNTAX_ERROR),
+ errmsg("conflicting or redundant options")));
+ sequences_option_given = true;
+
+ data->sequences = defGetBoolean(defel);
+ }
else if (strcmp(defel->defname, "streaming") == 0)
{
if (streaming_given)
@@ -1440,6 +1458,51 @@ pgoutput_message(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
OutputPluginWrite(ctx, true);
}
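+
+/*
+ * Send the decoded sequence state to the output stream as a SEQUENCE
+ * message, subject to the "sequences" option and the publication filter.
+ */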
+static void
+pgoutput_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation relation, bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ PGOutputData *data = (PGOutputData *) ctx->output_plugin_private;
+ TransactionId xid = InvalidTransactionId;
+ RelationSyncEntry *relentry;
+
+ if (!data->sequences)
+ return;
+
+ if (!is_publishable_relation(relation))
+ return;
+
+ /*
+ * Remember the xid for the message in streaming mode. See
+ * pgoutput_change.
+ */
+ if (in_streaming)
+ xid = txn->xid;
+
+ relentry = get_rel_sync_entry(data, relation);
+
+ /*
+ * First check the sequence filter.
+ *
+ * We handle just REORDER_BUFFER_CHANGE_SEQUENCE here.
+ */
+ if (!relentry->pubactions.pubsequence)
+ return;
+
+ OutputPluginPrepareWrite(ctx, true);
+ logicalrep_write_sequence(ctx->out,
+ relation,
+ xid,
+ sequence_lsn,
+ transactional,
+ last_value,
+ log_cnt,
+ is_called);
+ OutputPluginWrite(ctx, true);
+}
+
/*
* Currently we always forward.
*/
@@ -1725,7 +1788,8 @@ get_rel_sync_entry(PGOutputData *data, Relation relation)
entry->schema_sent = false;
entry->streamed_txns = NIL;
entry->pubactions.pubinsert = entry->pubactions.pubupdate =
- entry->pubactions.pubdelete = entry->pubactions.pubtruncate = false;
+ entry->pubactions.pubdelete = entry->pubactions.pubtruncate =
+ entry->pubactions.pubsequence = false;
entry->new_slot = NULL;
entry->old_slot = NULL;
memset(entry->exprstate, 0, sizeof(entry->exprstate));
@@ -1751,6 +1815,7 @@ get_rel_sync_entry(PGOutputData *data, Relation relation)
bool am_partition = get_rel_relispartition(relid);
char relkind = get_rel_relkind(relid);
List *rel_publications = NIL;
+ bool is_sequence = (relkind == RELKIND_SEQUENCE);
/* Reload publications if needed before use. */
if (!publications_valid)
@@ -1779,6 +1844,7 @@ get_rel_sync_entry(PGOutputData *data, Relation relation)
entry->pubactions.pubupdate = false;
entry->pubactions.pubdelete = false;
entry->pubactions.pubtruncate = false;
+ entry->pubactions.pubsequence = false;
/*
* Tuple slots cleanups. (Will be rebuilt later if needed).
@@ -1815,12 +1881,23 @@ get_rel_sync_entry(PGOutputData *data, Relation relation)
Publication *pub = lfirst(lc);
bool publish = false;
- if (pub->alltables)
+ if (pub->alltables && (!is_sequence))
{
publish = true;
if (pub->pubviaroot && am_partition)
publish_as_relid = llast_oid(get_partition_ancestors(relid));
}
+ else if (pub->allsequences && is_sequence)
+ {
+ publish = true;
+ }
+
+ /* if a sequence, just cross-check the list of publications */
+ if (!publish && is_sequence)
+ {
+ if (list_member_oid(pubids, pub->oid))
+ publish = true;
+ }
if (!publish)
{
@@ -1866,6 +1943,7 @@ get_rel_sync_entry(PGOutputData *data, Relation relation)
entry->pubactions.pubupdate |= pub->pubactions.pubupdate;
entry->pubactions.pubdelete |= pub->pubactions.pubdelete;
entry->pubactions.pubtruncate |= pub->pubactions.pubtruncate;
+ entry->pubactions.pubsequence |= pub->pubactions.pubsequence;
rel_publications = lappend(rel_publications, pub);
}
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index fccffce5729..c29d822313d 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -5587,7 +5587,15 @@ RelationBuildPublicationDesc(Relation relation, PublicationDesc *pubdesc)
GetSchemaPublications(schemaid));
}
}
- puboids = list_concat_unique_oid(puboids, GetAllTablesPublications());
+
+ /*
+ * Consider also FOR ALL TABLES and FOR ALL SEQUENCES publications,
+ * depending on the relkind of the relation.
+ */
+ if (relation->rd_rel->relkind == RELKIND_SEQUENCE)
+ puboids = list_concat_unique_oid(puboids, GetAllSequencesPublications());
+ else
+ puboids = list_concat_unique_oid(puboids, GetAllTablesPublications());
foreach(lc, puboids)
{
@@ -5606,6 +5614,7 @@ RelationBuildPublicationDesc(Relation relation, PublicationDesc *pubdesc)
pubdesc->pubactions.pubupdate |= pubform->pubupdate;
pubdesc->pubactions.pubdelete |= pubform->pubdelete;
pubdesc->pubactions.pubtruncate |= pubform->pubtruncate;
+ pubdesc->pubactions.pubsequence |= pubform->pubsequence;
/*
* Check if all columns referenced in the filter expression are part of
@@ -5631,6 +5640,8 @@ RelationBuildPublicationDesc(Relation relation, PublicationDesc *pubdesc)
* If we know everything is replicated and the row filter is invalid
* for update and delete, there is no point to check for other
* publications.
+ *
+ * XXX We ignore sequences here, because those don't use row filters.
*/
if (pubdesc->pubactions.pubinsert && pubdesc->pubactions.pubupdate &&
pubdesc->pubactions.pubdelete && pubdesc->pubactions.pubtruncate &&
diff --git a/src/backend/utils/cache/syscache.c b/src/backend/utils/cache/syscache.c
index f4e7819f1e2..763aac81596 100644
--- a/src/backend/utils/cache/syscache.c
+++ b/src/backend/utils/cache/syscache.c
@@ -630,12 +630,12 @@ static const struct cachedesc cacheinfo[] = {
64
},
{PublicationNamespaceRelationId, /* PUBLICATIONNAMESPACEMAP */
- PublicationNamespacePnnspidPnpubidIndexId,
- 2,
+ PublicationNamespacePnnspidPnpubidSeqIndexId,
+ 3,
{
Anum_pg_publication_namespace_pnnspid,
Anum_pg_publication_namespace_pnpubid,
- 0,
+ Anum_pg_publication_namespace_pnsequences,
0
},
64
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index e69dcf8a484..ef8c6e43c61 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -3788,10 +3788,12 @@ getPublications(Archive *fout, int *numPublications)
int i_pubname;
int i_pubowner;
int i_puballtables;
+ int i_puballsequences;
int i_pubinsert;
int i_pubupdate;
int i_pubdelete;
int i_pubtruncate;
+ int i_pubsequence;
int i_pubviaroot;
int i,
ntups;
@@ -3807,23 +3809,29 @@ getPublications(Archive *fout, int *numPublications)
resetPQExpBuffer(query);
/* Get the publications. */
- if (fout->remoteVersion >= 130000)
+ if (fout->remoteVersion >= 150000)
+ appendPQExpBuffer(query,
+ "SELECT p.tableoid, p.oid, p.pubname, "
+ "p.pubowner, "
+ "p.puballtables, p.puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubsequence, p.pubviaroot "
+ "FROM pg_publication p");
+ else if (fout->remoteVersion >= 130000)
appendPQExpBuffer(query,
"SELECT p.tableoid, p.oid, p.pubname, "
"p.pubowner, "
- "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+ "p.puballtables, false AS p.puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS p.pubsequence, p.pubviaroot "
"FROM pg_publication p");
else if (fout->remoteVersion >= 110000)
appendPQExpBuffer(query,
"SELECT p.tableoid, p.oid, p.pubname, "
"p.pubowner, "
- "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
+ "p.puballtables, false AS p.puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS p.pubsequence, false AS pubviaroot "
"FROM pg_publication p");
else
appendPQExpBuffer(query,
"SELECT p.tableoid, p.oid, p.pubname, "
"p.pubowner, "
- "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
+ "p.puballtables, false AS p.puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS p.pubsequence, false AS pubviaroot "
"FROM pg_publication p");
res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
@@ -3835,10 +3843,12 @@ getPublications(Archive *fout, int *numPublications)
i_pubname = PQfnumber(res, "pubname");
i_pubowner = PQfnumber(res, "pubowner");
i_puballtables = PQfnumber(res, "puballtables");
+ i_puballsequences = PQfnumber(res, "puballsequences");
i_pubinsert = PQfnumber(res, "pubinsert");
i_pubupdate = PQfnumber(res, "pubupdate");
i_pubdelete = PQfnumber(res, "pubdelete");
i_pubtruncate = PQfnumber(res, "pubtruncate");
+ i_pubsequence = PQfnumber(res, "pubsequence");
i_pubviaroot = PQfnumber(res, "pubviaroot");
pubinfo = pg_malloc(ntups * sizeof(PublicationInfo));
@@ -3854,6 +3864,8 @@ getPublications(Archive *fout, int *numPublications)
pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
pubinfo[i].puballtables =
(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+ pubinfo[i].puballsequences =
+ (strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
pubinfo[i].pubinsert =
(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
pubinfo[i].pubupdate =
@@ -3862,6 +3874,8 @@ getPublications(Archive *fout, int *numPublications)
(strcmp(PQgetvalue(res, i, i_pubdelete), "t") == 0);
pubinfo[i].pubtruncate =
(strcmp(PQgetvalue(res, i, i_pubtruncate), "t") == 0);
+ pubinfo[i].pubsequence =
+ (strcmp(PQgetvalue(res, i, i_pubsequence), "t") == 0);
pubinfo[i].pubviaroot =
(strcmp(PQgetvalue(res, i, i_pubviaroot), "t") == 0);
@@ -3907,6 +3921,9 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
if (pubinfo->puballtables)
appendPQExpBufferStr(query, " FOR ALL TABLES");
+ if (pubinfo->puballsequences)
+ appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
+
appendPQExpBufferStr(query, " WITH (publish = '");
if (pubinfo->pubinsert)
{
@@ -3941,6 +3958,15 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
first = false;
}
+ if (pubinfo->pubsequence)
+ {
+ if (!first)
+ appendPQExpBufferStr(query, ", ");
+
+ appendPQExpBufferStr(query, "sequence");
+ first = false;
+ }
+
appendPQExpBufferStr(query, "'");
if (pubinfo->pubviaroot)
@@ -3987,6 +4013,7 @@ getPublicationNamespaces(Archive *fout)
int i_oid;
int i_pnpubid;
int i_pnnspid;
+ int i_pnsequences;
int i,
j,
ntups;
@@ -3998,7 +4025,7 @@ getPublicationNamespaces(Archive *fout)
/* Collect all publication membership info. */
appendPQExpBufferStr(query,
- "SELECT tableoid, oid, pnpubid, pnnspid "
+ "SELECT tableoid, oid, pnpubid, pnnspid, pnsequences "
"FROM pg_catalog.pg_publication_namespace");
res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
@@ -4008,6 +4035,7 @@ getPublicationNamespaces(Archive *fout)
i_oid = PQfnumber(res, "oid");
i_pnpubid = PQfnumber(res, "pnpubid");
i_pnnspid = PQfnumber(res, "pnnspid");
+ i_pnsequences = PQfnumber(res, "pnsequences");
/* this allocation may be more than we need */
pubsinfo = pg_malloc(ntups * sizeof(PublicationSchemaInfo));
@@ -4017,6 +4045,7 @@ getPublicationNamespaces(Archive *fout)
{
Oid pnpubid = atooid(PQgetvalue(res, i, i_pnpubid));
Oid pnnspid = atooid(PQgetvalue(res, i, i_pnnspid));
+ bool pnsequences = (strcmp(PQgetvalue(res, i, i_pnsequences), "t") == 0);
PublicationInfo *pubinfo;
NamespaceInfo *nspinfo;
@@ -4048,6 +4077,7 @@ getPublicationNamespaces(Archive *fout)
pubsinfo[j].dobj.name = nspinfo->dobj.name;
pubsinfo[j].publication = pubinfo;
pubsinfo[j].pubschema = nspinfo;
+ pubsinfo[j].pubsequences = pnsequences;
/* Decide whether we want to dump it */
selectDumpablePublicationObject(&(pubsinfo[j].dobj), fout);
@@ -4181,7 +4211,11 @@ dumpPublicationNamespace(Archive *fout, const PublicationSchemaInfo *pubsinfo)
query = createPQExpBuffer();
appendPQExpBuffer(query, "ALTER PUBLICATION %s ", fmtId(pubinfo->dobj.name));
- appendPQExpBuffer(query, "ADD ALL TABLES IN SCHEMA %s;\n", fmtId(schemainfo->dobj.name));
+
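+ /* A schema-level entry publishes either its tables or its sequences. */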
+ if (pubsinfo->pubsequences)
+ appendPQExpBuffer(query, "ADD ALL SEQUENCES IN SCHEMA %s;\n", fmtId(schemainfo->dobj.name));
+ else
+ appendPQExpBuffer(query, "ADD ALL TABLES IN SCHEMA %s;\n", fmtId(schemainfo->dobj.name));
/*
* There is no point in creating drop query as the drop is done by schema
@@ -4214,6 +4248,7 @@ dumpPublicationTable(Archive *fout, const PublicationRelInfo *pubrinfo)
TableInfo *tbinfo = pubrinfo->pubtable;
PQExpBuffer query;
char *tag;
+ char *description;
/* Do nothing in data-only dump */
if (dopt->dataOnly)
@@ -4223,10 +4258,22 @@ dumpPublicationTable(Archive *fout, const PublicationRelInfo *pubrinfo)
query = createPQExpBuffer();
- appendPQExpBuffer(query, "ALTER PUBLICATION %s ADD TABLE ONLY",
- fmtId(pubinfo->dobj.name));
+ if (tbinfo->relkind == RELKIND_SEQUENCE)
+ {
+ appendPQExpBuffer(query, "ALTER PUBLICATION %s ADD SEQUENCE",
+ fmtId(pubinfo->dobj.name));
+ description = "PUBLICATION SEQUENCE";
+ }
+ else
+ {
+ appendPQExpBuffer(query, "ALTER PUBLICATION %s ADD TABLE ONLY",
+ fmtId(pubinfo->dobj.name));
+ description = "PUBLICATION TABLE";
+ }
+
appendPQExpBuffer(query, " %s",
fmtQualifiedDumpable(tbinfo));
+
if (pubrinfo->pubrelqual)
{
/*
@@ -4249,7 +4296,7 @@ dumpPublicationTable(Archive *fout, const PublicationRelInfo *pubrinfo)
ARCHIVE_OPTS(.tag = tag,
.namespace = tbinfo->dobj.namespace->dobj.name,
.owner = pubinfo->rolname,
- .description = "PUBLICATION TABLE",
+ .description = description,
.section = SECTION_POST_DATA,
.createStmt = query->data));
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 997a3b60719..8739af69122 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -615,10 +615,12 @@ typedef struct _PublicationInfo
DumpableObject dobj;
const char *rolname;
bool puballtables;
+ bool puballsequences;
bool pubinsert;
bool pubupdate;
bool pubdelete;
bool pubtruncate;
+ bool pubsequence;
bool pubviaroot;
} PublicationInfo;
@@ -643,6 +645,7 @@ typedef struct _PublicationSchemaInfo
DumpableObject dobj;
PublicationInfo *publication;
NamespaceInfo *pubschema;
+ bool pubsequences;
} PublicationSchemaInfo;
/*
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index 3e55ff26f82..1bcd845254c 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -2358,7 +2358,7 @@ my %tests = (
create_order => 50,
create_sql => 'CREATE PUBLICATION pub1;',
regexp => qr/^
- \QCREATE PUBLICATION pub1 WITH (publish = 'insert, update, delete, truncate');\E
+ \QCREATE PUBLICATION pub1 WITH (publish = 'insert, update, delete, truncate, sequence');\E
/xm,
like => { %full_runs, section_post_data => 1, },
},
@@ -2378,7 +2378,18 @@ my %tests = (
create_order => 50,
create_sql => 'CREATE PUBLICATION pub3;',
regexp => qr/^
- \QCREATE PUBLICATION pub3 WITH (publish = 'insert, update, delete, truncate');\E
+ \QCREATE PUBLICATION pub3 WITH (publish = 'insert, update, delete, truncate, sequence');\E
+ /xm,
+ like => { %full_runs, section_post_data => 1, },
+ },
+
+ 'CREATE PUBLICATION pub4' => {
+ create_order => 50,
+ create_sql => 'CREATE PUBLICATION pub4
+ FOR ALL SEQUENCES
+ WITH (publish = \'\');',
+ regexp => qr/^
+ \QCREATE PUBLICATION pub4 FOR ALL SEQUENCES WITH (publish = '');\E
/xm,
like => { %full_runs, section_post_data => 1, },
},
@@ -2387,7 +2398,7 @@ my %tests = (
create_order => 50,
create_sql => 'CREATE PUBLICATION pub4;',
regexp => qr/^
- \QCREATE PUBLICATION pub4 WITH (publish = 'insert, update, delete, truncate');\E
+ \QCREATE PUBLICATION pub4 WITH (publish = 'insert, update, delete, truncate, sequence');\E
/xm,
like => { %full_runs, section_post_data => 1, },
},
@@ -2474,6 +2485,27 @@ my %tests = (
unlike => { exclude_dump_test_schema => 1, },
},
+ 'ALTER PUBLICATION pub3 ADD ALL SEQUENCES IN SCHEMA dump_test' => {
+ create_order => 51,
+ create_sql =>
+ 'ALTER PUBLICATION pub3 ADD ALL SEQUENCES IN SCHEMA dump_test;',
+ regexp => qr/^
+ \QALTER PUBLICATION pub3 ADD ALL SEQUENCES IN SCHEMA dump_test;\E
+ /xm,
+ like => { %full_runs, section_post_data => 1, },
+ unlike => { exclude_dump_test_schema => 1, },
+ },
+
+ 'ALTER PUBLICATION pub3 ADD ALL SEQUENCES IN SCHEMA public' => {
+ create_order => 52,
+ create_sql =>
+ 'ALTER PUBLICATION pub3 ADD ALL SEQUENCES IN SCHEMA public;',
+ regexp => qr/^
+ \QALTER PUBLICATION pub3 ADD ALL SEQUENCES IN SCHEMA public;\E
+ /xm,
+ like => { %full_runs, section_post_data => 1, },
+ },
+
'CREATE SCHEMA public' => {
regexp => qr/^CREATE SCHEMA public;/m,
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index e3382933d98..3603950300e 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1621,28 +1621,19 @@ describeOneTableDetails(const char *schemaname,
if (tableinfo.relkind == RELKIND_SEQUENCE)
{
PGresult *result = NULL;
- printQueryOpt myopt = pset.popt;
- char *footers[2] = {NULL, NULL};
if (pset.sversion >= 100000)
{
printfPQExpBuffer(&buf,
- "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
- " seqstart AS \"%s\",\n"
- " seqmin AS \"%s\",\n"
- " seqmax AS \"%s\",\n"
- " seqincrement AS \"%s\",\n"
- " CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
- " seqcache AS \"%s\"\n",
- gettext_noop("Type"),
- gettext_noop("Start"),
- gettext_noop("Minimum"),
- gettext_noop("Maximum"),
- gettext_noop("Increment"),
+ "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+ " seqstart,\n"
+ " seqmin,\n"
+ " seqmax,\n"
+ " seqincrement,\n"
+ " CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+ " seqcache\n",
gettext_noop("yes"),
- gettext_noop("no"),
- gettext_noop("Cycles?"),
- gettext_noop("Cache"));
+ gettext_noop("no"));
appendPQExpBuffer(&buf,
"FROM pg_catalog.pg_sequence\n"
"WHERE seqrelid = '%s';",
@@ -1651,22 +1642,15 @@ describeOneTableDetails(const char *schemaname,
else
{
printfPQExpBuffer(&buf,
- "SELECT 'bigint' AS \"%s\",\n"
- " start_value AS \"%s\",\n"
- " min_value AS \"%s\",\n"
- " max_value AS \"%s\",\n"
- " increment_by AS \"%s\",\n"
- " CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
- " cache_value AS \"%s\"\n",
- gettext_noop("Type"),
- gettext_noop("Start"),
- gettext_noop("Minimum"),
- gettext_noop("Maximum"),
- gettext_noop("Increment"),
+ "SELECT 'bigint',\n"
+ " start_value,\n"
+ " min_value,\n"
+ " max_value,\n"
+ " increment_by,\n"
+ " CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+ " cache_value\n",
gettext_noop("yes"),
- gettext_noop("no"),
- gettext_noop("Cycles?"),
- gettext_noop("Cache"));
+ gettext_noop("no"));
appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
/* must be separate because fmtId isn't reentrant */
appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1676,6 +1660,51 @@ describeOneTableDetails(const char *schemaname,
if (!res)
goto error_return;
+ numrows = PQntuples(res);
+
+ /*
+ * XXX Reset to use expanded output for sequences (maybe we should keep
+ * this disabled, just like for tables?)
+ */
+ myopt.expanded = pset.popt.topt.expanded;
+
+ printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+ schemaname, relationname);
+
+ printTableInit(&cont, &myopt, title.data, 7, numrows);
+ printTableInitialized = true;
+
+ printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+ printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+ printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+ printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+ printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+ printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+ printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+ /* Generate table cells to be printed */
+ for (i = 0; i < numrows; i++)
+ {
+ /* Type */
+ printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+ /* Start */
+ printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+ /* Minimum */
+ printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+ /* Maximum */
+ printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+ /* Increment */
+ printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+ /* Cycles? */
+ printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+ /* Cache */
+ printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+ }
+
/* Footer information about a sequence */
/* Get the column that owns this sequence */
@@ -1709,29 +1738,63 @@ describeOneTableDetails(const char *schemaname,
switch (PQgetvalue(result, 0, 1)[0])
{
case 'a':
- footers[0] = psprintf(_("Owned by: %s"),
- PQgetvalue(result, 0, 0));
+ printTableAddFooter(&cont,
+ psprintf(_("Owned by: %s"),
+ PQgetvalue(result, 0, 0)));
break;
case 'i':
- footers[0] = psprintf(_("Sequence for identity column: %s"),
- PQgetvalue(result, 0, 0));
+ printTableAddFooter(&cont,
+ psprintf(_("Sequence for identity column: %s"),
+ PQgetvalue(result, 0, 0)));
break;
}
}
PQclear(result);
- printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
- schemaname, relationname);
+ /* print any publications */
+ if (pset.sversion >= 150000)
+ {
+ int tuples = 0;
- myopt.footers = footers;
- myopt.topt.default_footer = false;
- myopt.title = title.data;
- myopt.translate_header = true;
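+ /*
+ * A sequence may be published via ALL SEQUENCES IN SCHEMA, via an
+ * explicit SEQUENCE entry, or via a FOR ALL SEQUENCES publication;
+ * the three UNION branches below cover those cases.
+ */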
+ printfPQExpBuffer(&buf,
+ "SELECT pubname\n"
+ "FROM pg_catalog.pg_publication p\n"
+ " JOIN pg_catalog.pg_publication_namespace pn ON p.oid = pn.pnpubid\n"
+ " JOIN pg_catalog.pg_class pc ON pc.relnamespace = pn.pnnspid\n"
+ "WHERE pc.oid ='%s' and pn.pnsequences and pg_catalog.pg_relation_is_publishable('%s')\n"
+ "UNION\n"
+ "SELECT pubname\n"
+ "FROM pg_catalog.pg_publication p\n"
+ " JOIN pg_catalog.pg_publication_rel pr ON p.oid = pr.prpubid\n"
+ "WHERE pr.prrelid = '%s'\n"
+ "UNION\n"
+ "SELECT pubname\n"
+ "FROM pg_catalog.pg_publication p\n"
+ "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+ "ORDER BY 1;",
+ oid, oid, oid, oid);
- printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+ result = PSQLexec(buf.data);
+ if (!result)
+ goto error_return;
+ else
+ tuples = PQntuples(result);
+
+ if (tuples > 0)
+ printTableAddFooter(&cont, _("Publications:"));
+
+ /* Might be an empty set - that's ok */
+ for (i = 0; i < tuples; i++)
+ {
+ printfPQExpBuffer(&buf, " \"%s\"",
+ PQgetvalue(result, i, 0));
+
+ printTableAddFooter(&cont, buf.data);
+ }
+ PQclear(result);
+ }
- if (footers[0])
- free(footers[0]);
+ printTable(&cont, pset.queryFout, false, pset.logfile);
retval = true;
goto error_return; /* not an error, just return early */
@@ -1958,6 +2021,11 @@ describeOneTableDetails(const char *schemaname,
for (i = 0; i < cols; i++)
printTableAddHeader(&cont, headers[i], true, 'l');
+ res = PSQLexec(buf.data);
+ if (!res)
+ goto error_return;
+ numrows = PQntuples(res);
+
/* Generate table cells to be printed */
for (i = 0; i < numrows; i++)
{
@@ -2883,7 +2951,7 @@ describeOneTableDetails(const char *schemaname,
"FROM pg_catalog.pg_publication p\n"
" JOIN pg_catalog.pg_publication_namespace pn ON p.oid = pn.pnpubid\n"
" JOIN pg_catalog.pg_class pc ON pc.relnamespace = pn.pnnspid\n"
- "WHERE pc.oid ='%s' and pg_catalog.pg_relation_is_publishable('%s')\n"
+ "WHERE pc.oid ='%s' and (not pn.pnsequences) and pg_catalog.pg_relation_is_publishable('%s')\n"
"UNION\n"
"SELECT pubname\n"
" , pg_get_expr(pr.prqual, c.oid)\n"
@@ -4764,7 +4832,7 @@ listSchemas(const char *pattern, bool verbose, bool showSystem)
int i;
printfPQExpBuffer(&buf,
- "SELECT pubname \n"
+ "SELECT pubname, (CASE WHEN pnsequences THEN 'sequences' ELSE 'tables' END) AS pubtype\n"
"FROM pg_catalog.pg_publication p\n"
" JOIN pg_catalog.pg_publication_namespace pn ON p.oid = pn.pnpubid\n"
" JOIN pg_catalog.pg_namespace n ON n.oid = pn.pnnspid \n"
@@ -4793,8 +4861,9 @@ listSchemas(const char *pattern, bool verbose, bool showSystem)
/* Might be an empty set - that's ok */
for (i = 0; i < pub_schema_tuples; i++)
{
- printfPQExpBuffer(&buf, " \"%s\"",
- PQgetvalue(result, i, 0));
+ printfPQExpBuffer(&buf, " \"%s\" (%s)",
+ PQgetvalue(result, i, 0),
+ PQgetvalue(result, i, 1));
footers[i + 1] = pg_strdup(buf.data);
}
@@ -5799,7 +5868,7 @@ listPublications(const char *pattern)
PQExpBufferData buf;
PGresult *res;
printQueryOpt myopt = pset.popt;
- static const bool translate_columns[] = {false, false, false, false, false, false, false, false};
+ static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
if (pset.sversion < 100000)
{
@@ -5813,23 +5882,45 @@ listPublications(const char *pattern)
initPQExpBuffer(&buf);
- printfPQExpBuffer(&buf,
- "SELECT pubname AS \"%s\",\n"
- " pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
- " puballtables AS \"%s\",\n"
- " pubinsert AS \"%s\",\n"
- " pubupdate AS \"%s\",\n"
- " pubdelete AS \"%s\"",
- gettext_noop("Name"),
- gettext_noop("Owner"),
- gettext_noop("All tables"),
- gettext_noop("Inserts"),
- gettext_noop("Updates"),
- gettext_noop("Deletes"));
+ if (pset.sversion >= 150000)
+ printfPQExpBuffer(&buf,
+ "SELECT pubname AS \"%s\",\n"
+ " pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
+ " puballtables AS \"%s\",\n"
+ " puballsequences AS \"%s\",\n"
+ " pubinsert AS \"%s\",\n"
+ " pubupdate AS \"%s\",\n"
+ " pubdelete AS \"%s\"",
+ gettext_noop("Name"),
+ gettext_noop("Owner"),
+ gettext_noop("All tables"),
+ gettext_noop("All sequences"),
+ gettext_noop("Inserts"),
+ gettext_noop("Updates"),
+ gettext_noop("Deletes"));
+ else
+ printfPQExpBuffer(&buf,
+ "SELECT pubname AS \"%s\",\n"
+ " pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
+ " puballtables AS \"%s\",\n"
+ " pubinsert AS \"%s\",\n"
+ " pubupdate AS \"%s\",\n"
+ " pubdelete AS \"%s\"",
+ gettext_noop("Name"),
+ gettext_noop("Owner"),
+ gettext_noop("All tables"),
+ gettext_noop("Inserts"),
+ gettext_noop("Updates"),
+ gettext_noop("Deletes"));
+
if (pset.sversion >= 110000)
appendPQExpBuffer(&buf,
",\n pubtruncate AS \"%s\"",
gettext_noop("Truncates"));
+ if (pset.sversion >= 150000)
+ appendPQExpBuffer(&buf,
+ ",\n pubsequence AS \"%s\"",
+ gettext_noop("Sequences"));
if (pset.sversion >= 130000)
appendPQExpBuffer(&buf,
",\n pubviaroot AS \"%s\"",
@@ -5915,6 +6006,7 @@ describePublications(const char *pattern)
PGresult *res;
bool has_pubtruncate;
bool has_pubviaroot;
+ bool has_pubsequence;
PQExpBufferData title;
printTableContent cont;
@@ -5931,6 +6023,7 @@ describePublications(const char *pattern)
has_pubtruncate = (pset.sversion >= 110000);
has_pubviaroot = (pset.sversion >= 130000);
+ has_pubsequence = (pset.sversion >= 150000);
initPQExpBuffer(&buf);
@@ -5938,12 +6031,17 @@ describePublications(const char *pattern)
"SELECT oid, pubname,\n"
" pg_catalog.pg_get_userbyid(pubowner) AS owner,\n"
" puballtables, pubinsert, pubupdate, pubdelete");
+
if (has_pubtruncate)
appendPQExpBufferStr(&buf,
", pubtruncate");
if (has_pubviaroot)
appendPQExpBufferStr(&buf,
", pubviaroot");
+ if (has_pubsequence)
+ appendPQExpBufferStr(&buf,
+ ", puballsequences, pubsequence");
+
appendPQExpBufferStr(&buf,
"\nFROM pg_catalog.pg_publication\n");
@@ -5984,6 +6082,7 @@ describePublications(const char *pattern)
char *pubid = PQgetvalue(res, i, 0);
char *pubname = PQgetvalue(res, i, 1);
bool puballtables = strcmp(PQgetvalue(res, i, 3), "t") == 0;
+ bool puballsequences = has_pubsequence &&
+ (strcmp(PQgetvalue(res, i, 9), "t") == 0);
printTableOpt myopt = pset.popt.topt;
if (has_pubtruncate)
@@ -5991,29 +6090,43 @@ describePublications(const char *pattern)
if (has_pubviaroot)
ncols++;
+ /* sequences add two extra columns (puballsequences, pubsequence) */
+ if (has_pubsequence)
+ ncols += 2;
+
initPQExpBuffer(&title);
printfPQExpBuffer(&title, _("Publication %s"), pubname);
printTableInit(&cont, &myopt, title.data, ncols, nrows);
printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+ if (has_pubsequence)
+ printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
if (has_pubtruncate)
printTableAddHeader(&cont, gettext_noop("Truncates"), true, align);
+ if (has_pubsequence)
+ printTableAddHeader(&cont, gettext_noop("Sequences"), true, align);
if (has_pubviaroot)
printTableAddHeader(&cont, gettext_noop("Via root"), true, align);
- printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
- printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
- printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
- printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
- printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+ printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false); /* owner */
+ printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false); /* all tables */
+
+ if (has_pubsequence)
+ printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false); /* all sequences */
+
+ printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false); /* insert */
+ printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false); /* update */
+ printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false); /* delete */
if (has_pubtruncate)
- printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
+ printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false); /* truncate */
+ if (has_pubsequence)
+ printTableAddCell(&cont, PQgetvalue(res, i, 10), false, false); /* sequence */
if (has_pubviaroot)
- printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
+ printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false); /* via root */
if (!puballtables)
{
@@ -6033,6 +6146,7 @@ describePublications(const char *pattern)
"WHERE c.relnamespace = n.oid\n"
" AND c.oid = pr.prrelid\n"
" AND pr.prpubid = '%s'\n"
+ " AND c.relkind != 'S'\n" /* exclude sequences */
"ORDER BY 1,2", pubid);
if (!addFooterToPublicationDesc(&buf, "Tables:", false, &cont))
goto error_return;
@@ -6044,7 +6158,7 @@ describePublications(const char *pattern)
"SELECT n.nspname\n"
"FROM pg_catalog.pg_namespace n\n"
" JOIN pg_catalog.pg_publication_namespace pn ON n.oid = pn.pnnspid\n"
- "WHERE pn.pnpubid = '%s'\n"
+ "WHERE pn.pnpubid = '%s' AND NOT pn.pnsequences\n"
"ORDER BY 1", pubid);
if (!addFooterToPublicationDesc(&buf, "Tables from schemas:",
true, &cont))
@@ -6052,6 +6166,37 @@ describePublications(const char *pattern)
}
}
+ if (!puballsequences)
+ {
+ /* Get the sequences for the specified publication */
+ printfPQExpBuffer(&buf,
+ "SELECT n.nspname, c.relname, NULL\n"
+ "FROM pg_catalog.pg_class c,\n"
+ " pg_catalog.pg_namespace n,\n"
+ " pg_catalog.pg_publication_rel pr\n"
+ "WHERE c.relnamespace = n.oid\n"
+ " AND c.oid = pr.prrelid\n"
+ " AND pr.prpubid = '%s'\n"
+ " AND c.relkind = 'S'\n" /* only sequences */
+ "ORDER BY 1,2", pubid);
+ if (!addFooterToPublicationDesc(&buf, "Sequences:", false, &cont))
+ goto error_return;
+
+ if (pset.sversion >= 150000)
+ {
+ /* Get the schemas for the specified publication */
+ printfPQExpBuffer(&buf,
+ "SELECT n.nspname\n"
+ "FROM pg_catalog.pg_namespace n\n"
+ " JOIN pg_catalog.pg_publication_namespace pn ON n.oid = pn.pnnspid\n"
+ "WHERE pn.pnpubid = '%s' AND pn.pnsequences\n"
+ "ORDER BY 1", pubid);
+ if (!addFooterToPublicationDesc(&buf, "Sequences from schemas:",
+ true, &cont))
+ goto error_return;
+ }
+ }
+
printTable(&cont, pset.queryFout, false, pset.logfile);
printTableCleanup(&cont);
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 6957567264a..a8cdfc4cfba 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1782,11 +1782,15 @@ psql_completion(const char *text, int start, int end)
COMPLETE_WITH("ADD", "DROP", "OWNER TO", "RENAME TO", "SET");
/* ALTER PUBLICATION <name> ADD */
else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD"))
- COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE");
+ COMPLETE_WITH("ALL TABLES IN SCHEMA", "ALL SEQUENCES IN SCHEMA", "TABLE", "SEQUENCE");
else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD|SET", "TABLE") ||
(HeadMatches("ALTER", "PUBLICATION", MatchAny, "ADD|SET", "TABLE") &&
ends_with(prev_wd, ',')))
COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_tables);
+ else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD|SET", "SEQUENCE") ||
+ (HeadMatches("ALTER", "PUBLICATION", MatchAny, "ADD|SET", "SEQUENCE") &&
+ ends_with(prev_wd, ',')))
+ COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_sequences);
/*
* "ALTER PUBLICATION <name> SET TABLE <name> WHERE (" - complete with
* table attributes
@@ -1805,11 +1809,11 @@ psql_completion(const char *text, int start, int end)
COMPLETE_WITH(",");
/* ALTER PUBLICATION <name> DROP */
else if (Matches("ALTER", "PUBLICATION", MatchAny, "DROP"))
- COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE");
+ COMPLETE_WITH("ALL TABLES IN SCHEMA", "ALL SEQUENCES IN SCHEMA", "TABLE", "SEQUENCE");
/* ALTER PUBLICATION <name> SET */
else if (Matches("ALTER", "PUBLICATION", MatchAny, "SET"))
- COMPLETE_WITH("(", "ALL TABLES IN SCHEMA", "TABLE");
- else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD|DROP|SET", "ALL", "TABLES", "IN", "SCHEMA"))
+ COMPLETE_WITH("(", "ALL TABLES IN SCHEMA", "ALL SEQUENCES IN SCHEMA", "TABLE", "SEQUENCE");
+ else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD|DROP|SET", "ALL", "TABLES|SEQUENCES", "IN", "SCHEMA"))
COMPLETE_WITH_QUERY_PLUS(Query_for_list_of_schemas
" AND nspname NOT LIKE E'pg\\\\_%'",
"CURRENT_SCHEMA");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index d8e8715ed1c..699bd0aa3e3 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -11540,6 +11540,11 @@
provolatile => 's', prorettype => 'oid', proargtypes => 'text',
proallargtypes => '{text,oid}', proargmodes => '{i,o}',
proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_tables' },
+{ oid => '8000', descr => 'get OIDs of sequences in a publication',
+ proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+ provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+ proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+ proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
{ oid => '6121',
descr => 'returns whether a relation can be part of a publication',
proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index ba72e62e614..f2b4e838c77 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
*/
bool puballtables;
+ /*
+ * indicates that this is special publication which should encompass all
+ * sequences in the database (except for the unlogged and temp ones)
+ */
+ bool puballsequences;
+
/* true if inserts are published */
bool pubinsert;
@@ -52,6 +58,9 @@ CATALOG(pg_publication,6104,PublicationRelationId)
/* true if truncates are published */
bool pubtruncate;
+ /* true if sequences are published */
+ bool pubsequence;
+
/* true if partition changes are published using root schema */
bool pubviaroot;
} FormData_pg_publication;
@@ -72,6 +81,7 @@ typedef struct PublicationActions
bool pubupdate;
bool pubdelete;
bool pubtruncate;
+ bool pubsequence;
} PublicationActions;
typedef struct PublicationDesc
@@ -92,6 +102,7 @@ typedef struct Publication
Oid oid;
char *name;
bool alltables;
+ bool allsequences;
bool pubviaroot;
PublicationActions pubactions;
} Publication;
@@ -122,26 +133,30 @@ typedef enum PublicationPartOpt
PUBLICATION_PART_ALL,
} PublicationPartOpt;
-extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
+extern List *GetPublicationRelations(Oid pubid, bool sequences,
+ PublicationPartOpt pub_partopt);
extern List *GetAllTablesPublications(void);
extern List *GetAllTablesPublicationRelations(bool pubviaroot);
-extern List *GetPublicationSchemas(Oid pubid);
+extern List *GetPublicationSchemas(Oid pubid, bool sequence);
extern List *GetSchemaPublications(Oid schemaid);
-extern List *GetSchemaPublicationRelations(Oid schemaid,
+extern List *GetSchemaPublicationRelations(Oid schemaid, bool sequences,
PublicationPartOpt pub_partopt);
-extern List *GetAllSchemaPublicationRelations(Oid puboid,
+extern List *GetAllSchemaPublicationRelations(Oid puboid, bool sequences,
PublicationPartOpt pub_partopt);
extern List *GetPubPartitionOptionRelations(List *result,
PublicationPartOpt pub_partopt,
Oid relid);
extern Oid GetTopMostAncestorInPublication(Oid puboid, List *ancestors);
+extern List *GetAllSequencesPublications(void);
+extern List *GetAllSequencesPublicationRelations(void);
+
extern bool is_publishable_relation(Relation rel);
extern bool is_schema_publication(Oid pubid);
extern ObjectAddress publication_add_relation(Oid pubid, PublicationRelInfo *pri,
bool if_not_exists);
extern ObjectAddress publication_add_schema(Oid pubid, Oid schemaid,
- bool if_not_exists);
+ bool sequences, bool if_not_exists);
extern Oid get_publication_oid(const char *pubname, bool missing_ok);
extern char *get_publication_name(Oid pubid, bool missing_ok);
diff --git a/src/include/catalog/pg_publication_namespace.h b/src/include/catalog/pg_publication_namespace.h
index e4306da02e7..bbc22dec9f8 100644
--- a/src/include/catalog/pg_publication_namespace.h
+++ b/src/include/catalog/pg_publication_namespace.h
@@ -32,6 +32,7 @@ CATALOG(pg_publication_namespace,8901,PublicationNamespaceRelationId)
Oid oid; /* oid */
Oid pnpubid BKI_LOOKUP(pg_publication); /* Oid of the publication */
Oid pnnspid BKI_LOOKUP(pg_namespace); /* Oid of the schema */
+ bool pnsequences; /* tables or sequences from the schema? */
} FormData_pg_publication_namespace;
/* ----------------
@@ -42,6 +43,6 @@ CATALOG(pg_publication_namespace,8901,PublicationNamespaceRelationId)
typedef FormData_pg_publication_namespace *Form_pg_publication_namespace;
DECLARE_UNIQUE_INDEX_PKEY(pg_publication_namespace_oid_index, 8902, PublicationNamespaceObjectIndexId, on pg_publication_namespace using btree(oid oid_ops));
-DECLARE_UNIQUE_INDEX(pg_publication_namespace_pnnspid_pnpubid_index, 8903, PublicationNamespacePnnspidPnpubidIndexId, on pg_publication_namespace using btree(pnnspid oid_ops, pnpubid oid_ops));
+DECLARE_UNIQUE_INDEX(pg_publication_namespace_pnnspid_pnpubid_pnseq_index, 8903, PublicationNamespacePnnspidPnpubidSeqIndexId, on pg_publication_namespace using btree(pnnspid oid_ops, pnpubid oid_ops, pnsequences bool_ops));
#endif /* PG_PUBLICATION_NAMESPACE_H */
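The new pnsequences flag is what lets the same schema appear in a single publication twice, once for its tables and once for its sequences (as exercised in the regression tests below), which is also why the unique index now includes it. A hypothetical catalog query illustrating the distinction:

    -- list published schemas and whether each entry covers tables or sequences
    SELECT p.pubname, n.nspname,
           CASE WHEN pn.pnsequences THEN 'sequences' ELSE 'tables' END AS objects
      FROM pg_publication_namespace pn
      JOIN pg_publication p ON p.oid = pn.pnpubid
      JOIN pg_namespace n ON n.oid = pn.pnnspid;
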
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9fecc41954e..d8c255a7af5 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -60,6 +60,7 @@ extern ObjectAddress DefineSequence(ParseState *pstate, CreateSeqStmt *stmt);
extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
extern void DeleteSequenceTuple(Oid relid);
extern void ResetSequence(Oid seq_relid);
+extern void ResetSequence2(Oid seq_relid, int64 last_value, int64 log_cnt, bool is_called);
extern void ResetSequenceCaches(void);
extern void seq_redo(XLogReaderState *rptr);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 1617702d9d6..7adc3710408 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -3663,6 +3663,10 @@ typedef enum PublicationObjSpecType
PUBLICATIONOBJ_TABLES_IN_SCHEMA, /* All tables in schema */
PUBLICATIONOBJ_TABLES_IN_CUR_SCHEMA, /* All tables in first element of
* search_path */
+ PUBLICATIONOBJ_SEQUENCE, /* A sequence */
+ PUBLICATIONOBJ_SEQUENCES_IN_SCHEMA, /* All sequences in schema */
+ PUBLICATIONOBJ_SEQUENCES_IN_CUR_SCHEMA, /* All sequences in first element
+ * of search_path */
PUBLICATIONOBJ_CONTINUATION /* Continuation of previous type */
} PublicationObjSpecType;
@@ -3682,6 +3686,7 @@ typedef struct CreatePublicationStmt
List *options; /* List of DefElem nodes */
List *pubobjects; /* Optional list of publication objects */
bool for_all_tables; /* Special publication for all tables in db */
+ bool for_all_sequences; /* Special publication for all sequences in db */
} CreatePublicationStmt;
typedef enum AlterPublicationAction
@@ -3705,6 +3710,7 @@ typedef struct AlterPublicationStmt
*/
List *pubobjects; /* Optional list of publication objects */
bool for_all_tables; /* Special publication for all tables in db */
+ bool for_all_sequences; /* Special publication for all sequences in db */
AlterPublicationAction action; /* What action to perform with the given
* objects */
} AlterPublicationStmt;
diff --git a/src/include/replication/logicalproto.h b/src/include/replication/logicalproto.h
index 4d2c881644a..415757d8a2d 100644
--- a/src/include/replication/logicalproto.h
+++ b/src/include/replication/logicalproto.h
@@ -61,6 +61,7 @@ typedef enum LogicalRepMsgType
LOGICAL_REP_MSG_RELATION = 'R',
LOGICAL_REP_MSG_TYPE = 'Y',
LOGICAL_REP_MSG_MESSAGE = 'M',
+ LOGICAL_REP_MSG_SEQUENCE = 'X', /* FIXME change */
LOGICAL_REP_MSG_BEGIN_PREPARE = 'b',
LOGICAL_REP_MSG_PREPARE = 'P',
LOGICAL_REP_MSG_COMMIT_PREPARED = 'K',
@@ -118,6 +119,18 @@ typedef struct LogicalRepTyp
char *typname; /* name of the remote type */
} LogicalRepTyp;
+/* Sequence info */
+typedef struct LogicalRepSequence
+{
+ Oid remoteid; /* unique id of the remote sequence */
+ char *nspname; /* schema name of remote sequence */
+ char *seqname; /* name of the remote sequence */
+ bool transactional; /* change decoded as transactional? */
+ int64 last_value; /* sequence state, mirroring the */
+ int64 log_cnt; /* last_value/log_cnt/is_called fields */
+ bool is_called; /* of the on-disk sequence tuple */
+} LogicalRepSequence;
+
/* Transaction info */
typedef struct LogicalRepBeginData
{
@@ -230,6 +243,12 @@ extern List *logicalrep_read_truncate(StringInfo in,
bool *cascade, bool *restart_seqs);
extern void logicalrep_write_message(StringInfo out, TransactionId xid, XLogRecPtr lsn,
bool transactional, const char *prefix, Size sz, const char *message);
+extern void logicalrep_write_sequence(StringInfo out, Relation rel,
+ TransactionId xid, XLogRecPtr lsn,
+ bool transactional,
+ int64 last_value, int64 log_cnt,
+ bool is_called);
+extern void logicalrep_read_sequence(StringInfo in, LogicalRepSequence *seqdata);
extern void logicalrep_write_rel(StringInfo out, TransactionId xid,
Relation rel);
extern LogicalRepRelation *logicalrep_read_rel(StringInfo in);
diff --git a/src/include/replication/pgoutput.h b/src/include/replication/pgoutput.h
index eafedd610a5..f4e9f35d09d 100644
--- a/src/include/replication/pgoutput.h
+++ b/src/include/replication/pgoutput.h
@@ -29,6 +29,7 @@ typedef struct PGOutputData
bool streaming;
bool messages;
bool two_phase;
+ bool sequences;
} PGOutputData;
#endif /* PGOUTPUT_H */
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 4e191c120ac..620fab87e63 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -30,20 +30,20 @@ ERROR: conflicting or redundant options
LINE 1: ...ub_xxx WITH (publish_via_partition_root = 'true', publish_vi...
^
\dRp
- List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f | t | f | f | f | f
- testpub_default | regress_publication_user | f | f | t | f | f | f
+ List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f | f | t | f | f | f | f | f
+ testpub_default | regress_publication_user | f | f | f | t | f | f | f | f
(2 rows)
-ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
+ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete, sequence');
\dRp
- List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f | t | f | f | f | f
- testpub_default | regress_publication_user | f | t | t | t | f | f
+ List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f | f | t | f | f | f | f | f
+ testpub_default | regress_publication_user | f | f | t | t | t | f | t | f
(2 rows)
--- adding tables
@@ -87,10 +87,10 @@ RESET client_min_messages;
-- should be able to add schema to 'FOR TABLE' publication
ALTER PUBLICATION testpub_fortable ADD ALL TABLES IN SCHEMA pub_test;
\dRp+ testpub_fortable
- Publication testpub_fortable
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_fortable
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"public.testpub_tbl1"
Tables from schemas:
@@ -99,20 +99,20 @@ Tables from schemas:
-- should be able to drop schema from 'FOR TABLE' publication
ALTER PUBLICATION testpub_fortable DROP ALL TABLES IN SCHEMA pub_test;
\dRp+ testpub_fortable
- Publication testpub_fortable
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_fortable
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"public.testpub_tbl1"
-- should be able to set schema to 'FOR TABLE' publication
ALTER PUBLICATION testpub_fortable SET ALL TABLES IN SCHEMA pub_test;
\dRp+ testpub_fortable
- Publication testpub_fortable
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_fortable
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test"
@@ -134,10 +134,10 @@ ERROR: relation "testpub_nopk" is not part of the publication
-- should be able to set table to schema publication
ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
\dRp+ testpub_forschema
- Publication testpub_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"pub_test.testpub_nopk"
@@ -159,10 +159,10 @@ Publications:
"testpub_foralltables"
\dRp+ testpub_foralltables
- Publication testpub_foralltables
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | t | t | t | f | f | f
+ Publication testpub_foralltables
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | t | f | t | t | f | f | f | f
(1 row)
DROP TABLE testpub_tbl2;
@@ -174,24 +174,520 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
RESET client_min_messages;
\dRp+ testpub3
- Publication testpub3
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub3
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"public.testpub_tbl3"
"public.testpub_tbl3a"
\dRp+ testpub4
- Publication testpub4
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub4
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"public.testpub_tbl3"
DROP TABLE testpub_tbl3, testpub_tbl3a;
DROP PUBLICATION testpub3, testpub4;
+--- adding sequences
+CREATE SEQUENCE testpub_seq0;
+CREATE SEQUENCE pub_test.testpub_seq1;
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_forallsequences FOR ALL SEQUENCES WITH (publish = 'sequence');
+RESET client_min_messages;
+ALTER PUBLICATION testpub_forallsequences SET (publish = 'insert, sequence');
+CREATE SEQUENCE testpub_seq2;
+-- fail - can't add to for all sequences publication
+ALTER PUBLICATION testpub_forallsequences ADD SEQUENCE testpub_seq2;
+ERROR: publication "testpub_forallsequences" is defined as FOR ALL SEQUENCES
+DETAIL: Sequences cannot be added to or dropped from FOR ALL SEQUENCES publications.
+-- fail - can't drop from all sequences publication
+ALTER PUBLICATION testpub_forallsequences DROP SEQUENCE testpub_seq2;
+ERROR: publication "testpub_forallsequences" is defined as FOR ALL SEQUENCES
+DETAIL: Sequences cannot be added to or dropped from FOR ALL SEQUENCES publications.
+-- fail - can't add to for all sequences publication
+ALTER PUBLICATION testpub_forallsequences SET SEQUENCE pub_test.testpub_seq1;
+ERROR: publication "testpub_forallsequences" is defined as FOR ALL SEQUENCES
+DETAIL: Sequences cannot be added to or dropped from FOR ALL SEQUENCES publications.
+-- fail - can't add schema to 'FOR ALL SEQUENCES' publication
+ALTER PUBLICATION testpub_forallsequences ADD ALL SEQUENCES IN SCHEMA pub_test;
+ERROR: publication "testpub_forallsequences" is defined as FOR ALL SEQUENCES
+DETAIL: Sequences from schema cannot be added to, dropped from, or set on FOR ALL SEQUENCES publications.
+-- fail - can't drop schema from 'FOR ALL SEQUENCES' publication
+ALTER PUBLICATION testpub_forallsequences DROP ALL SEQUENCES IN SCHEMA pub_test;
+ERROR: publication "testpub_forallsequences" is defined as FOR ALL SEQUENCES
+DETAIL: Sequences from schema cannot be added to, dropped from, or set on FOR ALL SEQUENCES publications.
+-- fail - can't set schema to 'FOR ALL SEQUENCES' publication
+ALTER PUBLICATION testpub_forallsequences SET ALL SEQUENCES IN SCHEMA pub_test;
+ERROR: publication "testpub_forallsequences" is defined as FOR ALL SEQUENCES
+DETAIL: Sequences from schema cannot be added to, dropped from, or set on FOR ALL SEQUENCES publications.
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_forsequence FOR SEQUENCE testpub_seq0;
+RESET client_min_messages;
+-- should be able to add schema to 'FOR SEQUENCE' publication
+ALTER PUBLICATION testpub_forsequence ADD ALL SEQUENCES IN SCHEMA pub_test;
+\dRp+ testpub_forsequence
+ Publication testpub_forsequence
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Sequences:
+ "public.testpub_seq0"
+Sequences from schemas:
+ "pub_test"
+
+-- should be able to drop schema from 'FOR SEQUENCE' publication
+ALTER PUBLICATION testpub_forsequence DROP ALL SEQUENCES IN SCHEMA pub_test;
+\dRp+ testpub_forsequence
+ Publication testpub_forsequence
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Sequences:
+ "public.testpub_seq0"
+
+-- should be able to set schema to 'FOR SEQUENCE' publication
+ALTER PUBLICATION testpub_forsequence SET ALL SEQUENCES IN SCHEMA pub_test;
+\dRp+ testpub_forsequence
+ Publication testpub_forsequence
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Sequences from schemas:
+ "pub_test"
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_forschema FOR ALL SEQUENCES IN SCHEMA pub_test;
+RESET client_min_messages;
+-- fail - can't create publication with schema and table of the same schema
+CREATE PUBLICATION testpub_for_seq_schema FOR ALL SEQUENCES IN SCHEMA pub_test, SEQUENCE pub_test.testpub_seq1;
+ERROR: cannot add relation "pub_test.testpub_seq1" to publication
+DETAIL: Sequence's schema "pub_test" is already part of the publication or part of the specified schema list.
+-- fail - can't add a sequence of the same schema to the schema publication
+ALTER PUBLICATION testpub_forschema ADD SEQUENCE pub_test.testpub_seq1;
+ERROR: cannot add relation "pub_test.testpub_seq1" to publication
+DETAIL: Sequence's schema "pub_test" is already part of the publication or part of the specified schema list.
+-- fail - can't drop a sequence from the schema publication which isn't in the
+-- publication
+ALTER PUBLICATION testpub_forschema DROP SEQUENCE pub_test.testpub_seq1;
+ERROR: relation "testpub_seq1" is not part of the publication
+-- should be able to set sequence to schema publication
+ALTER PUBLICATION testpub_forschema SET SEQUENCE pub_test.testpub_seq1;
+\dRp+ testpub_forschema
+ Publication testpub_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Sequences:
+ "pub_test.testpub_seq1"
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'testpub_forallsequences';
+ pubname | puballtables | puballsequences
+-------------------------+--------------+-----------------
+ testpub_forallsequences | f | t
+(1 row)
+
+\d+ pub_test.testpub_seq1
+ Sequence "pub_test.testpub_seq1"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+Publications:
+ "testpub_forallsequences"
+ "testpub_forschema"
+ "testpub_forsequence"
+
+\dRp+ testpub_forallsequences
+ Publication testpub_forallsequences
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | t | t | f | f | f | t | f
+(1 row)
+
+DROP SEQUENCE testpub_seq0, pub_test.testpub_seq1, testpub_seq2;
+DROP PUBLICATION testpub_forallsequences, testpub_forsequence, testpub_forschema;
+-- Publication mixing tables and sequences
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_mix;
+RESET client_min_messages;
+CREATE SEQUENCE testpub_seq1;
+CREATE SEQUENCE pub_test.testpub_seq2;
+ALTER PUBLICATION testpub_mix ADD SEQUENCE testpub_seq1, TABLE testpub_tbl1;
+\dRp+ testpub_mix
+ Publication testpub_mix
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Tables:
+ "public.testpub_tbl1"
+Sequences:
+ "public.testpub_seq1"
+
+ALTER PUBLICATION testpub_mix ADD ALL SEQUENCES IN SCHEMA pub_test, ALL TABLES IN SCHEMA pub_test;
+\dRp+ testpub_mix
+ Publication testpub_mix
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Tables:
+ "public.testpub_tbl1"
+Tables from schemas:
+ "pub_test"
+Sequences:
+ "public.testpub_seq1"
+Sequences from schemas:
+ "pub_test"
+
+ALTER PUBLICATION testpub_mix DROP ALL SEQUENCES IN SCHEMA pub_test;
+\dRp+ testpub_mix
+ Publication testpub_mix
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Tables:
+ "public.testpub_tbl1"
+Tables from schemas:
+ "pub_test"
+Sequences:
+ "public.testpub_seq1"
+
+ALTER PUBLICATION testpub_mix DROP ALL TABLES IN SCHEMA pub_test;
+\dRp+ testpub_mix
+ Publication testpub_mix
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Tables:
+ "public.testpub_tbl1"
+Sequences:
+ "public.testpub_seq1"
+
+DROP PUBLICATION testpub_mix;
+DROP SEQUENCE testpub_seq1;
+DROP SEQUENCE pub_test.testpub_seq2;
+-- make sure we replicate only the correct relation type
+CREATE SCHEMA pub_test1;
+CREATE SEQUENCE pub_test1.test_seq1;
+CREATE TABLE pub_test1.test_tbl1 (a int primary key, b int);
+CREATE SCHEMA pub_test2;
+CREATE SEQUENCE pub_test2.test_seq2;
+CREATE TABLE pub_test2.test_tbl2 (a int primary key, b int);
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_schemas;
+RESET client_min_messages;
+-- add tables from one schema, sequences from the other
+ALTER PUBLICATION testpub_schemas ADD ALL TABLES IN SCHEMA pub_test2;
+ALTER PUBLICATION testpub_schemas ADD ALL SEQUENCES IN SCHEMA pub_test1;
+\dRp+ testpub_schemas
+ Publication testpub_schemas
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Tables from schemas:
+ "pub_test2"
+Sequences from schemas:
+ "pub_test1"
+
+\dn+ pub_test1
+ List of schemas
+ Name | Owner | Access privileges | Description
+-----------+--------------------------+-------------------+-------------
+ pub_test1 | regress_publication_user | |
+Publications:
+ "testpub_schemas" (sequences)
+
+\dn+ pub_test2
+ List of schemas
+ Name | Owner | Access privileges | Description
+-----------+--------------------------+-------------------+-------------
+ pub_test2 | regress_publication_user | |
+Publications:
+ "testpub_schemas" (tables)
+
+\d+ pub_test1.test_seq1;
+ Sequence "pub_test1.test_seq1"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test1.test_tbl1;
+ Table "pub_test1.test_tbl1"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a | integer | | not null | | plain | |
+ b | integer | | | | plain | |
+Indexes:
+ "test_tbl1_pkey" PRIMARY KEY, btree (a)
+
+\d+ pub_test2.test_seq2;
+ Sequence "pub_test2.test_seq2"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+
+\d+ pub_test2.test_tbl2;
+ Table "pub_test2.test_tbl2"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a | integer | | not null | | plain | |
+ b | integer | | | | plain | |
+Indexes:
+ "test_tbl2_pkey" PRIMARY KEY, btree (a)
+Publications:
+ "testpub_schemas"
+
+-- add the other object type from each schema
+ALTER PUBLICATION testpub_schemas ADD ALL TABLES IN SCHEMA pub_test1;
+ALTER PUBLICATION testpub_schemas ADD ALL SEQUENCES IN SCHEMA pub_test2;
+\dRp+ testpub_schemas
+ Publication testpub_schemas
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Tables from schemas:
+ "pub_test1"
+ "pub_test2"
+Sequences from schemas:
+ "pub_test1"
+ "pub_test2"
+
+\dn+ pub_test1
+ List of schemas
+ Name | Owner | Access privileges | Description
+-----------+--------------------------+-------------------+-------------
+ pub_test1 | regress_publication_user | |
+Publications:
+ "testpub_schemas" (sequences)
+ "testpub_schemas" (tables)
+
+\dn+ pub_test2
+ List of schemas
+ Name | Owner | Access privileges | Description
+-----------+--------------------------+-------------------+-------------
+ pub_test2 | regress_publication_user | |
+Publications:
+ "testpub_schemas" (tables)
+ "testpub_schemas" (sequences)
+
+\d+ pub_test1.test_seq1;
+ Sequence "pub_test1.test_seq1"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test1.test_tbl1;
+ Table "pub_test1.test_tbl1"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a | integer | | not null | | plain | |
+ b | integer | | | | plain | |
+Indexes:
+ "test_tbl1_pkey" PRIMARY KEY, btree (a)
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test2.test_seq2;
+ Sequence "pub_test2.test_seq2"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test2.test_tbl2;
+ Table "pub_test2.test_tbl2"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a | integer | | not null | | plain | |
+ b | integer | | | | plain | |
+Indexes:
+ "test_tbl2_pkey" PRIMARY KEY, btree (a)
+Publications:
+ "testpub_schemas"
+
+-- now drop the object type added first
+ALTER PUBLICATION testpub_schemas DROP ALL TABLES IN SCHEMA pub_test2;
+ALTER PUBLICATION testpub_schemas DROP ALL SEQUENCES IN SCHEMA pub_test1;
+\dRp+ testpub_schemas
+ Publication testpub_schemas
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Tables from schemas:
+ "pub_test1"
+Sequences from schemas:
+ "pub_test2"
+
+\dn+ pub_test1
+ List of schemas
+ Name | Owner | Access privileges | Description
+-----------+--------------------------+-------------------+-------------
+ pub_test1 | regress_publication_user | |
+Publications:
+ "testpub_schemas" (tables)
+
+\dn+ pub_test2
+ List of schemas
+ Name | Owner | Access privileges | Description
+-----------+--------------------------+-------------------+-------------
+ pub_test2 | regress_publication_user | |
+Publications:
+ "testpub_schemas" (sequences)
+
+\d+ pub_test1.test_seq1;
+ Sequence "pub_test1.test_seq1"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+
+\d+ pub_test1.test_tbl1;
+ Table "pub_test1.test_tbl1"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a | integer | | not null | | plain | |
+ b | integer | | | | plain | |
+Indexes:
+ "test_tbl1_pkey" PRIMARY KEY, btree (a)
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test2.test_seq2;
+ Sequence "pub_test2.test_seq2"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test2.test_tbl2;
+ Table "pub_test2.test_tbl2"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a | integer | | not null | | plain | |
+ b | integer | | | | plain | |
+Indexes:
+ "test_tbl2_pkey" PRIMARY KEY, btree (a)
+
+-- should fail (publication contains the whole schema)
+ALTER PUBLICATION testpub_schemas ADD TABLE pub_test1.test_tbl1;
+ERROR: cannot add relation "pub_test1.test_tbl1" to publication
+DETAIL: Table's schema "pub_test1" is already part of the publication or part of the specified schema list.
+ALTER PUBLICATION testpub_schemas ADD SEQUENCE pub_test2.test_seq2;
+ERROR: cannot add relation "pub_test2.test_seq2" to publication
+DETAIL: Sequence's schema "pub_test2" is already part of the publication or part of the specified schema list.
+-- should work (different schema)
+ALTER PUBLICATION testpub_schemas ADD TABLE pub_test2.test_tbl2;
+ALTER PUBLICATION testpub_schemas ADD SEQUENCE pub_test1.test_seq1;
+\dRp+ testpub_schemas
+ Publication testpub_schemas
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Tables:
+ "pub_test2.test_tbl2"
+Tables from schemas:
+ "pub_test1"
+Sequences:
+ "pub_test1.test_seq1"
+Sequences from schemas:
+ "pub_test2"
+
+\d+ pub_test1.test_seq1;
+ Sequence "pub_test1.test_seq1"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test1.test_tbl1;
+ Table "pub_test1.test_tbl1"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a | integer | | not null | | plain | |
+ b | integer | | | | plain | |
+Indexes:
+ "test_tbl1_pkey" PRIMARY KEY, btree (a)
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test2.test_seq2;
+ Sequence "pub_test2.test_seq2"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test2.test_tbl2;
+ Table "pub_test2.test_tbl2"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a | integer | | not null | | plain | |
+ b | integer | | | | plain | |
+Indexes:
+ "test_tbl2_pkey" PRIMARY KEY, btree (a)
+Publications:
+ "testpub_schemas"
+
+-- now drop the explicitly added objects again
+ALTER PUBLICATION testpub_schemas DROP TABLE pub_test2.test_tbl2;
+ALTER PUBLICATION testpub_schemas DROP SEQUENCE pub_test1.test_seq1;
+\dRp+ testpub_schemas
+ Publication testpub_schemas
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Tables from schemas:
+ "pub_test1"
+Sequences from schemas:
+ "pub_test2"
+
+\d+ pub_test1.test_seq1;
+ Sequence "pub_test1.test_seq1"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+
+\d+ pub_test1.test_tbl1;
+ Table "pub_test1.test_tbl1"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a | integer | | not null | | plain | |
+ b | integer | | | | plain | |
+Indexes:
+ "test_tbl1_pkey" PRIMARY KEY, btree (a)
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test2.test_seq2;
+ Sequence "pub_test2.test_seq2"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test2.test_tbl2;
+ Table "pub_test2.test_tbl2"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a | integer | | not null | | plain | |
+ b | integer | | | | plain | |
+Indexes:
+ "test_tbl2_pkey" PRIMARY KEY, btree (a)
+
+DROP PUBLICATION testpub_schemas;
+DROP TABLE pub_test1.test_tbl1, pub_test2.test_tbl2;
+DROP SEQUENCE pub_test1.test_seq1, pub_test2.test_seq2;
+DROP SCHEMA pub_test1, pub_test2;
-- Tests for partitioned tables
SET client_min_messages = 'ERROR';
CREATE PUBLICATION testpub_forparted;
@@ -207,10 +703,10 @@ UPDATE testpub_parted1 SET a = 1;
-- only parent is listed as being in publication, not the partition
ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
\dRp+ testpub_forparted
- Publication testpub_forparted
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_forparted
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"public.testpub_parted"
@@ -223,10 +719,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
UPDATE testpub_parted1 SET a = 1;
ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
\dRp+ testpub_forparted
- Publication testpub_forparted
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | t
+ Publication testpub_forparted
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | t
Tables:
"public.testpub_parted"
@@ -255,10 +751,10 @@ SET client_min_messages = 'ERROR';
CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
RESET client_min_messages;
\dRp+ testpub5
- Publication testpub5
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | f | f | f | f
+ Publication testpub5
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | f | f | f | f | f
Tables:
"public.testpub_rf_tbl1"
"public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -271,10 +767,10 @@ Tables:
ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
\dRp+ testpub5
- Publication testpub5
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | f | f | f | f
+ Publication testpub5
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | f | f | f | f | f
Tables:
"public.testpub_rf_tbl1"
"public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -290,10 +786,10 @@ Publications:
ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
\dRp+ testpub5
- Publication testpub5
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | f | f | f | f
+ Publication testpub5
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | f | f | f | f | f
Tables:
"public.testpub_rf_tbl1"
"public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -301,10 +797,10 @@ Tables:
-- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
\dRp+ testpub5
- Publication testpub5
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | f | f | f | f
+ Publication testpub5
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | f | f | f | f | f
Tables:
"public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
@@ -337,10 +833,10 @@ SET client_min_messages = 'ERROR';
CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
RESET client_min_messages;
\dRp+ testpub_syntax1
- Publication testpub_syntax1
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | f | f | f | f
+ Publication testpub_syntax1
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | f | f | f | f | f
Tables:
"public.testpub_rf_tbl1"
"public.testpub_rf_tbl3" WHERE (e < 999)
@@ -350,10 +846,10 @@ SET client_min_messages = 'ERROR';
CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
RESET client_min_messages;
\dRp+ testpub_syntax2
- Publication testpub_syntax2
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | f | f | f | f
+ Publication testpub_syntax2
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | f | f | f | f | f
Tables:
"public.testpub_rf_tbl1"
"testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -658,10 +1154,10 @@ ERROR: relation "testpub_tbl1" is already member of publication "testpub_fortbl
CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
ERROR: publication "testpub_fortbl" already exists
\dRp+ testpub_fortbl
- Publication testpub_fortbl
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_fortbl
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"pub_test.testpub_nopk"
"public.testpub_tbl1"
@@ -699,10 +1195,10 @@ Publications:
"testpub_fortbl"
\dRp+ testpub_default
- Publication testpub_default
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | f | f
+ Publication testpub_default
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | f | t | f
Tables:
"pub_test.testpub_nopk"
"public.testpub_tbl1"
@@ -780,10 +1276,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
DROP TABLE testpub_parted;
DROP TABLE testpub_tbl1;
\dRp+ testpub_default
- Publication testpub_default
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | f | f
+ Publication testpub_default
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | f | t | f
(1 row)
-- fail - must be owner of publication
@@ -793,20 +1289,20 @@ ERROR: must be owner of publication testpub_default
RESET ROLE;
ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
\dRp testpub_foo
- List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
--------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpub_foo | regress_publication_user | f | t | t | t | f | f
+ List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ testpub_foo | regress_publication_user | f | f | t | t | t | f | t | f
(1 row)
-- rename back to keep the rest simple
ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
\dRp testpub_default
- List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
------------------+---------------------------+------------+---------+---------+---------+-----------+----------
- testpub_default | regress_publication_user2 | f | t | t | t | f | f
+ List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ testpub_default | regress_publication_user2 | f | f | t | t | t | f | t | f
(1 row)
-- adding schemas and tables
@@ -822,19 +1318,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
SET client_min_messages = 'ERROR';
CREATE PUBLICATION testpub1_forschema FOR ALL TABLES IN SCHEMA pub_test1;
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
CREATE PUBLICATION testpub2_forschema FOR ALL TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
\dRp+ testpub2_forschema
- Publication testpub2_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub2_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
"pub_test2"
@@ -848,44 +1344,44 @@ CREATE PUBLICATION testpub6_forschema FOR ALL TABLES IN SCHEMA "CURRENT_SCHEMA",
CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
RESET client_min_messages;
\dRp+ testpub3_forschema
- Publication testpub3_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub3_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"public"
\dRp+ testpub4_forschema
- Publication testpub4_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub4_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"CURRENT_SCHEMA"
\dRp+ testpub5_forschema
- Publication testpub5_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub5_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"CURRENT_SCHEMA"
"public"
\dRp+ testpub6_forschema
- Publication testpub6_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub6_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"CURRENT_SCHEMA"
"public"
\dRp+ testpub_fortable
- Publication testpub_fortable
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_fortable
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"CURRENT_SCHEMA.CURRENT_SCHEMA"
@@ -919,10 +1415,10 @@ ERROR: schema "testpub_view" does not exist
-- dropping the schema should reflect the change in publication
DROP SCHEMA pub_test3;
\dRp+ testpub2_forschema
- Publication testpub2_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub2_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
"pub_test2"
@@ -930,20 +1426,20 @@ Tables from schemas:
-- renaming the schema should reflect the change in publication
ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
\dRp+ testpub2_forschema
- Publication testpub2_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub2_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1_renamed"
"pub_test2"
ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
\dRp+ testpub2_forschema
- Publication testpub2_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub2_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
"pub_test2"
@@ -951,10 +1447,10 @@ Tables from schemas:
-- alter publication add schema
ALTER PUBLICATION testpub1_forschema ADD ALL TABLES IN SCHEMA pub_test2;
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
"pub_test2"
@@ -963,10 +1459,10 @@ Tables from schemas:
ALTER PUBLICATION testpub1_forschema ADD ALL TABLES IN SCHEMA non_existent_schema;
ERROR: schema "non_existent_schema" does not exist
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
"pub_test2"
@@ -975,10 +1471,10 @@ Tables from schemas:
ALTER PUBLICATION testpub1_forschema ADD ALL TABLES IN SCHEMA pub_test1;
ERROR: schema "pub_test1" is already member of publication "testpub1_forschema"
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
"pub_test2"
@@ -986,10 +1482,10 @@ Tables from schemas:
-- alter publication drop schema
ALTER PUBLICATION testpub1_forschema DROP ALL TABLES IN SCHEMA pub_test2;
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
@@ -997,10 +1493,10 @@ Tables from schemas:
ALTER PUBLICATION testpub1_forschema DROP ALL TABLES IN SCHEMA pub_test2;
ERROR: tables from schema "pub_test2" are not part of the publication
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
@@ -1008,29 +1504,29 @@ Tables from schemas:
ALTER PUBLICATION testpub1_forschema DROP ALL TABLES IN SCHEMA non_existent_schema;
ERROR: schema "non_existent_schema" does not exist
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
-- drop all schemas
ALTER PUBLICATION testpub1_forschema DROP ALL TABLES IN SCHEMA pub_test1;
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
(1 row)
-- alter publication set multiple schema
ALTER PUBLICATION testpub1_forschema SET ALL TABLES IN SCHEMA pub_test1, pub_test2;
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
"pub_test2"
@@ -1039,10 +1535,10 @@ Tables from schemas:
ALTER PUBLICATION testpub1_forschema SET ALL TABLES IN SCHEMA non_existent_schema;
ERROR: schema "non_existent_schema" does not exist
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
"pub_test2"
@@ -1051,10 +1547,10 @@ Tables from schemas:
-- removing the duplicate schemas
ALTER PUBLICATION testpub1_forschema SET ALL TABLES IN SCHEMA pub_test1, pub_test1;
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
@@ -1124,18 +1620,18 @@ SET client_min_messages = 'ERROR';
CREATE PUBLICATION testpub3_forschema;
RESET client_min_messages;
\dRp+ testpub3_forschema
- Publication testpub3_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub3_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
(1 row)
ALTER PUBLICATION testpub3_forschema SET ALL TABLES IN SCHEMA pub_test1;
\dRp+ testpub3_forschema
- Publication testpub3_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub3_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
@@ -1145,20 +1641,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR ALL TABLES IN SCHEMA pub_test1
CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, ALL TABLES IN SCHEMA pub_test1;
RESET client_min_messages;
\dRp+ testpub_forschema_fortable
- Publication testpub_forschema_fortable
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_forschema_fortable
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"pub_test2.tbl1"
Tables from schemas:
"pub_test1"
\dRp+ testpub_fortable_forschema
- Publication testpub_fortable_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_fortable_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"pub_test2.tbl1"
Tables from schemas:
@@ -1202,40 +1698,85 @@ CREATE SCHEMA sch1;
CREATE SCHEMA sch2;
CREATE TABLE sch1.tbl1 (a int) PARTITION BY RANGE(a);
CREATE TABLE sch2.tbl1_part1 PARTITION OF sch1.tbl1 FOR VALUES FROM (1) to (10);
+CREATE SEQUENCE sch1.seq1;
+CREATE SEQUENCE sch2.seq2;
-- Schema publication that does not include the schema that has the parent table
CREATE PUBLICATION pub FOR ALL TABLES IN SCHEMA sch2 WITH (PUBLISH_VIA_PARTITION_ROOT=1);
+ALTER PUBLICATION pub ADD ALL SEQUENCES IN SCHEMA sch2;
SELECT * FROM pg_publication_tables;
pubname | schemaname | tablename
---------+------------+------------
pub | sch2 | tbl1_part1
(1 row)
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch2 | seq2
+(1 row)
+
DROP PUBLICATION pub;
-- Table publication that does not include the parent table
CREATE PUBLICATION pub FOR TABLE sch2.tbl1_part1 WITH (PUBLISH_VIA_PARTITION_ROOT=1);
+ALTER PUBLICATION pub ADD SEQUENCE sch2.seq2;
SELECT * FROM pg_publication_tables;
pubname | schemaname | tablename
---------+------------+------------
pub | sch2 | tbl1_part1
(1 row)
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch2 | seq2
+(1 row)
+
-- Table publication that includes both the parent table and the child table
ALTER PUBLICATION pub ADD TABLE sch1.tbl1;
+ALTER PUBLICATION pub ADD SEQUENCE sch1.seq1;
SELECT * FROM pg_publication_tables;
pubname | schemaname | tablename
---------+------------+-----------
pub | sch1 | tbl1
(1 row)
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch1 | seq1
+ pub | sch2 | seq2
+(2 rows)
+
DROP PUBLICATION pub;
-- Schema publication that does not include the schema that has the parent table
CREATE PUBLICATION pub FOR ALL TABLES IN SCHEMA sch2 WITH (PUBLISH_VIA_PARTITION_ROOT=0);
+ALTER PUBLICATION pub ADD SEQUENCE sch1.seq1;
SELECT * FROM pg_publication_tables;
pubname | schemaname | tablename
---------+------------+------------
pub | sch2 | tbl1_part1
(1 row)
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch1 | seq1
+(1 row)
+
+DROP PUBLICATION pub;
+-- Sequence publication
+CREATE PUBLICATION pub FOR SEQUENCE sch2.seq2;
+SELECT * FROM pg_publication_tables;
+ pubname | schemaname | tablename
+---------+------------+-----------
+(0 rows)
+
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch2 | seq2
+(1 row)
+
DROP PUBLICATION pub;
-- Table publication that does not include the parent table
CREATE PUBLICATION pub FOR TABLE sch2.tbl1_part1 WITH (PUBLISH_VIA_PARTITION_ROOT=0);
@@ -1245,14 +1786,26 @@ SELECT * FROM pg_publication_tables;
pub | sch2 | tbl1_part1
(1 row)
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+(0 rows)
+
-- Table publication that includes both the parent table and the child table
ALTER PUBLICATION pub ADD TABLE sch1.tbl1;
+ALTER PUBLICATION pub ADD ALL SEQUENCES IN SCHEMA sch2;
SELECT * FROM pg_publication_tables;
pubname | schemaname | tablename
---------+------------+------------
pub | sch2 | tbl1_part1
(1 row)
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch2 | seq2
+(1 row)
+
DROP PUBLICATION pub;
DROP TABLE sch2.tbl1_part1;
DROP TABLE sch1.tbl1;
@@ -1268,9 +1821,81 @@ SELECT * FROM pg_publication_tables;
pub | sch1 | tbl1
(1 row)
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+(0 rows)
+
+DROP PUBLICATION pub;
+-- Sequence publication
+CREATE PUBLICATION pub FOR SEQUENCE sch2.seq2;
+SELECT * FROM pg_publication_tables;
+ pubname | schemaname | tablename
+---------+------------+-----------
+(0 rows)
+
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch2 | seq2
+(1 row)
+
+DROP PUBLICATION pub;
+-- Schema publication
+CREATE PUBLICATION pub FOR ALL SEQUENCES IN SCHEMA sch2;
+SELECT * FROM pg_publication_tables;
+ pubname | schemaname | tablename
+---------+------------+-----------
+(0 rows)
+
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch2 | seq2
+(1 row)
+
+ALTER PUBLICATION pub ADD SEQUENCE sch1.seq1;
+SELECT * FROM pg_publication_tables;
+ pubname | schemaname | tablename
+---------+------------+-----------
+(0 rows)
+
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch1 | seq1
+ pub | sch2 | seq2
+(2 rows)
+
+ALTER PUBLICATION pub DROP SEQUENCE sch1.seq1;
+SELECT * FROM pg_publication_tables;
+ pubname | schemaname | tablename
+---------+------------+-----------
+(0 rows)
+
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch2 | seq2
+(1 row)
+
+ALTER PUBLICATION pub ADD ALL SEQUENCES IN SCHEMA sch1;
+SELECT * FROM pg_publication_tables;
+ pubname | schemaname | tablename
+---------+------------+-----------
+(0 rows)
+
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch1 | seq1
+ pub | sch2 | seq2
+(2 rows)
+
RESET client_min_messages;
DROP PUBLICATION pub;
DROP TABLE sch1.tbl1;
+DROP SEQUENCE sch1.seq1, sch2.seq2;
DROP SCHEMA sch1 cascade;
DROP SCHEMA sch2 cascade;
RESET SESSION AUTHORIZATION;
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index ac468568a1a..ef328d356bb 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1429,6 +1429,14 @@ pg_prepared_xacts| SELECT p.transaction,
FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+ n.nspname AS schemaname,
+ c.relname AS sequencename
+ FROM pg_publication p,
+ LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+ (pg_class c
+ JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+ WHERE (c.oid = gps.relid);
pg_publication_tables| SELECT p.pubname,
n.nspname AS schemaname,
c.relname AS tablename
diff --git a/src/test/regress/sql/object_address.sql b/src/test/regress/sql/object_address.sql
index 2f40156eb48..f90afad8045 100644
--- a/src/test/regress/sql/object_address.sql
+++ b/src/test/regress/sql/object_address.sql
@@ -143,7 +143,6 @@ SELECT pg_get_object_address('publication', '{one}', '{}');
SELECT pg_get_object_address('publication', '{one,two}', '{}');
SELECT pg_get_object_address('subscription', '{one}', '{}');
SELECT pg_get_object_address('subscription', '{one,two}', '{}');
-
-- test successful cases
WITH objects (type, name, args) AS (VALUES
('table', '{addr_nsp, gentable}'::text[], '{}'::text[]),
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index 5457c56b33f..af665395e17 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -27,7 +27,7 @@ CREATE PUBLICATION testpub_xxx WITH (publish_via_partition_root = 'true', publis
\dRp
-ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
+ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete, sequence');
\dRp
@@ -104,6 +104,179 @@ RESET client_min_messages;
DROP TABLE testpub_tbl3, testpub_tbl3a;
DROP PUBLICATION testpub3, testpub4;
+--- adding sequences
+CREATE SEQUENCE testpub_seq0;
+CREATE SEQUENCE pub_test.testpub_seq1;
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_forallsequences FOR ALL SEQUENCES WITH (publish = 'sequence');
+RESET client_min_messages;
+ALTER PUBLICATION testpub_forallsequences SET (publish = 'insert, sequence');
+
+CREATE SEQUENCE testpub_seq2;
+-- fail - can't add to for all sequences publication
+ALTER PUBLICATION testpub_forallsequences ADD SEQUENCE testpub_seq2;
+-- fail - can't drop from all sequences publication
+ALTER PUBLICATION testpub_forallsequences DROP SEQUENCE testpub_seq2;
+-- fail - can't set to for all sequences publication
+ALTER PUBLICATION testpub_forallsequences SET SEQUENCE pub_test.testpub_seq1;
+
+-- fail - can't add schema to 'FOR ALL SEQUENCES' publication
+ALTER PUBLICATION testpub_forallsequences ADD ALL SEQUENCES IN SCHEMA pub_test;
+-- fail - can't drop schema from 'FOR ALL SEQUENCES' publication
+ALTER PUBLICATION testpub_forallsequences DROP ALL SEQUENCES IN SCHEMA pub_test;
+-- fail - can't set schema to 'FOR ALL SEQUENCES' publication
+ALTER PUBLICATION testpub_forallsequences SET ALL SEQUENCES IN SCHEMA pub_test;
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_forsequence FOR SEQUENCE testpub_seq0;
+RESET client_min_messages;
+-- should be able to add schema to 'FOR SEQUENCE' publication
+ALTER PUBLICATION testpub_forsequence ADD ALL SEQUENCES IN SCHEMA pub_test;
+\dRp+ testpub_forsequence
+-- should be able to drop schema from 'FOR SEQUENCE' publication
+ALTER PUBLICATION testpub_forsequence DROP ALL SEQUENCES IN SCHEMA pub_test;
+\dRp+ testpub_forsequence
+-- should be able to set schema to 'FOR SEQUENCE' publication
+ALTER PUBLICATION testpub_forsequence SET ALL SEQUENCES IN SCHEMA pub_test;
+\dRp+ testpub_forsequence
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_forschema FOR ALL SEQUENCES IN SCHEMA pub_test;
+RESET client_min_messages;
+-- fail - can't create publication with schema and table of the same schema
+CREATE PUBLICATION testpub_for_seq_schema FOR ALL SEQUENCES IN SCHEMA pub_test, SEQUENCE pub_test.testpub_seq1;
+-- fail - can't add a sequence of the same schema to the schema publication
+ALTER PUBLICATION testpub_forschema ADD SEQUENCE pub_test.testpub_seq1;
+-- fail - can't drop a sequence from the schema publication which isn't in the
+-- publication
+ALTER PUBLICATION testpub_forschema DROP SEQUENCE pub_test.testpub_seq1;
+-- should be able to set sequence to schema publication
+ALTER PUBLICATION testpub_forschema SET SEQUENCE pub_test.testpub_seq1;
+\dRp+ testpub_forschema
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'testpub_forallsequences';
+\d+ pub_test.testpub_seq1
+\dRp+ testpub_forallsequences
+DROP SEQUENCE testpub_seq0, pub_test.testpub_seq1, testpub_seq2;
+DROP PUBLICATION testpub_forallsequences, testpub_forsequence, testpub_forschema;
+
+-- Publication mixing tables and sequences
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_mix;
+RESET client_min_messages;
+
+CREATE SEQUENCE testpub_seq1;
+CREATE SEQUENCE pub_test.testpub_seq2;
+
+ALTER PUBLICATION testpub_mix ADD SEQUENCE testpub_seq1, TABLE testpub_tbl1;
+\dRp+ testpub_mix
+
+ALTER PUBLICATION testpub_mix ADD ALL SEQUENCES IN SCHEMA pub_test, ALL TABLES IN SCHEMA pub_test;
+\dRp+ testpub_mix
+
+ALTER PUBLICATION testpub_mix DROP ALL SEQUENCES IN SCHEMA pub_test;
+\dRp+ testpub_mix
+
+ALTER PUBLICATION testpub_mix DROP ALL TABLES IN SCHEMA pub_test;
+\dRp+ testpub_mix
+
+DROP PUBLICATION testpub_mix;
+DROP SEQUENCE testpub_seq1;
+DROP SEQUENCE pub_test.testpub_seq2;
+
+
+-- make sure we replicate only the correct relation type
+CREATE SCHEMA pub_test1;
+CREATE SEQUENCE pub_test1.test_seq1;
+CREATE TABLE pub_test1.test_tbl1 (a int primary key, b int);
+
+CREATE SCHEMA pub_test2;
+CREATE SEQUENCE pub_test2.test_seq2;
+CREATE TABLE pub_test2.test_tbl2 (a int primary key, b int);
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_schemas;
+RESET client_min_messages;
+
+-- add tables from one schema, sequences from the other
+ALTER PUBLICATION testpub_schemas ADD ALL TABLES IN SCHEMA pub_test2;
+ALTER PUBLICATION testpub_schemas ADD ALL SEQUENCES IN SCHEMA pub_test1;
+
+\dRp+ testpub_schemas
+
+\dn+ pub_test1
+\dn+ pub_test2
+
+\d+ pub_test1.test_seq1;
+\d+ pub_test1.test_tbl1;
+
+\d+ pub_test2.test_seq2;
+\d+ pub_test2.test_tbl2;
+
+-- add the other object type from each schema
+ALTER PUBLICATION testpub_schemas ADD ALL TABLES IN SCHEMA pub_test1;
+ALTER PUBLICATION testpub_schemas ADD ALL SEQUENCES IN SCHEMA pub_test2;
+
+\dRp+ testpub_schemas
+
+\dn+ pub_test1
+\dn+ pub_test2
+
+\d+ pub_test1.test_seq1;
+\d+ pub_test1.test_tbl1;
+
+\d+ pub_test2.test_seq2;
+\d+ pub_test2.test_tbl2;
+
+-- now drop the object type added first
+ALTER PUBLICATION testpub_schemas DROP ALL TABLES IN SCHEMA pub_test2;
+ALTER PUBLICATION testpub_schemas DROP ALL SEQUENCES IN SCHEMA pub_test1;
+
+\dRp+ testpub_schemas
+
+\dn+ pub_test1
+\dn+ pub_test2
+
+\d+ pub_test1.test_seq1;
+\d+ pub_test1.test_tbl1;
+
+\d+ pub_test2.test_seq2;
+\d+ pub_test2.test_tbl2;
+
+-- should fail (publication contains the whole schema)
+ALTER PUBLICATION testpub_schemas ADD TABLE pub_test1.test_tbl1;
+ALTER PUBLICATION testpub_schemas ADD SEQUENCE pub_test2.test_seq2;
+
+-- should work (different schema)
+ALTER PUBLICATION testpub_schemas ADD TABLE pub_test2.test_tbl2;
+ALTER PUBLICATION testpub_schemas ADD SEQUENCE pub_test1.test_seq1;
+
+\dRp+ testpub_schemas
+
+\d+ pub_test1.test_seq1;
+\d+ pub_test1.test_tbl1;
+
+\d+ pub_test2.test_seq2;
+\d+ pub_test2.test_tbl2;
+
+-- now drop the explicitly added objects again
+ALTER PUBLICATION testpub_schemas DROP TABLE pub_test2.test_tbl2;
+ALTER PUBLICATION testpub_schemas DROP SEQUENCE pub_test1.test_seq1;
+
+\dRp+ testpub_schemas
+
+\d+ pub_test1.test_seq1;
+\d+ pub_test1.test_tbl1;
+
+\d+ pub_test2.test_seq2;
+\d+ pub_test2.test_tbl2;
+
+DROP PUBLICATION testpub_schemas;
+DROP TABLE pub_test1.test_tbl1, pub_test2.test_tbl2;
+DROP SEQUENCE pub_test1.test_seq1, pub_test2.test_seq2;
+DROP SCHEMA pub_test1, pub_test2;
+
-- Tests for partitioned tables
SET client_min_messages = 'ERROR';
CREATE PUBLICATION testpub_forparted;
@@ -717,32 +890,51 @@ CREATE SCHEMA sch1;
CREATE SCHEMA sch2;
CREATE TABLE sch1.tbl1 (a int) PARTITION BY RANGE(a);
CREATE TABLE sch2.tbl1_part1 PARTITION OF sch1.tbl1 FOR VALUES FROM (1) to (10);
+CREATE SEQUENCE sch1.seq1;
+CREATE SEQUENCE sch2.seq2;
-- Schema publication that does not include the schema that has the parent table
CREATE PUBLICATION pub FOR ALL TABLES IN SCHEMA sch2 WITH (PUBLISH_VIA_PARTITION_ROOT=1);
+ALTER PUBLICATION pub ADD ALL SEQUENCES IN SCHEMA sch2;
SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
DROP PUBLICATION pub;
-- Table publication that does not include the parent table
CREATE PUBLICATION pub FOR TABLE sch2.tbl1_part1 WITH (PUBLISH_VIA_PARTITION_ROOT=1);
+ALTER PUBLICATION pub ADD SEQUENCE sch2.seq2;
SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
-- Table publication that includes both the parent table and the child table
ALTER PUBLICATION pub ADD TABLE sch1.tbl1;
+ALTER PUBLICATION pub ADD SEQUENCE sch1.seq1;
SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
DROP PUBLICATION pub;
-- Schema publication that does not include the schema that has the parent table
CREATE PUBLICATION pub FOR ALL TABLES IN SCHEMA sch2 WITH (PUBLISH_VIA_PARTITION_ROOT=0);
+ALTER PUBLICATION pub ADD SEQUENCE sch1.seq1;
+SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
+
+DROP PUBLICATION pub;
+-- Sequence publication
+CREATE PUBLICATION pub FOR SEQUENCE sch2.seq2;
SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
DROP PUBLICATION pub;
-- Table publication that does not include the parent table
CREATE PUBLICATION pub FOR TABLE sch2.tbl1_part1 WITH (PUBLISH_VIA_PARTITION_ROOT=0);
SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
-- Table publication that includes both the parent table and the child table
ALTER PUBLICATION pub ADD TABLE sch1.tbl1;
+ALTER PUBLICATION pub ADD ALL SEQUENCES IN SCHEMA sch2;
SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
DROP PUBLICATION pub;
DROP TABLE sch2.tbl1_part1;
@@ -755,10 +947,36 @@ CREATE TABLE sch1.tbl1_part3 (a int) PARTITION BY RANGE(a);
ALTER TABLE sch1.tbl1 ATTACH PARTITION sch1.tbl1_part3 FOR VALUES FROM (20) to (30);
CREATE PUBLICATION pub FOR ALL TABLES IN SCHEMA sch1 WITH (PUBLISH_VIA_PARTITION_ROOT=1);
SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
+
+DROP PUBLICATION pub;
+-- Sequence publication
+CREATE PUBLICATION pub FOR SEQUENCE sch2.seq2;
+SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
+
+DROP PUBLICATION pub;
+-- Schema publication
+CREATE PUBLICATION pub FOR ALL SEQUENCES IN SCHEMA sch2;
+SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
+
+ALTER PUBLICATION pub ADD SEQUENCE sch1.seq1;
+SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
+
+ALTER PUBLICATION pub DROP SEQUENCE sch1.seq1;
+SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
+
+ALTER PUBLICATION pub ADD ALL SEQUENCES IN SCHEMA sch1;
+SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
RESET client_min_messages;
DROP PUBLICATION pub;
DROP TABLE sch1.tbl1;
+DROP SEQUENCE sch1.seq1, sch2.seq2;
DROP SCHEMA sch1 cascade;
DROP SCHEMA sch2 cascade;
diff --git a/src/test/subscription/t/029_sequences.pl b/src/test/subscription/t/029_sequences.pl
new file mode 100644
index 00000000000..cdd7f7f344b
--- /dev/null
+++ b/src/test/subscription/t/029_sequences.pl
@@ -0,0 +1,202 @@
+
+# Copyright (c) 2021, PostgreSQL Global Development Group
+
+# This tests that sequences are replicated correctly by logical replication
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->start;
+
+# Create subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Create some preexisting content on publisher
+my $ddl = qq(
+ CREATE TABLE seq_test (v BIGINT);
+ CREATE SEQUENCE s;
+);
+
+# Setup structure on the publisher
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Create the same structure on the subscriber, and an extra sequence that
+# we'll create on the publisher later
+$ddl = qq(
+ CREATE TABLE seq_test (v BIGINT);
+ CREATE SEQUENCE s;
+ CREATE SEQUENCE s2;
+);
+
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Setup logical replication
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+ "CREATE PUBLICATION seq_pub");
+
+$node_publisher->safe_psql('postgres',
+ "ALTER PUBLICATION seq_pub ADD SEQUENCE s");
+
+$node_subscriber->safe_psql('postgres',
+ "CREATE SUBSCRIPTION seq_sub CONNECTION '$publisher_connstr' PUBLICATION seq_pub WITH (slot_name = seq_sub_slot)"
+);
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Wait for initial sync to finish as well
+my $synced_query =
+ "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('s', 'r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+ or die "Timed out while waiting for subscriber to synchronize data";
+
+# Insert initial test data
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ -- generate a number of values using the sequence
+ INSERT INTO seq_test SELECT nextval('s') FROM generate_series(1,100);
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+my $result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s;
+));
+
+is( $result, '132|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+# advance the sequence in a rolled-back transaction - the rollback
+# does not wait for the replication, so we could see any intermediate state
+# so do something else after the test, to ensure we wait for everything
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ INSERT INTO seq_test SELECT nextval('s') FROM generate_series(1,100);
+ ROLLBACK;
+ INSERT INTO seq_test VALUES (-1);
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s;
+));
+
+is( $result, '231|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+# create a new sequence and roll it back - should not be replicated, due to
+# the transactional behavior
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ CREATE SEQUENCE s2;
+ ALTER PUBLICATION seq_pub ADD SEQUENCE s2;
+ INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '1|0|f',
+ 'check replicated sequence values on subscriber');
+
+
+# create a new sequence, advance it in a rolled-back transaction, but commit
+# the create - the advance should be replicated nevertheless
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ CREATE SEQUENCE s2;
+ ALTER PUBLICATION seq_pub ADD SEQUENCE s2;
+ SAVEPOINT sp1;
+ INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK TO sp1;
+ COMMIT;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Wait for sync of the second sequence we just added to finish
+$synced_query =
+ "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('s', 'r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+ or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '132|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+# advance the new sequence in a transaction, and roll it back - the rollback
+# does not wait for the replication, so we could see any intermediate state
+# so do something else after the test, to ensure we wait for everything
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK;
+ INSERT INTO seq_test VALUES (-1);
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '231|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+# advance the sequence in a subtransaction - the subtransaction gets rolled
+# back, but commit the main one - the changes should still be replicated
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ SAVEPOINT s1;
+ INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK TO s1;
+ COMMIT;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '330|0|t',
+ 'check replicated sequence values on subscriber');
+
+
+done_testing();
--
2.34.1
On Tue, Mar 8, 2022 at 11:59 PM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:
On 3/7/22 22:25, Tomas Vondra wrote:
Interesting. I can think of one reason that might cause this - we log
the first sequence increment after a checkpoint. So if a checkpoint
happens in an unfortunate place, there'll be an extra WAL record. On
slow / busy machines that's quite possible, I guess.
I've tweaked the checkpoint_interval to make checkpoints more aggressive
(set it to 1s), and it seems my hunch was correct - it produces failures
exactly like this one. The best fix probably is to just disable decoding
of sequences in those tests that are not aimed at testing sequence decoding.
I've pushed a fix for this, adding "include-sequences=0" to a couple of
test_decoding tests, which were failing with concurrent checkpoints.
Unfortunately, I realized we have a similar issue in the "sequences"
tests too :-( Imagine you do a series of sequence increments, e.g.
SELECT nextval('s') FROM generate_series(1,100);
If there's a concurrent checkpoint, this may add an extra WAL record,
affecting the decoded output (and also the data stored in the sequence
relation itself). Not sure what to do about this ...
I am also not sure what to do for it but maybe if in some way we can
increase checkpoint timeout or other parameters for these tests then
it would reduce the chances of such failures. The other idea could be
to perform checkpoint before the start of tests to reduce the
possibility of another checkpoint.
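A rough sketch of that second idea, applied at the top of the failing test
(just a sketch, and it only narrows the window rather than closing it):
-- force a checkpoint right before generating the increments, so that a
-- background checkpoint is less likely to fire in the middle of the test
CHECKPOINT;
CREATE SEQUENCE s;
SELECT nextval('s') FROM generate_series(1,100) s(i);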
--
With Regards,
Amit Kapila.
On Fri, Mar 11, 2022 at 5:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
On Tue, Mar 8, 2022 at 11:59 PM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:
On 3/7/22 22:25, Tomas Vondra wrote:
Interesting. I can think of one reason that might cause this - we log
the first sequence increment after a checkpoint. So if a checkpoint
happens in an unfortunate place, there'll be an extra WAL record. On
slow / busy machines that's quite possible, I guess.
I've tweaked the checkpoint_interval to make checkpoints more aggressive
(set it to 1s), and it seems my hunch was correct - it produces failures
exactly like this one. The best fix probably is to just disable decoding
of sequences in those tests that are not aimed at testing sequence decoding.
I've pushed a fix for this, adding "include-sequences=0" to a couple of
test_decoding tests, which were failing with concurrent checkpoints.
Unfortunately, I realized we have a similar issue in the "sequences"
tests too :-( Imagine you do a series of sequence increments, e.g.
SELECT nextval('s') FROM generate_series(1,100);
If there's a concurrent checkpoint, this may add an extra WAL record,
affecting the decoded output (and also the data stored in the sequence
relation itself). Not sure what to do about this ...
I am also not sure what to do for it but maybe if in some way we can
increase checkpoint timeout or other parameters for these tests then
it would reduce the chances of such failures. The other idea could be
to perform checkpoint before the start of tests to reduce the
possibility of another checkpoint.
One more thing I noticed while checking the commit for this feature is
that the include below seems to be out of order:
--- a/src/backend/replication/logical/decode.c
+++ b/src/backend/replication/logical/decode.c
@@ -42,6 +42,7 @@
#include "replication/reorderbuffer.h"
#include "replication/snapbuild.h"
#include "storage/standby.h"
+#include "commands/sequence.h"
--
With Regards,
Amit Kapila.
On 3/11/22 12:34, Amit Kapila wrote:
On Tue, Mar 8, 2022 at 11:59 PM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:
On 3/7/22 22:25, Tomas Vondra wrote:
Interesting. I can think of one reason that might cause this - we log
the first sequence increment after a checkpoint. So if a checkpoint
happens in an unfortunate place, there'll be an extra WAL record. On
slow / busy machines that's quite possible, I guess.
I've tweaked the checkpoint_interval to make checkpoints more aggressive
(set it to 1s), and it seems my hunch was correct - it produces failures
exactly like this one. The best fix probably is to just disable decoding
of sequences in those tests that are not aimed at testing sequence decoding.
I've pushed a fix for this, adding "include-sequences=0" to a couple of
test_decoding tests, which were failing with concurrent checkpoints.
Unfortunately, I realized we have a similar issue in the "sequences"
tests too :-( Imagine you do a series of sequence increments, e.g.
SELECT nextval('s') FROM generate_series(1,100);
If there's a concurrent checkpoint, this may add an extra WAL record,
affecting the decoded output (and also the data stored in the sequence
relation itself). Not sure what to do about this ...
I am also not sure what to do for it but maybe if in some way we can
increase checkpoint timeout or other parameters for these tests then
it would reduce the chances of such failures. The other idea could be
to perform checkpoint before the start of tests to reduce the
possibility of another checkpoint.
Yeah, I had the same ideas, but I'm not sure I like any of them. I doubt
we want to make checkpoints extremely rare, and even if we do that it'll
still fail on slow machines (e.g. with valgrind, clobber cache etc.).
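To make the nondeterminism concrete, here is a minimal sketch using
test_decoding (option names as in the fix mentioned above). On an otherwise
idle system the last decoded sequence record here carries last_value 132,
i.e. 100 increments plus the 32 values each sequence WAL record pre-logs,
which is exactly what the 029_sequences.pl test expects; an ill-timed
checkpoint adds another record and shifts those numbers:
SELECT 'init' FROM pg_create_logical_replication_slot('seq_slot', 'test_decoding');
CREATE SEQUENCE s;
SELECT nextval('s') FROM generate_series(1,100) s(i);
-- the number of decoded sequence rows (and the final values) depends on
-- whether a checkpoint happened while the increments were generated
SELECT data FROM pg_logical_slot_get_changes('seq_slot', NULL, NULL, 'include-xids', '0');
-- tests not interested in sequence changes can add 'include-sequences', '0'
SELECT pg_drop_replication_slot('seq_slot');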
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Further review (based on 20220310 patch):
doc/src/sgml/ref/create_publication.sgml | 3 +
For the clauses added to the synopsis, descriptions should be added
below. See attached patch for a start.
src/backend/commands/sequence.c | 79 ++
There is quite a bit of overlap between ResetSequence() and
ResetSequence2(), but I couldn't see a good way to combine them that
genuinely saves code and complexity. So maybe it's ok.
Actually, ResetSequence2() is not really "reset", it's just "set".
Maybe pick a different function name.
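At the SQL level the distinction is roughly the one between these two
commands (an analogy only, not what the C code actually calls):
ALTER SEQUENCE s RESTART;        -- a true "reset", back to the start value
SELECT setval('s', 132, true);   -- a "set", jump to an explicit state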
src/backend/commands/subscriptioncmds.c | 272 +++++++
The code added in AlterSubscription_refresh() seems to be entirely
copy-and-paste from the tables case. I think this could be combined
by concatenating the lists from fetch_table_list() and
fetch_sequence_list() and looping over it once. The same also applies
to CreateSubscription(), although the code duplication is smaller
there.
This in turn means that fetch_table_list() and fetch_sequence_list()
can be combined, so that you don't actually need any extensive new
code in CreateSubscription() and AlterSubscription_refresh() for
sequences. This could go on, you can combine more of the underlying
code, like pg_publication_tables and pg_publication_sequences and so
on.
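For instance, with the views added by the patch, the two publisher-side
queries could become a single one; a rough sketch, with 'mypub' as a
placeholder publication name:
SELECT schemaname, tablename AS relname, 'table' AS objtype
  FROM pg_catalog.pg_publication_tables WHERE pubname IN ('mypub')
UNION ALL
SELECT schemaname, sequencename AS relname, 'sequence' AS objtype
  FROM pg_catalog.pg_publication_sequences WHERE pubname IN ('mypub');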
src/backend/replication/logical/proto.c | 52 ++
The added protocol message needs to be documented. See attached patch
for a start.
The sequence message does not contain the sequence Oid, unlike the
relation message. Would that be good to add?
src/backend/replication/logical/worker.c | 56 ++
Maybe the Asserts in apply_handle_sequence() should be elogs. These
are checking what is sent over the network, so we don't want a
bad/evil peer able to trigger asserts. And in non-assert builds these
conditions would be unchecked.
src/backend/replication/pgoutput/pgoutput.c | 82 +-
I find the logic in get_rel_sync_entry() confusing. You add a section for
if (!publish && is_sequence)
but then shouldn't the code below that be something like
if (!publish && !is_sequence)
src/bin/pg_dump/t/002_pg_dump.pl | 38 +-
This adds a new publication "pub4", but the tests already contain a
"pub4". I'm not sure why this even works, but perhaps the new one
should be "pub5", unless there is a deeper meaning.
src/include/catalog/pg_publication_namespace.h | 3 +-
I don't like how the distinction between table and sequence is done
using a bool field. That affects also the APIs in pg_publication.c
and publicationcmds.c especially. There is a lot of unadorned "true"
and "false" being passed around that isn't very clear, and it all
appears to originate at this catalog. I think we could use a char
field here that uses the relkind constants. That would also make the
code in pg_publication.c etc. slightly clearer.
See attached patch for more small tweaks.
Your patch still contains a number of XXX and FIXME comments, which in
my assessment are all more or less correct, so I didn't comment on those
separately.
Other than that, this seems pretty good.
Earlier in the thread I commented on some aspects of the new grammar
(e.g., do we need FOR ALL SEQUENCES?). I think this would be useful to
review again after all the new logical replication patches are in. I
don't want to hold up this patch for that at this point.
Attachments:
0001-fixup-Add-support-for-decoding-sequences-to-built-in.patch
From bdded82050841d3b71308ce82110efd21d99ea53 Mon Sep 17 00:00:00 2001
From: Peter Eisentraut <peter@eisentraut.org>
Date: Sun, 13 Mar 2022 07:38:46 +0100
Subject: [PATCH] fixup! Add support for decoding sequences to built-in
replication
---
doc/src/sgml/protocol.sgml | 119 ++++++++++++++++++++++
doc/src/sgml/ref/alter_publication.sgml | 2 +-
doc/src/sgml/ref/create_publication.sgml | 42 +++++---
src/backend/catalog/pg_publication.c | 8 +-
src/backend/commands/subscriptioncmds.c | 2 +-
src/backend/parser/gram.y | 14 ---
src/backend/replication/logical/worker.c | 2 +-
src/bin/pg_dump/pg_dump.c | 6 +-
src/test/regress/expected/publication.out | 2 +-
src/test/regress/sql/object_address.sql | 1 +
src/test/regress/sql/publication.sql | 2 +-
src/test/subscription/t/029_sequences.pl | 14 +--
12 files changed, 165 insertions(+), 49 deletions(-)
diff --git a/doc/src/sgml/protocol.sgml b/doc/src/sgml/protocol.sgml
index 9178c779ba..49c05e1866 100644
--- a/doc/src/sgml/protocol.sgml
+++ b/doc/src/sgml/protocol.sgml
@@ -7055,6 +7055,125 @@ <title>Logical Replication Message Formats</title>
</listitem>
</varlistentry>
+<varlistentry id="protocol-logicalrep-message-formats-Sequence">
+<term>
+Sequence
+</term>
+<listitem>
+<para>
+
+<variablelist>
+<varlistentry>
+<term>
+ Byte1('X')
+</term>
+<listitem>
+<para>
+ Identifies the message as a sequence message.
+</para>
+</listitem>
+</varlistentry>
+<varlistentry>
+<term>
+ Int32 (TransactionId)
+</term>
+<listitem>
+<para>
+ Xid of the transaction (only present for streamed transactions).
+ This field is available since protocol version 2.
+</para>
+</listitem>
+</varlistentry>
+<varlistentry>
+<term>
+ Int8(0)
+</term>
+<listitem>
+<para>
+ Flags; currently unused.
+</para>
+</listitem>
+</varlistentry>
+<varlistentry>
+<term>
+ Int64 (XLogRecPtr)
+</term>
+<listitem>
+<para>
+ The LSN of FIXME.
+</para>
+</listitem>
+</varlistentry>
+<varlistentry>
+<term>
+ String
+</term>
+<listitem>
+<para>
+ Namespace (empty string for <literal>pg_catalog</literal>).
+</para>
+</listitem>
+</varlistentry>
+<varlistentry>
+<term>
+ String
+</term>
+<listitem>
+<para>
+ Relation name.
+</para>
+</listitem>
+</varlistentry>
+
+<varlistentry>
+<term>
+ Int8
+</term>
+<listitem>
+<para>
+ 1 if the sequence update is transactional, 0 otherwise.
+</para>
+</listitem>
+</varlistentry>
+
+<varlistentry>
+<term>
+ Int64
+</term>
+<listitem>
+<para>
+ <structfield>last_value</structfield> value of the sequence.
+</para>
+</listitem>
+</varlistentry>
+
+<varlistentry>
+<term>
+ Int64
+</term>
+<listitem>
+<para>
+ <structfield>log_cnt</structfield> value of the sequence.
+</para>
+</listitem>
+</varlistentry>
+
+<varlistentry>
+<term>
+ Int8
+</term>
+<listitem>
+<para>
+ <structfield>is_called</structfield> value of the sequence.
+</para>
+</listitem>
+</varlistentry>
+
+</variablelist>
+</para>
+</listitem>
+</varlistentry>
+
<varlistentry id="protocol-logicalrep-message-formats-Type">
<term>
Type
diff --git a/doc/src/sgml/ref/alter_publication.sgml b/doc/src/sgml/ref/alter_publication.sgml
index 36c9a5f438..5dacb732b6 100644
--- a/doc/src/sgml/ref/alter_publication.sgml
+++ b/doc/src/sgml/ref/alter_publication.sgml
@@ -31,7 +31,7 @@
<phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
- SEQUENCE <replaceable class="parameter">sequence_name</replaceable> [ * ] [, ... ]
+ SEQUENCE <replaceable class="parameter">sequence_name</replaceable> [, ... ]
ALL TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
ALL SEQUENCES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
</synopsis>
diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index f72318e97d..286529e749 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -66,6 +66,20 @@ <title>Parameters</title>
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><literal>FOR SEQUENCE</literal></term>
+ <listitem>
+ <para>
+ Specifies a list of sequences to add to the publication.
+ </para>
+
+ <para>
+ Specifying a sequence that is part of a schema specified by <literal>FOR
+ ALL SEQUENCES IN SCHEMA</literal> is not supported.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><literal>FOR TABLE</literal></term>
<listitem>
@@ -111,26 +125,28 @@ <title>Parameters</title>
</varlistentry>
<varlistentry>
+ <term><literal>FOR ALL SEQUENCES</literal></term>
<term><literal>FOR ALL TABLES</literal></term>
<listitem>
<para>
- Marks the publication as one that replicates changes for all tables in
- the database, including tables created in the future.
+ Marks the publication as one that replicates changes for all sequences/tables in
+ the database, including sequences/tables created in the future.
</para>
</listitem>
</varlistentry>
<varlistentry>
+ <term><literal>FOR ALL SEQUENCES IN SCHEMA</literal></term>
<term><literal>FOR ALL TABLES IN SCHEMA</literal></term>
<listitem>
<para>
- Marks the publication as one that replicates changes for all tables in
- the specified list of schemas, including tables created in the future.
+ Marks the publication as one that replicates changes for all sequences/tables in
+ the specified list of schemas, including sequences/tables created in the future.
</para>
<para>
- Specifying a schema along with a table which belongs to the specified
- schema using <literal>FOR TABLE</literal> is not supported.
+ Specifying a schema along with a sequence/table which belongs to the specified
+ schema using <literal>FOR SEQUENCE</literal>/<literal>FOR TABLE</literal> is not supported.
</para>
<para>
@@ -205,10 +221,9 @@ <title>Parameters</title>
<title>Notes</title>
<para>
- If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
- <literal>FOR ALL TABLES IN SCHEMA</literal> are not specified, then the
- publication starts out with an empty set of tables. That is useful if
- tables or schemas are to be added later.
+ If <literal>FOR TABLE</literal>, <literal>FOR SEQUENCE</literal>, etc. is
+ not specified, then the publication starts out with an empty set of tables
+ and sequences. That is useful if objects are to be added later.
</para>
<para>
@@ -223,10 +238,9 @@ <title>Notes</title>
</para>
<para>
- To add a table to a publication, the invoking user must have ownership
- rights on the table. The <command>FOR ALL TABLES</command> and
- <command>FOR ALL TABLES IN SCHEMA</command> clauses require the invoking
- user to be a superuser.
+ To add a table or sequence to a publication, the invoking user must have
+ ownership rights on the table or sequence. The <command>FOR ALL
+ ...</command> clauses require the invoking user to be a superuser.
</para>
<para>
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index d866e8a9b2..8e26e0cee2 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -636,7 +636,7 @@ GetAllSequencesPublications(void)
SysScanDesc scan;
HeapTuple tup;
- /* Find all publications that are marked as for all tables. */
+ /* Find all publications that are marked as for all sequences. */
rel = table_open(PublicationRelationId, AccessShareLock);
ScanKeyInit(&scankey,
@@ -892,11 +892,7 @@ GetAllSchemaPublicationRelations(Oid pubid, bool sequences,
}
/*
- * Gets list of all relation published by FOR ALL TABLES publication(s).
- *
- * If the publication publishes partition changes via their respective root
- * partitioned tables, we must exclude partitions in favor of including the
- * root partitioned tables.
+ * Gets list of all relation published by FOR ALL SEQUENCES publication(s).
*/
List *
GetAllSequencesPublicationRelations(void)
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 5beb67e765..1c70c4369a 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -1859,7 +1859,7 @@ fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
if (res->status != WALRCV_OK_TUPLES)
ereport(ERROR,
- (errmsg("could not receive list of replicated tables from the publisher: %s",
+ (errmsg("could not receive list of replicated sequences from the publisher: %s",
res->err)));
/* Process tables. */
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 9097ac3fab..6ff0ddd62b 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -9705,13 +9705,6 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
*
* CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
*
- * pub_obj is one of:
- *
- * TABLE table [, ...]
- * SEQUENCE table [, ...]
- * ALL TABLES IN SCHEMA schema [, ...]
- * ALL SEQUENCES IN SCHEMA schema [, ...]
- *
*****************************************************************************/
CreatePublicationStmt:
@@ -9868,13 +9861,6 @@ pub_obj_list: PublicationObjSpec
*
* ALTER PUBLICATION name SET pub_obj [, ...]
*
- * pub_obj is one of:
- *
- * TABLE table_name [, ...]
- * SEQUENCE table_name [, ...]
- * ALL TABLES IN SCHEMA schema_name [, ...]
- * ALL SEQUENCES IN SCHEMA schema_name [, ...]
- *
*****************************************************************************/
AlterPublicationStmt:
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 860c31fa05..1282c15f92 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -142,9 +142,9 @@
#include "catalog/pg_subscription.h"
#include "catalog/pg_subscription_rel.h"
#include "catalog/pg_tablespace.h"
+#include "commands/sequence.h"
#include "commands/tablecmds.h"
#include "commands/tablespace.h"
-#include "commands/sequence.h"
#include "commands/trigger.h"
#include "executor/executor.h"
#include "executor/execPartition.h"
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index ef8c6e43c6..35a8fc7631 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -3819,19 +3819,19 @@ getPublications(Archive *fout, int *numPublications)
appendPQExpBuffer(query,
"SELECT p.tableoid, p.oid, p.pubname, "
"p.pubowner, "
- "p.puballtables, false AS p.puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS p.pubsequence, p.pubviaroot "
+ "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubsequence, p.pubviaroot "
"FROM pg_publication p");
else if (fout->remoteVersion >= 110000)
appendPQExpBuffer(query,
"SELECT p.tableoid, p.oid, p.pubname, "
"p.pubowner, "
- "p.puballtables, false AS p.puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS p.pubsequence, false AS pubviaroot "
+ "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubsequence, false AS pubviaroot "
"FROM pg_publication p");
else
appendPQExpBuffer(query,
"SELECT p.tableoid, p.oid, p.pubname, "
"p.pubowner, "
- "p.puballtables, false AS p.puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS p.pubsequence, false AS pubviaroot "
+ "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubsequence, false AS pubviaroot "
"FROM pg_publication p");
res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 620fab87e6..92c50b13ec 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -262,7 +262,7 @@ Sequences from schemas:
SET client_min_messages = 'ERROR';
CREATE PUBLICATION testpub_forschema FOR ALL SEQUENCES IN SCHEMA pub_test;
RESET client_min_messages;
--- fail - can't create publication with schema and table of the same schema
+-- fail - can't create publication with schema and sequence of the same schema
CREATE PUBLICATION testpub_for_seq_schema FOR ALL SEQUENCES IN SCHEMA pub_test, SEQUENCE pub_test.testpub_seq1;
ERROR: cannot add relation "pub_test.testpub_seq1" to publication
DETAIL: Sequence's schema "pub_test" is already part of the publication or part of the specified schema list.
diff --git a/src/test/regress/sql/object_address.sql b/src/test/regress/sql/object_address.sql
index f90afad804..2f40156eb4 100644
--- a/src/test/regress/sql/object_address.sql
+++ b/src/test/regress/sql/object_address.sql
@@ -143,6 +143,7 @@ CREATE STATISTICS addr_nsp.gentable_stat ON a, b FROM addr_nsp.gentable;
SELECT pg_get_object_address('publication', '{one,two}', '{}');
SELECT pg_get_object_address('subscription', '{one}', '{}');
SELECT pg_get_object_address('subscription', '{one,two}', '{}');
+
-- test successful cases
WITH objects (type, name, args) AS (VALUES
('table', '{addr_nsp, gentable}'::text[], '{}'::text[]),
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index af665395e1..5043c4bbba 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -144,7 +144,7 @@ CREATE PUBLICATION testpub_forsequence FOR SEQUENCE testpub_seq0;
SET client_min_messages = 'ERROR';
CREATE PUBLICATION testpub_forschema FOR ALL SEQUENCES IN SCHEMA pub_test;
RESET client_min_messages;
--- fail - can't create publication with schema and table of the same schema
+-- fail - can't create publication with schema and sequence of the same schema
CREATE PUBLICATION testpub_for_seq_schema FOR ALL SEQUENCES IN SCHEMA pub_test, SEQUENCE pub_test.testpub_seq1;
-- fail - can't add a sequence of the same schema to the schema publication
ALTER PUBLICATION testpub_forschema ADD SEQUENCE pub_test.testpub_seq1;
diff --git a/src/test/subscription/t/029_sequences.pl b/src/test/subscription/t/029_sequences.pl
index cdd7f7f344..9ae3c03d7d 100644
--- a/src/test/subscription/t/029_sequences.pl
+++ b/src/test/subscription/t/029_sequences.pl
@@ -46,7 +46,7 @@
"ALTER PUBLICATION seq_pub ADD SEQUENCE s");
$node_subscriber->safe_psql('postgres',
- "CREATE SUBSCRIPTION seq_sub CONNECTION '$publisher_connstr' PUBLICATION seq_pub WITH (slot_name = seq_sub_slot)"
+ "CREATE SUBSCRIPTION seq_sub CONNECTION '$publisher_connstr' PUBLICATION seq_pub"
);
$node_publisher->wait_for_catchup('seq_sub');
@@ -73,7 +73,7 @@
));
is( $result, '132|0|t',
- 'check replicated sequence values on subscriber');
+ 'initial test data replicated');
# advance the sequence in a rolled-back transaction - the rollback
@@ -96,7 +96,7 @@
));
is( $result, '231|0|t',
- 'check replicated sequence values on subscriber');
+ 'advance sequence in rolled-back transaction');
# create a new sequence and roll it back - should not be replicated, due to
@@ -119,7 +119,7 @@
));
is( $result, '1|0|f',
- 'check replicated sequence values on subscriber');
+ 'create new sequence and roll it back');
# create a new sequence, advance it in a rolled-back transaction, but commit
@@ -150,7 +150,7 @@
));
is( $result, '132|0|t',
- 'check replicated sequence values on subscriber');
+ 'create sequence, advance it in rolled-back transaction, but commit the create');
# advance the new sequence in a transaction, and roll it back - the rollback
@@ -173,7 +173,7 @@
));
is( $result, '231|0|t',
- 'check replicated sequence values on subscriber');
+ 'advance the new sequence in a transaction and roll it back');
# advance the sequence in a subtransaction - the subtransaction gets rolled
@@ -196,7 +196,7 @@
));
is( $result, '330|0|t',
- 'check replicated sequence values on subscriber');
+ 'advance sequence in a subtransaction');
done_testing();
--
2.35.1
On 3/13/22 07:45, Peter Eisentraut wrote:
Further review (based on 20220310 patch):
doc/src/sgml/ref/create_publication.sgml | 3 +
For the clauses added to the synopsis, descriptions should be added
below. See attached patch for a start.
Thanks. I'm not sure what other improvements you think this .sgml
file needs?
src/backend/commands/sequence.c | 79 ++
There is quite a bit of overlap between ResetSequence() and
ResetSequence2(), but I couldn't see a good way to combine them that
genuinely saves code and complexity. So maybe it's ok.
Actually, ResetSequence2() is not really "reset", it's just "set".
Maybe pick a different function name.
Yeah, good point. I think the functions are sufficiently different, and
attempting to remove the duplication is unlikely to be an improvement.
But you're right "ResetSequence2" is not a great name, so I've changed
it to "SetSequence".
src/backend/commands/subscriptioncmds.c | 272 +++++++
The code added in AlterSubscription_refresh() seems to be entirely
copy-and-paste from the tables case. I think this could be combined
by concatenating the lists from fetch_table_list() and
fetch_sequence_list() and looping over it once. The same also applies
to CreateSubscription(), although the code duplication is smaller
there.
This in turn means that fetch_table_list() and fetch_sequence_list()
can be combined, so that you don't actually need any extensive new
code in CreateSubscription() and AlterSubscription_refresh() for
sequences. This could go on, you can combine more of the underlying
code, like pg_publication_tables and pg_publication_sequences and so
on.
I've removed the duplicated code in both places, so that it processes
only a single list, which is a combination of tables and sequences
(using list_concat). For CreateSubscription it was trivial, because the
code was simple and a perfect copy. For _refresh there was a minor
difference, but I think that difference is actually entirely unnecessary
when processing the combined list. But this will need more testing.
I'm not sure about the last bit, though. How would you combine code for
pg_publication_tables and pg_publication_sequences, etc?
src/backend/replication/logical/proto.c | 52 ++
The added protocol message needs to be documented. See attached patch
for a start.
OK. I've resolved the FIXME for LSN. Not sure what else is needed?
The sequence message does not contain the sequence Oid, unlike the
relation message. Would that be good to add?
I don't think we need to do that. For relations we do that because it
serves as an identifier in RelationSyncCache, and it links the various
messages to it. For sequences we don't need that - the schema is fixed.
Or do you see a practical reason to add the OID?
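To illustrate what I mean by the schema being fixed - every sequence
relation has the same three columns, which is also exactly what the
protocol message carries:

CREATE SEQUENCE s;
SELECT * FROM s;

 last_value | log_cnt | is_called
------------+---------+-----------
          1 |       0 | f

So the (namespace, relname) pair in the message should be enough to
identify the sequence on the apply side.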
src/backend/replication/logical/worker.c | 56 ++
Maybe the Asserts in apply_handle_sequence() should be elogs. These
are checking what is sent over the network, so we don't want a
bad/evil peer able to trigger asserts. And in non-assert builds these
conditions would be unchecked.
I'll think about it, but AFAIK we don't really assume evil peers.
src/backend/replication/pgoutput/pgoutput.c | 82 +-
I find the code in get_rel_sync_entry() confusing. You add a section for
if (!publish && is_sequence)
but then shouldn't the code below that be something like
if (!publish && !is_sequence)
Hmm, maybe. But I think there's actually a bigger issue - this does not
seem to be dealing with pg_publication_namespace.pnsequences correctly.
That is, we don't differentiate which schemas include tables and
which schemas include sequences. Interestingly, no tests fail. I'll take
a closer look tomorrow.
src/bin/pg_dump/t/002_pg_dump.pl | 38 +-
This adds a new publication "pub4", but the tests already contain a
"pub4". I'm not sure why this even works, but perhaps the new one
should be "pub5", unless there is a deeper meaning.
I agree, pub5 it is. But it's interesting it does not fail even with the
duplicate name. Strange.
src/include/catalog/pg_publication_namespace.h | 3 +-
I don't like how the distinction between table and sequence is done
using a bool field. That affects also the APIs in pg_publication.c
and publicationcmds.c especially. There is a lot of unadorned "true"
and "false" being passed around that isn't very clear, and it all
appears to originate at this catalog. I think we could use a char
field here that uses the relkind constants. That would also make the
code in pg_publication.c etc. slightly clearer.
I thought about using relkind, but it does not work all that nicely
because we have multiple relkinds for a table (because of partitioned
tables). So I found that confusing.
Maybe we should just use 'r' for any table, in this catalog?
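To show what I mean - tables alone already span two relkinds, while
sequences have just one:

SELECT relname, relkind
  FROM pg_class
 WHERE relkind IN ('r', 'p', 'S')
 ORDER BY relkind, relname;

-- 'r' = ordinary table, 'p' = partitioned table, 'S' = sequence

So mapping the bool onto a single relkind constant only works if we
agree to collapse 'r' and 'p' into just 'r' for this catalog.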
See attached patch for more small tweaks.
Your patch still contains a number of XXX and FIXME comments, which in
my assessment are all more or less correct, so I didn't comment on those
separately.
Yeah, I plan to look at those next.
Other than that, this seems pretty good.
Earlier in the thread I commented on some aspects of the new grammar
(e.g., do we need FOR ALL SEQUENCES?). I think this would be useful to
review again after all the new logical replication patches are in. I
don't want to hold up this patch for that at this point.
I'm not particularly attached to the grammar, but I don't see any reason
not to have mostly the same grammar/options as for tables.
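For reference, with the attached patch the sequence forms simply mirror
the table forms, e.g. (the object names here are made up, just to show
the syntax):

CREATE PUBLICATION seq_pub FOR ALL SEQUENCES;
CREATE PUBLICATION mix_pub FOR TABLE t1, SEQUENCE s1;
ALTER PUBLICATION mix_pub ADD ALL SEQUENCES IN SCHEMA other_schema;
ALTER PUBLICATION mix_pub DROP SEQUENCE s1;

(FOR ALL SEQUENCES requires superuser, same as FOR ALL TABLES.)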
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Attachments:
0001-Add-support-for-decoding-sequences-to-built-20220313.patch (text/x-patch)
From 444a46bbd878e4fa17cd10ee63c43dd4f5e0d140 Mon Sep 17 00:00:00 2001
From: Tomas Vondra <tomas.vondra@postgresql.org>
Date: Thu, 10 Mar 2022 22:36:43 +0100
Subject: [PATCH] Add support for decoding sequences to built-in replication
---
doc/src/sgml/catalogs.sgml | 81 ++
doc/src/sgml/protocol.sgml | 119 +++
doc/src/sgml/ref/alter_publication.sgml | 24 +-
doc/src/sgml/ref/alter_subscription.sgml | 3 +-
doc/src/sgml/ref/create_publication.sgml | 45 +-
src/backend/catalog/objectaddress.c | 5 +-
src/backend/catalog/pg_publication.c | 232 +++-
src/backend/catalog/system_views.sql | 10 +
src/backend/commands/publicationcmds.c | 380 +++++--
src/backend/commands/sequence.c | 79 ++
src/backend/commands/subscriptioncmds.c | 102 +-
src/backend/executor/execReplication.c | 2 +-
src/backend/nodes/copyfuncs.c | 1 +
src/backend/nodes/equalfuncs.c | 1 +
src/backend/parser/gram.y | 46 +-
src/backend/replication/logical/proto.c | 52 +
src/backend/replication/logical/tablesync.c | 118 ++-
src/backend/replication/logical/worker.c | 56 +
src/backend/replication/pgoutput/pgoutput.c | 82 +-
src/backend/utils/cache/relcache.c | 13 +-
src/backend/utils/cache/syscache.c | 6 +-
src/bin/pg_dump/pg_dump.c | 65 +-
src/bin/pg_dump/pg_dump.h | 3 +
src/bin/pg_dump/t/002_pg_dump.pl | 40 +-
src/bin/psql/describe.c | 287 +++--
src/bin/psql/tab-complete.c | 12 +-
src/include/catalog/pg_proc.dat | 5 +
src/include/catalog/pg_publication.h | 25 +-
.../catalog/pg_publication_namespace.h | 3 +-
src/include/commands/sequence.h | 1 +
src/include/nodes/parsenodes.h | 6 +
src/include/replication/logicalproto.h | 19 +
src/include/replication/pgoutput.h | 1 +
src/test/regress/expected/publication.out | 999 ++++++++++++++----
src/test/regress/expected/rules.out | 8 +
src/test/regress/sql/publication.sql | 220 +++-
src/test/subscription/t/029_sequences.pl | 202 ++++
37 files changed, 2931 insertions(+), 422 deletions(-)
create mode 100644 src/test/subscription/t/029_sequences.pl
diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 83987a99045..324fc1050d3 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -6263,6 +6263,16 @@ SCRAM-SHA-256$<replaceable><iteration count></replaceable>:<replaceable>&l
Reference to schema
</para></entry>
</row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>pnsequences</structfield> <type>bool</type>
+ </para>
+ <para>
+ Determines whether to include tables or sequences from this schema.
+ </para></entry>
+ </row>
</tbody>
</tgroup>
</table>
@@ -9560,6 +9570,11 @@ SCRAM-SHA-256$<replaceable><iteration count></replaceable>:<replaceable>&l
<entry>prepared transactions</entry>
</row>
+ <row>
+ <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+ <entry>publications and their associated sequences</entry>
+ </row>
+
<row>
<entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
<entry>publications and their associated tables</entry>
@@ -11395,6 +11410,72 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
</sect1>
+ <sect1 id="view-pg-publication-sequences">
+ <title><structname>pg_publication_sequences</structname></title>
+
+ <indexterm zone="view-pg-publication-sequences">
+ <primary>pg_publication_sequences</primary>
+ </indexterm>
+
+ <para>
+ The view <structname>pg_publication_sequences</structname> provides
+ information about the mapping between publications and the sequences they
+ contain. Unlike the underlying catalog
+ <link linkend="catalog-pg-publication-rel"><structname>pg_publication_rel</structname></link>,
+ this view expands
+ publications defined as <literal>FOR ALL SEQUENCES</literal>, so for such
+ publications there will be a row for each eligible sequence.
+ </para>
+
+ <table>
+ <title><structname>pg_publication_sequences</structname> Columns</title>
+ <tgroup cols="1">
+ <thead>
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ Column Type
+ </para>
+ <para>
+ Description
+ </para></entry>
+ </row>
+ </thead>
+
+ <tbody>
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>pubname</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+ </para>
+ <para>
+ Name of publication
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>schemaname</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+ </para>
+ <para>
+ Name of schema containing sequence
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>sequencename</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+ </para>
+ <para>
+ Name of sequence
+ </para></entry>
+ </row>
+ </tbody>
+ </tgroup>
+ </table>
+ </sect1>
+
<sect1 id="view-pg-publication-tables">
<title><structname>pg_publication_tables</structname></title>
diff --git a/doc/src/sgml/protocol.sgml b/doc/src/sgml/protocol.sgml
index 9178c779ba9..4507c4cb791 100644
--- a/doc/src/sgml/protocol.sgml
+++ b/doc/src/sgml/protocol.sgml
@@ -7055,6 +7055,125 @@ Relation
</listitem>
</varlistentry>
+<varlistentry id="protocol-logicalrep-message-formats-Sequence">
+<term>
+Sequence
+</term>
+<listitem>
+<para>
+
+<variablelist>
+<varlistentry>
+<term>
+ Byte1('X')
+</term>
+<listitem>
+<para>
+ Identifies the message as a sequence message.
+</para>
+</listitem>
+</varlistentry>
+<varlistentry>
+<term>
+ Int32 (TransactionId)
+</term>
+<listitem>
+<para>
+ Xid of the transaction (only present for streamed transactions).
+ This field is available since protocol version 2.
+</para>
+</listitem>
+</varlistentry>
+<varlistentry>
+<term>
+ Int8(0)
+</term>
+<listitem>
+<para>
+ Flags; currently unused.
+</para>
+</listitem>
+</varlistentry>
+<varlistentry>
+<term>
+ Int64 (XLogRecPtr)
+</term>
+<listitem>
+<para>
+ The LSN of the sequence increment.
+</para>
+</listitem>
+</varlistentry>
+<varlistentry>
+<term>
+ String
+</term>
+<listitem>
+<para>
+ Namespace (empty string for <literal>pg_catalog</literal>).
+</para>
+</listitem>
+</varlistentry>
+<varlistentry>
+<term>
+ String
+</term>
+<listitem>
+<para>
+ Relation name.
+</para>
+</listitem>
+</varlistentry>
+
+<varlistentry>
+<term>
+ Int8
+</term>
+<listitem>
+<para>
1 if the sequence update is transactional, 0 otherwise.
+</para>
+</listitem>
+</varlistentry>
+
+<varlistentry>
+<term>
+ Int64
+</term>
+<listitem>
+<para>
+ <structfield>last_value</structfield> value of the sequence.
+</para>
+</listitem>
+</varlistentry>
+
+<varlistentry>
+<term>
+ Int64
+</term>
+<listitem>
+<para>
+ <structfield>log_cnt</structfield> value of the sequence.
+</para>
+</listitem>
+</varlistentry>
+
+<varlistentry>
+<term>
+ Int8
+</term>
+<listitem>
+<para>
+ <structfield>is_called</structfield> value of the sequence.
+</para>
+</listitem>
+</varlistentry>
+
+</variablelist>
+</para>
+</listitem>
+</varlistentry>
+
<varlistentry id="protocol-logicalrep-message-formats-Type">
<term>
Type
diff --git a/doc/src/sgml/ref/alter_publication.sgml b/doc/src/sgml/ref/alter_publication.sgml
index 32b75f6c78e..5dacb732b69 100644
--- a/doc/src/sgml/ref/alter_publication.sgml
+++ b/doc/src/sgml/ref/alter_publication.sgml
@@ -31,7 +31,9 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
<phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
+ SEQUENCE <replaceable class="parameter">sequence_name</replaceable> [, ... ]
ALL TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+ ALL SEQUENCES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
</synopsis>
</refsynopsisdiv>
@@ -58,7 +60,18 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
</para>
<para>
- The fourth variant of this command listed in the synopsis can change
+ The next three variants change which sequences are part of the publication.
+ The <literal>SET SEQUENCE</literal> clause will replace the list of sequences
+ in the publication with the specified one. The <literal>ADD SEQUENCE</literal>
+ and <literal>DROP SEQUENCE</literal> clauses will add and remove one or more
+ sequences from the publication. Note that adding sequences to a publication
+ that is already subscribed to will require an <literal>ALTER SUBSCRIPTION
+ ... REFRESH PUBLICATION</literal> action on the subscribing side in order
+ to become effective.
+ </para>
+
+ <para>
+ The seventh variant of this command listed in the synopsis can change
all of the publication properties specified in
<xref linkend="sql-createpublication"/>. Properties not mentioned in the
command retain their previous settings.
@@ -122,6 +135,15 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><replaceable class="parameter">sequence_name</replaceable></term>
+ <listitem>
+ <para>
+ Name of an existing sequence.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><replaceable class="parameter">schema_name</replaceable></term>
<listitem>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 0d6f064f58d..264c9c5a10f 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -147,7 +147,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
<listitem>
<para>
Fetch missing table information from publisher. This will start
- replication of tables that were added to the subscribed-to publications
+ replication of tables and sequences that were added to the subscribed-to publications
since <command>CREATE SUBSCRIPTION</command> or
the last invocation of <command>REFRESH PUBLICATION</command>.
</para>
@@ -168,6 +168,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
<para>
Previously subscribed tables are not copied, even if a table's row
filter <literal>WHERE</literal> clause has since been modified.
+ Previously-subscribed sequences are not copied either.
</para>
</listitem>
</varlistentry>
diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 4979b9b646d..c8a9173597c 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -23,13 +23,16 @@ PostgreSQL documentation
<synopsis>
CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
[ FOR ALL TABLES
+ | FOR ALL SEQUENCES
| FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
[ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
<phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
+ SEQUENCE <replaceable class="parameter">sequence_name</replaceable> [ * ] [, ... ]
ALL TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+ ALL SEQUENCES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
</synopsis>
</refsynopsisdiv>
@@ -107,27 +110,43 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><literal>FOR SEQUENCE</literal></term>
+ <listitem>
+ <para>
+ Specifies a list of sequences to add to the publication.
+ </para>
+
+ <para>
+ Specifying a sequence that is part of a schema specified by <literal>FOR
+ ALL SEQUENCES IN SCHEMA</literal> is not supported.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><literal>FOR ALL TABLES</literal></term>
+ <term><literal>FOR ALL SEQUENCES</literal></term>
<listitem>
<para>
- Marks the publication as one that replicates changes for all tables in
- the database, including tables created in the future.
+ Marks the publication as one that replicates changes for all sequences/tables in
+ the database, including sequences/tables created in the future.
</para>
</listitem>
</varlistentry>
<varlistentry>
+ <term><literal>FOR ALL SEQUENCES IN SCHEMA</literal></term>
<term><literal>FOR ALL TABLES IN SCHEMA</literal></term>
<listitem>
<para>
- Marks the publication as one that replicates changes for all tables in
- the specified list of schemas, including tables created in the future.
+ Marks the publication as one that replicates changes for all sequences/tables in
+ the specified list of schemas, including sequences/tables created in the future.
</para>
<para>
- Specifying a schema along with a table which belongs to the specified
- schema using <literal>FOR TABLE</literal> is not supported.
+ Specifying a schema along with a sequence/table which belongs to the specified
+ schema using <literal>FOR SEQUENCE</literal>/<literal>FOR TABLE</literal> is not supported.
</para>
<para>
@@ -202,10 +221,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
<title>Notes</title>
<para>
- If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
- <literal>FOR ALL TABLES IN SCHEMA</literal> are not specified, then the
- publication starts out with an empty set of tables. That is useful if
- tables or schemas are to be added later.
+ If <literal>FOR TABLE</literal>, <literal>FOR SEQUENCE</literal>, etc. is
+ not specified, then the publication starts out with an empty set of tables
+ and sequences. That is useful if objects are to be added later.
</para>
<para>
@@ -220,10 +238,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
</para>
<para>
- To add a table to a publication, the invoking user must have ownership
- rights on the table. The <command>FOR ALL TABLES</command> and
- <command>FOR ALL TABLES IN SCHEMA</command> clauses require the invoking
- user to be a superuser.
+ To add a table or sequence to a publication, the invoking user must have
+ ownership rights on the table or sequence. The <command>FOR ALL
+ ...</command> clauses require the invoking user to be a superuser.
</para>
<para>
diff --git a/src/backend/catalog/objectaddress.c b/src/backend/catalog/objectaddress.c
index f30c742d48f..301eeb7caca 100644
--- a/src/backend/catalog/objectaddress.c
+++ b/src/backend/catalog/objectaddress.c
@@ -1976,10 +1976,11 @@ get_object_address_publication_schema(List *object, bool missing_ok)
/* Find the publication schema mapping in syscache */
address.objectId =
- GetSysCacheOid2(PUBLICATIONNAMESPACEMAP,
+ GetSysCacheOid3(PUBLICATIONNAMESPACEMAP,
Anum_pg_publication_namespace_oid,
ObjectIdGetDatum(schemaid),
- ObjectIdGetDatum(pub->oid));
+ ObjectIdGetDatum(pub->oid),
+ BoolGetDatum(false)); /* FIXME */
if (!OidIsValid(address.objectId) && !missing_ok)
ereport(ERROR,
(errcode(ERRCODE_UNDEFINED_OBJECT),
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 25998fbb39b..8e26e0cee24 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -54,7 +54,8 @@ check_publication_add_relation(Relation targetrel)
{
/* Must be a regular or partitioned table */
if (RelationGetForm(targetrel)->relkind != RELKIND_RELATION &&
- RelationGetForm(targetrel)->relkind != RELKIND_PARTITIONED_TABLE)
+ RelationGetForm(targetrel)->relkind != RELKIND_PARTITIONED_TABLE &&
+ RelationGetForm(targetrel)->relkind != RELKIND_SEQUENCE)
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
errmsg("cannot add relation \"%s\" to publication",
@@ -131,7 +132,8 @@ static bool
is_publishable_class(Oid relid, Form_pg_class reltuple)
{
return (reltuple->relkind == RELKIND_RELATION ||
- reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+ reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+ reltuple->relkind == RELKIND_SEQUENCE) &&
!IsCatalogRelationOid(relid) &&
reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
relid >= FirstNormalObjectId;
@@ -205,7 +207,7 @@ is_schema_publication(Oid pubid)
ObjectIdGetDatum(pubid));
scan = systable_beginscan(pubschsrel,
- PublicationNamespacePnnspidPnpubidIndexId,
+ PublicationNamespacePnnspidPnpubidSeqIndexId,
true, NULL, 1, &scankey);
tup = systable_getnext(scan);
result = HeapTupleIsValid(tup);
@@ -419,7 +421,7 @@ publication_add_relation(Oid pubid, PublicationRelInfo *pri,
* Insert new publication / schema mapping.
*/
ObjectAddress
-publication_add_schema(Oid pubid, Oid schemaid, bool if_not_exists)
+publication_add_schema(Oid pubid, Oid schemaid, bool sequences, bool if_not_exists)
{
Relation rel;
HeapTuple tup;
@@ -438,9 +440,10 @@ publication_add_schema(Oid pubid, Oid schemaid, bool if_not_exists)
* duplicates, it's here just to provide nicer error message in common
* case. The real protection is the unique key on the catalog.
*/
- if (SearchSysCacheExists2(PUBLICATIONNAMESPACEMAP,
+ if (SearchSysCacheExists3(PUBLICATIONNAMESPACEMAP,
ObjectIdGetDatum(schemaid),
- ObjectIdGetDatum(pubid)))
+ ObjectIdGetDatum(pubid),
+ BoolGetDatum(sequences)))
{
table_close(rel, RowExclusiveLock);
@@ -466,6 +469,8 @@ publication_add_schema(Oid pubid, Oid schemaid, bool if_not_exists)
ObjectIdGetDatum(pubid);
values[Anum_pg_publication_namespace_pnnspid - 1] =
ObjectIdGetDatum(schemaid);
+ values[Anum_pg_publication_namespace_pnsequences - 1] =
+ BoolGetDatum(sequences);
tup = heap_form_tuple(RelationGetDescr(rel), values, nulls);
@@ -492,6 +497,7 @@ publication_add_schema(Oid pubid, Oid schemaid, bool if_not_exists)
* partitions.
*/
schemaRels = GetSchemaPublicationRelations(schemaid,
+ sequences,
PUBLICATION_PART_ALL);
InvalidatePublicationRels(schemaRels);
@@ -525,11 +531,14 @@ GetRelationPublications(Oid relid)
/*
* Gets list of relation oids for a publication.
*
- * This should only be used FOR TABLE publications, the FOR ALL TABLES
- * should use GetAllTablesPublicationRelations().
+ * This should only be used FOR TABLE / FOR SEQUENCE publications, the FOR
+ * ALL TABLES / SEQUENCES should use GetAllTablesPublicationRelations()
+ * and GetAllSequencesPublicationRelations().
+ *
+ * XXX It's a bit weird the pub_partopt does not really matter for sequences.
*/
List *
-GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
+GetPublicationRelations(Oid pubid, bool sequences, PublicationPartOpt pub_partopt)
{
List *result;
Relation pubrelsrel;
@@ -551,11 +560,21 @@ GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
result = NIL;
while (HeapTupleIsValid(tup = systable_getnext(scan)))
{
+ char relkind;
Form_pg_publication_rel pubrel;
pubrel = (Form_pg_publication_rel) GETSTRUCT(tup);
- result = GetPubPartitionOptionRelations(result, pub_partopt,
- pubrel->prrelid);
+ relkind = get_rel_relkind(pubrel->prrelid);
+
+ /*
+ * Handle tables and sequences, depending on the combination of relkind
+ * and sequences flag. If there's a mismatch, we ignore the relation.
+ */
+ if (sequences && (relkind == RELKIND_SEQUENCE))
+ result = lappend_oid(result, pubrel->prrelid);
+ else if (!sequences && (relkind != RELKIND_SEQUENCE))
+ result = GetPubPartitionOptionRelations(result, pub_partopt,
+ pubrel->prrelid);
}
systable_endscan(scan);
@@ -605,6 +624,43 @@ GetAllTablesPublications(void)
return result;
}
+/*
+ * Gets list of publication oids for publications marked as FOR ALL SEQUENCES.
+ */
+List *
+GetAllSequencesPublications(void)
+{
+ List *result;
+ Relation rel;
+ ScanKeyData scankey;
+ SysScanDesc scan;
+ HeapTuple tup;
+
+ /* Find all publications that are marked as for all sequences. */
+ rel = table_open(PublicationRelationId, AccessShareLock);
+
+ ScanKeyInit(&scankey,
+ Anum_pg_publication_puballsequences,
+ BTEqualStrategyNumber, F_BOOLEQ,
+ BoolGetDatum(true));
+
+ scan = systable_beginscan(rel, InvalidOid, false,
+ NULL, 1, &scankey);
+
+ result = NIL;
+ while (HeapTupleIsValid(tup = systable_getnext(scan)))
+ {
+ Oid oid = ((Form_pg_publication) GETSTRUCT(tup))->oid;
+
+ result = lappend_oid(result, oid);
+ }
+
+ systable_endscan(scan);
+ table_close(rel, AccessShareLock);
+
+ return result;
+}
+
/*
* Gets list of all relation published by FOR ALL TABLES publication(s).
*
@@ -671,28 +727,36 @@ GetAllTablesPublicationRelations(bool pubviaroot)
/*
* Gets the list of schema oids for a publication.
*
- * This should only be used FOR ALL TABLES IN SCHEMA publications.
+ * This should only be used FOR ALL TABLES IN SCHEMA and FOR ALL SEQUENCES
+ * publications.
+ *
+ * 'sequences' determines whether to get FOR TABLE or FOR SEQUENCES schemas
*/
List *
-GetPublicationSchemas(Oid pubid)
+GetPublicationSchemas(Oid pubid, bool sequences)
{
List *result = NIL;
Relation pubschsrel;
- ScanKeyData scankey;
+ ScanKeyData scankey[2];
SysScanDesc scan;
HeapTuple tup;
/* Find all schemas associated with the publication */
pubschsrel = table_open(PublicationNamespaceRelationId, AccessShareLock);
- ScanKeyInit(&scankey,
+ ScanKeyInit(&scankey[0],
Anum_pg_publication_namespace_pnpubid,
BTEqualStrategyNumber, F_OIDEQ,
ObjectIdGetDatum(pubid));
+ ScanKeyInit(&scankey[1],
+ Anum_pg_publication_namespace_pnsequences,
+ BTEqualStrategyNumber, F_BOOLEQ,
+ BoolGetDatum(sequences));
+
scan = systable_beginscan(pubschsrel,
- PublicationNamespacePnnspidPnpubidIndexId,
- true, NULL, 1, &scankey);
+ PublicationNamespacePnnspidPnpubidSeqIndexId,
+ true, NULL, 2, scankey);
while (HeapTupleIsValid(tup = systable_getnext(scan)))
{
Form_pg_publication_namespace pubsch;
@@ -710,6 +774,9 @@ GetPublicationSchemas(Oid pubid)
/*
* Gets the list of publication oids associated with a specified schema.
+ *
+ * XXX probably should handle pnsequences somehow, either by taking a
+ * parameter or returning the flag, somehow
*/
List *
GetSchemaPublications(Oid schemaid)
@@ -738,7 +805,8 @@ GetSchemaPublications(Oid schemaid)
* Get the list of publishable relation oids for a specified schema.
*/
List *
-GetSchemaPublicationRelations(Oid schemaid, PublicationPartOpt pub_partopt)
+GetSchemaPublicationRelations(Oid schemaid, bool sequences,
+ PublicationPartOpt pub_partopt)
{
Relation classRel;
ScanKeyData key[1];
@@ -763,11 +831,19 @@ GetSchemaPublicationRelations(Oid schemaid, PublicationPartOpt pub_partopt)
Oid relid = relForm->oid;
char relkind;
+ /* FIXME need to differentiate tables and sequences */
if (!is_publishable_class(relid, relForm))
continue;
relkind = get_rel_relkind(relid);
- if (relkind == RELKIND_RELATION)
+
+ /* Filter by relkind depending on FOR ALL TABLES / SEQUENCES */
+ if ((sequences && relkind != RELKIND_SEQUENCE) ||
+ (!sequences && relkind == RELKIND_SEQUENCE))
+ continue;
+
+ if ((relkind == RELKIND_RELATION) ||
+ (relkind == RELKIND_SEQUENCE))
result = lappend_oid(result, relid);
else if (relkind == RELKIND_PARTITIONED_TABLE)
{
@@ -792,13 +868,14 @@ GetSchemaPublicationRelations(Oid schemaid, PublicationPartOpt pub_partopt)
/*
* Gets the list of all relations published by FOR ALL TABLES IN SCHEMA
- * publication.
+ * or FOR ALL SEQUENCES IN SCHEMA publication.
*/
List *
-GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
+GetAllSchemaPublicationRelations(Oid pubid, bool sequences,
+ PublicationPartOpt pub_partopt)
{
List *result = NIL;
- List *pubschemalist = GetPublicationSchemas(pubid);
+ List *pubschemalist = GetPublicationSchemas(pubid, sequences);
ListCell *cell;
foreach(cell, pubschemalist)
@@ -806,13 +883,50 @@ GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
Oid schemaid = lfirst_oid(cell);
List *schemaRels = NIL;
- schemaRels = GetSchemaPublicationRelations(schemaid, pub_partopt);
+ schemaRels = GetSchemaPublicationRelations(schemaid, sequences,
+ pub_partopt);
result = list_concat(result, schemaRels);
}
return result;
}
+/*
+ * Gets list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+ Relation classRel;
+ ScanKeyData key[1];
+ TableScanDesc scan;
+ HeapTuple tuple;
+ List *result = NIL;
+
+ classRel = table_open(RelationRelationId, AccessShareLock);
+
+ ScanKeyInit(&key[0],
+ Anum_pg_class_relkind,
+ BTEqualStrategyNumber, F_CHAREQ,
+ CharGetDatum(RELKIND_SEQUENCE));
+
+ scan = table_beginscan_catalog(classRel, 1, key);
+
+ while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+ {
+ Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+ Oid relid = relForm->oid;
+
+ if (is_publishable_class(relid, relForm))
+ result = lappend_oid(result, relid);
+ }
+
+ table_endscan(scan);
+
+ table_close(classRel, AccessShareLock);
+ return result;
+}
+
/*
* Get publication using oid
*
@@ -835,10 +949,12 @@ GetPublication(Oid pubid)
pub->oid = pubid;
pub->name = pstrdup(NameStr(pubform->pubname));
pub->alltables = pubform->puballtables;
+ pub->allsequences = pubform->puballsequences;
pub->pubactions.pubinsert = pubform->pubinsert;
pub->pubactions.pubupdate = pubform->pubupdate;
pub->pubactions.pubdelete = pubform->pubdelete;
pub->pubactions.pubtruncate = pubform->pubtruncate;
+ pub->pubactions.pubsequence = pubform->pubsequence;
pub->pubviaroot = pubform->pubviaroot;
ReleaseSysCache(tup);
@@ -949,10 +1065,12 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
*schemarelids;
relids = GetPublicationRelations(publication->oid,
+ false, /* tables only */
publication->pubviaroot ?
PUBLICATION_PART_ROOT :
PUBLICATION_PART_LEAF);
schemarelids = GetAllSchemaPublicationRelations(publication->oid,
+ false, /* tables only */
publication->pubviaroot ?
PUBLICATION_PART_ROOT :
PUBLICATION_PART_LEAF);
@@ -988,3 +1106,71 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
SRF_RETURN_DONE(funcctx);
}
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+ FuncCallContext *funcctx;
+ char *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+ Publication *publication;
+ List *sequences;
+
+ /* stuff done only on the first call of the function */
+ if (SRF_IS_FIRSTCALL())
+ {
+ MemoryContext oldcontext;
+
+ /* create a function context for cross-call persistence */
+ funcctx = SRF_FIRSTCALL_INIT();
+
+ /* switch to memory context appropriate for multiple function calls */
+ oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+ publication = GetPublicationByName(pubname, false);
+
+ /*
+ * For FOR ALL SEQUENCES publications return all sequences in the
+ * database; otherwise combine the explicitly published sequences with
+ * those published via FOR ALL SEQUENCES IN SCHEMA.
+ */
+ if (publication->allsequences)
+ sequences = GetAllSequencesPublicationRelations();
+ else
+ {
+ List *relids,
+ *schemarelids;
+
+ relids = GetPublicationRelations(publication->oid,
+ true, /* sequences */
+ publication->pubviaroot ?
+ PUBLICATION_PART_ROOT :
+ PUBLICATION_PART_LEAF);
+ schemarelids = GetAllSchemaPublicationRelations(publication->oid,
+ true, /* sequences only */
+ publication->pubviaroot ?
+ PUBLICATION_PART_ROOT :
+ PUBLICATION_PART_LEAF);
+ sequences = list_concat_unique_oid(relids, schemarelids);
+ }
+
+ funcctx->user_fctx = (void *) sequences;
+
+ MemoryContextSwitchTo(oldcontext);
+ }
+
+ /* stuff done on every call of the function */
+ funcctx = SRF_PERCALL_SETUP();
+ sequences = (List *) funcctx->user_fctx;
+
+ if (funcctx->call_cntr < list_length(sequences))
+ {
+ Oid relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+ SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+ }
+
+ SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 40b7bca5a96..ffe387be1d3 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -374,6 +374,16 @@ CREATE VIEW pg_publication_tables AS
pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
WHERE C.oid = GPT.relid;
+CREATE VIEW pg_publication_sequences AS
+ SELECT
+ P.pubname AS pubname,
+ N.nspname AS schemaname,
+ C.relname AS sequencename
+ FROM pg_publication P,
+ LATERAL pg_get_publication_sequences(P.pubname) GPS,
+ pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+ WHERE C.oid = GPS.relid;
+
CREATE VIEW pg_locks AS
SELECT * FROM pg_lock_status() AS L;
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 16b8661a1b7..db78dc86646 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -16,6 +16,7 @@
#include "access/genam.h"
#include "access/htup_details.h"
+#include "access/relation.h"
#include "access/table.h"
#include "access/xact.h"
#include "catalog/catalog.h"
@@ -67,15 +68,16 @@ typedef struct rf_context
} rf_context;
static List *OpenRelIdList(List *relids);
-static List *OpenTableList(List *tables);
-static void CloseTableList(List *rels);
+static List *OpenRelationList(List *rels);
+static void CloseRelationList(List *rels);
static void LockSchemaList(List *schemalist);
-static void PublicationAddTables(Oid pubid, List *rels, bool if_not_exists,
+static void PublicationAddRelations(Oid pubid, List *rels, bool if_not_exists,
AlterPublicationStmt *stmt);
-static void PublicationDropTables(Oid pubid, List *rels, bool missing_ok);
-static void PublicationAddSchemas(Oid pubid, List *schemas, bool if_not_exists,
- AlterPublicationStmt *stmt);
-static void PublicationDropSchemas(Oid pubid, List *schemas, bool missing_ok);
+static void PublicationDropRelations(Oid pubid, List *rels, bool missing_ok);
+static void PublicationAddSchemas(Oid pubid, List *schemas, bool sequences,
+ bool if_not_exists, AlterPublicationStmt *stmt);
+static void PublicationDropSchemas(Oid pubid, List *schemas, bool sequences, bool missing_ok);
+
static void
parse_publication_options(ParseState *pstate,
@@ -95,6 +97,7 @@ parse_publication_options(ParseState *pstate,
pubactions->pubupdate = true;
pubactions->pubdelete = true;
pubactions->pubtruncate = true;
+ pubactions->pubsequence = true;
*publish_via_partition_root = false;
/* Parse options */
@@ -119,6 +122,7 @@ parse_publication_options(ParseState *pstate,
pubactions->pubupdate = false;
pubactions->pubdelete = false;
pubactions->pubtruncate = false;
+ pubactions->pubsequence = false;
*publish_given = true;
publish = defGetString(defel);
@@ -141,6 +145,8 @@ parse_publication_options(ParseState *pstate,
pubactions->pubdelete = true;
else if (strcmp(publish_opt, "truncate") == 0)
pubactions->pubtruncate = true;
+ else if (strcmp(publish_opt, "sequence") == 0)
+ pubactions->pubsequence = true;
else
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
@@ -167,7 +173,9 @@ parse_publication_options(ParseState *pstate,
*/
static void
ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
- List **rels, List **schemas)
+ List **tables, List **sequences,
+ List **tables_schemas, List **sequences_schemas,
+ List **schemas)
{
ListCell *cell;
PublicationObjSpec *pubobj;
@@ -185,12 +193,23 @@ ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
switch (pubobj->pubobjtype)
{
case PUBLICATIONOBJ_TABLE:
- *rels = lappend(*rels, pubobj->pubtable);
+ *tables = lappend(*tables, pubobj->pubtable);
+ break;
+ case PUBLICATIONOBJ_SEQUENCE:
+ *sequences = lappend(*sequences, pubobj->pubtable);
break;
case PUBLICATIONOBJ_TABLES_IN_SCHEMA:
schemaid = get_namespace_oid(pubobj->name, false);
/* Filter out duplicates if user specifies "sch1, sch1" */
+ *tables_schemas = list_append_unique_oid(*tables_schemas, schemaid);
+ *schemas = list_append_unique_oid(*schemas, schemaid);
+ break;
+ case PUBLICATIONOBJ_SEQUENCES_IN_SCHEMA:
+ schemaid = get_namespace_oid(pubobj->name, false);
+
+ /* Filter out duplicates if user specifies "sch1, sch1" */
+ *sequences_schemas = list_append_unique_oid(*sequences_schemas, schemaid);
*schemas = list_append_unique_oid(*schemas, schemaid);
break;
case PUBLICATIONOBJ_TABLES_IN_CUR_SCHEMA:
@@ -204,6 +223,21 @@ ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
list_free(search_path);
/* Filter out duplicates if user specifies "sch1, sch1" */
+ *tables_schemas = list_append_unique_oid(*tables_schemas, schemaid);
+ *schemas = list_append_unique_oid(*schemas, schemaid);
+ break;
+ case PUBLICATIONOBJ_SEQUENCES_IN_CUR_SCHEMA:
+ search_path = fetch_search_path(false);
+ if (search_path == NIL) /* nothing valid in search_path? */
+ ereport(ERROR,
+ errcode(ERRCODE_UNDEFINED_SCHEMA),
+ errmsg("no schema has been selected for CURRENT_SCHEMA"));
+
+ schemaid = linitial_oid(search_path);
+ list_free(search_path);
+
+ /* Filter out duplicates if user specifies "sch1, sch1" */
+ *sequences_schemas = list_append_unique_oid(*sequences_schemas, schemaid);
*schemas = list_append_unique_oid(*schemas, schemaid);
break;
default:
@@ -240,6 +274,14 @@ CheckObjSchemaNotAlreadyInPublication(List *rels, List *schemaidlist,
errdetail("Table \"%s\" in schema \"%s\" is already part of the publication, adding the same schema is not supported.",
RelationGetRelationName(rel),
get_namespace_name(relSchemaId)));
+ else if (checkobjtype == PUBLICATIONOBJ_SEQUENCES_IN_SCHEMA)
+ ereport(ERROR,
+ errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ errmsg("cannot add schema \"%s\" to publication",
+ get_namespace_name(relSchemaId)),
+ errdetail("SEquence \"%s\" in schema \"%s\" is already part of the publication, adding the same schema is not supported.",
+ RelationGetRelationName(rel),
+ get_namespace_name(relSchemaId)));
else if (checkobjtype == PUBLICATIONOBJ_TABLE)
ereport(ERROR,
errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -248,6 +290,14 @@ CheckObjSchemaNotAlreadyInPublication(List *rels, List *schemaidlist,
RelationGetRelationName(rel)),
errdetail("Table's schema \"%s\" is already part of the publication or part of the specified schema list.",
get_namespace_name(relSchemaId)));
+ else if (checkobjtype == PUBLICATIONOBJ_SEQUENCE)
+ ereport(ERROR,
+ errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ errmsg("cannot add relation \"%s.%s\" to publication",
+ get_namespace_name(relSchemaId),
+ RelationGetRelationName(rel)),
+ errdetail("Sequence's schema \"%s\" is already part of the publication or part of the specified schema list.",
+ get_namespace_name(relSchemaId)));
}
}
}
@@ -625,7 +675,10 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
bool publish_via_partition_root_given;
bool publish_via_partition_root;
AclResult aclresult;
- List *relations = NIL;
+ List *tables = NIL;
+ List *sequences = NIL;
+ List *tables_schemaidlist = NIL;
+ List *sequences_schemaidlist = NIL;
List *schemaidlist = NIL;
/* must have CREATE privilege on database */
@@ -672,6 +725,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
values[Anum_pg_publication_puballtables - 1] =
BoolGetDatum(stmt->for_all_tables);
+ values[Anum_pg_publication_puballsequences - 1] =
+ BoolGetDatum(stmt->for_all_sequences);
values[Anum_pg_publication_pubinsert - 1] =
BoolGetDatum(pubactions.pubinsert);
values[Anum_pg_publication_pubupdate - 1] =
@@ -680,6 +735,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
BoolGetDatum(pubactions.pubdelete);
values[Anum_pg_publication_pubtruncate - 1] =
BoolGetDatum(pubactions.pubtruncate);
+ values[Anum_pg_publication_pubsequence - 1] =
+ BoolGetDatum(pubactions.pubsequence);
values[Anum_pg_publication_pubviaroot - 1] =
BoolGetDatum(publish_via_partition_root);
@@ -704,38 +761,62 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
}
else
{
- ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
+ ObjectsInPublicationToOids(stmt->pubobjects, pstate,
+ &tables, &sequences,
+ &tables_schemaidlist,
+ &sequences_schemaidlist,
&schemaidlist);
/* FOR ALL TABLES IN SCHEMA requires superuser */
- if (list_length(schemaidlist) > 0 && !superuser())
+ if (list_length(tables_schemaidlist) > 0 && !superuser())
ereport(ERROR,
errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
errmsg("must be superuser to create FOR ALL TABLES IN SCHEMA publication"));
- if (list_length(relations) > 0)
+ if (list_length(tables) > 0)
{
List *rels;
- rels = OpenTableList(relations);
- CheckObjSchemaNotAlreadyInPublication(rels, schemaidlist,
+ rels = OpenRelationList(tables);
+ CheckObjSchemaNotAlreadyInPublication(rels, tables_schemaidlist,
PUBLICATIONOBJ_TABLE);
TransformPubWhereClauses(rels, pstate->p_sourcetext,
publish_via_partition_root);
- PublicationAddTables(puboid, rels, true, NULL);
- CloseTableList(rels);
+ PublicationAddRelations(puboid, rels, true, NULL);
+ CloseRelationList(rels);
}
- if (list_length(schemaidlist) > 0)
+ if (list_length(sequences) > 0)
+ {
+ List *rels;
+
+ rels = OpenRelationList(sequences);
+ CheckObjSchemaNotAlreadyInPublication(rels, sequences_schemaidlist,
+ PUBLICATIONOBJ_SEQUENCE);
+ PublicationAddRelations(puboid, rels, true, NULL);
+ CloseRelationList(rels);
+ }
+
+ if (list_length(tables_schemaidlist) > 0)
{
/*
* Schema lock is held until the publication is created to prevent
* concurrent schema deletion.
*/
- LockSchemaList(schemaidlist);
- PublicationAddSchemas(puboid, schemaidlist, true, NULL);
+ LockSchemaList(tables_schemaidlist);
+ PublicationAddSchemas(puboid, tables_schemaidlist, false, true, NULL);
+ }
+
+ if (list_length(sequences_schemaidlist) > 0)
+ {
+ /*
+ * Schema lock is held until the publication is created to prevent
+ * concurrent schema deletion.
+ */
+ LockSchemaList(sequences_schemaidlist);
+ PublicationAddSchemas(puboid, sequences_schemaidlist, true, true, NULL);
}
}
@@ -797,7 +878,7 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
LockDatabaseObject(PublicationRelationId, pubform->oid, 0,
AccessShareLock);
- root_relids = GetPublicationRelations(pubform->oid,
+ root_relids = GetPublicationRelations(pubform->oid, false,
PUBLICATION_PART_ROOT);
foreach(lc, root_relids)
@@ -856,6 +937,9 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
values[Anum_pg_publication_pubtruncate - 1] = BoolGetDatum(pubactions.pubtruncate);
replaces[Anum_pg_publication_pubtruncate - 1] = true;
+
+ values[Anum_pg_publication_pubsequence - 1] = BoolGetDatum(pubactions.pubsequence);
+ replaces[Anum_pg_publication_pubsequence - 1] = true;
}
if (publish_via_partition_root_given)
@@ -875,7 +959,7 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
pubform = (Form_pg_publication) GETSTRUCT(tup);
/* Invalidate the relcache. */
- if (pubform->puballtables)
+ if (pubform->puballtables) /* FIXME probably needs to handle puballsequences too? */
{
CacheInvalidateRelcacheAll();
}
@@ -890,7 +974,7 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
* trees, not just those explicitly mentioned in the publication.
*/
if (root_relids == NIL)
- relids = GetPublicationRelations(pubform->oid,
+ relids = GetPublicationRelations(pubform->oid, false,
PUBLICATION_PART_ALL);
else
{
@@ -904,7 +988,17 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
lfirst_oid(lc));
}
- schemarelids = GetAllSchemaPublicationRelations(pubform->oid,
+ /* tables */
+ schemarelids = GetAllSchemaPublicationRelations(pubform->oid, false,
+ PUBLICATION_PART_ALL);
+ relids = list_concat_unique_oid(relids, schemarelids);
+
+ /* sequences */
+ relids = list_concat_unique_oid(relids,
+ GetPublicationRelations(pubform->oid,
+ true, PUBLICATION_PART_ALL));
+
+ schemarelids = GetAllSchemaPublicationRelations(pubform->oid, true,
PUBLICATION_PART_ALL);
relids = list_concat_unique_oid(relids, schemarelids);
@@ -951,6 +1045,11 @@ AlterPublicationTables(AlterPublicationStmt *stmt, HeapTuple tup,
Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
Oid pubid = pubform->oid;
+ /*
+ * FIXME Do we need to test relkind of the relations? Otherwise we
+ * can do "ADD TABLE sequence" and it'll "work".
+ */
+
/*
* Nothing to do if no objects, except in SET: for that it is quite
* possible that user has not specified any tables in which case we need
@@ -959,7 +1058,7 @@ AlterPublicationTables(AlterPublicationStmt *stmt, HeapTuple tup,
if (!tables && stmt->action != AP_SetObjects)
return;
- rels = OpenTableList(tables);
+ rels = OpenRelationList(tables);
if (stmt->action == AP_AddObjects)
{
@@ -969,19 +1068,19 @@ AlterPublicationTables(AlterPublicationStmt *stmt, HeapTuple tup,
* Check if the relation is member of the existing schema in the
* publication or member of the schema list specified.
*/
- schemas = list_concat_copy(schemaidlist, GetPublicationSchemas(pubid));
+ schemas = list_concat_copy(schemaidlist, GetPublicationSchemas(pubid, false));
CheckObjSchemaNotAlreadyInPublication(rels, schemas,
PUBLICATIONOBJ_TABLE);
TransformPubWhereClauses(rels, queryString, pubform->pubviaroot);
- PublicationAddTables(pubid, rels, false, stmt);
+ PublicationAddRelations(pubid, rels, false, stmt);
}
else if (stmt->action == AP_DropObjects)
- PublicationDropTables(pubid, rels, false);
+ PublicationDropRelations(pubid, rels, false);
else /* AP_SetObjects */
{
- List *oldrelids = GetPublicationRelations(pubid,
+ List *oldrelids = GetPublicationRelations(pubid, false,
PUBLICATION_PART_ROOT);
List *delrels = NIL;
ListCell *oldlc;
@@ -1054,6 +1153,10 @@ AlterPublicationTables(AlterPublicationStmt *stmt, HeapTuple tup,
*/
if (!found)
{
+ /* don't drop sequences (not in list of tables) */
+ if (get_rel_relkind(oldrelid) == RELKIND_SEQUENCE)
+ continue;
+
oldrel = palloc(sizeof(PublicationRelInfo));
oldrel->whereClause = NULL;
oldrel->relation = table_open(oldrelid,
@@ -1063,18 +1166,18 @@ AlterPublicationTables(AlterPublicationStmt *stmt, HeapTuple tup,
}
/* And drop them. */
- PublicationDropTables(pubid, delrels, true);
+ PublicationDropRelations(pubid, delrels, true);
/*
* Don't bother calculating the difference for adding, we'll catch and
* skip existing ones when doing catalog update.
*/
- PublicationAddTables(pubid, rels, true, stmt);
+ PublicationAddRelations(pubid, rels, true, stmt);
- CloseTableList(delrels);
+ CloseRelationList(delrels);
}
- CloseTableList(rels);
+ CloseRelationList(rels);
}
/*
@@ -1084,7 +1187,8 @@ AlterPublicationTables(AlterPublicationStmt *stmt, HeapTuple tup,
*/
static void
AlterPublicationSchemas(AlterPublicationStmt *stmt,
- HeapTuple tup, List *schemaidlist)
+ HeapTuple tup, List *schemaidlist,
+ bool sequences)
{
Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
@@ -1106,20 +1210,20 @@ AlterPublicationSchemas(AlterPublicationStmt *stmt,
List *rels;
List *reloids;
- reloids = GetPublicationRelations(pubform->oid, PUBLICATION_PART_ROOT);
+ reloids = GetPublicationRelations(pubform->oid, sequences, PUBLICATION_PART_ROOT);
rels = OpenRelIdList(reloids);
CheckObjSchemaNotAlreadyInPublication(rels, schemaidlist,
PUBLICATIONOBJ_TABLES_IN_SCHEMA);
- CloseTableList(rels);
- PublicationAddSchemas(pubform->oid, schemaidlist, false, stmt);
+ CloseRelationList(rels);
+ PublicationAddSchemas(pubform->oid, schemaidlist, sequences, false, stmt);
}
else if (stmt->action == AP_DropObjects)
- PublicationDropSchemas(pubform->oid, schemaidlist, false);
+ PublicationDropSchemas(pubform->oid, schemaidlist, sequences, false);
else /* AP_SetObjects */
{
- List *oldschemaids = GetPublicationSchemas(pubform->oid);
+ List *oldschemaids = GetPublicationSchemas(pubform->oid, sequences);
List *delschemas = NIL;
/* Identify which schemas should be dropped */
@@ -1132,13 +1236,13 @@ AlterPublicationSchemas(AlterPublicationStmt *stmt,
LockSchemaList(delschemas);
/* And drop them */
- PublicationDropSchemas(pubform->oid, delschemas, true);
+ PublicationDropSchemas(pubform->oid, delschemas, sequences, true);
/*
* Don't bother calculating the difference for adding, we'll catch and
* skip existing ones when doing catalog update.
*/
- PublicationAddSchemas(pubform->oid, schemaidlist, true, stmt);
+ PublicationAddSchemas(pubform->oid, schemaidlist, sequences, true, stmt);
}
}
@@ -1148,12 +1252,13 @@ AlterPublicationSchemas(AlterPublicationStmt *stmt,
*/
static void
CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
- List *tables, List *schemaidlist)
+ List *tables, List *tables_schemaidlist,
+ List *sequences, List *sequences_schemaidlist)
{
Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
if ((stmt->action == AP_AddObjects || stmt->action == AP_SetObjects) &&
- schemaidlist && !superuser())
+ (tables_schemaidlist || sequences_schemaidlist) && !superuser())
ereport(ERROR,
(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
errmsg("must be superuser to add or set schemas")));
@@ -1162,13 +1267,24 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
* Check that user is allowed to manipulate the publication tables in
* schema
*/
- if (schemaidlist && pubform->puballtables)
+ if (tables_schemaidlist && pubform->puballtables)
ereport(ERROR,
(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
errmsg("publication \"%s\" is defined as FOR ALL TABLES",
NameStr(pubform->pubname)),
errdetail("Tables from schema cannot be added to, dropped from, or set on FOR ALL TABLES publications.")));
+ /*
+ * Check that user is allowed to manipulate the publication sequences in
+ * schema
+ */
+ if (sequences_schemaidlist && pubform->puballsequences)
+ ereport(ERROR,
+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("publication \"%s\" is defined as FOR ALL SEQUENCES",
+ NameStr(pubform->pubname)),
+ errdetail("Sequences from schema cannot be added to, dropped from, or set on FOR ALL SEQUENCES publications.")));
+
/* Check that user is allowed to manipulate the publication tables. */
if (tables && pubform->puballtables)
ereport(ERROR,
@@ -1176,6 +1292,118 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
errmsg("publication \"%s\" is defined as FOR ALL TABLES",
NameStr(pubform->pubname)),
errdetail("Tables cannot be added to or dropped from FOR ALL TABLES publications.")));
+
+ /* Check that user is allowed to manipulate the publication sequences. */
+ if (sequences && pubform->puballsequences)
+ ereport(ERROR,
+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("publication \"%s\" is defined as FOR ALL SEQUENCES",
+ NameStr(pubform->pubname)),
+ errdetail("Sequences cannot be added to or dropped from FOR ALL SEQUENCES publications.")));
+}
+
+/*
+ * Add or remove sequence to/from publication.
+ */
+static void
+AlterPublicationSequences(AlterPublicationStmt *stmt, HeapTuple tup,
+ List *sequences, List *schemaidlist)
+{
+ List *rels = NIL;
+ Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
+ Oid pubid = pubform->oid;
+
+ /*
+ * FIXME Do we need to test relkind of the relations? Otherwise we
+ * can do "ADD SEQUENCE table" and it'll "work".
+ */
+
+ /*
+ * It is quite possible that for the SET case user has not specified any
+ * sequences in which case we need to remove all the existing sequences.
+ */
+ if (!sequences && stmt->action != AP_SetObjects)
+ return;
+
+ rels = OpenRelationList(sequences);
+
+ if (stmt->action == AP_AddObjects)
+ {
+ List *schemas = NIL;
+
+ /*
+ * Check if the relation is member of the existing schema in the
+ * publication or member of the schema list specified.
+ */
+ schemas = list_concat_copy(schemaidlist, GetPublicationSchemas(pubid, true));
+ CheckObjSchemaNotAlreadyInPublication(rels, schemas,
+ PUBLICATIONOBJ_SEQUENCE);
+ PublicationAddRelations(pubid, rels, false, stmt);
+ }
+ else if (stmt->action == AP_DropObjects)
+ PublicationDropRelations(pubid, rels, false);
+ else /* DEFELEM_SET */
+ {
+ List *oldrelids = GetPublicationRelations(pubid, true,
+ PUBLICATION_PART_ROOT);
+ List *delrels = NIL;
+ ListCell *oldlc;
+
+ CheckObjSchemaNotAlreadyInPublication(rels, schemaidlist,
+ PUBLICATIONOBJ_SEQUENCE);
+
+ /* Calculate which relations to drop. */
+ foreach(oldlc, oldrelids)
+ {
+ Oid oldrelid = lfirst_oid(oldlc);
+ ListCell *newlc;
+ bool found = false;
+
+ foreach(newlc, rels)
+ {
+ PublicationRelInfo *newpubrel;
+
+ newpubrel = (PublicationRelInfo *) lfirst(newlc);
+ if (RelationGetRelid(newpubrel->relation) == oldrelid)
+ {
+ found = true;
+ break;
+ }
+ }
+ /* Not yet in the list, open it and add to the list */
+ if (!found)
+ {
+ Relation oldrel;
+ PublicationRelInfo *pubrel;
+
+ /* don't drop non-sequences (not in list of sequences) */
+ if (get_rel_relkind(oldrelid) != RELKIND_SEQUENCE)
+ continue;
+
+ /* Wrap relation into PublicationRelInfo */
+ oldrel = table_open(oldrelid, ShareUpdateExclusiveLock);
+
+ pubrel = palloc(sizeof(PublicationRelInfo));
+ pubrel->relation = oldrel;
+ pubrel->whereClause = NULL;
+
+ delrels = lappend(delrels, pubrel);
+ }
+ }
+
+ /* And drop them. */
+ PublicationDropRelations(pubid, delrels, true);
+
+ /*
+ * Don't bother calculating the difference for adding, we'll catch and
+ * skip existing ones when doing catalog update.
+ */
+ PublicationAddRelations(pubid, rels, true, stmt);
+
+ CloseRelationList(delrels);
+ }
+
+ CloseRelationList(rels);
}
/*
@@ -1213,14 +1441,22 @@ AlterPublication(ParseState *pstate, AlterPublicationStmt *stmt)
AlterPublicationOptions(pstate, stmt, rel, tup);
else
{
- List *relations = NIL;
+ List *tables = NIL;
+ List *sequences = NIL;
+ List *tables_schemaidlist = NIL;
+ List *sequences_schemaidlist = NIL;
List *schemaidlist = NIL;
Oid pubid = pubform->oid;
- ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
+ ObjectsInPublicationToOids(stmt->pubobjects, pstate,
+ &tables, &sequences,
+ &tables_schemaidlist,
+ &sequences_schemaidlist,
&schemaidlist);
- CheckAlterPublication(stmt, tup, relations, schemaidlist);
+ CheckAlterPublication(stmt, tup,
+ tables, tables_schemaidlist,
+ sequences, sequences_schemaidlist);
heap_freetuple(tup);
@@ -1248,9 +1484,14 @@ AlterPublication(ParseState *pstate, AlterPublicationStmt *stmt)
errmsg("publication \"%s\" does not exist",
stmt->pubname));
- AlterPublicationTables(stmt, tup, relations, schemaidlist,
+ AlterPublicationTables(stmt, tup, tables, tables_schemaidlist,
pstate->p_sourcetext);
- AlterPublicationSchemas(stmt, tup, schemaidlist);
+
+ AlterPublicationSequences(stmt, tup, sequences, sequences_schemaidlist);
+
+ AlterPublicationSchemas(stmt, tup, tables_schemaidlist, false);
+
+ AlterPublicationSchemas(stmt, tup, sequences_schemaidlist, true);
}
/* Cleanup. */
@@ -1318,7 +1559,7 @@ RemovePublicationById(Oid pubid)
pubform = (Form_pg_publication) GETSTRUCT(tup);
/* Invalidate relcache so that publication info is rebuilt. */
- if (pubform->puballtables)
+ if (pubform->puballtables) /* FIXME handle puballsequences too? */
CacheInvalidateRelcacheAll();
CatalogTupleDelete(rel, &tup->t_self);
@@ -1354,6 +1595,7 @@ RemovePublicationSchemaById(Oid psoid)
* partitions.
*/
schemaRels = GetSchemaPublicationRelations(pubsch->pnnspid,
+ pubsch->pnsequences,
PUBLICATION_PART_ALL);
InvalidatePublicationRels(schemaRels);
@@ -1396,17 +1638,17 @@ OpenRelIdList(List *relids)
* add them to a publication.
*/
static List *
-OpenTableList(List *tables)
+OpenRelationList(List *rels)
{
List *relids = NIL;
- List *rels = NIL;
+ List *result = NIL;
ListCell *lc;
List *relids_with_rf = NIL;
/*
* Open, share-lock, and check all the explicitly-specified relations
*/
- foreach(lc, tables)
+ foreach(lc, rels)
{
PublicationTable *t = lfirst_node(PublicationTable, lc);
bool recurse = t->relation->inh;
@@ -1443,7 +1685,7 @@ OpenTableList(List *tables)
pub_rel = palloc(sizeof(PublicationRelInfo));
pub_rel->relation = rel;
pub_rel->whereClause = t->whereClause;
- rels = lappend(rels, pub_rel);
+ result = lappend(result, pub_rel);
relids = lappend_oid(relids, myrelid);
if (t->whereClause)
@@ -1497,7 +1739,7 @@ OpenTableList(List *tables)
pub_rel->relation = rel;
/* child inherits WHERE clause from parent */
pub_rel->whereClause = t->whereClause;
- rels = lappend(rels, pub_rel);
+ result = lappend(result, pub_rel);
relids = lappend_oid(relids, childrelid);
if (t->whereClause)
@@ -1509,14 +1751,14 @@ OpenTableList(List *tables)
list_free(relids);
list_free(relids_with_rf);
- return rels;
+ return result;
}
/*
* Close all relations in the list.
*/
static void
-CloseTableList(List *rels)
+CloseRelationList(List *rels)
{
ListCell *lc;
@@ -1564,7 +1806,7 @@ LockSchemaList(List *schemalist)
* Add listed tables to the publication.
*/
static void
-PublicationAddTables(Oid pubid, List *rels, bool if_not_exists,
+PublicationAddRelations(Oid pubid, List *rels, bool if_not_exists,
AlterPublicationStmt *stmt)
{
ListCell *lc;
@@ -1598,7 +1840,7 @@ PublicationAddTables(Oid pubid, List *rels, bool if_not_exists,
* Remove listed tables from the publication.
*/
static void
-PublicationDropTables(Oid pubid, List *rels, bool missing_ok)
+PublicationDropRelations(Oid pubid, List *rels, bool missing_ok)
{
ObjectAddress obj;
ListCell *lc;
@@ -1638,8 +1880,8 @@ PublicationDropTables(Oid pubid, List *rels, bool missing_ok)
* Add listed schemas to the publication.
*/
static void
-PublicationAddSchemas(Oid pubid, List *schemas, bool if_not_exists,
- AlterPublicationStmt *stmt)
+PublicationAddSchemas(Oid pubid, List *schemas, bool sequences,
+ bool if_not_exists, AlterPublicationStmt *stmt)
{
ListCell *lc;
@@ -1650,7 +1892,7 @@ PublicationAddSchemas(Oid pubid, List *schemas, bool if_not_exists,
Oid schemaid = lfirst_oid(lc);
ObjectAddress obj;
- obj = publication_add_schema(pubid, schemaid, if_not_exists);
+ obj = publication_add_schema(pubid, schemaid, sequences, if_not_exists);
if (stmt)
{
EventTriggerCollectSimpleCommand(obj, InvalidObjectAddress,
@@ -1666,7 +1908,7 @@ PublicationAddSchemas(Oid pubid, List *schemas, bool if_not_exists,
* Remove listed schemas from the publication.
*/
static void
-PublicationDropSchemas(Oid pubid, List *schemas, bool missing_ok)
+PublicationDropSchemas(Oid pubid, List *schemas, bool sequences, bool missing_ok)
{
ObjectAddress obj;
ListCell *lc;
@@ -1676,10 +1918,11 @@ PublicationDropSchemas(Oid pubid, List *schemas, bool missing_ok)
{
Oid schemaid = lfirst_oid(lc);
- psid = GetSysCacheOid2(PUBLICATIONNAMESPACEMAP,
+ psid = GetSysCacheOid3(PUBLICATIONNAMESPACEMAP,
Anum_pg_publication_namespace_oid,
ObjectIdGetDatum(schemaid),
- ObjectIdGetDatum(pubid));
+ ObjectIdGetDatum(pubid),
+ BoolGetDatum(sequences));
if (!OidIsValid(psid))
{
if (missing_ok)
@@ -1734,6 +1977,13 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
NameStr(form->pubname)),
errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
+ if (form->puballsequences && !superuser_arg(newOwnerId))
+ ereport(ERROR,
+ (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+ errmsg("permission denied to change owner of publication \"%s\"",
+ NameStr(form->pubname)),
+ errhint("The owner of a FOR ALL SEQUENCES publication must be a superuser.")));
+
if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
ereport(ERROR,
(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index ab592ce2f15..f3641345b32 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -336,6 +336,85 @@ ResetSequence(Oid seq_relid)
relation_close(seq_rel, NoLock);
}
+/*
+ * Set a sequence to a specified internal state.
+ *
+ * The change is made transactionally, so that on failure of the current
+ * transaction, the sequence will be restored to its previous state.
+ * We do that by creating a whole new relfilenode for the sequence; so this
+ * works much like the rewriting forms of ALTER TABLE.
+ *
+ * Caller is assumed to have acquired AccessExclusiveLock on the sequence,
+ * which must not be released until end of transaction. Caller is also
+ * responsible for permissions checking.
+ */
+void
+SetSequence(Oid seq_relid, int64 last_value, int64 log_cnt, bool is_called)
+{
+ Relation seq_rel;
+ SeqTable elm;
+ Form_pg_sequence_data seq;
+ Buffer buf;
+ HeapTupleData seqdatatuple;
+ HeapTuple tuple;
+
+ /*
+ * Read the old sequence. This does a bit more work than really
+ * necessary, but it's simple, and we do want to double-check that it's
+ * indeed a sequence.
+ */
+ init_sequence(seq_relid, &elm, &seq_rel);
+ (void) read_seq_tuple(seq_rel, &buf, &seqdatatuple);
+
+ /*
+ * Copy the existing sequence tuple.
+ */
+ tuple = heap_copytuple(&seqdatatuple);
+
+ /* Now we're done with the old page */
+ UnlockReleaseBuffer(buf);
+
+ /*
+ * Modify the copied tuple to execute the restart (compare the RESTART
+ * action in AlterSequence)
+ */
+ seq = (Form_pg_sequence_data) GETSTRUCT(tuple);
+ seq->last_value = last_value;
+ seq->is_called = is_called;
+ seq->log_cnt = log_cnt;
+
+ /*
+ * Create a new storage file for the sequence.
+ */
+ RelationSetNewRelfilenode(seq_rel, seq_rel->rd_rel->relpersistence);
+
+ /*
+ * Ensure sequence's relfrozenxid is at 0, since it won't contain any
+ * unfrozen XIDs. Same with relminmxid, since a sequence will never
+ * contain multixacts.
+ */
+ Assert(seq_rel->rd_rel->relfrozenxid == InvalidTransactionId);
+ Assert(seq_rel->rd_rel->relminmxid == InvalidMultiXactId);
+
+ /*
+ * Insert the modified tuple into the new storage file.
+ *
+ * XXX Maybe this should also use created=true, just like the other places
+ * calling fill_seq_with_data. That's probably needed for correct cascading
+ * replication.
+ *
+ * XXX That'd mean all fill_seq_with_data callers use created=true, making
+ * the parameter unnecessary.
+ */
+ fill_seq_with_data(seq_rel, tuple);
+
+ /* Clear local cache so that we don't think we have cached numbers */
+ /* Note that we do not change the currval() state */
+ elm->cached = elm->last;
+
+ relation_close(seq_rel, NoLock);
+}
+
/*
* Initialize a sequence's relation with the specified tuple as content
*/
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 3ef6607d246..54daf2f2514 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -85,6 +85,7 @@ typedef struct SubOpts
} SubOpts;
static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
static void check_duplicates_in_publist(List *publist, Datum *datums);
static List *merge_publications(List *oldpublist, List *newpublist, bool addpub, const char *subname);
static void ReportSlotConnectionError(List *rstates, Oid subid, char *slotname, char *err);
@@ -495,9 +496,9 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
{
char *err;
WalReceiverConn *wrconn;
- List *tables;
+ List *relations;
ListCell *lc;
- char table_state;
+ char sync_state;
/* Try to connect to the publisher. */
wrconn = walrcv_connect(conninfo, true, stmt->subname, &err);
@@ -512,14 +513,17 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
* Set sync state based on if we were asked to do data copy or
* not.
*/
- table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
+ sync_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
/*
- * Get the table list from publisher and build local table status
- * info.
+ * Get the table and sequence list from publisher and build
+ * local relation sync status info.
*/
- tables = fetch_table_list(wrconn, publications);
- foreach(lc, tables)
+ relations = fetch_table_list(wrconn, publications);
+ relations = list_concat(relations,
+ fetch_sequence_list(wrconn, publications));
+
+ foreach(lc, relations)
{
RangeVar *rv = (RangeVar *) lfirst(lc);
Oid relid;
@@ -530,7 +534,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
CheckSubscriptionRelkind(get_rel_relkind(relid),
rv->schemaname, rv->relname);
- AddSubscriptionRelState(subid, relid, table_state,
+ AddSubscriptionRelState(subid, relid, sync_state,
InvalidXLogRecPtr);
}
@@ -556,12 +560,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
*
* Note that if tables were specified but copy_data is false
* then it is safe to enable two_phase up-front because those
- * tables are already initially in READY state. When the
- * subscription has no tables, we leave the twophase state as
- * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
+ * relations are already initially in READY state. When the
+ * subscription has no relations, we leave the twophase state
+ * as PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
* PUBLICATION to work.
*/
- if (opts.twophase && !opts.copy_data && tables != NIL)
+ if (opts.twophase && !opts.copy_data && relations != NIL)
twophase_enabled = true;
walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -631,8 +635,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data)
PG_TRY();
{
- /* Get the table list from publisher. */
+ /* Get the list of relations from publisher. */
pubrel_names = fetch_table_list(wrconn, sub->publications);
+ pubrel_names = list_concat(pubrel_names,
+ fetch_sequence_list(wrconn, sub->publications));
/* Get local table list. */
subrel_states = GetSubscriptionRelations(sub->oid);
@@ -797,6 +803,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data)
ReplicationSlotDropAtPubNode(wrconn, syncslotname, true);
}
}
+
}
PG_FINALLY();
{
@@ -1616,6 +1623,75 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
return tablelist;
}
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+ WalRcvExecResult *res;
+ StringInfoData cmd;
+ TupleTableSlot *slot;
+ Oid tableRow[2] = {TEXTOID, TEXTOID};
+ ListCell *lc;
+ bool first;
+ List *tablelist = NIL;
+
+ Assert(list_length(publications) > 0);
+
+ initStringInfo(&cmd);
+ appendStringInfoString(&cmd, "SELECT DISTINCT s.schemaname, s.sequencename\n"
+ " FROM pg_catalog.pg_publication_sequences s\n"
+ " WHERE s.pubname IN (");
+ first = true;
+ foreach(lc, publications)
+ {
+ char *pubname = strVal(lfirst(lc));
+
+ if (first)
+ first = false;
+ else
+ appendStringInfoString(&cmd, ", ");
+
+ appendStringInfoString(&cmd, quote_literal_cstr(pubname));
+ }
+ appendStringInfoChar(&cmd, ')');
+
+ res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+ pfree(cmd.data);
+
+ if (res->status != WALRCV_OK_TUPLES)
+ ereport(ERROR,
+ (errmsg("could not receive list of replicated sequences from the publisher: %s",
+ res->err)));
+
+ /* Process the returned sequences. */
+ slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+ while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+ {
+ char *nspname;
+ char *relname;
+ bool isnull;
+ RangeVar *rv;
+
+ nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+ Assert(!isnull);
+ relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+ Assert(!isnull);
+
+ rv = makeRangeVar(nspname, relname, -1);
+ tablelist = lappend(tablelist, rv);
+
+ ExecClearTuple(slot);
+ }
+ ExecDropSingleTupleTableSlot(slot);
+
+ walrcv_clear_result(res);
+
+ return tablelist;
+}
+
/*
* This is to report the connection failure while dropping replication slots.
* Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index 09f78f22441..cbcf19dd9b7 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -635,7 +635,7 @@ void
CheckSubscriptionRelkind(char relkind, const char *nspname,
const char *relname)
{
- if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+ if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE && relkind != RELKIND_SEQUENCE)
ereport(ERROR,
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index d4f8455a2bd..cc936328a32 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -4863,6 +4863,7 @@ _copyCreatePublicationStmt(const CreatePublicationStmt *from)
COPY_NODE_FIELD(options);
COPY_NODE_FIELD(pubobjects);
COPY_SCALAR_FIELD(for_all_tables);
+ COPY_SCALAR_FIELD(for_all_sequences);
return newnode;
}
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index f1002afe7a0..692cf0e1352 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -2346,6 +2346,7 @@ _equalAlterPublicationStmt(const AlterPublicationStmt *a,
COMPARE_NODE_FIELD(options);
COMPARE_NODE_FIELD(pubobjects);
COMPARE_SCALAR_FIELD(for_all_tables);
+ COMPARE_SCALAR_FIELD(for_all_sequences);
COMPARE_SCALAR_FIELD(action);
return true;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index a03b33b53bd..6ff0ddd62bb 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -9701,12 +9701,9 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
*
* CREATE PUBLICATION FOR ALL TABLES [WITH options]
*
- * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
- *
- * pub_obj is one of:
+ * CREATE PUBLICATION FOR ALL SEQUENCES [WITH options]
*
- * TABLE table [, ...]
- * ALL TABLES IN SCHEMA schema [, ...]
+ * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
*
*****************************************************************************/
@@ -9726,6 +9723,14 @@ CreatePublicationStmt:
n->for_all_tables = true;
$$ = (Node *)n;
}
+ | CREATE PUBLICATION name FOR ALL SEQUENCES opt_definition
+ {
+ CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
+ n->pubname = $3;
+ n->options = $7;
+ n->for_all_sequences = true;
+ $$ = (Node *)n;
+ }
| CREATE PUBLICATION name FOR pub_obj_list opt_definition
{
CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
@@ -9772,6 +9777,26 @@ PublicationObjSpec:
$$->pubobjtype = PUBLICATIONOBJ_TABLES_IN_CUR_SCHEMA;
$$->location = @5;
}
+ | SEQUENCE relation_expr
+ {
+ $$ = makeNode(PublicationObjSpec);
+ $$->pubobjtype = PUBLICATIONOBJ_SEQUENCE;
+ $$->pubtable = makeNode(PublicationTable);
+ $$->pubtable->relation = $2;
+ }
+ | ALL SEQUENCES IN_P SCHEMA ColId
+ {
+ $$ = makeNode(PublicationObjSpec);
+ $$->pubobjtype = PUBLICATIONOBJ_SEQUENCES_IN_SCHEMA;
+ $$->name = $5;
+ $$->location = @5;
+ }
+ | ALL SEQUENCES IN_P SCHEMA CURRENT_SCHEMA
+ {
+ $$ = makeNode(PublicationObjSpec);
+ $$->pubobjtype = PUBLICATIONOBJ_SEQUENCES_IN_CUR_SCHEMA;
+ $$->location = @5;
+ }
| ColId OptWhereClause
{
$$ = makeNode(PublicationObjSpec);
@@ -9836,11 +9861,6 @@ pub_obj_list: PublicationObjSpec
*
* ALTER PUBLICATION name SET pub_obj [, ...]
*
- * pub_obj is one of:
- *
- * TABLE table_name [, ...]
- * ALL TABLES IN SCHEMA schema_name [, ...]
- *
*****************************************************************************/
AlterPublicationStmt:
@@ -10124,6 +10144,12 @@ UnlistenStmt:
}
;
+/*
+ * FIXME
+ *
+ * opt_publication_for_sequences and publication_for_sequences should be
+ * added, mirroring the corresponding FOR TABLES rules
+ */
/*****************************************************************************
*
diff --git a/src/backend/replication/logical/proto.c b/src/backend/replication/logical/proto.c
index c9b0eeefd7e..3dbe85d61a2 100644
--- a/src/backend/replication/logical/proto.c
+++ b/src/backend/replication/logical/proto.c
@@ -648,6 +648,56 @@ logicalrep_write_message(StringInfo out, TransactionId xid, XLogRecPtr lsn,
pq_sendbytes(out, message, sz);
}
+/*
+ * Write SEQUENCE to stream
+ */
+void
+logicalrep_write_sequence(StringInfo out, Relation rel, TransactionId xid,
+ XLogRecPtr lsn, bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ uint8 flags = 0;
+ char *relname;
+
+ pq_sendbyte(out, LOGICAL_REP_MSG_SEQUENCE);
+
+ /* transaction ID (if not valid, we're not streaming) */
+ if (TransactionIdIsValid(xid))
+ pq_sendint32(out, xid);
+
+ pq_sendint8(out, flags);
+ pq_sendint64(out, lsn);
+
+ logicalrep_write_namespace(out, RelationGetNamespace(rel));
+ relname = RelationGetRelationName(rel);
+ pq_sendstring(out, relname);
+
+ pq_sendint8(out, transactional);
+ pq_sendint64(out, last_value);
+ pq_sendint64(out, log_cnt);
+ pq_sendint8(out, is_called);
+}
+
+/*
+ * Read SEQUENCE from the stream.
+ */
+void
+logicalrep_read_sequence(StringInfo in, LogicalRepSequence *seqdata)
+{
+ /* XXX skipping flags and lsn */
+ pq_getmsgint(in, 1);
+ pq_getmsgint64(in);
+
+ /* Read relation name from stream */
+ seqdata->nspname = pstrdup(logicalrep_read_namespace(in));
+ seqdata->seqname = pstrdup(pq_getmsgstring(in));
+
+ seqdata->transactional = pq_getmsgint(in, 1);
+ seqdata->last_value = pq_getmsgint64(in);
+ seqdata->log_cnt = pq_getmsgint64(in);
+ seqdata->is_called = pq_getmsgint(in, 1);
+}
+
/*
* Write relation description to the output stream.
*/
@@ -1203,6 +1253,8 @@ logicalrep_message_type(LogicalRepMsgType action)
return "STREAM ABORT";
case LOGICAL_REP_MSG_STREAM_PREPARE:
return "STREAM PREPARE";
+ case LOGICAL_REP_MSG_SEQUENCE:
+ return "SEQUENCE";
}
elog(ERROR, "invalid logical replication message type \"%c\"", action);
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 1659964571c..8f84598d322 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -100,6 +100,7 @@
#include "catalog/pg_subscription_rel.h"
#include "catalog/pg_type.h"
#include "commands/copy.h"
+#include "commands/sequence.h"
#include "miscadmin.h"
#include "parser/parse_relation.h"
#include "pgstat.h"
@@ -359,6 +360,12 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
*
* If the synchronization position is reached (SYNCDONE), then the table can
* be marked as READY and is no longer tracked.
+ *
+ * XXX This needs to handle sequences too - after AlterSubscription_refresh
+ * starts caring about sequences, GetSubscriptionNotReadyRelations won't
+ * return just tables, and we'll have to sync them here. Not sure it's worth
+ * creating a new "sync" worker per sequence; maybe we should just sync them
+ * in the current process (it's pretty lightweight).
*/
static void
process_syncing_tables_for_apply(XLogRecPtr current_lsn)
@@ -999,6 +1006,99 @@ copy_table(Relation rel)
logicalrep_rel_close(relmapentry, NoLock);
}
+
+
+/*
+ * Fetch the current state (last_value, log_cnt, is_called) of a sequence
+ * from the publisher, by reading it over the walreceiver connection.
+ */
+static void
+fetch_sequence_data(char *nspname, char *relname,
+ int64 *last_value, int64 *log_cnt, bool *is_called)
+{
+ WalRcvExecResult *res;
+ StringInfoData cmd;
+ TupleTableSlot *slot;
+ Oid tableRow[3] = {INT8OID, INT8OID, BOOLOID};
+
+ initStringInfo(&cmd);
+ appendStringInfo(&cmd, "SELECT last_value, log_cnt, is_called\n"
+ " FROM %s", quote_qualified_identifier(nspname, relname));
+
+ res = walrcv_exec(LogRepWorkerWalRcvConn, cmd.data, 3, tableRow);
+ pfree(cmd.data);
+
+ if (res->status != WALRCV_OK_TUPLES)
+ ereport(ERROR,
+ (errmsg("could not receive list of replicated tables from the publisher: %s",
+ res->err)));
+
+ /* Process the sequence. */
+ slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+ while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+ {
+ bool isnull;
+
+ *last_value = DatumGetInt64(slot_getattr(slot, 1, &isnull));
+ Assert(!isnull);
+
+ *log_cnt = DatumGetInt64(slot_getattr(slot, 2, &isnull));
+ Assert(!isnull);
+
+ *is_called = DatumGetBool(slot_getattr(slot, 3, &isnull));
+ Assert(!isnull);
+
+ ExecClearTuple(slot);
+ }
+ ExecDropSingleTupleTableSlot(slot);
+
+ walrcv_clear_result(res);
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Caller is responsible for locking the local relation.
+ */
+static void
+copy_sequence(Relation rel)
+{
+ LogicalRepRelMapEntry *relmapentry;
+ LogicalRepRelation lrel;
+ List *qual = NIL;
+ int64 last_value = 0,
+ log_cnt = 0;
+ bool is_called = false;
+
+ /* Get the publisher relation info. */
+ fetch_remote_table_info(get_namespace_name(RelationGetNamespace(rel)),
+ RelationGetRelationName(rel), &lrel, &qual);
+
+ /* sequences don't have row filters */
+ Assert(!qual);
+
+ /* Put the relation into relmap. */
+ logicalrep_relmap_update(&lrel);
+
+ /* Map the publisher relation to local one. */
+ relmapentry = logicalrep_rel_open(lrel.remoteid, NoLock);
+ Assert(rel == relmapentry->localrel);
+
+ Assert(lrel.relkind == RELKIND_SEQUENCE);
+
+ fetch_sequence_data(lrel.nspname, lrel.relname, &last_value, &log_cnt, &is_called);
+
+ SetSequence(RelationGetRelid(rel), last_value, log_cnt, is_called);
+
+ logicalrep_rel_close(relmapentry, NoLock);
+}
+
/*
* Determine the tablesync slot name.
*
@@ -1260,10 +1360,20 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
originname)));
}
- /* Now do the initial data copy */
- PushActiveSnapshot(GetTransactionSnapshot());
- copy_table(rel);
- PopActiveSnapshot();
+ if (get_rel_relkind(RelationGetRelid(rel)) == RELKIND_SEQUENCE)
+ {
+ /* Now do the initial sequence copy */
+ PushActiveSnapshot(GetTransactionSnapshot());
+ copy_sequence(rel);
+ PopActiveSnapshot();
+ }
+ else
+ {
+ /* Now do the initial data copy */
+ PushActiveSnapshot(GetTransactionSnapshot());
+ copy_table(rel);
+ PopActiveSnapshot();
+ }
res = walrcv_exec(LogRepWorkerWalRcvConn, "COMMIT", 0, NULL);
if (res->status != WALRCV_OK_COMMAND)
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 8653e1d8402..7181520023b 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -142,6 +142,7 @@
#include "catalog/pg_subscription.h"
#include "catalog/pg_subscription_rel.h"
#include "catalog/pg_tablespace.h"
+#include "commands/sequence.h"
#include "commands/tablecmds.h"
#include "commands/tablespace.h"
#include "commands/trigger.h"
@@ -1095,6 +1096,57 @@ apply_handle_origin(StringInfo s)
errmsg_internal("ORIGIN message sent out of order")));
}
+/*
+ * Handle SEQUENCE message.
+ */
+static void
+apply_handle_sequence(StringInfo s)
+{
+ LogicalRepSequence seq;
+ Oid relid;
+
+ if (handle_streamed_transaction(LOGICAL_REP_MSG_SEQUENCE, s))
+ return;
+
+ logicalrep_read_sequence(s, &seq);
+
+ /*
+ * Transactional sequence updates arrive as part of a remote transaction,
+ * while non-transactional updates must not be part of one (and there
+ * must be no local transaction in progress either).
+ */
+ Assert((!seq.transactional) || in_remote_transaction);
+ Assert(!(!seq.transactional && in_remote_transaction));
+ Assert(!(!seq.transactional && IsTransactionState()));
+
+ /*
+ * Make sure we're in a transaction (needed by SetSequence). For
+ * non-transactional updates this always starts a new transaction, which
+ * we commit at the end.
+ */
+ if (!IsTransactionState())
+ {
+ StartTransactionCommand();
+ maybe_reread_subscription();
+ }
+
+ relid = RangeVarGetRelid(makeRangeVar(seq.nspname,
+ seq.seqname, -1),
+ RowExclusiveLock, false);
+
+ /* Lock the sequence in AccessExclusiveLock mode, as expected by SetSequence. */
+ LockRelationOid(relid, AccessExclusiveLock);
+
+ /* apply the sequence change */
+ SetSequence(relid, seq.last_value, seq.log_cnt, seq.is_called);
+
+ /*
+ * Commit the transaction we started above (we only do this when not in a
+ * remote transaction, i.e. for non-transactional sequence updates).
+ */
+ if (!in_remote_transaction)
+ CommitTransactionCommand();
+}
+
/*
* Handle STREAM START message.
*/
@@ -2423,6 +2475,10 @@ apply_dispatch(StringInfo s)
*/
break;
+ case LOGICAL_REP_MSG_SEQUENCE:
+ apply_handle_sequence(s);
+ return;
+
case LOGICAL_REP_MSG_STREAM_START:
apply_handle_stream_start(s);
break;
diff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c
index ea57a0477f0..60bbc524b19 100644
--- a/src/backend/replication/pgoutput/pgoutput.c
+++ b/src/backend/replication/pgoutput/pgoutput.c
@@ -53,6 +53,10 @@ static void pgoutput_message(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn, XLogRecPtr message_lsn,
bool transactional, const char *prefix,
Size sz, const char *message);
+static void pgoutput_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation relation, bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called);
static bool pgoutput_origin_filter(LogicalDecodingContext *ctx,
RepOriginId origin_id);
static void pgoutput_begin_prepare_txn(LogicalDecodingContext *ctx,
@@ -208,6 +212,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->change_cb = pgoutput_change;
cb->truncate_cb = pgoutput_truncate;
cb->message_cb = pgoutput_message;
+ cb->sequence_cb = pgoutput_sequence;
cb->commit_cb = pgoutput_commit_txn;
cb->begin_prepare_cb = pgoutput_begin_prepare_txn;
@@ -224,6 +229,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->stream_commit_cb = pgoutput_stream_commit;
cb->stream_change_cb = pgoutput_change;
cb->stream_message_cb = pgoutput_message;
+ cb->stream_sequence_cb = pgoutput_sequence;
cb->stream_truncate_cb = pgoutput_truncate;
/* transaction streaming - two-phase commit */
cb->stream_prepare_cb = pgoutput_stream_prepare_txn;
@@ -237,6 +243,7 @@ parse_output_parameters(List *options, PGOutputData *data)
bool publication_names_given = false;
bool binary_option_given = false;
bool messages_option_given = false;
+ bool sequences_option_given = false;
bool streaming_given = false;
bool two_phase_option_given = false;
@@ -244,6 +251,7 @@ parse_output_parameters(List *options, PGOutputData *data)
data->streaming = false;
data->messages = false;
data->two_phase = false;
+ data->sequences = true;
foreach(lc, options)
{
@@ -312,6 +320,16 @@ parse_output_parameters(List *options, PGOutputData *data)
data->messages = defGetBoolean(defel);
}
+ else if (strcmp(defel->defname, "sequences") == 0)
+ {
+ if (sequences_option_given)
+ ereport(ERROR,
+ (errcode(ERRCODE_SYNTAX_ERROR),
+ errmsg("conflicting or redundant options")));
+ sequences_option_given = true;
+
+ data->sequences = defGetBoolean(defel);
+ }
else if (strcmp(defel->defname, "streaming") == 0)
{
if (streaming_given)
@@ -1440,6 +1458,51 @@ pgoutput_message(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
OutputPluginWrite(ctx, true);
}
+static void
+pgoutput_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation relation, bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ PGOutputData *data = (PGOutputData *) ctx->output_plugin_private;
+ TransactionId xid = InvalidTransactionId;
+ RelationSyncEntry *relentry;
+
+ if (!data->sequences)
+ return;
+
+ if (!is_publishable_relation(relation))
+ return;
+
+ /*
+ * Remember the xid for the message in streaming mode. See
+ * pgoutput_change.
+ */
+ if (in_streaming)
+ xid = txn->xid;
+
+ relentry = get_rel_sync_entry(data, relation);
+
+ /*
+ * First check the sequence filter.
+ *
+ * We handle just REORDER_BUFFER_CHANGE_SEQUENCE here.
+ */
+ if (!relentry->pubactions.pubsequence)
+ return;
+
+ OutputPluginPrepareWrite(ctx, true);
+ logicalrep_write_sequence(ctx->out,
+ relation,
+ xid,
+ sequence_lsn,
+ transactional,
+ last_value,
+ log_cnt,
+ is_called);
+ OutputPluginWrite(ctx, true);
+}
+
/*
* Currently we always forward.
*/
@@ -1725,7 +1788,8 @@ get_rel_sync_entry(PGOutputData *data, Relation relation)
entry->schema_sent = false;
entry->streamed_txns = NIL;
entry->pubactions.pubinsert = entry->pubactions.pubupdate =
- entry->pubactions.pubdelete = entry->pubactions.pubtruncate = false;
+ entry->pubactions.pubdelete = entry->pubactions.pubtruncate =
+ entry->pubactions.pubsequence = false;
entry->new_slot = NULL;
entry->old_slot = NULL;
memset(entry->exprstate, 0, sizeof(entry->exprstate));
@@ -1751,6 +1815,7 @@ get_rel_sync_entry(PGOutputData *data, Relation relation)
bool am_partition = get_rel_relispartition(relid);
char relkind = get_rel_relkind(relid);
List *rel_publications = NIL;
+ bool is_sequence = (relkind == RELKIND_SEQUENCE);
/* Reload publications if needed before use. */
if (!publications_valid)
@@ -1779,6 +1844,7 @@ get_rel_sync_entry(PGOutputData *data, Relation relation)
entry->pubactions.pubupdate = false;
entry->pubactions.pubdelete = false;
entry->pubactions.pubtruncate = false;
+ entry->pubactions.pubsequence = false;
/*
* Tuple slots cleanups. (Will be rebuilt later if needed).
@@ -1815,12 +1881,23 @@ get_rel_sync_entry(PGOutputData *data, Relation relation)
Publication *pub = lfirst(lc);
bool publish = false;
- if (pub->alltables)
+ if (pub->alltables && (!is_sequence))
{
publish = true;
if (pub->pubviaroot && am_partition)
publish_as_relid = llast_oid(get_partition_ancestors(relid));
}
+ else if (pub->allsequences && is_sequence)
+ {
+ publish = true;
+ }
+
+ /* if a sequence, just cross-check the list of publications */
+ if (!publish && is_sequence)
+ {
+ if (list_member_oid(pubids, pub->oid))
+ publish = true;
+ }
if (!publish)
{
@@ -1866,6 +1943,7 @@ get_rel_sync_entry(PGOutputData *data, Relation relation)
entry->pubactions.pubupdate |= pub->pubactions.pubupdate;
entry->pubactions.pubdelete |= pub->pubactions.pubdelete;
entry->pubactions.pubtruncate |= pub->pubactions.pubtruncate;
+ entry->pubactions.pubsequence |= pub->pubactions.pubsequence;
rel_publications = lappend(rel_publications, pub);
}
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index fccffce5729..c29d822313d 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -5587,7 +5587,15 @@ RelationBuildPublicationDesc(Relation relation, PublicationDesc *pubdesc)
GetSchemaPublications(schemaid));
}
}
- puboids = list_concat_unique_oid(puboids, GetAllTablesPublications());
+
+ /*
+ * Consider also FOR ALL TABLES and FOR ALL SEQUENCES publications,
+ * depending on the relkind of the relation.
+ */
+ if (relation->rd_rel->relkind == RELKIND_SEQUENCE)
+ puboids = list_concat_unique_oid(puboids, GetAllSequencesPublications());
+ else
+ puboids = list_concat_unique_oid(puboids, GetAllTablesPublications());
foreach(lc, puboids)
{
@@ -5606,6 +5614,7 @@ RelationBuildPublicationDesc(Relation relation, PublicationDesc *pubdesc)
pubdesc->pubactions.pubupdate |= pubform->pubupdate;
pubdesc->pubactions.pubdelete |= pubform->pubdelete;
pubdesc->pubactions.pubtruncate |= pubform->pubtruncate;
+ pubdesc->pubactions.pubsequence |= pubform->pubsequence;
/*
* Check if all columns referenced in the filter expression are part of
@@ -5631,6 +5640,8 @@ RelationBuildPublicationDesc(Relation relation, PublicationDesc *pubdesc)
* If we know everything is replicated and the row filter is invalid
* for update and delete, there is no point to check for other
* publications.
+ *
+ * XXX We ignore sequences here, because those don't use row filters.
*/
if (pubdesc->pubactions.pubinsert && pubdesc->pubactions.pubupdate &&
pubdesc->pubactions.pubdelete && pubdesc->pubactions.pubtruncate &&
diff --git a/src/backend/utils/cache/syscache.c b/src/backend/utils/cache/syscache.c
index f4e7819f1e2..763aac81596 100644
--- a/src/backend/utils/cache/syscache.c
+++ b/src/backend/utils/cache/syscache.c
@@ -630,12 +630,12 @@ static const struct cachedesc cacheinfo[] = {
64
},
{PublicationNamespaceRelationId, /* PUBLICATIONNAMESPACEMAP */
- PublicationNamespacePnnspidPnpubidIndexId,
- 2,
+ PublicationNamespacePnnspidPnpubidSeqIndexId,
+ 3,
{
Anum_pg_publication_namespace_pnnspid,
Anum_pg_publication_namespace_pnpubid,
- 0,
+ Anum_pg_publication_namespace_pnsequences,
0
},
64
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index e69dcf8a484..35a8fc76310 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -3788,10 +3788,12 @@ getPublications(Archive *fout, int *numPublications)
int i_pubname;
int i_pubowner;
int i_puballtables;
+ int i_puballsequences;
int i_pubinsert;
int i_pubupdate;
int i_pubdelete;
int i_pubtruncate;
+ int i_pubsequence;
int i_pubviaroot;
int i,
ntups;
@@ -3807,23 +3809,29 @@ getPublications(Archive *fout, int *numPublications)
resetPQExpBuffer(query);
/* Get the publications. */
- if (fout->remoteVersion >= 130000)
+ if (fout->remoteVersion >= 150000)
+ appendPQExpBuffer(query,
+ "SELECT p.tableoid, p.oid, p.pubname, "
+ "p.pubowner, "
+ "p.puballtables, p.puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubsequence, p.pubviaroot "
+ "FROM pg_publication p");
+ else if (fout->remoteVersion >= 130000)
appendPQExpBuffer(query,
"SELECT p.tableoid, p.oid, p.pubname, "
"p.pubowner, "
- "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+ "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubsequence, p.pubviaroot "
"FROM pg_publication p");
else if (fout->remoteVersion >= 110000)
appendPQExpBuffer(query,
"SELECT p.tableoid, p.oid, p.pubname, "
"p.pubowner, "
- "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
+ "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubsequence, false AS pubviaroot "
"FROM pg_publication p");
else
appendPQExpBuffer(query,
"SELECT p.tableoid, p.oid, p.pubname, "
"p.pubowner, "
- "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
+ "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubsequence, false AS pubviaroot "
"FROM pg_publication p");
res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
@@ -3835,10 +3843,12 @@ getPublications(Archive *fout, int *numPublications)
i_pubname = PQfnumber(res, "pubname");
i_pubowner = PQfnumber(res, "pubowner");
i_puballtables = PQfnumber(res, "puballtables");
+ i_puballsequences = PQfnumber(res, "puballsequences");
i_pubinsert = PQfnumber(res, "pubinsert");
i_pubupdate = PQfnumber(res, "pubupdate");
i_pubdelete = PQfnumber(res, "pubdelete");
i_pubtruncate = PQfnumber(res, "pubtruncate");
+ i_pubsequence = PQfnumber(res, "pubsequence");
i_pubviaroot = PQfnumber(res, "pubviaroot");
pubinfo = pg_malloc(ntups * sizeof(PublicationInfo));
@@ -3854,6 +3864,8 @@ getPublications(Archive *fout, int *numPublications)
pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
pubinfo[i].puballtables =
(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+ pubinfo[i].puballsequences =
+ (strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
pubinfo[i].pubinsert =
(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
pubinfo[i].pubupdate =
@@ -3862,6 +3874,8 @@ getPublications(Archive *fout, int *numPublications)
(strcmp(PQgetvalue(res, i, i_pubdelete), "t") == 0);
pubinfo[i].pubtruncate =
(strcmp(PQgetvalue(res, i, i_pubtruncate), "t") == 0);
+ pubinfo[i].pubsequence =
+ (strcmp(PQgetvalue(res, i, i_pubsequence), "t") == 0);
pubinfo[i].pubviaroot =
(strcmp(PQgetvalue(res, i, i_pubviaroot), "t") == 0);
@@ -3907,6 +3921,9 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
if (pubinfo->puballtables)
appendPQExpBufferStr(query, " FOR ALL TABLES");
+ if (pubinfo->puballsequences)
+ appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
+
appendPQExpBufferStr(query, " WITH (publish = '");
if (pubinfo->pubinsert)
{
@@ -3941,6 +3958,15 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
first = false;
}
+ if (pubinfo->pubsequence)
+ {
+ if (!first)
+ appendPQExpBufferStr(query, ", ");
+
+ appendPQExpBufferStr(query, "sequence");
+ first = false;
+ }
+
appendPQExpBufferStr(query, "'");
if (pubinfo->pubviaroot)
@@ -3987,6 +4013,7 @@ getPublicationNamespaces(Archive *fout)
int i_oid;
int i_pnpubid;
int i_pnnspid;
+ int i_pnsequences;
int i,
j,
ntups;
@@ -3998,7 +4025,7 @@ getPublicationNamespaces(Archive *fout)
/* Collect all publication membership info. */
appendPQExpBufferStr(query,
- "SELECT tableoid, oid, pnpubid, pnnspid "
+ "SELECT tableoid, oid, pnpubid, pnnspid, pnsequences "
"FROM pg_catalog.pg_publication_namespace");
res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
@@ -4008,6 +4035,7 @@ getPublicationNamespaces(Archive *fout)
i_oid = PQfnumber(res, "oid");
i_pnpubid = PQfnumber(res, "pnpubid");
i_pnnspid = PQfnumber(res, "pnnspid");
+ i_pnsequences = PQfnumber(res, "pnsequences");
/* this allocation may be more than we need */
pubsinfo = pg_malloc(ntups * sizeof(PublicationSchemaInfo));
@@ -4017,6 +4045,7 @@ getPublicationNamespaces(Archive *fout)
{
Oid pnpubid = atooid(PQgetvalue(res, i, i_pnpubid));
Oid pnnspid = atooid(PQgetvalue(res, i, i_pnnspid));
+ bool pnsequences = (strcmp(PQgetvalue(res, i, i_pnsequences), "t") == 0);
PublicationInfo *pubinfo;
NamespaceInfo *nspinfo;
@@ -4048,6 +4077,7 @@ getPublicationNamespaces(Archive *fout)
pubsinfo[j].dobj.name = nspinfo->dobj.name;
pubsinfo[j].publication = pubinfo;
pubsinfo[j].pubschema = nspinfo;
+ pubsinfo[j].pubsequences = pnsequences;
/* Decide whether we want to dump it */
selectDumpablePublicationObject(&(pubsinfo[j].dobj), fout);
@@ -4181,7 +4211,11 @@ dumpPublicationNamespace(Archive *fout, const PublicationSchemaInfo *pubsinfo)
query = createPQExpBuffer();
appendPQExpBuffer(query, "ALTER PUBLICATION %s ", fmtId(pubinfo->dobj.name));
- appendPQExpBuffer(query, "ADD ALL TABLES IN SCHEMA %s;\n", fmtId(schemainfo->dobj.name));
+
+ if (pubsinfo->pubsequences)
+ appendPQExpBuffer(query, "ADD ALL SEQUENCES IN SCHEMA %s;\n", fmtId(schemainfo->dobj.name));
+ else
+ appendPQExpBuffer(query, "ADD ALL TABLES IN SCHEMA %s;\n", fmtId(schemainfo->dobj.name));
/*
* There is no point in creating drop query as the drop is done by schema
@@ -4214,6 +4248,7 @@ dumpPublicationTable(Archive *fout, const PublicationRelInfo *pubrinfo)
TableInfo *tbinfo = pubrinfo->pubtable;
PQExpBuffer query;
char *tag;
+ char *description;
/* Do nothing in data-only dump */
if (dopt->dataOnly)
@@ -4223,10 +4258,22 @@ dumpPublicationTable(Archive *fout, const PublicationRelInfo *pubrinfo)
query = createPQExpBuffer();
- appendPQExpBuffer(query, "ALTER PUBLICATION %s ADD TABLE ONLY",
- fmtId(pubinfo->dobj.name));
+ if (tbinfo->relkind == RELKIND_SEQUENCE)
+ {
+ appendPQExpBuffer(query, "ALTER PUBLICATION %s ADD SEQUENCE",
+ fmtId(pubinfo->dobj.name));
+ description = "PUBLICATION SEQUENCE";
+ }
+ else
+ {
+ appendPQExpBuffer(query, "ALTER PUBLICATION %s ADD TABLE ONLY",
+ fmtId(pubinfo->dobj.name));
+ description = "PUBLICATION TABLE";
+ }
+
appendPQExpBuffer(query, " %s",
fmtQualifiedDumpable(tbinfo));
+
if (pubrinfo->pubrelqual)
{
/*
@@ -4249,7 +4296,7 @@ dumpPublicationTable(Archive *fout, const PublicationRelInfo *pubrinfo)
ARCHIVE_OPTS(.tag = tag,
.namespace = tbinfo->dobj.namespace->dobj.name,
.owner = pubinfo->rolname,
- .description = "PUBLICATION TABLE",
+ .description = description,
.section = SECTION_POST_DATA,
.createStmt = query->data));
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 997a3b60719..8739af69122 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -615,10 +615,12 @@ typedef struct _PublicationInfo
DumpableObject dobj;
const char *rolname;
bool puballtables;
+ bool puballsequences;
bool pubinsert;
bool pubupdate;
bool pubdelete;
bool pubtruncate;
+ bool pubsequence;
bool pubviaroot;
} PublicationInfo;
@@ -643,6 +645,7 @@ typedef struct _PublicationSchemaInfo
DumpableObject dobj;
PublicationInfo *publication;
NamespaceInfo *pubschema;
+ bool pubsequences;
} PublicationSchemaInfo;
/*
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index 3e55ff26f82..7447d960fdc 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -2358,7 +2358,7 @@ my %tests = (
create_order => 50,
create_sql => 'CREATE PUBLICATION pub1;',
regexp => qr/^
- \QCREATE PUBLICATION pub1 WITH (publish = 'insert, update, delete, truncate');\E
+ \QCREATE PUBLICATION pub1 WITH (publish = 'insert, update, delete, truncate, sequence');\E
/xm,
like => { %full_runs, section_post_data => 1, },
},
@@ -2378,16 +2378,27 @@ my %tests = (
create_order => 50,
create_sql => 'CREATE PUBLICATION pub3;',
regexp => qr/^
- \QCREATE PUBLICATION pub3 WITH (publish = 'insert, update, delete, truncate');\E
+ \QCREATE PUBLICATION pub3 WITH (publish = 'insert, update, delete, truncate, sequence');\E
/xm,
like => { %full_runs, section_post_data => 1, },
},
'CREATE PUBLICATION pub4' => {
create_order => 50,
- create_sql => 'CREATE PUBLICATION pub4;',
+ create_sql => 'CREATE PUBLICATION pub4
+ FOR ALL SEQUENCES
+ WITH (publish = \'\');',
regexp => qr/^
- \QCREATE PUBLICATION pub4 WITH (publish = 'insert, update, delete, truncate');\E
+ \QCREATE PUBLICATION pub4 FOR ALL SEQUENCES WITH (publish = '');\E
+ /xm,
+ like => { %full_runs, section_post_data => 1, },
+ },
+
+ 'CREATE PUBLICATION pub5' => {
+ create_order => 50,
+ create_sql => 'CREATE PUBLICATION pub5;',
+ regexp => qr/^
+ \QCREATE PUBLICATION pub5 WITH (publish = 'insert, update, delete, truncate, sequence');\E
/xm,
like => { %full_runs, section_post_data => 1, },
},
@@ -2474,6 +2485,27 @@ my %tests = (
unlike => { exclude_dump_test_schema => 1, },
},
+ 'ALTER PUBLICATION pub3 ADD ALL SEQUENCES IN SCHEMA dump_test' => {
+ create_order => 51,
+ create_sql =>
+ 'ALTER PUBLICATION pub3 ADD ALL SEQUENCES IN SCHEMA dump_test;',
+ regexp => qr/^
+ \QALTER PUBLICATION pub3 ADD ALL SEQUENCES IN SCHEMA dump_test;\E
+ /xm,
+ like => { %full_runs, section_post_data => 1, },
+ unlike => { exclude_dump_test_schema => 1, },
+ },
+
+ 'ALTER PUBLICATION pub3 ADD ALL SEQUENCES IN SCHEMA public' => {
+ create_order => 52,
+ create_sql =>
+ 'ALTER PUBLICATION pub3 ADD ALL SEQUENCES IN SCHEMA public;',
+ regexp => qr/^
+ \QALTER PUBLICATION pub3 ADD ALL SEQUENCES IN SCHEMA public;\E
+ /xm,
+ like => { %full_runs, section_post_data => 1, },
+ },
+
'CREATE SCHEMA public' => {
regexp => qr/^CREATE SCHEMA public;/m,
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index e3382933d98..3603950300e 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1621,28 +1621,19 @@ describeOneTableDetails(const char *schemaname,
if (tableinfo.relkind == RELKIND_SEQUENCE)
{
PGresult *result = NULL;
- printQueryOpt myopt = pset.popt;
- char *footers[2] = {NULL, NULL};
if (pset.sversion >= 100000)
{
printfPQExpBuffer(&buf,
- "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
- " seqstart AS \"%s\",\n"
- " seqmin AS \"%s\",\n"
- " seqmax AS \"%s\",\n"
- " seqincrement AS \"%s\",\n"
- " CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
- " seqcache AS \"%s\"\n",
- gettext_noop("Type"),
- gettext_noop("Start"),
- gettext_noop("Minimum"),
- gettext_noop("Maximum"),
- gettext_noop("Increment"),
+ "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+ " seqstart,\n"
+ " seqmin,\n"
+ " seqmax,\n"
+ " seqincrement,\n"
+ " CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+ " seqcache\n",
gettext_noop("yes"),
- gettext_noop("no"),
- gettext_noop("Cycles?"),
- gettext_noop("Cache"));
+ gettext_noop("no"));
appendPQExpBuffer(&buf,
"FROM pg_catalog.pg_sequence\n"
"WHERE seqrelid = '%s';",
@@ -1651,22 +1642,15 @@ describeOneTableDetails(const char *schemaname,
else
{
printfPQExpBuffer(&buf,
- "SELECT 'bigint' AS \"%s\",\n"
- " start_value AS \"%s\",\n"
- " min_value AS \"%s\",\n"
- " max_value AS \"%s\",\n"
- " increment_by AS \"%s\",\n"
- " CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
- " cache_value AS \"%s\"\n",
- gettext_noop("Type"),
- gettext_noop("Start"),
- gettext_noop("Minimum"),
- gettext_noop("Maximum"),
- gettext_noop("Increment"),
+ "SELECT 'bigint',\n"
+ " start_value,\n"
+ " min_value,\n"
+ " max_value,\n"
+ " increment_by,\n"
+ " CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+ " cache_value\n",
gettext_noop("yes"),
- gettext_noop("no"),
- gettext_noop("Cycles?"),
- gettext_noop("Cache"));
+ gettext_noop("no"));
appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
/* must be separate because fmtId isn't reentrant */
appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1676,6 +1660,51 @@ describeOneTableDetails(const char *schemaname,
if (!res)
goto error_return;
+ numrows = PQntuples(res);
+
+ /*
+ * XXX reset to use expanded output for sequences (maybe we should keep
+ * this disabled, just like for tables?)
+ */
+ myopt.expanded = pset.popt.topt.expanded;
+
+ printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+ schemaname, relationname);
+
+ printTableInit(&cont, &myopt, title.data, 7, numrows);
+ printTableInitialized = true;
+
+ printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+ printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+ printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+ printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+ printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+ printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+ printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+ /* Generate table cells to be printed */
+ for (i = 0; i < numrows; i++)
+ {
+ /* Type */
+ printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+ /* Start */
+ printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+ /* Minimum */
+ printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+ /* Maximum */
+ printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+ /* Increment */
+ printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+ /* Cycles? */
+ printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+ /* Cache */
+ printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+ }
+
/* Footer information about a sequence */
/* Get the column that owns this sequence */
@@ -1709,29 +1738,63 @@ describeOneTableDetails(const char *schemaname,
switch (PQgetvalue(result, 0, 1)[0])
{
case 'a':
- footers[0] = psprintf(_("Owned by: %s"),
- PQgetvalue(result, 0, 0));
+ printTableAddFooter(&cont,
+ psprintf(_("Owned by: %s"),
+ PQgetvalue(result, 0, 0)));
break;
case 'i':
- footers[0] = psprintf(_("Sequence for identity column: %s"),
- PQgetvalue(result, 0, 0));
+ printTableAddFooter(&cont,
+ psprintf(_("Sequence for identity column: %s"),
+ PQgetvalue(result, 0, 0)));
break;
}
}
PQclear(result);
- printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
- schemaname, relationname);
+ /* print any publications */
+ if (pset.sversion >= 150000)
+ {
+ int tuples = 0;
- myopt.footers = footers;
- myopt.topt.default_footer = false;
- myopt.title = title.data;
- myopt.translate_header = true;
+ printfPQExpBuffer(&buf,
+ "SELECT pubname\n"
+ "FROM pg_catalog.pg_publication p\n"
+ " JOIN pg_catalog.pg_publication_namespace pn ON p.oid = pn.pnpubid\n"
+ " JOIN pg_catalog.pg_class pc ON pc.relnamespace = pn.pnnspid\n"
+ "WHERE pc.oid ='%s' and pn.pnsequences and pg_catalog.pg_relation_is_publishable('%s')\n"
+ "UNION\n"
+ "SELECT pubname\n"
+ "FROM pg_catalog.pg_publication p\n"
+ " JOIN pg_catalog.pg_publication_rel pr ON p.oid = pr.prpubid\n"
+ "WHERE pr.prrelid = '%s'\n"
+ "UNION\n"
+ "SELECT pubname\n"
+ "FROM pg_catalog.pg_publication p\n"
+ "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+ "ORDER BY 1;",
+ oid, oid, oid, oid);
- printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+ result = PSQLexec(buf.data);
+ if (!result)
+ goto error_return;
+ else
+ tuples = PQntuples(result);
+
+ if (tuples > 0)
+ printTableAddFooter(&cont, _("Publications:"));
+
+ /* Might be an empty set - that's ok */
+ for (i = 0; i < tuples; i++)
+ {
+ printfPQExpBuffer(&buf, " \"%s\"",
+ PQgetvalue(result, i, 0));
+
+ printTableAddFooter(&cont, buf.data);
+ }
+ PQclear(result);
+ }
- if (footers[0])
- free(footers[0]);
+ printTable(&cont, pset.queryFout, false, pset.logfile);
retval = true;
goto error_return; /* not an error, just return early */
@@ -1958,6 +2021,11 @@ describeOneTableDetails(const char *schemaname,
for (i = 0; i < cols; i++)
printTableAddHeader(&cont, headers[i], true, 'l');
+ res = PSQLexec(buf.data);
+ if (!res)
+ goto error_return;
+ numrows = PQntuples(res);
+
/* Generate table cells to be printed */
for (i = 0; i < numrows; i++)
{
@@ -2883,7 +2951,7 @@ describeOneTableDetails(const char *schemaname,
"FROM pg_catalog.pg_publication p\n"
" JOIN pg_catalog.pg_publication_namespace pn ON p.oid = pn.pnpubid\n"
" JOIN pg_catalog.pg_class pc ON pc.relnamespace = pn.pnnspid\n"
- "WHERE pc.oid ='%s' and pg_catalog.pg_relation_is_publishable('%s')\n"
+ "WHERE pc.oid ='%s' and (not pn.pnsequences) and pg_catalog.pg_relation_is_publishable('%s')\n"
"UNION\n"
"SELECT pubname\n"
" , pg_get_expr(pr.prqual, c.oid)\n"
@@ -4764,7 +4832,7 @@ listSchemas(const char *pattern, bool verbose, bool showSystem)
int i;
printfPQExpBuffer(&buf,
- "SELECT pubname \n"
+ "SELECT pubname, (CASE WHEN pnsequences THEN 'sequences' ELSE 'tables' END) AS pubtype\n"
"FROM pg_catalog.pg_publication p\n"
" JOIN pg_catalog.pg_publication_namespace pn ON p.oid = pn.pnpubid\n"
" JOIN pg_catalog.pg_namespace n ON n.oid = pn.pnnspid \n"
@@ -4793,8 +4861,9 @@ listSchemas(const char *pattern, bool verbose, bool showSystem)
/* Might be an empty set - that's ok */
for (i = 0; i < pub_schema_tuples; i++)
{
- printfPQExpBuffer(&buf, " \"%s\"",
- PQgetvalue(result, i, 0));
+ printfPQExpBuffer(&buf, " \"%s\" (%s)",
+ PQgetvalue(result, i, 0),
+ PQgetvalue(result, i, 1));
footers[i + 1] = pg_strdup(buf.data);
}
@@ -5799,7 +5868,7 @@ listPublications(const char *pattern)
PQExpBufferData buf;
PGresult *res;
printQueryOpt myopt = pset.popt;
- static const bool translate_columns[] = {false, false, false, false, false, false, false, false};
+ static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
if (pset.sversion < 100000)
{
@@ -5813,23 +5882,45 @@ listPublications(const char *pattern)
initPQExpBuffer(&buf);
- printfPQExpBuffer(&buf,
- "SELECT pubname AS \"%s\",\n"
- " pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
- " puballtables AS \"%s\",\n"
- " pubinsert AS \"%s\",\n"
- " pubupdate AS \"%s\",\n"
- " pubdelete AS \"%s\"",
- gettext_noop("Name"),
- gettext_noop("Owner"),
- gettext_noop("All tables"),
- gettext_noop("Inserts"),
- gettext_noop("Updates"),
- gettext_noop("Deletes"));
+ if (pset.sversion >= 150000)
+ printfPQExpBuffer(&buf,
+ "SELECT pubname AS \"%s\",\n"
+ " pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
+ " puballtables AS \"%s\",\n"
+ " puballsequences AS \"%s\",\n"
+ " pubinsert AS \"%s\",\n"
+ " pubupdate AS \"%s\",\n"
+ " pubdelete AS \"%s\"",
+ gettext_noop("Name"),
+ gettext_noop("Owner"),
+ gettext_noop("All tables"),
+ gettext_noop("All sequences"),
+ gettext_noop("Inserts"),
+ gettext_noop("Updates"),
+ gettext_noop("Deletes"));
+ else
+ printfPQExpBuffer(&buf,
+ "SELECT pubname AS \"%s\",\n"
+ " pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
+ " puballtables AS \"%s\",\n"
+ " pubinsert AS \"%s\",\n"
+ " pubupdate AS \"%s\",\n"
+ " pubdelete AS \"%s\"",
+ gettext_noop("Name"),
+ gettext_noop("Owner"),
+ gettext_noop("All tables"),
+ gettext_noop("Inserts"),
+ gettext_noop("Updates"),
+ gettext_noop("Deletes"));
+
if (pset.sversion >= 110000)
appendPQExpBuffer(&buf,
",\n pubtruncate AS \"%s\"",
gettext_noop("Truncates"));
+ if (pset.sversion >= 150000)
+ appendPQExpBuffer(&buf,
+ ",\n pubsequence AS \"%s\"",
+ gettext_noop("Sequences"));
if (pset.sversion >= 130000)
appendPQExpBuffer(&buf,
",\n pubviaroot AS \"%s\"",
@@ -5915,6 +6006,7 @@ describePublications(const char *pattern)
PGresult *res;
bool has_pubtruncate;
bool has_pubviaroot;
+ bool has_pubsequence;
PQExpBufferData title;
printTableContent cont;
@@ -5931,6 +6023,7 @@ describePublications(const char *pattern)
has_pubtruncate = (pset.sversion >= 110000);
has_pubviaroot = (pset.sversion >= 130000);
+ has_pubsequence = (pset.sversion >= 150000);
initPQExpBuffer(&buf);
@@ -5938,12 +6031,17 @@ describePublications(const char *pattern)
"SELECT oid, pubname,\n"
" pg_catalog.pg_get_userbyid(pubowner) AS owner,\n"
" puballtables, pubinsert, pubupdate, pubdelete");
+
if (has_pubtruncate)
appendPQExpBufferStr(&buf,
", pubtruncate");
if (has_pubviaroot)
appendPQExpBufferStr(&buf,
", pubviaroot");
+ if (has_pubsequence)
+ appendPQExpBufferStr(&buf,
+ ", puballsequences, pubsequence");
+
appendPQExpBufferStr(&buf,
"\nFROM pg_catalog.pg_publication\n");
@@ -5984,6 +6082,7 @@ describePublications(const char *pattern)
char *pubid = PQgetvalue(res, i, 0);
char *pubname = PQgetvalue(res, i, 1);
bool puballtables = strcmp(PQgetvalue(res, i, 3), "t") == 0;
+ bool puballsequences = has_pubsequence &&
+ strcmp(PQgetvalue(res, i, 9), "t") == 0;
printTableOpt myopt = pset.popt.topt;
if (has_pubtruncate)
@@ -5991,29 +6090,43 @@ describePublications(const char *pattern)
if (has_pubviaroot)
ncols++;
+ /* sequence support adds two extra columns (puballsequences, pubsequence) */
+ if (has_pubsequence)
+ ncols += 2;
+
initPQExpBuffer(&title);
printfPQExpBuffer(&title, _("Publication %s"), pubname);
printTableInit(&cont, &myopt, title.data, ncols, nrows);
printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+ if (has_pubsequence)
+ printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
if (has_pubtruncate)
printTableAddHeader(&cont, gettext_noop("Truncates"), true, align);
+ if (has_pubsequence)
+ printTableAddHeader(&cont, gettext_noop("Sequences"), true, align);
if (has_pubviaroot)
printTableAddHeader(&cont, gettext_noop("Via root"), true, align);
- printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
- printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
- printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
- printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
- printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+ printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false); /* owner */
+ printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false); /* all tables */
+
+ if (has_pubsequence)
+ printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false); /* all sequences */
+
+ printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false); /* insert */
+ printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false); /* update */
+ printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false); /* delete */
if (has_pubtruncate)
- printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
+ printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false); /* truncate */
+ if (has_pubsequence)
+ printTableAddCell(&cont, PQgetvalue(res, i, 10), false, false); /* sequence */
if (has_pubviaroot)
- printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
+ printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false); /* via root */
if (!puballtables)
{
@@ -6033,6 +6146,7 @@ describePublications(const char *pattern)
"WHERE c.relnamespace = n.oid\n"
" AND c.oid = pr.prrelid\n"
" AND pr.prpubid = '%s'\n"
+ " AND c.relkind != 'S'\n" /* exclude sequences */
"ORDER BY 1,2", pubid);
if (!addFooterToPublicationDesc(&buf, "Tables:", false, &cont))
goto error_return;
@@ -6044,7 +6158,7 @@ describePublications(const char *pattern)
"SELECT n.nspname\n"
"FROM pg_catalog.pg_namespace n\n"
" JOIN pg_catalog.pg_publication_namespace pn ON n.oid = pn.pnnspid\n"
- "WHERE pn.pnpubid = '%s'\n"
+ "WHERE pn.pnpubid = '%s' AND NOT pn.pnsequences\n"
"ORDER BY 1", pubid);
if (!addFooterToPublicationDesc(&buf, "Tables from schemas:",
true, &cont))
@@ -6052,6 +6166,37 @@ describePublications(const char *pattern)
}
}
+ if (!puballsequences)
+ {
+ /* Get the sequences for the specified publication */
+ printfPQExpBuffer(&buf,
+ "SELECT n.nspname, c.relname, NULL\n"
+ "FROM pg_catalog.pg_class c,\n"
+ " pg_catalog.pg_namespace n,\n"
+ " pg_catalog.pg_publication_rel pr\n"
+ "WHERE c.relnamespace = n.oid\n"
+ " AND c.oid = pr.prrelid\n"
+ " AND pr.prpubid = '%s'\n"
+ " AND c.relkind = 'S'\n" /* only sequences */
+ "ORDER BY 1,2", pubid);
+ if (!addFooterToPublicationDesc(&buf, "Sequences:", false, &cont))
+ goto error_return;
+
+ if (pset.sversion >= 150000)
+ {
+ /* Get the schemas for the specified publication */
+ printfPQExpBuffer(&buf,
+ "SELECT n.nspname\n"
+ "FROM pg_catalog.pg_namespace n\n"
+ " JOIN pg_catalog.pg_publication_namespace pn ON n.oid = pn.pnnspid\n"
+ "WHERE pn.pnpubid = '%s' AND pn.pnsequences\n"
+ "ORDER BY 1", pubid);
+ if (!addFooterToPublicationDesc(&buf, "Sequences from schemas:",
+ true, &cont))
+ goto error_return;
+ }
+ }
+
printTable(&cont, pset.queryFout, false, pset.logfile);
printTableCleanup(&cont);
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 6957567264a..a8cdfc4cfba 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1782,11 +1782,15 @@ psql_completion(const char *text, int start, int end)
COMPLETE_WITH("ADD", "DROP", "OWNER TO", "RENAME TO", "SET");
/* ALTER PUBLICATION <name> ADD */
else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD"))
- COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE");
+ COMPLETE_WITH("ALL TABLES IN SCHEMA", "ALL SEQUENCES IN SCHEMA", "TABLE", "SEQUENCE");
else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD|SET", "TABLE") ||
(HeadMatches("ALTER", "PUBLICATION", MatchAny, "ADD|SET", "TABLE") &&
ends_with(prev_wd, ',')))
COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_tables);
+ else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD|SET", "SEQUENCE") ||
+ (HeadMatches("ALTER", "PUBLICATION", MatchAny, "ADD|SET", "SEQUENCE") &&
+ ends_with(prev_wd, ',')))
+ COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_sequences);
/*
* "ALTER PUBLICATION <name> SET TABLE <name> WHERE (" - complete with
* table attributes
@@ -1805,11 +1809,11 @@ psql_completion(const char *text, int start, int end)
COMPLETE_WITH(",");
/* ALTER PUBLICATION <name> DROP */
else if (Matches("ALTER", "PUBLICATION", MatchAny, "DROP"))
- COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE");
+ COMPLETE_WITH("ALL TABLES IN SCHEMA", "ALL SEQUENCES IN SCHEMA", "TABLE", "SEQUENCE");
/* ALTER PUBLICATION <name> SET */
else if (Matches("ALTER", "PUBLICATION", MatchAny, "SET"))
- COMPLETE_WITH("(", "ALL TABLES IN SCHEMA", "TABLE");
- else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD|DROP|SET", "ALL", "TABLES", "IN", "SCHEMA"))
+ COMPLETE_WITH("(", "ALL TABLES IN SCHEMA", "ALL SEQUENCES IN SCHEMA", "TABLE", "SEQUENCE");
+ else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD|DROP|SET", "ALL", "TABLES|SEQUENCES", "IN", "SCHEMA"))
COMPLETE_WITH_QUERY_PLUS(Query_for_list_of_schemas
" AND nspname NOT LIKE E'pg\\\\_%'",
"CURRENT_SCHEMA");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index d8e8715ed1c..699bd0aa3e3 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -11540,6 +11540,11 @@
provolatile => 's', prorettype => 'oid', proargtypes => 'text',
proallargtypes => '{text,oid}', proargmodes => '{i,o}',
proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_tables' },
+{ oid => '8000', descr => 'get OIDs of sequences in a publication',
+ proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+ provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+ proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+ proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
{ oid => '6121',
descr => 'returns whether a relation can be part of a publication',
proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index ba72e62e614..f2b4e838c77 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
*/
bool puballtables;
+ /*
+ * indicates that this is a special publication which should encompass all
+ * sequences in the database (except for unlogged and temporary ones)
+ */
+ bool puballsequences;
+
/* true if inserts are published */
bool pubinsert;
@@ -52,6 +58,9 @@ CATALOG(pg_publication,6104,PublicationRelationId)
/* true if truncates are published */
bool pubtruncate;
+ /* true if sequences are published */
+ bool pubsequence;
+
/* true if partition changes are published using root schema */
bool pubviaroot;
} FormData_pg_publication;
@@ -72,6 +81,7 @@ typedef struct PublicationActions
bool pubupdate;
bool pubdelete;
bool pubtruncate;
+ bool pubsequence;
} PublicationActions;
typedef struct PublicationDesc
@@ -92,6 +102,7 @@ typedef struct Publication
Oid oid;
char *name;
bool alltables;
+ bool allsequences;
bool pubviaroot;
PublicationActions pubactions;
} Publication;
@@ -122,26 +133,30 @@ typedef enum PublicationPartOpt
PUBLICATION_PART_ALL,
} PublicationPartOpt;
-extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
+extern List *GetPublicationRelations(Oid pubid, bool sequences,
+ PublicationPartOpt pub_partopt);
extern List *GetAllTablesPublications(void);
extern List *GetAllTablesPublicationRelations(bool pubviaroot);
-extern List *GetPublicationSchemas(Oid pubid);
+extern List *GetPublicationSchemas(Oid pubid, bool sequence);
extern List *GetSchemaPublications(Oid schemaid);
-extern List *GetSchemaPublicationRelations(Oid schemaid,
+extern List *GetSchemaPublicationRelations(Oid schemaid, bool sequences,
PublicationPartOpt pub_partopt);
-extern List *GetAllSchemaPublicationRelations(Oid puboid,
+extern List *GetAllSchemaPublicationRelations(Oid puboid, bool sequences,
PublicationPartOpt pub_partopt);
extern List *GetPubPartitionOptionRelations(List *result,
PublicationPartOpt pub_partopt,
Oid relid);
extern Oid GetTopMostAncestorInPublication(Oid puboid, List *ancestors);
+extern List *GetAllSequencesPublications(void);
+extern List *GetAllSequencesPublicationRelations(void);
+
extern bool is_publishable_relation(Relation rel);
extern bool is_schema_publication(Oid pubid);
extern ObjectAddress publication_add_relation(Oid pubid, PublicationRelInfo *pri,
bool if_not_exists);
extern ObjectAddress publication_add_schema(Oid pubid, Oid schemaid,
- bool if_not_exists);
+ bool sequences, bool if_not_exists);
extern Oid get_publication_oid(const char *pubname, bool missing_ok);
extern char *get_publication_name(Oid pubid, bool missing_ok);
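The new catalog flags can be inspected directly, so a quick sanity check
after creating publications could be (columns per the catalog changes above):

SELECT pubname, puballtables, puballsequences, pubsequence
  FROM pg_publication;
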
diff --git a/src/include/catalog/pg_publication_namespace.h b/src/include/catalog/pg_publication_namespace.h
index e4306da02e7..bbc22dec9f8 100644
--- a/src/include/catalog/pg_publication_namespace.h
+++ b/src/include/catalog/pg_publication_namespace.h
@@ -32,6 +32,7 @@ CATALOG(pg_publication_namespace,8901,PublicationNamespaceRelationId)
Oid oid; /* oid */
Oid pnpubid BKI_LOOKUP(pg_publication); /* Oid of the publication */
Oid pnnspid BKI_LOOKUP(pg_namespace); /* Oid of the schema */
+ bool pnsequences; /* true if the schema's sequences (not tables) are published */
} FormData_pg_publication_namespace;
/* ----------------
@@ -42,6 +43,6 @@ CATALOG(pg_publication_namespace,8901,PublicationNamespaceRelationId)
typedef FormData_pg_publication_namespace *Form_pg_publication_namespace;
DECLARE_UNIQUE_INDEX_PKEY(pg_publication_namespace_oid_index, 8902, PublicationNamespaceObjectIndexId, on pg_publication_namespace using btree(oid oid_ops));
-DECLARE_UNIQUE_INDEX(pg_publication_namespace_pnnspid_pnpubid_index, 8903, PublicationNamespacePnnspidPnpubidIndexId, on pg_publication_namespace using btree(pnnspid oid_ops, pnpubid oid_ops));
+DECLARE_UNIQUE_INDEX(pg_publication_namespace_pnnspid_pnpubid_pnseq_index, 8903, PublicationNamespacePnnspidPnpubidSeqIndexId, on pg_publication_namespace using btree(pnnspid oid_ops, pnpubid oid_ops, pnsequences bool_ops));
#endif /* PG_PUBLICATION_NAMESPACE_H */
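With pnsequences added, the same schema may appear twice in
pg_publication_namespace for one publication - once for its tables and once
for its sequences - which is why the unique index above now includes the
flag. A query like this (for illustration only) shows which flavor each
entry is:

SELECT p.pubname, n.nspname, pn.pnsequences
  FROM pg_publication_namespace pn
  JOIN pg_publication p ON p.oid = pn.pnpubid
  JOIN pg_namespace n ON n.oid = pn.pnnspid;
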
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9fecc41954e..cd4002be291 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -60,6 +60,7 @@ extern ObjectAddress DefineSequence(ParseState *pstate, CreateSeqStmt *stmt);
extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
extern void DeleteSequenceTuple(Oid relid);
extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid seq_relid, int64 last_value, int64 log_cnt, bool is_called);
extern void ResetSequenceCaches(void);
extern void seq_redo(XLogReaderState *rptr);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 1617702d9d6..7adc3710408 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -3663,6 +3663,10 @@ typedef enum PublicationObjSpecType
PUBLICATIONOBJ_TABLES_IN_SCHEMA, /* All tables in schema */
PUBLICATIONOBJ_TABLES_IN_CUR_SCHEMA, /* All tables in first element of
* search_path */
+ PUBLICATIONOBJ_SEQUENCE, /* A sequence */
+ PUBLICATIONOBJ_SEQUENCES_IN_SCHEMA, /* All sequences in schema */
+ PUBLICATIONOBJ_SEQUENCES_IN_CUR_SCHEMA, /* All sequences in first element
+ * of search_path */
PUBLICATIONOBJ_CONTINUATION /* Continuation of previous type */
} PublicationObjSpecType;
@@ -3682,6 +3686,7 @@ typedef struct CreatePublicationStmt
List *options; /* List of DefElem nodes */
List *pubobjects; /* Optional list of publication objects */
bool for_all_tables; /* Special publication for all tables in db */
+ bool for_all_sequences; /* Special publication for all sequences in db */
} CreatePublicationStmt;
typedef enum AlterPublicationAction
@@ -3705,6 +3710,7 @@ typedef struct AlterPublicationStmt
*/
List *pubobjects; /* Optional list of publication objects */
bool for_all_tables; /* Special publication for all tables in db */
+ bool for_all_sequences; /* Special publication for all sequences in db */
AlterPublicationAction action; /* What action to perform with the given
* objects */
} AlterPublicationStmt;
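These parse-node additions correspond to the new CREATE/ALTER PUBLICATION
forms, e.g. (names are examples only):

CREATE PUBLICATION pub_allseq FOR ALL SEQUENCES;
CREATE PUBLICATION pub_s FOR SEQUENCE public.s1;
CREATE PUBLICATION pub_schema_seq FOR ALL SEQUENCES IN SCHEMA myschema;
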
diff --git a/src/include/replication/logicalproto.h b/src/include/replication/logicalproto.h
index 4d2c881644a..415757d8a2d 100644
--- a/src/include/replication/logicalproto.h
+++ b/src/include/replication/logicalproto.h
@@ -61,6 +61,7 @@ typedef enum LogicalRepMsgType
LOGICAL_REP_MSG_RELATION = 'R',
LOGICAL_REP_MSG_TYPE = 'Y',
LOGICAL_REP_MSG_MESSAGE = 'M',
+ LOGICAL_REP_MSG_SEQUENCE = 'X', /* FIXME change */
LOGICAL_REP_MSG_BEGIN_PREPARE = 'b',
LOGICAL_REP_MSG_PREPARE = 'P',
LOGICAL_REP_MSG_COMMIT_PREPARED = 'K',
@@ -118,6 +119,18 @@ typedef struct LogicalRepTyp
char *typname; /* name of the remote type */
} LogicalRepTyp;
+/* Sequence info */
+typedef struct LogicalRepSequence
+{
+ Oid remoteid; /* unique id of the remote sequence */
+ char *nspname; /* schema name of remote sequence */
+ char *seqname; /* name of the remote sequence */
+ bool transactional; /* increment decoded as transactional? */
+ int64 last_value; /* last value of the remote sequence */
+ int64 log_cnt; /* fetches remaining before a new WAL record is needed */
+ bool is_called; /* true if last_value has already been returned */
+} LogicalRepSequence;
+
/* Transaction info */
typedef struct LogicalRepBeginData
{
@@ -230,6 +243,12 @@ extern List *logicalrep_read_truncate(StringInfo in,
bool *cascade, bool *restart_seqs);
extern void logicalrep_write_message(StringInfo out, TransactionId xid, XLogRecPtr lsn,
bool transactional, const char *prefix, Size sz, const char *message);
+extern void logicalrep_write_sequence(StringInfo out, Relation rel,
+ TransactionId xid, XLogRecPtr lsn,
+ bool transactional,
+ int64 last_value, int64 log_cnt,
+ bool is_called);
+extern void logicalrep_read_sequence(StringInfo in, LogicalRepSequence *seqdata);
extern void logicalrep_write_rel(StringInfo out, TransactionId xid,
Relation rel);
extern LogicalRepRelation *logicalrep_read_rel(StringInfo in);
diff --git a/src/include/replication/pgoutput.h b/src/include/replication/pgoutput.h
index eafedd610a5..f4e9f35d09d 100644
--- a/src/include/replication/pgoutput.h
+++ b/src/include/replication/pgoutput.h
@@ -29,6 +29,7 @@ typedef struct PGOutputData
bool streaming;
bool messages;
bool two_phase;
+ bool sequences;
} PGOutputData;
#endif /* PGOUTPUT_H */
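To simply eyeball the decoded sequence increments (using the test_decoding
support mentioned earlier), the SQL decoding interface is enough, assuming
wal_level=logical; the slot name is arbitrary and the output text is
whatever test_decoding prints for the new callback:

SELECT pg_create_logical_replication_slot('seq_slot', 'test_decoding');
CREATE SEQUENCE s;
SELECT nextval('s');
SELECT data FROM pg_logical_slot_get_changes('seq_slot', NULL, NULL);
SELECT pg_drop_replication_slot('seq_slot');
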
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 4e191c120ac..92c50b13ec4 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -30,20 +30,20 @@ ERROR: conflicting or redundant options
LINE 1: ...ub_xxx WITH (publish_via_partition_root = 'true', publish_vi...
^
\dRp
- List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f | t | f | f | f | f
- testpub_default | regress_publication_user | f | f | t | f | f | f
+ List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f | f | t | f | f | f | f | f
+ testpub_default | regress_publication_user | f | f | f | t | f | f | f | f
(2 rows)
-ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
+ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete, sequence');
\dRp
- List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f | t | f | f | f | f
- testpub_default | regress_publication_user | f | t | t | t | f | f
+ List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f | f | t | f | f | f | f | f
+ testpub_default | regress_publication_user | f | f | t | t | t | f | t | f
(2 rows)
--- adding tables
@@ -87,10 +87,10 @@ RESET client_min_messages;
-- should be able to add schema to 'FOR TABLE' publication
ALTER PUBLICATION testpub_fortable ADD ALL TABLES IN SCHEMA pub_test;
\dRp+ testpub_fortable
- Publication testpub_fortable
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_fortable
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"public.testpub_tbl1"
Tables from schemas:
@@ -99,20 +99,20 @@ Tables from schemas:
-- should be able to drop schema from 'FOR TABLE' publication
ALTER PUBLICATION testpub_fortable DROP ALL TABLES IN SCHEMA pub_test;
\dRp+ testpub_fortable
- Publication testpub_fortable
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_fortable
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"public.testpub_tbl1"
-- should be able to set schema to 'FOR TABLE' publication
ALTER PUBLICATION testpub_fortable SET ALL TABLES IN SCHEMA pub_test;
\dRp+ testpub_fortable
- Publication testpub_fortable
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_fortable
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test"
@@ -134,10 +134,10 @@ ERROR: relation "testpub_nopk" is not part of the publication
-- should be able to set table to schema publication
ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
\dRp+ testpub_forschema
- Publication testpub_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"pub_test.testpub_nopk"
@@ -159,10 +159,10 @@ Publications:
"testpub_foralltables"
\dRp+ testpub_foralltables
- Publication testpub_foralltables
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | t | t | t | f | f | f
+ Publication testpub_foralltables
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | t | f | t | t | f | f | f | f
(1 row)
DROP TABLE testpub_tbl2;
@@ -174,24 +174,520 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
RESET client_min_messages;
\dRp+ testpub3
- Publication testpub3
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub3
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"public.testpub_tbl3"
"public.testpub_tbl3a"
\dRp+ testpub4
- Publication testpub4
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub4
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"public.testpub_tbl3"
DROP TABLE testpub_tbl3, testpub_tbl3a;
DROP PUBLICATION testpub3, testpub4;
+--- adding sequences
+CREATE SEQUENCE testpub_seq0;
+CREATE SEQUENCE pub_test.testpub_seq1;
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_forallsequences FOR ALL SEQUENCES WITH (publish = 'sequence');
+RESET client_min_messages;
+ALTER PUBLICATION testpub_forallsequences SET (publish = 'insert, sequence');
+CREATE SEQUENCE testpub_seq2;
+-- fail - can't add to for all sequences publication
+ALTER PUBLICATION testpub_forallsequences ADD SEQUENCE testpub_seq2;
+ERROR: publication "testpub_forallsequences" is defined as FOR ALL SEQUENCES
+DETAIL: Sequences cannot be added to or dropped from FOR ALL SEQUENCES publications.
+-- fail - can't drop from all sequences publication
+ALTER PUBLICATION testpub_forallsequences DROP SEQUENCE testpub_seq2;
+ERROR: publication "testpub_forallsequences" is defined as FOR ALL SEQUENCES
+DETAIL: Sequences cannot be added to or dropped from FOR ALL SEQUENCES publications.
+-- fail - can't add to for all sequences publication
+ALTER PUBLICATION testpub_forallsequences SET SEQUENCE pub_test.testpub_seq1;
+ERROR: publication "testpub_forallsequences" is defined as FOR ALL SEQUENCES
+DETAIL: Sequences cannot be added to or dropped from FOR ALL SEQUENCES publications.
+-- fail - can't add schema to 'FOR ALL SEQUENCES' publication
+ALTER PUBLICATION testpub_forallsequences ADD ALL SEQUENCES IN SCHEMA pub_test;
+ERROR: publication "testpub_forallsequences" is defined as FOR ALL SEQUENCES
+DETAIL: Sequences from schema cannot be added to, dropped from, or set on FOR ALL SEQUENCES publications.
+-- fail - can't drop schema from 'FOR ALL SEQUENCES' publication
+ALTER PUBLICATION testpub_forallsequences DROP ALL SEQUENCES IN SCHEMA pub_test;
+ERROR: publication "testpub_forallsequences" is defined as FOR ALL SEQUENCES
+DETAIL: Sequences from schema cannot be added to, dropped from, or set on FOR ALL SEQUENCES publications.
+-- fail - can't set schema to 'FOR ALL SEQUENCES' publication
+ALTER PUBLICATION testpub_forallsequences SET ALL SEQUENCES IN SCHEMA pub_test;
+ERROR: publication "testpub_forallsequences" is defined as FOR ALL SEQUENCES
+DETAIL: Sequences from schema cannot be added to, dropped from, or set on FOR ALL SEQUENCES publications.
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_forsequence FOR SEQUENCE testpub_seq0;
+RESET client_min_messages;
+-- should be able to add schema to 'FOR SEQUENCE' publication
+ALTER PUBLICATION testpub_forsequence ADD ALL SEQUENCES IN SCHEMA pub_test;
+\dRp+ testpub_forsequence
+ Publication testpub_forsequence
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Sequences:
+ "public.testpub_seq0"
+Sequences from schemas:
+ "pub_test"
+
+-- should be able to drop schema from 'FOR SEQUENCE' publication
+ALTER PUBLICATION testpub_forsequence DROP ALL SEQUENCES IN SCHEMA pub_test;
+\dRp+ testpub_forsequence
+ Publication testpub_forsequence
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Sequences:
+ "public.testpub_seq0"
+
+-- should be able to set schema to 'FOR SEQUENCE' publication
+ALTER PUBLICATION testpub_forsequence SET ALL SEQUENCES IN SCHEMA pub_test;
+\dRp+ testpub_forsequence
+ Publication testpub_forsequence
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Sequences from schemas:
+ "pub_test"
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_forschema FOR ALL SEQUENCES IN SCHEMA pub_test;
+RESET client_min_messages;
+-- fail - can't create publication with schema and sequence of the same schema
+CREATE PUBLICATION testpub_for_seq_schema FOR ALL SEQUENCES IN SCHEMA pub_test, SEQUENCE pub_test.testpub_seq1;
+ERROR: cannot add relation "pub_test.testpub_seq1" to publication
+DETAIL: Sequence's schema "pub_test" is already part of the publication or part of the specified schema list.
+-- fail - can't add a sequence of the same schema to the schema publication
+ALTER PUBLICATION testpub_forschema ADD SEQUENCE pub_test.testpub_seq1;
+ERROR: cannot add relation "pub_test.testpub_seq1" to publication
+DETAIL: Sequence's schema "pub_test" is already part of the publication or part of the specified schema list.
+-- fail - can't drop a sequence from the schema publication which isn't in the
+-- publication
+ALTER PUBLICATION testpub_forschema DROP SEQUENCE pub_test.testpub_seq1;
+ERROR: relation "testpub_seq1" is not part of the publication
+-- should be able to set sequence to schema publication
+ALTER PUBLICATION testpub_forschema SET SEQUENCE pub_test.testpub_seq1;
+\dRp+ testpub_forschema
+ Publication testpub_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Sequences:
+ "pub_test.testpub_seq1"
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'testpub_forallsequences';
+ pubname | puballtables | puballsequences
+-------------------------+--------------+-----------------
+ testpub_forallsequences | f | t
+(1 row)
+
+\d+ pub_test.testpub_seq1
+ Sequence "pub_test.testpub_seq1"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+Publications:
+ "testpub_forallsequences"
+ "testpub_forschema"
+ "testpub_forsequence"
+
+\dRp+ testpub_forallsequences
+ Publication testpub_forallsequences
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | t | t | f | f | f | t | f
+(1 row)
+
+DROP SEQUENCE testpub_seq0, pub_test.testpub_seq1, testpub_seq2;
+DROP PUBLICATION testpub_forallsequences, testpub_forsequence, testpub_forschema;
+-- Publication mixing tables and sequences
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_mix;
+RESET client_min_messages;
+CREATE SEQUENCE testpub_seq1;
+CREATE SEQUENCE pub_test.testpub_seq2;
+ALTER PUBLICATION testpub_mix ADD SEQUENCE testpub_seq1, TABLE testpub_tbl1;
+\dRp+ testpub_mix
+ Publication testpub_mix
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Tables:
+ "public.testpub_tbl1"
+Sequences:
+ "public.testpub_seq1"
+
+ALTER PUBLICATION testpub_mix ADD ALL SEQUENCES IN SCHEMA pub_test, ALL TABLES IN SCHEMA pub_test;
+\dRp+ testpub_mix
+ Publication testpub_mix
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Tables:
+ "public.testpub_tbl1"
+Tables from schemas:
+ "pub_test"
+Sequences:
+ "public.testpub_seq1"
+Sequences from schemas:
+ "pub_test"
+
+ALTER PUBLICATION testpub_mix DROP ALL SEQUENCES IN SCHEMA pub_test;
+\dRp+ testpub_mix
+ Publication testpub_mix
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Tables:
+ "public.testpub_tbl1"
+Tables from schemas:
+ "pub_test"
+Sequences:
+ "public.testpub_seq1"
+
+ALTER PUBLICATION testpub_mix DROP ALL TABLES IN SCHEMA pub_test;
+\dRp+ testpub_mix
+ Publication testpub_mix
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Tables:
+ "public.testpub_tbl1"
+Sequences:
+ "public.testpub_seq1"
+
+DROP PUBLICATION testpub_mix;
+DROP SEQUENCE testpub_seq1;
+DROP SEQUENCE pub_test.testpub_seq2;
+-- make sure we replicate only the correct relation type
+CREATE SCHEMA pub_test1;
+CREATE SEQUENCE pub_test1.test_seq1;
+CREATE TABLE pub_test1.test_tbl1 (a int primary key, b int);
+CREATE SCHEMA pub_test2;
+CREATE SEQUENCE pub_test2.test_seq2;
+CREATE TABLE pub_test2.test_tbl2 (a int primary key, b int);
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_schemas;
+RESET client_min_messages;
+-- add tables from one schema, sequences from the other
+ALTER PUBLICATION testpub_schemas ADD ALL TABLES IN SCHEMA pub_test2;
+ALTER PUBLICATION testpub_schemas ADD ALL SEQUENCES IN SCHEMA pub_test1;
+\dRp+ testpub_schemas
+ Publication testpub_schemas
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Tables from schemas:
+ "pub_test2"
+Sequences from schemas:
+ "pub_test1"
+
+\dn+ pub_test1
+ List of schemas
+ Name | Owner | Access privileges | Description
+-----------+--------------------------+-------------------+-------------
+ pub_test1 | regress_publication_user | |
+Publications:
+ "testpub_schemas" (sequences)
+
+\dn+ pub_test2
+ List of schemas
+ Name | Owner | Access privileges | Description
+-----------+--------------------------+-------------------+-------------
+ pub_test2 | regress_publication_user | |
+Publications:
+ "testpub_schemas" (tables)
+
+\d+ pub_test1.test_seq1;
+ Sequence "pub_test1.test_seq1"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test1.test_tbl1;
+ Table "pub_test1.test_tbl1"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a | integer | | not null | | plain | |
+ b | integer | | | | plain | |
+Indexes:
+ "test_tbl1_pkey" PRIMARY KEY, btree (a)
+
+\d+ pub_test2.test_seq2;
+ Sequence "pub_test2.test_seq2"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+
+\d+ pub_test2.test_tbl2;
+ Table "pub_test2.test_tbl2"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a | integer | | not null | | plain | |
+ b | integer | | | | plain | |
+Indexes:
+ "test_tbl2_pkey" PRIMARY KEY, btree (a)
+Publications:
+ "testpub_schemas"
+
+-- add the other object type from each schema
+ALTER PUBLICATION testpub_schemas ADD ALL TABLES IN SCHEMA pub_test1;
+ALTER PUBLICATION testpub_schemas ADD ALL SEQUENCES IN SCHEMA pub_test2;
+\dRp+ testpub_schemas
+ Publication testpub_schemas
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Tables from schemas:
+ "pub_test1"
+ "pub_test2"
+Sequences from schemas:
+ "pub_test1"
+ "pub_test2"
+
+\dn+ pub_test1
+ List of schemas
+ Name | Owner | Access privileges | Description
+-----------+--------------------------+-------------------+-------------
+ pub_test1 | regress_publication_user | |
+Publications:
+ "testpub_schemas" (sequences)
+ "testpub_schemas" (tables)
+
+\dn+ pub_test2
+ List of schemas
+ Name | Owner | Access privileges | Description
+-----------+--------------------------+-------------------+-------------
+ pub_test2 | regress_publication_user | |
+Publications:
+ "testpub_schemas" (tables)
+ "testpub_schemas" (sequences)
+
+\d+ pub_test1.test_seq1;
+ Sequence "pub_test1.test_seq1"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test1.test_tbl1;
+ Table "pub_test1.test_tbl1"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a | integer | | not null | | plain | |
+ b | integer | | | | plain | |
+Indexes:
+ "test_tbl1_pkey" PRIMARY KEY, btree (a)
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test2.test_seq2;
+ Sequence "pub_test2.test_seq2"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test2.test_tbl2;
+ Table "pub_test2.test_tbl2"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a | integer | | not null | | plain | |
+ b | integer | | | | plain | |
+Indexes:
+ "test_tbl2_pkey" PRIMARY KEY, btree (a)
+Publications:
+ "testpub_schemas"
+
+-- now drop the object type added first
+ALTER PUBLICATION testpub_schemas DROP ALL TABLES IN SCHEMA pub_test2;
+ALTER PUBLICATION testpub_schemas DROP ALL SEQUENCES IN SCHEMA pub_test1;
+\dRp+ testpub_schemas
+ Publication testpub_schemas
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Tables from schemas:
+ "pub_test1"
+Sequences from schemas:
+ "pub_test2"
+
+\dn+ pub_test1
+ List of schemas
+ Name | Owner | Access privileges | Description
+-----------+--------------------------+-------------------+-------------
+ pub_test1 | regress_publication_user | |
+Publications:
+ "testpub_schemas" (tables)
+
+\dn+ pub_test2
+ List of schemas
+ Name | Owner | Access privileges | Description
+-----------+--------------------------+-------------------+-------------
+ pub_test2 | regress_publication_user | |
+Publications:
+ "testpub_schemas" (sequences)
+
+\d+ pub_test1.test_seq1;
+ Sequence "pub_test1.test_seq1"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+
+\d+ pub_test1.test_tbl1;
+ Table "pub_test1.test_tbl1"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a | integer | | not null | | plain | |
+ b | integer | | | | plain | |
+Indexes:
+ "test_tbl1_pkey" PRIMARY KEY, btree (a)
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test2.test_seq2;
+ Sequence "pub_test2.test_seq2"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test2.test_tbl2;
+ Table "pub_test2.test_tbl2"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a | integer | | not null | | plain | |
+ b | integer | | | | plain | |
+Indexes:
+ "test_tbl2_pkey" PRIMARY KEY, btree (a)
+
+-- should fail (publication contains the whole schema)
+ALTER PUBLICATION testpub_schemas ADD TABLE pub_test1.test_tbl1;
+ERROR: cannot add relation "pub_test1.test_tbl1" to publication
+DETAIL: Table's schema "pub_test1" is already part of the publication or part of the specified schema list.
+ALTER PUBLICATION testpub_schemas ADD SEQUENCE pub_test2.test_seq2;
+ERROR: cannot add relation "pub_test2.test_seq2" to publication
+DETAIL: Sequence's schema "pub_test2" is already part of the publication or part of the specified schema list.
+-- should work (different schema)
+ALTER PUBLICATION testpub_schemas ADD TABLE pub_test2.test_tbl2;
+ALTER PUBLICATION testpub_schemas ADD SEQUENCE pub_test1.test_seq1;
+\dRp+ testpub_schemas
+ Publication testpub_schemas
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Tables:
+ "pub_test2.test_tbl2"
+Tables from schemas:
+ "pub_test1"
+Sequences:
+ "pub_test1.test_seq1"
+Sequences from schemas:
+ "pub_test2"
+
+\d+ pub_test1.test_seq1;
+ Sequence "pub_test1.test_seq1"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test1.test_tbl1;
+ Table "pub_test1.test_tbl1"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a | integer | | not null | | plain | |
+ b | integer | | | | plain | |
+Indexes:
+ "test_tbl1_pkey" PRIMARY KEY, btree (a)
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test2.test_seq2;
+ Sequence "pub_test2.test_seq2"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test2.test_tbl2;
+ Table "pub_test2.test_tbl2"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a | integer | | not null | | plain | |
+ b | integer | | | | plain | |
+Indexes:
+ "test_tbl2_pkey" PRIMARY KEY, btree (a)
+Publications:
+ "testpub_schemas"
+
+-- now drop the explicitly added objects again
+ALTER PUBLICATION testpub_schemas DROP TABLE pub_test2.test_tbl2;
+ALTER PUBLICATION testpub_schemas DROP SEQUENCE pub_test1.test_seq1;
+\dRp+ testpub_schemas
+ Publication testpub_schemas
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Tables from schemas:
+ "pub_test1"
+Sequences from schemas:
+ "pub_test2"
+
+\d+ pub_test1.test_seq1;
+ Sequence "pub_test1.test_seq1"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+
+\d+ pub_test1.test_tbl1;
+ Table "pub_test1.test_tbl1"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a | integer | | not null | | plain | |
+ b | integer | | | | plain | |
+Indexes:
+ "test_tbl1_pkey" PRIMARY KEY, btree (a)
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test2.test_seq2;
+ Sequence "pub_test2.test_seq2"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test2.test_tbl2;
+ Table "pub_test2.test_tbl2"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a | integer | | not null | | plain | |
+ b | integer | | | | plain | |
+Indexes:
+ "test_tbl2_pkey" PRIMARY KEY, btree (a)
+
+DROP PUBLICATION testpub_schemas;
+DROP TABLE pub_test1.test_tbl1, pub_test2.test_tbl2;
+DROP SEQUENCE pub_test1.test_seq1, pub_test2.test_seq2;
+DROP SCHEMA pub_test1, pub_test2;
-- Tests for partitioned tables
SET client_min_messages = 'ERROR';
CREATE PUBLICATION testpub_forparted;
@@ -207,10 +703,10 @@ UPDATE testpub_parted1 SET a = 1;
-- only parent is listed as being in publication, not the partition
ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
\dRp+ testpub_forparted
- Publication testpub_forparted
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_forparted
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"public.testpub_parted"
@@ -223,10 +719,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
UPDATE testpub_parted1 SET a = 1;
ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
\dRp+ testpub_forparted
- Publication testpub_forparted
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | t
+ Publication testpub_forparted
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | t
Tables:
"public.testpub_parted"
@@ -255,10 +751,10 @@ SET client_min_messages = 'ERROR';
CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
RESET client_min_messages;
\dRp+ testpub5
- Publication testpub5
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | f | f | f | f
+ Publication testpub5
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | f | f | f | f | f
Tables:
"public.testpub_rf_tbl1"
"public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -271,10 +767,10 @@ Tables:
ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
\dRp+ testpub5
- Publication testpub5
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | f | f | f | f
+ Publication testpub5
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | f | f | f | f | f
Tables:
"public.testpub_rf_tbl1"
"public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -290,10 +786,10 @@ Publications:
ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
\dRp+ testpub5
- Publication testpub5
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | f | f | f | f
+ Publication testpub5
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | f | f | f | f | f
Tables:
"public.testpub_rf_tbl1"
"public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -301,10 +797,10 @@ Tables:
-- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
\dRp+ testpub5
- Publication testpub5
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | f | f | f | f
+ Publication testpub5
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | f | f | f | f | f
Tables:
"public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
@@ -337,10 +833,10 @@ SET client_min_messages = 'ERROR';
CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
RESET client_min_messages;
\dRp+ testpub_syntax1
- Publication testpub_syntax1
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | f | f | f | f
+ Publication testpub_syntax1
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | f | f | f | f | f
Tables:
"public.testpub_rf_tbl1"
"public.testpub_rf_tbl3" WHERE (e < 999)
@@ -350,10 +846,10 @@ SET client_min_messages = 'ERROR';
CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
RESET client_min_messages;
\dRp+ testpub_syntax2
- Publication testpub_syntax2
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | f | f | f | f
+ Publication testpub_syntax2
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | f | f | f | f | f
Tables:
"public.testpub_rf_tbl1"
"testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -658,10 +1154,10 @@ ERROR: relation "testpub_tbl1" is already member of publication "testpub_fortbl
CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
ERROR: publication "testpub_fortbl" already exists
\dRp+ testpub_fortbl
- Publication testpub_fortbl
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_fortbl
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"pub_test.testpub_nopk"
"public.testpub_tbl1"
@@ -699,10 +1195,10 @@ Publications:
"testpub_fortbl"
\dRp+ testpub_default
- Publication testpub_default
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | f | f
+ Publication testpub_default
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | f | t | f
Tables:
"pub_test.testpub_nopk"
"public.testpub_tbl1"
@@ -780,10 +1276,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
DROP TABLE testpub_parted;
DROP TABLE testpub_tbl1;
\dRp+ testpub_default
- Publication testpub_default
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | f | f
+ Publication testpub_default
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | f | t | f
(1 row)
-- fail - must be owner of publication
@@ -793,20 +1289,20 @@ ERROR: must be owner of publication testpub_default
RESET ROLE;
ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
\dRp testpub_foo
- List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
--------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpub_foo | regress_publication_user | f | t | t | t | f | f
+ List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ testpub_foo | regress_publication_user | f | f | t | t | t | f | t | f
(1 row)
-- rename back to keep the rest simple
ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
\dRp testpub_default
- List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
------------------+---------------------------+------------+---------+---------+---------+-----------+----------
- testpub_default | regress_publication_user2 | f | t | t | t | f | f
+ List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ testpub_default | regress_publication_user2 | f | f | t | t | t | f | t | f
(1 row)
-- adding schemas and tables
@@ -822,19 +1318,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
SET client_min_messages = 'ERROR';
CREATE PUBLICATION testpub1_forschema FOR ALL TABLES IN SCHEMA pub_test1;
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
CREATE PUBLICATION testpub2_forschema FOR ALL TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
\dRp+ testpub2_forschema
- Publication testpub2_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub2_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
"pub_test2"
@@ -848,44 +1344,44 @@ CREATE PUBLICATION testpub6_forschema FOR ALL TABLES IN SCHEMA "CURRENT_SCHEMA",
CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
RESET client_min_messages;
\dRp+ testpub3_forschema
- Publication testpub3_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub3_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"public"
\dRp+ testpub4_forschema
- Publication testpub4_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub4_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"CURRENT_SCHEMA"
\dRp+ testpub5_forschema
- Publication testpub5_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub5_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"CURRENT_SCHEMA"
"public"
\dRp+ testpub6_forschema
- Publication testpub6_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub6_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"CURRENT_SCHEMA"
"public"
\dRp+ testpub_fortable
- Publication testpub_fortable
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_fortable
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"CURRENT_SCHEMA.CURRENT_SCHEMA"
@@ -919,10 +1415,10 @@ ERROR: schema "testpub_view" does not exist
-- dropping the schema should reflect the change in publication
DROP SCHEMA pub_test3;
\dRp+ testpub2_forschema
- Publication testpub2_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub2_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
"pub_test2"
@@ -930,20 +1426,20 @@ Tables from schemas:
-- renaming the schema should reflect the change in publication
ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
\dRp+ testpub2_forschema
- Publication testpub2_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub2_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1_renamed"
"pub_test2"
ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
\dRp+ testpub2_forschema
- Publication testpub2_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub2_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
"pub_test2"
@@ -951,10 +1447,10 @@ Tables from schemas:
-- alter publication add schema
ALTER PUBLICATION testpub1_forschema ADD ALL TABLES IN SCHEMA pub_test2;
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
"pub_test2"
@@ -963,10 +1459,10 @@ Tables from schemas:
ALTER PUBLICATION testpub1_forschema ADD ALL TABLES IN SCHEMA non_existent_schema;
ERROR: schema "non_existent_schema" does not exist
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
"pub_test2"
@@ -975,10 +1471,10 @@ Tables from schemas:
ALTER PUBLICATION testpub1_forschema ADD ALL TABLES IN SCHEMA pub_test1;
ERROR: schema "pub_test1" is already member of publication "testpub1_forschema"
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
"pub_test2"
@@ -986,10 +1482,10 @@ Tables from schemas:
-- alter publication drop schema
ALTER PUBLICATION testpub1_forschema DROP ALL TABLES IN SCHEMA pub_test2;
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
@@ -997,10 +1493,10 @@ Tables from schemas:
ALTER PUBLICATION testpub1_forschema DROP ALL TABLES IN SCHEMA pub_test2;
ERROR: tables from schema "pub_test2" are not part of the publication
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
@@ -1008,29 +1504,29 @@ Tables from schemas:
ALTER PUBLICATION testpub1_forschema DROP ALL TABLES IN SCHEMA non_existent_schema;
ERROR: schema "non_existent_schema" does not exist
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
-- drop all schemas
ALTER PUBLICATION testpub1_forschema DROP ALL TABLES IN SCHEMA pub_test1;
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
(1 row)
-- alter publication set multiple schema
ALTER PUBLICATION testpub1_forschema SET ALL TABLES IN SCHEMA pub_test1, pub_test2;
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
"pub_test2"
@@ -1039,10 +1535,10 @@ Tables from schemas:
ALTER PUBLICATION testpub1_forschema SET ALL TABLES IN SCHEMA non_existent_schema;
ERROR: schema "non_existent_schema" does not exist
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
"pub_test2"
@@ -1051,10 +1547,10 @@ Tables from schemas:
-- removing the duplicate schemas
ALTER PUBLICATION testpub1_forschema SET ALL TABLES IN SCHEMA pub_test1, pub_test1;
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
@@ -1124,18 +1620,18 @@ SET client_min_messages = 'ERROR';
CREATE PUBLICATION testpub3_forschema;
RESET client_min_messages;
\dRp+ testpub3_forschema
- Publication testpub3_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub3_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
(1 row)
ALTER PUBLICATION testpub3_forschema SET ALL TABLES IN SCHEMA pub_test1;
\dRp+ testpub3_forschema
- Publication testpub3_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub3_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
@@ -1145,20 +1641,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR ALL TABLES IN SCHEMA pub_test1
CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, ALL TABLES IN SCHEMA pub_test1;
RESET client_min_messages;
\dRp+ testpub_forschema_fortable
- Publication testpub_forschema_fortable
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_forschema_fortable
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"pub_test2.tbl1"
Tables from schemas:
"pub_test1"
\dRp+ testpub_fortable_forschema
- Publication testpub_fortable_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_fortable_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"pub_test2.tbl1"
Tables from schemas:
@@ -1202,40 +1698,85 @@ CREATE SCHEMA sch1;
CREATE SCHEMA sch2;
CREATE TABLE sch1.tbl1 (a int) PARTITION BY RANGE(a);
CREATE TABLE sch2.tbl1_part1 PARTITION OF sch1.tbl1 FOR VALUES FROM (1) to (10);
+CREATE SEQUENCE sch1.seq1;
+CREATE SEQUENCE sch2.seq2;
-- Schema publication that does not include the schema that has the parent table
CREATE PUBLICATION pub FOR ALL TABLES IN SCHEMA sch2 WITH (PUBLISH_VIA_PARTITION_ROOT=1);
+ALTER PUBLICATION pub ADD ALL SEQUENCES IN SCHEMA sch2;
SELECT * FROM pg_publication_tables;
pubname | schemaname | tablename
---------+------------+------------
pub | sch2 | tbl1_part1
(1 row)
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch2 | seq2
+(1 row)
+
DROP PUBLICATION pub;
-- Table publication that does not include the parent table
CREATE PUBLICATION pub FOR TABLE sch2.tbl1_part1 WITH (PUBLISH_VIA_PARTITION_ROOT=1);
+ALTER PUBLICATION pub ADD SEQUENCE sch2.seq2;
SELECT * FROM pg_publication_tables;
pubname | schemaname | tablename
---------+------------+------------
pub | sch2 | tbl1_part1
(1 row)
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch2 | seq2
+(1 row)
+
-- Table publication that includes both the parent table and the child table
ALTER PUBLICATION pub ADD TABLE sch1.tbl1;
+ALTER PUBLICATION pub ADD SEQUENCE sch1.seq1;
SELECT * FROM pg_publication_tables;
pubname | schemaname | tablename
---------+------------+-----------
pub | sch1 | tbl1
(1 row)
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch1 | seq1
+ pub | sch2 | seq2
+(2 rows)
+
DROP PUBLICATION pub;
-- Schema publication that does not include the schema that has the parent table
CREATE PUBLICATION pub FOR ALL TABLES IN SCHEMA sch2 WITH (PUBLISH_VIA_PARTITION_ROOT=0);
+ALTER PUBLICATION pub ADD SEQUENCE sch1.seq1;
SELECT * FROM pg_publication_tables;
pubname | schemaname | tablename
---------+------------+------------
pub | sch2 | tbl1_part1
(1 row)
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch1 | seq1
+(1 row)
+
+DROP PUBLICATION pub;
+-- Sequence publication
+CREATE PUBLICATION pub FOR SEQUENCE sch2.seq2;
+SELECT * FROM pg_publication_tables;
+ pubname | schemaname | tablename
+---------+------------+-----------
+(0 rows)
+
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch2 | seq2
+(1 row)
+
DROP PUBLICATION pub;
-- Table publication that does not include the parent table
CREATE PUBLICATION pub FOR TABLE sch2.tbl1_part1 WITH (PUBLISH_VIA_PARTITION_ROOT=0);
@@ -1245,14 +1786,26 @@ SELECT * FROM pg_publication_tables;
pub | sch2 | tbl1_part1
(1 row)
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+(0 rows)
+
-- Table publication that includes both the parent table and the child table
ALTER PUBLICATION pub ADD TABLE sch1.tbl1;
+ALTER PUBLICATION pub ADD ALL SEQUENCES IN SCHEMA sch2;
SELECT * FROM pg_publication_tables;
pubname | schemaname | tablename
---------+------------+------------
pub | sch2 | tbl1_part1
(1 row)
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch2 | seq2
+(1 row)
+
DROP PUBLICATION pub;
DROP TABLE sch2.tbl1_part1;
DROP TABLE sch1.tbl1;
@@ -1268,9 +1821,81 @@ SELECT * FROM pg_publication_tables;
pub | sch1 | tbl1
(1 row)
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+(0 rows)
+
+DROP PUBLICATION pub;
+-- Sequence publication
+CREATE PUBLICATION pub FOR SEQUENCE sch2.seq2;
+SELECT * FROM pg_publication_tables;
+ pubname | schemaname | tablename
+---------+------------+-----------
+(0 rows)
+
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch2 | seq2
+(1 row)
+
+DROP PUBLICATION pub;
+-- Schema publication
+CREATE PUBLICATION pub FOR ALL SEQUENCES IN SCHEMA sch2;
+SELECT * FROM pg_publication_tables;
+ pubname | schemaname | tablename
+---------+------------+-----------
+(0 rows)
+
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch2 | seq2
+(1 row)
+
+ALTER PUBLICATION pub ADD SEQUENCE sch1.seq1;
+SELECT * FROM pg_publication_tables;
+ pubname | schemaname | tablename
+---------+------------+-----------
+(0 rows)
+
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch1 | seq1
+ pub | sch2 | seq2
+(2 rows)
+
+ALTER PUBLICATION pub DROP SEQUENCE sch1.seq1;
+SELECT * FROM pg_publication_tables;
+ pubname | schemaname | tablename
+---------+------------+-----------
+(0 rows)
+
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch2 | seq2
+(1 row)
+
+ALTER PUBLICATION pub ADD ALL SEQUENCES IN SCHEMA sch1;
+SELECT * FROM pg_publication_tables;
+ pubname | schemaname | tablename
+---------+------------+-----------
+(0 rows)
+
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch1 | seq1
+ pub | sch2 | seq2
+(2 rows)
+
RESET client_min_messages;
DROP PUBLICATION pub;
DROP TABLE sch1.tbl1;
+DROP SEQUENCE sch1.seq1, sch2.seq2;
DROP SCHEMA sch1 cascade;
DROP SCHEMA sch2 cascade;
RESET SESSION AUTHORIZATION;
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index ac468568a1a..ef328d356bb 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1429,6 +1429,14 @@ pg_prepared_xacts| SELECT p.transaction,
FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+ n.nspname AS schemaname,
+ c.relname AS sequencename
+ FROM pg_publication p,
+ LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+ (pg_class c
+ JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+ WHERE (c.oid = gps.relid);
pg_publication_tables| SELECT p.pubname,
n.nspname AS schemaname,
c.relname AS tablename
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index 5457c56b33f..5043c4bbba8 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -27,7 +27,7 @@ CREATE PUBLICATION testpub_xxx WITH (publish_via_partition_root = 'true', publis
\dRp
-ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
+ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete, sequence');
\dRp
@@ -104,6 +104,179 @@ RESET client_min_messages;
DROP TABLE testpub_tbl3, testpub_tbl3a;
DROP PUBLICATION testpub3, testpub4;
+--- adding sequences
+CREATE SEQUENCE testpub_seq0;
+CREATE SEQUENCE pub_test.testpub_seq1;
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_forallsequences FOR ALL SEQUENCES WITH (publish = 'sequence');
+RESET client_min_messages;
+ALTER PUBLICATION testpub_forallsequences SET (publish = 'insert, sequence');
+
+CREATE SEQUENCE testpub_seq2;
+-- fail - can't add to for all sequences publication
+ALTER PUBLICATION testpub_forallsequences ADD SEQUENCE testpub_seq2;
+-- fail - can't drop from all sequences publication
+ALTER PUBLICATION testpub_forallsequences DROP SEQUENCE testpub_seq2;
+-- fail - can't add to for all sequences publication
+ALTER PUBLICATION testpub_forallsequences SET SEQUENCE pub_test.testpub_seq1;
+
+-- fail - can't add schema to 'FOR ALL SEQUENCES' publication
+ALTER PUBLICATION testpub_forallsequences ADD ALL SEQUENCES IN SCHEMA pub_test;
+-- fail - can't drop schema from 'FOR ALL SEQUENCES' publication
+ALTER PUBLICATION testpub_forallsequences DROP ALL SEQUENCES IN SCHEMA pub_test;
+-- fail - can't set schema to 'FOR ALL SEQUENCES' publication
+ALTER PUBLICATION testpub_forallsequences SET ALL SEQUENCES IN SCHEMA pub_test;
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_forsequence FOR SEQUENCE testpub_seq0;
+RESET client_min_messages;
+-- should be able to add schema to 'FOR SEQUENCE' publication
+ALTER PUBLICATION testpub_forsequence ADD ALL SEQUENCES IN SCHEMA pub_test;
+\dRp+ testpub_forsequence
+-- should be able to drop schema from 'FOR SEQUENCE' publication
+ALTER PUBLICATION testpub_forsequence DROP ALL SEQUENCES IN SCHEMA pub_test;
+\dRp+ testpub_forsequence
+-- should be able to set schema to 'FOR SEQUENCE' publication
+ALTER PUBLICATION testpub_forsequence SET ALL SEQUENCES IN SCHEMA pub_test;
+\dRp+ testpub_forsequence
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_forschema FOR ALL SEQUENCES IN SCHEMA pub_test;
+RESET client_min_messages;
+-- fail - can't create publication with schema and sequence of the same schema
+CREATE PUBLICATION testpub_for_seq_schema FOR ALL SEQUENCES IN SCHEMA pub_test, SEQUENCE pub_test.testpub_seq1;
+-- fail - can't add a sequence of the same schema to the schema publication
+ALTER PUBLICATION testpub_forschema ADD SEQUENCE pub_test.testpub_seq1;
+-- fail - can't drop a sequence from the schema publication which isn't in the
+-- publication
+ALTER PUBLICATION testpub_forschema DROP SEQUENCE pub_test.testpub_seq1;
+-- should be able to set sequence to schema publication
+ALTER PUBLICATION testpub_forschema SET SEQUENCE pub_test.testpub_seq1;
+\dRp+ testpub_forschema
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'testpub_forallsequences';
+\d+ pub_test.testpub_seq1
+\dRp+ testpub_forallsequences
+DROP SEQUENCE testpub_seq0, pub_test.testpub_seq1, testpub_seq2;
+DROP PUBLICATION testpub_forallsequences, testpub_forsequence, testpub_forschema;
+
+-- Publication mixing tables and sequences
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_mix;
+RESET client_min_messages;
+
+CREATE SEQUENCE testpub_seq1;
+CREATE SEQUENCE pub_test.testpub_seq2;
+
+ALTER PUBLICATION testpub_mix ADD SEQUENCE testpub_seq1, TABLE testpub_tbl1;
+\dRp+ testpub_mix
+
+ALTER PUBLICATION testpub_mix ADD ALL SEQUENCES IN SCHEMA pub_test, ALL TABLES IN SCHEMA pub_test;
+\dRp+ testpub_mix
+
+ALTER PUBLICATION testpub_mix DROP ALL SEQUENCES IN SCHEMA pub_test;
+\dRp+ testpub_mix
+
+ALTER PUBLICATION testpub_mix DROP ALL TABLES IN SCHEMA pub_test;
+\dRp+ testpub_mix
+
+DROP PUBLICATION testpub_mix;
+DROP SEQUENCE testpub_seq1;
+DROP SEQUENCE pub_test.testpub_seq2;
+
+
+-- make sure we replicate only the correct relation type
+CREATE SCHEMA pub_test1;
+CREATE SEQUENCE pub_test1.test_seq1;
+CREATE TABLE pub_test1.test_tbl1 (a int primary key, b int);
+
+CREATE SCHEMA pub_test2;
+CREATE SEQUENCE pub_test2.test_seq2;
+CREATE TABLE pub_test2.test_tbl2 (a int primary key, b int);
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_schemas;
+RESET client_min_messages;
+
+-- add tables from one schema, sequences from the other
+ALTER PUBLICATION testpub_schemas ADD ALL TABLES IN SCHEMA pub_test2;
+ALTER PUBLICATION testpub_schemas ADD ALL SEQUENCES IN SCHEMA pub_test1;
+
+\dRp+ testpub_schemas
+
+\dn+ pub_test1
+\dn+ pub_test2
+
+\d+ pub_test1.test_seq1;
+\d+ pub_test1.test_tbl1;
+
+\d+ pub_test2.test_seq2;
+\d+ pub_test2.test_tbl2;
+
+-- add the other object type from each schema
+ALTER PUBLICATION testpub_schemas ADD ALL TABLES IN SCHEMA pub_test1;
+ALTER PUBLICATION testpub_schemas ADD ALL SEQUENCES IN SCHEMA pub_test2;
+
+\dRp+ testpub_schemas
+
+\dn+ pub_test1
+\dn+ pub_test2
+
+\d+ pub_test1.test_seq1;
+\d+ pub_test1.test_tbl1;
+
+\d+ pub_test2.test_seq2;
+\d+ pub_test2.test_tbl2;
+
+-- now drop the object type added first
+ALTER PUBLICATION testpub_schemas DROP ALL TABLES IN SCHEMA pub_test2;
+ALTER PUBLICATION testpub_schemas DROP ALL SEQUENCES IN SCHEMA pub_test1;
+
+\dRp+ testpub_schemas
+
+\dn+ pub_test1
+\dn+ pub_test2
+
+\d+ pub_test1.test_seq1;
+\d+ pub_test1.test_tbl1;
+
+\d+ pub_test2.test_seq2;
+\d+ pub_test2.test_tbl2;
+
+-- should fail (publication contains the whole schema)
+ALTER PUBLICATION testpub_schemas ADD TABLE pub_test1.test_tbl1;
+ALTER PUBLICATION testpub_schemas ADD SEQUENCE pub_test2.test_seq2;
+
+-- should work (different schema)
+ALTER PUBLICATION testpub_schemas ADD TABLE pub_test2.test_tbl2;
+ALTER PUBLICATION testpub_schemas ADD SEQUENCE pub_test1.test_seq1;
+
+\dRp+ testpub_schemas
+
+\d+ pub_test1.test_seq1;
+\d+ pub_test1.test_tbl1;
+
+\d+ pub_test2.test_seq2;
+\d+ pub_test2.test_tbl2;
+
+-- now drop the explicitly added objects again
+ALTER PUBLICATION testpub_schemas DROP TABLE pub_test2.test_tbl2;
+ALTER PUBLICATION testpub_schemas DROP SEQUENCE pub_test1.test_seq1;
+
+\dRp+ testpub_schemas
+
+\d+ pub_test1.test_seq1;
+\d+ pub_test1.test_tbl1;
+
+\d+ pub_test2.test_seq2;
+\d+ pub_test2.test_tbl2;
+
+DROP PUBLICATION testpub_schemas;
+DROP TABLE pub_test1.test_tbl1, pub_test2.test_tbl2;
+DROP SEQUENCE pub_test1.test_seq1, pub_test2.test_seq2;
+DROP SCHEMA pub_test1, pub_test2;
+
-- Tests for partitioned tables
SET client_min_messages = 'ERROR';
CREATE PUBLICATION testpub_forparted;
@@ -717,32 +890,51 @@ CREATE SCHEMA sch1;
CREATE SCHEMA sch2;
CREATE TABLE sch1.tbl1 (a int) PARTITION BY RANGE(a);
CREATE TABLE sch2.tbl1_part1 PARTITION OF sch1.tbl1 FOR VALUES FROM (1) to (10);
+CREATE SEQUENCE sch1.seq1;
+CREATE SEQUENCE sch2.seq2;
-- Schema publication that does not include the schema that has the parent table
CREATE PUBLICATION pub FOR ALL TABLES IN SCHEMA sch2 WITH (PUBLISH_VIA_PARTITION_ROOT=1);
+ALTER PUBLICATION pub ADD ALL SEQUENCES IN SCHEMA sch2;
SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
DROP PUBLICATION pub;
-- Table publication that does not include the parent table
CREATE PUBLICATION pub FOR TABLE sch2.tbl1_part1 WITH (PUBLISH_VIA_PARTITION_ROOT=1);
+ALTER PUBLICATION pub ADD SEQUENCE sch2.seq2;
SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
-- Table publication that includes both the parent table and the child table
ALTER PUBLICATION pub ADD TABLE sch1.tbl1;
+ALTER PUBLICATION pub ADD SEQUENCE sch1.seq1;
SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
DROP PUBLICATION pub;
-- Schema publication that does not include the schema that has the parent table
CREATE PUBLICATION pub FOR ALL TABLES IN SCHEMA sch2 WITH (PUBLISH_VIA_PARTITION_ROOT=0);
+ALTER PUBLICATION pub ADD SEQUENCE sch1.seq1;
+SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
+
+DROP PUBLICATION pub;
+-- Sequence publication
+CREATE PUBLICATION pub FOR SEQUENCE sch2.seq2;
SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
DROP PUBLICATION pub;
-- Table publication that does not include the parent table
CREATE PUBLICATION pub FOR TABLE sch2.tbl1_part1 WITH (PUBLISH_VIA_PARTITION_ROOT=0);
SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
-- Table publication that includes both the parent table and the child table
ALTER PUBLICATION pub ADD TABLE sch1.tbl1;
+ALTER PUBLICATION pub ADD ALL SEQUENCES IN SCHEMA sch2;
SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
DROP PUBLICATION pub;
DROP TABLE sch2.tbl1_part1;
@@ -755,10 +947,36 @@ CREATE TABLE sch1.tbl1_part3 (a int) PARTITION BY RANGE(a);
ALTER TABLE sch1.tbl1 ATTACH PARTITION sch1.tbl1_part3 FOR VALUES FROM (20) to (30);
CREATE PUBLICATION pub FOR ALL TABLES IN SCHEMA sch1 WITH (PUBLISH_VIA_PARTITION_ROOT=1);
SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
+
+DROP PUBLICATION pub;
+-- Sequence publication
+CREATE PUBLICATION pub FOR SEQUENCE sch2.seq2;
+SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
+
+DROP PUBLICATION pub;
+-- Schema publication
+CREATE PUBLICATION pub FOR ALL SEQUENCES IN SCHEMA sch2;
+SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
+
+ALTER PUBLICATION pub ADD SEQUENCE sch1.seq1;
+SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
+
+ALTER PUBLICATION pub DROP SEQUENCE sch1.seq1;
+SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
+
+ALTER PUBLICATION pub ADD ALL SEQUENCES IN SCHEMA sch1;
+SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
RESET client_min_messages;
DROP PUBLICATION pub;
DROP TABLE sch1.tbl1;
+DROP SEQUENCE sch1.seq1, sch2.seq2;
DROP SCHEMA sch1 cascade;
DROP SCHEMA sch2 cascade;
diff --git a/src/test/subscription/t/029_sequences.pl b/src/test/subscription/t/029_sequences.pl
new file mode 100644
index 00000000000..9ae3c03d7d1
--- /dev/null
+++ b/src/test/subscription/t/029_sequences.pl
@@ -0,0 +1,202 @@
+
+# Copyright (c) 2021, PostgreSQL Global Development Group
+
+# This tests that sequences are replicated correctly by logical replication
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->start;
+
+# Create subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Create some preexisting content on publisher
+my $ddl = qq(
+ CREATE TABLE seq_test (v BIGINT);
+ CREATE SEQUENCE s;
+);
+
+# Setup structure on the publisher
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Create the same structure on the subscriber, and an extra sequence that
+# we'll create on the publisher later
+$ddl = qq(
+ CREATE TABLE seq_test (v BIGINT);
+ CREATE SEQUENCE s;
+ CREATE SEQUENCE s2;
+);
+
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Setup logical replication
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+ "CREATE PUBLICATION seq_pub");
+
+$node_publisher->safe_psql('postgres',
+ "ALTER PUBLICATION seq_pub ADD SEQUENCE s");
+
+$node_subscriber->safe_psql('postgres',
+ "CREATE SUBSCRIPTION seq_sub CONNECTION '$publisher_connstr' PUBLICATION seq_pub"
+);
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Wait for initial sync to finish as well
+my $synced_query =
+ "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('s', 'r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+ or die "Timed out while waiting for subscriber to synchronize data";
+
+# Insert initial test data
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ -- generate a number of values using the sequence
+ INSERT INTO seq_test SELECT nextval('s') FROM generate_series(1,100);
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+my $result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s;
+));
+
+is( $result, '132|0|t',
+ 'initial test data replicated');
+
+
+# advance the sequence in a rolled-back transaction - the rollback does not
+# wait for the replication, so we could see an intermediate state; do
+# something else after the test, to ensure we wait for everything
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ INSERT INTO seq_test SELECT nextval('s') FROM generate_series(1,100);
+ ROLLBACK;
+ INSERT INTO seq_test VALUES (-1);
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s;
+));
+
+is( $result, '231|0|t',
+ 'advance sequence in rolled-back transaction');
+
+
+# create a new sequence and roll it back - should not be replicated, due to
+# the transactional behavior
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ CREATE SEQUENCE s2;
+ ALTER PUBLICATION seq_pub ADD SEQUENCE s2;
+ INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '1|0|f',
+ 'create new sequence and roll it back');
+
+
+# create a new sequence, advance it in a rolled-back transaction, but commit
+# the create - the advance should be replicated nevertheless
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ CREATE SEQUENCE s2;
+ ALTER PUBLICATION seq_pub ADD SEQUENCE s2;
+ SAVEPOINT sp1;
+ INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK TO sp1;
+ COMMIT;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Wait for sync of the second sequence we just added to finish
+$synced_query =
+ "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('s', 'r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+ or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '132|0|t',
+ 'create sequence, advance it in rolled-back transaction, but commit the create');
+
+
+# advance the new sequence in a transaction, and roll it back - the rollback
+# does not wait for the replication, so we could see an intermediate state;
+# do something else after the test, to ensure we wait for everything
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK;
+ INSERT INTO seq_test VALUES (-1);
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '231|0|t',
+ 'advance the new sequence in a transaction and roll it back');
+
+
+# advance the sequence in a subtransaction - the subtransaction gets rolled
+# back, but commit the main one - the changes should still be replicated
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ SAVEPOINT s1;
+ INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK TO s1;
+ COMMIT;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '330|0|t',
+ 'advance sequence in a subtransaction');
+
+
+done_testing();
--
2.34.1
Hi,
Attached is a rebased patch, addressing most of the remaining issues.
The main improvements are:
1) pg_publication_namespace.pntype and type checks
Originally, the patch used a pnsequences flag to distinguish which entries
were added by FOR ALL TABLES IN SCHEMA and which by FOR ALL SEQUENCES IN
SCHEMA. I've decided to replace this with a simple char column, called
pntype, where 't' means tables and 's' means sequences. As explained
before, relkind doesn't work well because of partitioned tables. A char,
with a function to match it to relkind values, works fairly well.
I've also revisited the question of how to represent publications that
publish the same schema twice - once for tables, once for sequences. There
were proposals to represent this with a single row, i.e. to turn pntype
into an array of char values, so it'd be either ['t'], ['s'] or ['s', 't'].
I spent some time working on that, but I've decided to keep the current
approach with two separate rows - it's easier to manage, look up, etc.
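For illustration, here's a minimal sketch of the two-row representation
(object names made up for this example):
CREATE SCHEMA sch_a;
CREATE PUBLICATION pub_a;
ALTER PUBLICATION pub_a ADD ALL TABLES IN SCHEMA sch_a;
ALTER PUBLICATION pub_a ADD ALL SEQUENCES IN SCHEMA sch_a;
-- two rows for the same (publication, schema) pair, one with pntype 't'
-- and one with pntype 's'
SELECT pnpubid, pnnspid, pntype FROM pg_publication_namespace;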
2) pg_get_object_address
I've updated the objectaddress code to consider pntype when looking up
pg_publication_namespace entries, so each row in pg_publication_namespace
gets the correct ObjectAddress.
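As a rough sketch, a lookup from SQL might look like this (reusing the
made-up pub_a / sch_a from the previous example; the object-type string
and argument order are my reading of the code, so treat it as approximate):
-- address of the "sequences from schema sch_a" entry in pub_a
SELECT * FROM pg_get_object_address('publication namespace',
                                    '{sch_a}', '{pub_a,s}');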
3) for all [tables | sequences]
The original patch did not allow creating a publication for all tables and
all sequences at the same time. I've tweaked the grammar to allow this:
CREATE PUBLICATION p FOR ALL list_of_types;
where list_of_types is an arbitrary combination of TABLES and SEQUENCES.
It's implemented in a slightly awkward way - partially in the grammar,
partially in publicationcmds.c. I suspect there's a (cleaner) way to do
this entirely in the grammar, but I haven't succeeded yet.
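So, with the tweaked grammar, all of these should be accepted (publication
names made up for the example):
CREATE PUBLICATION p1 FOR ALL TABLES;
CREATE PUBLICATION p2 FOR ALL SEQUENCES;
CREATE PUBLICATION p3 FOR ALL TABLES, SEQUENCES;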
4) prevent 'ADD TABLE sequence' and 'ADD SEQUENCE table'
It was possible to do "ADD TABLE" and pass it a sequence, in which case
we'd fail to notice that the publication already includes all sequences
from that schema. I've added a check preventing that (and a similar one
for ADD SEQUENCE).
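For example, something like this should now fail instead of slipping
through (names made up):
CREATE SCHEMA sch_b;
CREATE SEQUENCE sch_b.seq_b;
CREATE PUBLICATION pub_b FOR ALL SEQUENCES IN SCHEMA sch_b;
-- ADD TABLE is given a sequence here; the schema already publishes all
-- of its sequences, so this is now rejected
ALTER PUBLICATION pub_b ADD TABLE sch_b.seq_b;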
5) missing block in AlterTableNamespace to cross-check moving published
sequence to already published schema
A block of code was missing from AlterTableNamespace, checking that we're
not moving a sequence into a schema that already has all of its sequences
published (i.e. included via ALL SEQUENCES IN SCHEMA).
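A sketch of the scenario the new check guards against (names made up, and
I'm not claiming the exact error message):
CREATE SCHEMA sch_c;
CREATE SEQUENCE public.seq_c;
CREATE PUBLICATION pub_c FOR ALL SEQUENCES IN SCHEMA sch_c;
ALTER PUBLICATION pub_c ADD SEQUENCE public.seq_c;
-- moving the explicitly published sequence into a schema whose sequences
-- are already in pub_c should now be rejected
ALTER SEQUENCE public.seq_c SET SCHEMA sch_c;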
6) a couple of comment fixes
Various comment improvements and fixes. At this point there are a couple
of trivial FIXME/XXX comments remaining.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Attachments:
0001-Add-support-for-decoding-sequences-to-built-20220320.patch (text/x-patch)
From d510e26b8aab698670627b192607c6a1ec9d6c8e Mon Sep 17 00:00:00 2001
From: Tomas Vondra <tomas.vondra@postgresql.org>
Date: Sat, 19 Mar 2022 20:15:06 +0100
Subject: [PATCH] Add support for decoding sequences to built-in replication
---
doc/src/sgml/catalogs.sgml | 81 ++
doc/src/sgml/protocol.sgml | 119 ++
doc/src/sgml/ref/alter_publication.sgml | 24 +-
doc/src/sgml/ref/alter_subscription.sgml | 3 +-
doc/src/sgml/ref/create_publication.sgml | 45 +-
src/backend/catalog/objectaddress.c | 44 +-
src/backend/catalog/pg_publication.c | 321 +++++-
src/backend/catalog/system_views.sql | 10 +
src/backend/commands/publicationcmds.c | 426 +++++--
src/backend/commands/sequence.c | 79 ++
src/backend/commands/subscriptioncmds.c | 101 +-
src/backend/commands/tablecmds.c | 27 +-
src/backend/executor/execReplication.c | 4 +-
src/backend/nodes/copyfuncs.c | 4 +-
src/backend/nodes/equalfuncs.c | 4 +-
src/backend/parser/gram.y | 52 +-
src/backend/replication/logical/proto.c | 52 +
src/backend/replication/logical/tablesync.c | 110 +-
src/backend/replication/logical/worker.c | 56 +
src/backend/replication/pgoutput/pgoutput.c | 79 +-
src/backend/utils/cache/relcache.c | 28 +-
src/backend/utils/cache/syscache.c | 6 +-
src/bin/pg_dump/pg_dump.c | 65 +-
src/bin/pg_dump/pg_dump.h | 3 +
src/bin/pg_dump/t/002_pg_dump.pl | 40 +-
src/bin/psql/describe.c | 287 +++--
src/bin/psql/tab-complete.c | 12 +-
src/include/catalog/pg_proc.dat | 5 +
src/include/catalog/pg_publication.h | 26 +-
.../catalog/pg_publication_namespace.h | 10 +-
src/include/commands/sequence.h | 1 +
src/include/nodes/parsenodes.h | 8 +-
src/include/replication/logicalproto.h | 19 +
src/include/replication/pgoutput.h | 1 +
src/test/regress/expected/object_address.out | 10 +-
src/test/regress/expected/publication.out | 1009 ++++++++++++++---
src/test/regress/expected/rules.out | 8 +
src/test/regress/sql/object_address.sql | 5 +-
src/test/regress/sql/publication.sql | 226 +++-
src/test/subscription/t/030_sequences.pl | 202 ++++
40 files changed, 3155 insertions(+), 457 deletions(-)
create mode 100644 src/test/subscription/t/030_sequences.pl
diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 4dc5b34d21c..69c97894066 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -6281,6 +6281,16 @@ SCRAM-SHA-256$<replaceable><iteration count></replaceable>:<replaceable>&l
Reference to schema
</para></entry>
</row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>pntype</structfield> <type>char</type>
+ </para>
+ <para>
+ Determines which object type (<literal>t</literal> for tables,
+ <literal>s</literal> for sequences) is included from this schema.
+ </para></entry>
+ </row>
</tbody>
</tgroup>
</table>
@@ -9588,6 +9598,11 @@ SCRAM-SHA-256$<replaceable><iteration count></replaceable>:<replaceable>&l
<entry>prepared transactions</entry>
</row>
+ <row>
+ <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+ <entry>publications and their associated sequences</entry>
+ </row>
+
<row>
<entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
<entry>publications and their associated tables</entry>
@@ -11423,6 +11438,72 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
</sect1>
+ <sect1 id="view-pg-publication-sequences">
+ <title><structname>pg_publication_sequences</structname></title>
+
+ <indexterm zone="view-pg-publication-sequences">
+ <primary>pg_publication_sequences</primary>
+ </indexterm>
+
+ <para>
+ The view <structname>pg_publication_sequences</structname> provides
+ information about the mapping between publications and the sequences they
+ contain. Unlike the underlying catalog
+ <link linkend="catalog-pg-publication-rel"><structname>pg_publication_rel</structname></link>,
+ this view expands
+ publications defined as <literal>FOR ALL SEQUENCES</literal>, so for such
+ publications there will be a row for each eligible sequence.
+ </para>
+
+ <table>
+ <title><structname>pg_publication_sequences</structname> Columns</title>
+ <tgroup cols="1">
+ <thead>
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ Column Type
+ </para>
+ <para>
+ Description
+ </para></entry>
+ </row>
+ </thead>
+
+ <tbody>
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>pubname</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+ </para>
+ <para>
+ Name of publication
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>schemaname</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+ </para>
+ <para>
+ Name of schema containing sequence
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>sequencename</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+ </para>
+ <para>
+ Name of sequence
+ </para></entry>
+ </row>
+ </tbody>
+ </tgroup>
+ </table>
+ </sect1>
+
<sect1 id="view-pg-publication-tables">
<title><structname>pg_publication_tables</structname></title>
diff --git a/doc/src/sgml/protocol.sgml b/doc/src/sgml/protocol.sgml
index 9178c779ba9..4507c4cb791 100644
--- a/doc/src/sgml/protocol.sgml
+++ b/doc/src/sgml/protocol.sgml
@@ -7055,6 +7055,125 @@ Relation
</listitem>
</varlistentry>
+<varlistentry id="protocol-logicalrep-message-formats-Sequence">
+<term>
+Sequence
+</term>
+<listitem>
+<para>
+
+<variablelist>
+<varlistentry>
+<term>
+ Byte1('X')
+</term>
+<listitem>
+<para>
+ Identifies the message as a sequence message.
+</para>
+</listitem>
+</varlistentry>
+<varlistentry>
+<term>
+ Int32 (TransactionId)
+</term>
+<listitem>
+<para>
+ Xid of the transaction (only present for streamed transactions).
+ This field is available since protocol version 2.
+</para>
+</listitem>
+</varlistentry>
+<varlistentry>
+<term>
+ Int8(0)
+</term>
+<listitem>
+<para>
+ Flags; currently unused.
+</para>
+</listitem>
+</varlistentry>
+<varlistentry>
+<term>
+ Int64 (XLogRecPtr)
+</term>
+<listitem>
+<para>
+ The LSN of the sequence increment.
+</para>
+</listitem>
+</varlistentry>
+<varlistentry>
+<term>
+ String
+</term>
+<listitem>
+<para>
+ Namespace (empty string for <literal>pg_catalog</literal>).
+</para>
+</listitem>
+</varlistentry>
+<varlistentry>
+<term>
+ String
+</term>
+<listitem>
+<para>
+ Relation name.
+</para>
+</listitem>
+</varlistentry>
+
+<varlistentry>
+<term>
+ Int8
+</term>
+<listitem>
+<para>
+ 1 if the sequence update is transactional, 0 otherwise.
+</para>
+</listitem>
+</varlistentry>
+
+<varlistentry>
+<term>
+ Int64
+</term>
+<listitem>
+<para>
+ <structfield>last_value</structfield> value of the sequence.
+</para>
+</listitem>
+</varlistentry>
+
+<varlistentry>
+<term>
+ Int64
+</term>
+<listitem>
+<para>
+ <structfield>log_cnt</structfield> value of the sequence.
+</para>
+</listitem>
+</varlistentry>
+
+<varlistentry>
+<term>
+ Int8
+</term>
+<listitem>
+<para>
+ <structfield>is_called</structfield> value of the sequence.
+</para>
+</listitem>
+</varlistentry>
+
+</variablelist>
+</para>
+</listitem>
+</varlistentry>
+
<varlistentry id="protocol-logicalrep-message-formats-Type">
<term>
Type
diff --git a/doc/src/sgml/ref/alter_publication.sgml b/doc/src/sgml/ref/alter_publication.sgml
index 32b75f6c78e..5dacb732b69 100644
--- a/doc/src/sgml/ref/alter_publication.sgml
+++ b/doc/src/sgml/ref/alter_publication.sgml
@@ -31,7 +31,9 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
<phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
+ SEQUENCE <replaceable class="parameter">sequence_name</replaceable> [, ... ]
ALL TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+ ALL SEQUENCES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
</synopsis>
</refsynopsisdiv>
@@ -58,7 +60,18 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
</para>
<para>
- The fourth variant of this command listed in the synopsis can change
+ The next three variants change which sequences are part of the publication.
+ The <literal>SET SEQUENCE</literal> clause will replace the list of sequences
+ in the publication with the specified one. The <literal>ADD SEQUENCE</literal>
+ and <literal>DROP SEQUENCE</literal> clauses will add and remove one or more
+ sequences from the publication. Note that adding sequences to a publication
+ that is already subscribed to will require a <literal>ALTER SUBSCRIPTION
+ ... REFRESH PUBLICATION</literal> action on the subscribing side in order
+ to become effective.
+ </para>
+
+ <para>
+ The seventh variant of this command listed in the synopsis can change
all of the publication properties specified in
<xref linkend="sql-createpublication"/>. Properties not mentioned in the
command retain their previous settings.
@@ -122,6 +135,15 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><replaceable class="parameter">sequence_name</replaceable></term>
+ <listitem>
+ <para>
+ Name of an existing sequence.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><replaceable class="parameter">schema_name</replaceable></term>
<listitem>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 58b78a94eab..71e918e8e59 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -147,7 +147,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
<listitem>
<para>
Fetch missing table information from publisher. This will start
- replication of tables that were added to the subscribed-to publications
+ replication of tables and sequences that were added to the subscribed-to publications
since <command>CREATE SUBSCRIPTION</command> or
the last invocation of <command>REFRESH PUBLICATION</command>.
</para>
@@ -168,6 +168,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
<para>
Previously subscribed tables are not copied, even if a table's row
filter <literal>WHERE</literal> clause has since been modified.
+ Previously-subscribed sequences are not copied either.
</para>
</listitem>
</varlistentry>
diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 4979b9b646d..f355549966d 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -23,13 +23,16 @@ PostgreSQL documentation
<synopsis>
CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
[ FOR ALL TABLES
+ | FOR ALL SEQUENCES
| FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
[ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
<phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
+ SEQUENCE <replaceable class="parameter">sequence_name</replaceable> [ * ] [, ... ]
ALL TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+ ALL SEQUENCES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
</synopsis>
</refsynopsisdiv>
@@ -107,27 +110,43 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><literal>FOR SEQUENCE</literal></term>
+ <listitem>
+ <para>
+ Specifies a list of sequences to add to the publication.
+ </para>
+
+ <para>
+ Specifying a sequence that is part of a schema specified by <literal>FOR
+ ALL SEQUENCES IN SCHEMA</literal> is not supported.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><literal>FOR ALL TABLES</literal></term>
+ <term><literal>FOR ALL SEQUENCES</literal></term>
<listitem>
<para>
- Marks the publication as one that replicates changes for all tables in
- the database, including tables created in the future.
+ Marks the publication as one that replicates changes for all tables/sequences in
+ the database, including tables/sequences created in the future.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term><literal>FOR ALL TABLES IN SCHEMA</literal></term>
+ <term><literal>FOR ALL SEQUENCES IN SCHEMA</literal></term>
<listitem>
<para>
- Marks the publication as one that replicates changes for all tables in
- the specified list of schemas, including tables created in the future.
+ Marks the publication as one that replicates changes for all sequences/tables in
+ the specified list of schemas, including sequences/tables created in the future.
</para>
<para>
- Specifying a schema along with a table which belongs to the specified
- schema using <literal>FOR TABLE</literal> is not supported.
+ Specifying a schema along with a sequence/table which belongs to the specified
+ schema using <literal>FOR SEQUENCE</literal>/<literal>FOR TABLE</literal> is not supported.
</para>
<para>
@@ -202,10 +221,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
<title>Notes</title>
<para>
- If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
- <literal>FOR ALL TABLES IN SCHEMA</literal> are not specified, then the
- publication starts out with an empty set of tables. That is useful if
- tables or schemas are to be added later.
+ If <literal>FOR TABLE</literal>, <literal>FOR SEQUENCE</literal>, etc. is
+ not specified, then the publication starts out with an empty set of tables
+ and sequences. That is useful if objects are to be added later.
</para>
<para>
@@ -220,10 +238,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
</para>
<para>
- To add a table to a publication, the invoking user must have ownership
- rights on the table. The <command>FOR ALL TABLES</command> and
- <command>FOR ALL TABLES IN SCHEMA</command> clauses require the invoking
- user to be a superuser.
+ To add a table or a sequence to a publication, the invoking user must
+ have ownership rights on the object. The <command>FOR ALL ...</command>
+ clauses require the invoking user to be a superuser.
</para>
<para>
diff --git a/src/backend/catalog/objectaddress.c b/src/backend/catalog/objectaddress.c
index f30c742d48f..1a2512c8aa8 100644
--- a/src/backend/catalog/objectaddress.c
+++ b/src/backend/catalog/objectaddress.c
@@ -1958,12 +1958,14 @@ get_object_address_publication_schema(List *object, bool missing_ok)
char *pubname;
char *schemaname;
Oid schemaid;
+ char *objtype;
ObjectAddressSet(address, PublicationNamespaceRelationId, InvalidOid);
/* Fetch schema name and publication name from input list */
schemaname = strVal(linitial(object));
pubname = strVal(lsecond(object));
+ objtype = strVal(lthird(object));
schemaid = get_namespace_oid(schemaname, missing_ok);
if (!OidIsValid(schemaid))
@@ -1976,10 +1978,12 @@ get_object_address_publication_schema(List *object, bool missing_ok)
/* Find the publication schema mapping in syscache */
address.objectId =
- GetSysCacheOid2(PUBLICATIONNAMESPACEMAP,
+ GetSysCacheOid3(PUBLICATIONNAMESPACEMAP,
Anum_pg_publication_namespace_oid,
ObjectIdGetDatum(schemaid),
- ObjectIdGetDatum(pub->oid));
+ ObjectIdGetDatum(pub->oid),
+ CharGetDatum(objtype[0]));
+
if (!OidIsValid(address.objectId) && !missing_ok)
ereport(ERROR,
(errcode(ERRCODE_UNDEFINED_OBJECT),
@@ -2260,7 +2264,6 @@ pg_get_object_address(PG_FUNCTION_ARGS)
case OBJECT_DOMCONSTRAINT:
case OBJECT_CAST:
case OBJECT_USER_MAPPING:
- case OBJECT_PUBLICATION_NAMESPACE:
case OBJECT_PUBLICATION_REL:
case OBJECT_DEFACL:
case OBJECT_TRANSFORM:
@@ -2285,6 +2288,7 @@ pg_get_object_address(PG_FUNCTION_ARGS)
/* fall through to check args length */
/* FALLTHROUGH */
case OBJECT_OPERATOR:
+ case OBJECT_PUBLICATION_NAMESPACE:
if (list_length(args) != 2)
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -2355,6 +2359,8 @@ pg_get_object_address(PG_FUNCTION_ARGS)
objnode = (Node *) list_make2(name, linitial(args));
break;
case OBJECT_PUBLICATION_NAMESPACE:
+ objnode = (Node *) list_make3(linitial(name), linitial(args), lsecond(args));
+ break;
case OBJECT_USER_MAPPING:
objnode = (Node *) list_make2(linitial(name), linitial(args));
break;
@@ -2909,11 +2915,12 @@ get_catalog_object_by_oid(Relation catalog, AttrNumber oidcol, Oid objectId)
*
* Get publication name and schema name from the object address into pubname and
* nspname. Both pubname and nspname are palloc'd strings which will be freed by
- * the caller.
+ * the caller. The last parameter specifies which object type is included from
+ * the schema.
*/
static bool
getPublicationSchemaInfo(const ObjectAddress *object, bool missing_ok,
- char **pubname, char **nspname)
+ char **pubname, char **nspname, char **objtype)
{
HeapTuple tup;
Form_pg_publication_namespace pnform;
@@ -2949,6 +2956,13 @@ getPublicationSchemaInfo(const ObjectAddress *object, bool missing_ok,
return false;
}
+ /*
+ * The type is always a single character, but we need to pass it as a string,
+ * so allocate two characters and set the first one. The second one is \0.
+ */
+ *objtype = palloc0(2);
+ *objtype[0] = pnform->pntype;
+
ReleaseSysCache(tup);
return true;
}
@@ -3981,15 +3995,17 @@ getObjectDescription(const ObjectAddress *object, bool missing_ok)
{
char *pubname;
char *nspname;
+ char *objtype;
if (!getPublicationSchemaInfo(object, missing_ok,
- &pubname, &nspname))
+ &pubname, &nspname, &objtype))
break;
- appendStringInfo(&buffer, _("publication of schema %s in publication %s"),
- nspname, pubname);
+ appendStringInfo(&buffer, _("publication of schema %s in publication %s type %s"),
+ nspname, pubname, objtype);
pfree(pubname);
pfree(nspname);
+ pfree(objtype);
break;
}
@@ -5812,18 +5828,24 @@ getObjectIdentityParts(const ObjectAddress *object,
{
char *pubname;
char *nspname;
+ char *objtype;
if (!getPublicationSchemaInfo(object, missing_ok, &pubname,
- &nspname))
+ &nspname, &objtype))
break;
- appendStringInfo(&buffer, "%s in publication %s",
- nspname, pubname);
+ appendStringInfo(&buffer, "%s in publication %s type %s",
+ nspname, pubname, objtype);
if (objargs)
*objargs = list_make1(pubname);
else
pfree(pubname);
+ if (objargs)
+ *objargs = lappend(*objargs, objtype);
+ else
+ pfree(objtype);
+
if (objname)
*objname = list_make1(nspname);
else
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 789b895db89..144d1ada9e2 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -54,7 +54,8 @@ check_publication_add_relation(Relation targetrel)
{
/* Must be a regular or partitioned table */
if (RelationGetForm(targetrel)->relkind != RELKIND_RELATION &&
- RelationGetForm(targetrel)->relkind != RELKIND_PARTITIONED_TABLE)
+ RelationGetForm(targetrel)->relkind != RELKIND_PARTITIONED_TABLE &&
+ RelationGetForm(targetrel)->relkind != RELKIND_SEQUENCE)
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
errmsg("cannot add relation \"%s\" to publication",
@@ -131,7 +132,8 @@ static bool
is_publishable_class(Oid relid, Form_pg_class reltuple)
{
return (reltuple->relkind == RELKIND_RELATION ||
- reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+ reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+ reltuple->relkind == RELKIND_SEQUENCE) &&
!IsCatalogRelationOid(relid) &&
reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
relid >= FirstNormalObjectId;
@@ -176,6 +178,47 @@ filter_partitions(List *relids)
return result;
}
+/* Check the character is a valid object type for schema publication. */
+static void
+AssertObjectTypeValid(char objectType)
+{
+#ifdef USE_ASSERT_CHECKING
+ Assert(objectType == PUB_OBJTYPE_SEQUENCE || objectType == PUB_OBJTYPE_TABLE);
+#endif
+}
+
+/*
+ * Determine object type given the object type set for a schema.
+ */
+char
+pub_get_object_type_for_relkind(char relkind)
+{
+ /* sequence maps directly to sequence relkind */
+ if (relkind == RELKIND_SEQUENCE)
+ return PUB_OBJTYPE_SEQUENCE;
+
+ /* for table, we match either regular or partitioned table */
+ if (relkind == RELKIND_RELATION ||
+ relkind == RELKIND_PARTITIONED_TABLE)
+ return PUB_OBJTYPE_TABLE;
+
+ return PUB_OBJTYPE_UNSUPPORTED;
+}
+
+/*
+ * Determine if publication object type matches the relkind.
+ *
+ * Returns true if the relation matches object type replicated by this schema,
+ * false otherwise.
+ */
+static bool
+pub_object_type_matches_relkind(char objectType, char relkind)
+{
+ AssertObjectTypeValid(objectType);
+
+ return (pub_get_object_type_for_relkind(relkind) == objectType);
+}
+
/*
* Another variant of this, taking a Relation.
*/
@@ -205,7 +248,7 @@ is_schema_publication(Oid pubid)
ObjectIdGetDatum(pubid));
scan = systable_beginscan(pubschsrel,
- PublicationNamespacePnnspidPnpubidIndexId,
+ PublicationNamespacePnnspidPnpubidPntypeIndexId,
true, NULL, 1, &scankey);
tup = systable_getnext(scan);
result = HeapTupleIsValid(tup);
@@ -313,7 +356,9 @@ GetTopMostAncestorInPublication(Oid puboid, List *ancestors, int *ancestor_level
}
else
{
- aschemaPubids = GetSchemaPublications(get_rel_namespace(ancestor));
+ /* we only search for ancestors of tables, so PUB_OBJTYPE_TABLE */
+ aschemaPubids = GetSchemaPublications(get_rel_namespace(ancestor),
+ PUB_OBJTYPE_TABLE);
if (list_member_oid(aschemaPubids, puboid))
{
topmost_relid = ancestor;
@@ -436,7 +481,7 @@ publication_add_relation(Oid pubid, PublicationRelInfo *pri,
* Insert new publication / schema mapping.
*/
ObjectAddress
-publication_add_schema(Oid pubid, Oid schemaid, bool if_not_exists)
+publication_add_schema(Oid pubid, Oid schemaid, char objectType, bool if_not_exists)
{
Relation rel;
HeapTuple tup;
@@ -448,6 +493,8 @@ publication_add_schema(Oid pubid, Oid schemaid, bool if_not_exists)
ObjectAddress myself,
referenced;
+ AssertObjectTypeValid(objectType);
+
rel = table_open(PublicationNamespaceRelationId, RowExclusiveLock);
/*
@@ -455,9 +502,10 @@ publication_add_schema(Oid pubid, Oid schemaid, bool if_not_exists)
* duplicates, it's here just to provide nicer error message in common
* case. The real protection is the unique key on the catalog.
*/
- if (SearchSysCacheExists2(PUBLICATIONNAMESPACEMAP,
+ if (SearchSysCacheExists3(PUBLICATIONNAMESPACEMAP,
ObjectIdGetDatum(schemaid),
- ObjectIdGetDatum(pubid)))
+ ObjectIdGetDatum(pubid),
+ CharGetDatum(objectType)))
{
table_close(rel, RowExclusiveLock);
@@ -483,6 +531,8 @@ publication_add_schema(Oid pubid, Oid schemaid, bool if_not_exists)
ObjectIdGetDatum(pubid);
values[Anum_pg_publication_namespace_pnnspid - 1] =
ObjectIdGetDatum(schemaid);
+ values[Anum_pg_publication_namespace_pntype - 1] =
+ CharGetDatum(objectType);
tup = heap_form_tuple(RelationGetDescr(rel), values, nulls);
@@ -508,7 +558,7 @@ publication_add_schema(Oid pubid, Oid schemaid, bool if_not_exists)
* publication_add_relation for why we need to consider all the
* partitions.
*/
- schemaRels = GetSchemaPublicationRelations(schemaid,
+ schemaRels = GetSchemaPublicationRelations(schemaid, objectType,
PUBLICATION_PART_ALL);
InvalidatePublicationRels(schemaRels);
@@ -542,11 +592,14 @@ GetRelationPublications(Oid relid)
/*
* Gets list of relation oids for a publication.
*
- * This should only be used FOR TABLE publications, the FOR ALL TABLES
- * should use GetAllTablesPublicationRelations().
+ * This should only be used for FOR TABLE / FOR SEQUENCE publications; the
+ * FOR ALL TABLES / SEQUENCES cases should use GetAllTablesPublicationRelations()
+ * and GetAllSequencesPublicationRelations().
+ *
+ * XXX pub_partopt only matters for tables, not sequences.
*/
List *
-GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
+GetPublicationRelations(Oid pubid, char objectType, PublicationPartOpt pub_partopt)
{
List *result;
Relation pubrelsrel;
@@ -554,6 +607,8 @@ GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
SysScanDesc scan;
HeapTuple tup;
+ AssertObjectTypeValid(objectType);
+
/* Find all publications associated with the relation. */
pubrelsrel = table_open(PublicationRelRelationId, AccessShareLock);
@@ -568,11 +623,29 @@ GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
result = NIL;
while (HeapTupleIsValid(tup = systable_getnext(scan)))
{
+ char relkind;
Form_pg_publication_rel pubrel;
pubrel = (Form_pg_publication_rel) GETSTRUCT(tup);
- result = GetPubPartitionOptionRelations(result, pub_partopt,
- pubrel->prrelid);
+ relkind = get_rel_relkind(pubrel->prrelid);
+
+ /*
+ * If the relkind does not match the requested object type, ignore the
+ * relation. For example, we might be interested only in sequences, in
+ * which case we skip tables.
+ */
+ if (!pub_object_type_matches_relkind(objectType, relkind))
+ continue;
+
+ /*
+ * Sequences are never partitioned, so add them to the list directly.
+ * For tables, consider adding all child relations, if requested.
+ */
+ if (relkind == RELKIND_SEQUENCE)
+ result = lappend_oid(result, pubrel->prrelid);
+ else
+ result = GetPubPartitionOptionRelations(result, pub_partopt,
+ pubrel->prrelid);
}
systable_endscan(scan);
@@ -622,6 +695,43 @@ GetAllTablesPublications(void)
return result;
}
+/*
+ * Gets list of publication oids for publications marked as FOR ALL SEQUENCES.
+ */
+List *
+GetAllSequencesPublications(void)
+{
+ List *result;
+ Relation rel;
+ ScanKeyData scankey;
+ SysScanDesc scan;
+ HeapTuple tup;
+
+ /* Find all publications that are marked as for all sequences. */
+ rel = table_open(PublicationRelationId, AccessShareLock);
+
+ ScanKeyInit(&scankey,
+ Anum_pg_publication_puballsequences,
+ BTEqualStrategyNumber, F_BOOLEQ,
+ BoolGetDatum(true));
+
+ scan = systable_beginscan(rel, InvalidOid, false,
+ NULL, 1, &scankey);
+
+ result = NIL;
+ while (HeapTupleIsValid(tup = systable_getnext(scan)))
+ {
+ Oid oid = ((Form_pg_publication) GETSTRUCT(tup))->oid;
+
+ result = lappend_oid(result, oid);
+ }
+
+ systable_endscan(scan);
+ table_close(rel, AccessShareLock);
+
+ return result;
+}
+
/*
* Gets list of all relation published by FOR ALL TABLES publication(s).
*
@@ -688,28 +798,38 @@ GetAllTablesPublicationRelations(bool pubviaroot)
/*
* Gets the list of schema oids for a publication.
*
- * This should only be used FOR ALL TABLES IN SCHEMA publications.
+ * This should only be used for FOR ALL TABLES IN SCHEMA and FOR ALL
+ * SEQUENCES IN SCHEMA publications.
+ *
+ * 'objectType' determines whether to return TABLE or SEQUENCE schemas.
*/
List *
-GetPublicationSchemas(Oid pubid)
+GetPublicationSchemas(Oid pubid, char objectType)
{
List *result = NIL;
Relation pubschsrel;
- ScanKeyData scankey;
+ ScanKeyData scankey[2];
SysScanDesc scan;
HeapTuple tup;
+ AssertObjectTypeValid(objectType);
+
/* Find all schemas associated with the publication */
pubschsrel = table_open(PublicationNamespaceRelationId, AccessShareLock);
- ScanKeyInit(&scankey,
+ ScanKeyInit(&scankey[0],
Anum_pg_publication_namespace_pnpubid,
BTEqualStrategyNumber, F_OIDEQ,
ObjectIdGetDatum(pubid));
+ ScanKeyInit(&scankey[1],
+ Anum_pg_publication_namespace_pntype,
+ BTEqualStrategyNumber, F_CHAREQ,
+ CharGetDatum(objectType));
+
scan = systable_beginscan(pubschsrel,
- PublicationNamespacePnnspidPnpubidIndexId,
- true, NULL, 1, &scankey);
+ PublicationNamespacePnnspidPnpubidPntypeIndexId,
+ true, NULL, 2, scankey);
while (HeapTupleIsValid(tup = systable_getnext(scan)))
{
Form_pg_publication_namespace pubsch;
@@ -727,14 +847,26 @@ GetPublicationSchemas(Oid pubid)
/*
* Gets the list of publication oids associated with a specified schema.
+ *
+ * objectType specifies whether we're looking for schemas including tables or
+ * sequences.
+ *
+ * Note: relcache calls this for all object types, not just tables and
+ * sequences, which is why we also handle PUB_OBJTYPE_UNSUPPORTED here.
*/
List *
-GetSchemaPublications(Oid schemaid)
+GetSchemaPublications(Oid schemaid, char objectType)
{
List *result = NIL;
CatCList *pubschlist;
int i;
+ /* unsupported object type */
+ if (objectType == PUB_OBJTYPE_UNSUPPORTED)
+ return result;
+
+ AssertObjectTypeValid(objectType);
+
/* Find all publications associated with the schema */
pubschlist = SearchSysCacheList1(PUBLICATIONNAMESPACEMAP,
ObjectIdGetDatum(schemaid));
@@ -742,6 +874,11 @@ GetSchemaPublications(Oid schemaid)
{
HeapTuple tup = &pubschlist->members[i]->tuple;
Oid pubid = ((Form_pg_publication_namespace) GETSTRUCT(tup))->pnpubid;
+ char pntype = ((Form_pg_publication_namespace) GETSTRUCT(tup))->pntype;
+
+ /* Skip schemas publishing a different object type. */
+ if (pntype != objectType)
+ continue;
result = lappend_oid(result, pubid);
}
@@ -753,9 +890,13 @@ GetSchemaPublications(Oid schemaid)
/*
* Get the list of publishable relation oids for a specified schema.
+ *
+ * objectType specifies whether this is FOR ALL TABLES IN SCHEMA or FOR ALL
+ * SEQUENCES IN SCHEMA.
*/
List *
-GetSchemaPublicationRelations(Oid schemaid, PublicationPartOpt pub_partopt)
+GetSchemaPublicationRelations(Oid schemaid, char objectType,
+ PublicationPartOpt pub_partopt)
{
Relation classRel;
ScanKeyData key[1];
@@ -764,6 +905,7 @@ GetSchemaPublicationRelations(Oid schemaid, PublicationPartOpt pub_partopt)
List *result = NIL;
Assert(OidIsValid(schemaid));
+ AssertObjectTypeValid(objectType);
classRel = table_open(RelationRelationId, AccessShareLock);
@@ -784,9 +926,16 @@ GetSchemaPublicationRelations(Oid schemaid, PublicationPartOpt pub_partopt)
continue;
relkind = get_rel_relkind(relid);
- if (relkind == RELKIND_RELATION)
- result = lappend_oid(result, relid);
- else if (relkind == RELKIND_PARTITIONED_TABLE)
+
+ /* Skip if the relkind does not match FOR ALL TABLES / SEQUENCES. */
+ if (!pub_object_type_matches_relkind(objectType, relkind))
+ continue;
+
+ /*
+ * If the object is a partitioned table, look up all the child relations
+ * (if requested). Otherwise just add the object to the list.
+ */
+ if (relkind == RELKIND_PARTITIONED_TABLE)
{
List *partitionrels = NIL;
@@ -799,7 +948,11 @@ GetSchemaPublicationRelations(Oid schemaid, PublicationPartOpt pub_partopt)
pub_partopt,
relForm->oid);
result = list_concat_unique_oid(result, partitionrels);
+ continue;
}
+
+ /* non-partitioned tables and sequences */
+ result = lappend_oid(result, relid);
}
table_endscan(scan);
@@ -809,27 +962,67 @@ GetSchemaPublicationRelations(Oid schemaid, PublicationPartOpt pub_partopt)
/*
* Gets the list of all relations published by FOR ALL TABLES IN SCHEMA
- * publication.
+ * or FOR ALL SEQUENCES IN SCHEMA publication.
*/
List *
-GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
+GetAllSchemaPublicationRelations(Oid pubid, char objectType,
+ PublicationPartOpt pub_partopt)
{
List *result = NIL;
- List *pubschemalist = GetPublicationSchemas(pubid);
+ List *pubschemalist = GetPublicationSchemas(pubid, objectType);
ListCell *cell;
+ AssertObjectTypeValid(objectType);
+
foreach(cell, pubschemalist)
{
Oid schemaid = lfirst_oid(cell);
List *schemaRels = NIL;
- schemaRels = GetSchemaPublicationRelations(schemaid, pub_partopt);
+ schemaRels = GetSchemaPublicationRelations(schemaid, objectType,
+ pub_partopt);
result = list_concat(result, schemaRels);
}
return result;
}
+/*
+ * Gets the list of all sequences published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+ Relation classRel;
+ ScanKeyData key[1];
+ TableScanDesc scan;
+ HeapTuple tuple;
+ List *result = NIL;
+
+ classRel = table_open(RelationRelationId, AccessShareLock);
+
+ ScanKeyInit(&key[0],
+ Anum_pg_class_relkind,
+ BTEqualStrategyNumber, F_CHAREQ,
+ CharGetDatum(RELKIND_SEQUENCE));
+
+ scan = table_beginscan_catalog(classRel, 1, key);
+
+ while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+ {
+ Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+ Oid relid = relForm->oid;
+
+ if (is_publishable_class(relid, relForm))
+ result = lappend_oid(result, relid);
+ }
+
+ table_endscan(scan);
+
+ table_close(classRel, AccessShareLock);
+ return result;
+}
+
/*
* Get publication using oid
*
@@ -852,10 +1045,12 @@ GetPublication(Oid pubid)
pub->oid = pubid;
pub->name = pstrdup(NameStr(pubform->pubname));
pub->alltables = pubform->puballtables;
+ pub->allsequences = pubform->puballsequences;
pub->pubactions.pubinsert = pubform->pubinsert;
pub->pubactions.pubupdate = pubform->pubupdate;
pub->pubactions.pubdelete = pubform->pubdelete;
pub->pubactions.pubtruncate = pubform->pubtruncate;
+ pub->pubactions.pubsequence = pubform->pubsequence;
pub->pubviaroot = pubform->pubviaroot;
ReleaseSysCache(tup);
@@ -966,10 +1161,12 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
*schemarelids;
relids = GetPublicationRelations(publication->oid,
+ PUB_OBJTYPE_TABLE,
publication->pubviaroot ?
PUBLICATION_PART_ROOT :
PUBLICATION_PART_LEAF);
schemarelids = GetAllSchemaPublicationRelations(publication->oid,
+ PUB_OBJTYPE_TABLE,
publication->pubviaroot ?
PUBLICATION_PART_ROOT :
PUBLICATION_PART_LEAF);
@@ -1005,3 +1202,71 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
SRF_RETURN_DONE(funcctx);
}
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+ FuncCallContext *funcctx;
+ char *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+ Publication *publication;
+ List *sequences;
+
+ /* stuff done only on the first call of the function */
+ if (SRF_IS_FIRSTCALL())
+ {
+ MemoryContext oldcontext;
+
+ /* create a function context for cross-call persistence */
+ funcctx = SRF_FIRSTCALL_INIT();
+
+ /* switch to memory context appropriate for multiple function calls */
+ oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+ publication = GetPublicationByName(pubname, false);
+
+ /*
+ * For a FOR ALL SEQUENCES publication, simply collect all sequences.
+ * Otherwise combine the sequences added to the publication directly
+ * with those published via ALL SEQUENCES IN SCHEMA.
+ */
+ if (publication->allsequences)
+ sequences = GetAllSequencesPublicationRelations();
+ else
+ {
+ List *relids,
+ *schemarelids;
+
+ relids = GetPublicationRelations(publication->oid,
+ PUB_OBJTYPE_SEQUENCE,
+ publication->pubviaroot ?
+ PUBLICATION_PART_ROOT :
+ PUBLICATION_PART_LEAF);
+ schemarelids = GetAllSchemaPublicationRelations(publication->oid,
+ PUB_OBJTYPE_SEQUENCE,
+ publication->pubviaroot ?
+ PUBLICATION_PART_ROOT :
+ PUBLICATION_PART_LEAF);
+ sequences = list_concat_unique_oid(relids, schemarelids);
+ }
+
+ funcctx->user_fctx = (void *) sequences;
+
+ MemoryContextSwitchTo(oldcontext);
+ }
+
+ /* stuff done on every call of the function */
+ funcctx = SRF_PERCALL_SETUP();
+ sequences = (List *) funcctx->user_fctx;
+
+ if (funcctx->call_cntr < list_length(sequences))
+ {
+ Oid relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+ SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+ }
+
+ SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index bb1ac30cd19..e2fc70ea887 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -374,6 +374,16 @@ CREATE VIEW pg_publication_tables AS
pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
WHERE C.oid = GPT.relid;
+CREATE VIEW pg_publication_sequences AS
+ SELECT
+ P.pubname AS pubname,
+ N.nspname AS schemaname,
+ C.relname AS sequencename
+ FROM pg_publication P,
+ LATERAL pg_get_publication_sequences(P.pubname) GPS,
+ pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+ WHERE C.oid = GPS.relid;
+
CREATE VIEW pg_locks AS
SELECT * FROM pg_lock_status() AS L;
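As a quick usage sketch of the new view (the publication name below is made up),
it can be queried much like pg_publication_tables:

CREATE PUBLICATION seq_pub FOR ALL SEQUENCES;

SELECT pubname, schemaname, sequencename
  FROM pg_publication_sequences
 WHERE pubname = 'seq_pub';
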
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 1aad2e769cb..92505f4699f 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -16,6 +16,7 @@
#include "access/genam.h"
#include "access/htup_details.h"
+#include "access/relation.h"
#include "access/table.h"
#include "access/xact.h"
#include "catalog/catalog.h"
@@ -67,15 +68,17 @@ typedef struct rf_context
} rf_context;
static List *OpenRelIdList(List *relids);
-static List *OpenTableList(List *tables);
-static void CloseTableList(List *rels);
+static List *OpenRelationList(List *rels, char objectType);
+static void CloseRelationList(List *rels);
static void LockSchemaList(List *schemalist);
-static void PublicationAddTables(Oid pubid, List *rels, bool if_not_exists,
+static void PublicationAddRelations(Oid pubid, List *rels, bool if_not_exists,
AlterPublicationStmt *stmt);
-static void PublicationDropTables(Oid pubid, List *rels, bool missing_ok);
-static void PublicationAddSchemas(Oid pubid, List *schemas, bool if_not_exists,
- AlterPublicationStmt *stmt);
-static void PublicationDropSchemas(Oid pubid, List *schemas, bool missing_ok);
+static void PublicationDropRelations(Oid pubid, List *rels, bool missing_ok);
+static void PublicationAddSchemas(Oid pubid, List *schemas, char objectType,
+ bool if_not_exists, AlterPublicationStmt *stmt);
+static void PublicationDropSchemas(Oid pubid, List *schemas, char objectType,
+ bool missing_ok);
+
static void
parse_publication_options(ParseState *pstate,
@@ -95,6 +98,7 @@ parse_publication_options(ParseState *pstate,
pubactions->pubupdate = true;
pubactions->pubdelete = true;
pubactions->pubtruncate = true;
+ pubactions->pubsequence = true;
*publish_via_partition_root = false;
/* Parse options */
@@ -119,6 +123,7 @@ parse_publication_options(ParseState *pstate,
pubactions->pubupdate = false;
pubactions->pubdelete = false;
pubactions->pubtruncate = false;
+ pubactions->pubsequence = false;
*publish_given = true;
publish = defGetString(defel);
@@ -141,6 +146,8 @@ parse_publication_options(ParseState *pstate,
pubactions->pubdelete = true;
else if (strcmp(publish_opt, "truncate") == 0)
pubactions->pubtruncate = true;
+ else if (strcmp(publish_opt, "sequence") == 0)
+ pubactions->pubsequence = true;
else
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
@@ -167,7 +174,9 @@ parse_publication_options(ParseState *pstate,
*/
static void
ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
- List **rels, List **schemas)
+ List **tables, List **sequences,
+ List **tables_schemas, List **sequences_schemas,
+ List **schemas)
{
ListCell *cell;
PublicationObjSpec *pubobj;
@@ -185,12 +194,23 @@ ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
switch (pubobj->pubobjtype)
{
case PUBLICATIONOBJ_TABLE:
- *rels = lappend(*rels, pubobj->pubtable);
+ *tables = lappend(*tables, pubobj->pubtable);
+ break;
+ case PUBLICATIONOBJ_SEQUENCE:
+ *sequences = lappend(*sequences, pubobj->pubtable);
break;
case PUBLICATIONOBJ_TABLES_IN_SCHEMA:
schemaid = get_namespace_oid(pubobj->name, false);
/* Filter out duplicates if user specifies "sch1, sch1" */
+ *tables_schemas = list_append_unique_oid(*tables_schemas, schemaid);
+ *schemas = list_append_unique_oid(*schemas, schemaid);
+ break;
+ case PUBLICATIONOBJ_SEQUENCES_IN_SCHEMA:
+ schemaid = get_namespace_oid(pubobj->name, false);
+
+ /* Filter out duplicates if user specifies "sch1, sch1" */
+ *sequences_schemas = list_append_unique_oid(*sequences_schemas, schemaid);
*schemas = list_append_unique_oid(*schemas, schemaid);
break;
case PUBLICATIONOBJ_TABLES_IN_CUR_SCHEMA:
@@ -204,6 +224,21 @@ ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
list_free(search_path);
/* Filter out duplicates if user specifies "sch1, sch1" */
+ *tables_schemas = list_append_unique_oid(*tables_schemas, schemaid);
+ *schemas = list_append_unique_oid(*schemas, schemaid);
+ break;
+ case PUBLICATIONOBJ_SEQUENCES_IN_CUR_SCHEMA:
+ search_path = fetch_search_path(false);
+ if (search_path == NIL) /* nothing valid in search_path? */
+ ereport(ERROR,
+ errcode(ERRCODE_UNDEFINED_SCHEMA),
+ errmsg("no schema has been selected for CURRENT_SCHEMA"));
+
+ schemaid = linitial_oid(search_path);
+ list_free(search_path);
+
+ /* Filter out duplicates if user specifies "sch1, sch1" */
+ *sequences_schemas = list_append_unique_oid(*sequences_schemas, schemaid);
*schemas = list_append_unique_oid(*schemas, schemaid);
break;
default:
@@ -240,6 +275,14 @@ CheckObjSchemaNotAlreadyInPublication(List *rels, List *schemaidlist,
errdetail("Table \"%s\" in schema \"%s\" is already part of the publication, adding the same schema is not supported.",
RelationGetRelationName(rel),
get_namespace_name(relSchemaId)));
+ else if (checkobjtype == PUBLICATIONOBJ_SEQUENCES_IN_SCHEMA)
+ ereport(ERROR,
+ errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ errmsg("cannot add schema \"%s\" to publication",
+ get_namespace_name(relSchemaId)),
+ errdetail("SEquence \"%s\" in schema \"%s\" is already part of the publication, adding the same schema is not supported.",
+ RelationGetRelationName(rel),
+ get_namespace_name(relSchemaId)));
else if (checkobjtype == PUBLICATIONOBJ_TABLE)
ereport(ERROR,
errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -248,6 +291,14 @@ CheckObjSchemaNotAlreadyInPublication(List *rels, List *schemaidlist,
RelationGetRelationName(rel)),
errdetail("Table's schema \"%s\" is already part of the publication or part of the specified schema list.",
get_namespace_name(relSchemaId)));
+ else if (checkobjtype == PUBLICATIONOBJ_SEQUENCE)
+ ereport(ERROR,
+ errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ errmsg("cannot add relation \"%s.%s\" to publication",
+ get_namespace_name(relSchemaId),
+ RelationGetRelationName(rel)),
+ errdetail("Sequence's schema \"%s\" is already part of the publication or part of the specified schema list.",
+ get_namespace_name(relSchemaId)));
}
}
}
@@ -615,6 +666,7 @@ TransformPubWhereClauses(List *tables, const char *queryString,
ObjectAddress
CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
{
+ ListCell *lc;
Relation rel;
ObjectAddress myself;
Oid puboid;
@@ -626,9 +678,31 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
bool publish_via_partition_root_given;
bool publish_via_partition_root;
AclResult aclresult;
- List *relations = NIL;
+ List *tables = NIL;
+ List *sequences = NIL;
+ List *tables_schemaidlist = NIL;
+ List *sequences_schemaidlist = NIL;
List *schemaidlist = NIL;
+ bool for_all_tables = false;
+ bool for_all_sequences = false;
+
+ /*
+ * XXX Translate the list of object types (represented by strings) to bool
+ * flags. Not sure this is the best way, maybe it should be done in the
+ * grammar directly, somehow.
+ *
+ * XXX Should prevent duplicates, like FOR ALL TABLES, TABLES, TABLES?
+ */
+ foreach (lc, stmt->for_all_objects)
+ {
+ char *val = strVal(lfirst(lc));
+ if (strcmp(val, "tables") == 0)
+ for_all_tables = true;
+ else if (strcmp(val, "sequences") == 0)
+ for_all_sequences = true;
+ }
+
/* must have CREATE privilege on database */
aclresult = pg_database_aclcheck(MyDatabaseId, GetUserId(), ACL_CREATE);
if (aclresult != ACLCHECK_OK)
@@ -636,7 +710,7 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
get_database_name(MyDatabaseId));
/* FOR ALL TABLES requires superuser */
- if (stmt->for_all_tables && !superuser())
+ if (for_all_tables && !superuser())
ereport(ERROR,
(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
errmsg("must be superuser to create FOR ALL TABLES publication")));
@@ -672,7 +746,9 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
Anum_pg_publication_oid);
values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
values[Anum_pg_publication_puballtables - 1] =
- BoolGetDatum(stmt->for_all_tables);
+ BoolGetDatum(for_all_tables);
+ values[Anum_pg_publication_puballsequences - 1] =
+ BoolGetDatum(for_all_sequences);
values[Anum_pg_publication_pubinsert - 1] =
BoolGetDatum(pubactions.pubinsert);
values[Anum_pg_publication_pubupdate - 1] =
@@ -681,6 +757,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
BoolGetDatum(pubactions.pubdelete);
values[Anum_pg_publication_pubtruncate - 1] =
BoolGetDatum(pubactions.pubtruncate);
+ values[Anum_pg_publication_pubsequence - 1] =
+ BoolGetDatum(pubactions.pubsequence);
values[Anum_pg_publication_pubviaroot - 1] =
BoolGetDatum(publish_via_partition_root);
@@ -698,45 +776,84 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
CommandCounterIncrement();
/* Associate objects with the publication. */
- if (stmt->for_all_tables)
+ if (for_all_tables || for_all_sequences)
{
/* Invalidate relcache so that publication info is rebuilt. */
CacheInvalidateRelcacheAll();
}
- else
+
+ if (!for_all_tables || !for_all_sequences)
{
- ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
+ ObjectsInPublicationToOids(stmt->pubobjects, pstate,
+ &tables, &sequences,
+ &tables_schemaidlist,
+ &sequences_schemaidlist,
&schemaidlist);
/* FOR ALL TABLES IN SCHEMA requires superuser */
- if (list_length(schemaidlist) > 0 && !superuser())
+ if (list_length(tables_schemaidlist) > 0 && !superuser())
ereport(ERROR,
errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
errmsg("must be superuser to create FOR ALL TABLES IN SCHEMA publication"));
- if (list_length(relations) > 0)
+ /* FOR ALL SEQUENCES IN SCHEMA requires superuser */
+ if (list_length(sequences_schemaidlist) > 0 && !superuser())
+ ereport(ERROR,
+ errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+ errmsg("must be superuser to create FOR ALL SEQUENCES IN SCHEMA publication"));
+
+ /* tables added directly */
+ if (list_length(tables) > 0)
{
List *rels;
- rels = OpenTableList(relations);
- CheckObjSchemaNotAlreadyInPublication(rels, schemaidlist,
+ rels = OpenRelationList(tables, PUB_OBJTYPE_TABLE);
+ CheckObjSchemaNotAlreadyInPublication(rels, tables_schemaidlist,
PUBLICATIONOBJ_TABLE);
TransformPubWhereClauses(rels, pstate->p_sourcetext,
publish_via_partition_root);
- PublicationAddTables(puboid, rels, true, NULL);
- CloseTableList(rels);
+ PublicationAddRelations(puboid, rels, true, NULL);
+ CloseRelationList(rels);
}
- if (list_length(schemaidlist) > 0)
+ /* sequences added directly */
+ if (list_length(sequences) > 0)
+ {
+ List *rels;
+
+ rels = OpenRelationList(sequences, PUB_OBJTYPE_SEQUENCE);
+ CheckObjSchemaNotAlreadyInPublication(rels, sequences_schemaidlist,
+ PUBLICATIONOBJ_SEQUENCE);
+ PublicationAddRelations(puboid, rels, true, NULL);
+ CloseRelationList(rels);
+ }
+
+ /* tables added through a schema */
+ if (list_length(tables_schemaidlist) > 0)
{
/*
* Schema lock is held until the publication is created to prevent
* concurrent schema deletion.
*/
- LockSchemaList(schemaidlist);
- PublicationAddSchemas(puboid, schemaidlist, true, NULL);
+ LockSchemaList(tables_schemaidlist);
+ PublicationAddSchemas(puboid,
+ tables_schemaidlist, PUB_OBJTYPE_TABLE,
+ true, NULL);
+ }
+
+ /* sequences added through a schema */
+ if (list_length(sequences_schemaidlist) > 0)
+ {
+ /*
+ * Schema lock is held until the publication is created to prevent
+ * concurrent schema deletion.
+ */
+ LockSchemaList(sequences_schemaidlist);
+ PublicationAddSchemas(puboid,
+ sequences_schemaidlist, PUB_OBJTYPE_SEQUENCE,
+ true, NULL);
}
}
@@ -799,6 +916,7 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
AccessShareLock);
root_relids = GetPublicationRelations(pubform->oid,
+ PUB_OBJTYPE_TABLE,
PUBLICATION_PART_ROOT);
foreach(lc, root_relids)
@@ -857,6 +975,9 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
values[Anum_pg_publication_pubtruncate - 1] = BoolGetDatum(pubactions.pubtruncate);
replaces[Anum_pg_publication_pubtruncate - 1] = true;
+
+ values[Anum_pg_publication_pubsequence - 1] = BoolGetDatum(pubactions.pubsequence);
+ replaces[Anum_pg_publication_pubsequence - 1] = true;
}
if (publish_via_partition_root_given)
@@ -876,7 +997,7 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
pubform = (Form_pg_publication) GETSTRUCT(tup);
/* Invalidate the relcache. */
- if (pubform->puballtables)
+ if (pubform->puballtables || pubform->puballsequences)
{
CacheInvalidateRelcacheAll();
}
@@ -892,6 +1013,7 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
*/
if (root_relids == NIL)
relids = GetPublicationRelations(pubform->oid,
+ PUB_OBJTYPE_TABLE,
PUBLICATION_PART_ALL);
else
{
@@ -905,7 +1027,20 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
lfirst_oid(lc));
}
+ /* tables */
+ schemarelids = GetAllSchemaPublicationRelations(pubform->oid,
+ PUB_OBJTYPE_TABLE,
+ PUBLICATION_PART_ALL);
+ relids = list_concat_unique_oid(relids, schemarelids);
+
+ /* sequences */
+ relids = list_concat_unique_oid(relids,
+ GetPublicationRelations(pubform->oid,
+ PUB_OBJTYPE_SEQUENCE,
+ PUBLICATION_PART_ALL));
+
schemarelids = GetAllSchemaPublicationRelations(pubform->oid,
+ PUB_OBJTYPE_SEQUENCE,
PUBLICATION_PART_ALL);
relids = list_concat_unique_oid(relids, schemarelids);
@@ -960,7 +1095,7 @@ AlterPublicationTables(AlterPublicationStmt *stmt, HeapTuple tup,
if (!tables && stmt->action != AP_SetObjects)
return;
- rels = OpenTableList(tables);
+ rels = OpenRelationList(tables, PUB_OBJTYPE_TABLE);
if (stmt->action == AP_AddObjects)
{
@@ -970,19 +1105,22 @@ AlterPublicationTables(AlterPublicationStmt *stmt, HeapTuple tup,
* Check if the relation is member of the existing schema in the
* publication or member of the schema list specified.
*/
- schemas = list_concat_copy(schemaidlist, GetPublicationSchemas(pubid));
+ schemas = list_concat_copy(schemaidlist,
+ GetPublicationSchemas(pubid,
+ PUB_OBJTYPE_TABLE));
CheckObjSchemaNotAlreadyInPublication(rels, schemas,
PUBLICATIONOBJ_TABLE);
TransformPubWhereClauses(rels, queryString, pubform->pubviaroot);
- PublicationAddTables(pubid, rels, false, stmt);
+ PublicationAddRelations(pubid, rels, false, stmt);
}
else if (stmt->action == AP_DropObjects)
- PublicationDropTables(pubid, rels, false);
+ PublicationDropRelations(pubid, rels, false);
else /* AP_SetObjects */
{
List *oldrelids = GetPublicationRelations(pubid,
+ PUB_OBJTYPE_TABLE,
PUBLICATION_PART_ROOT);
List *delrels = NIL;
ListCell *oldlc;
@@ -1064,18 +1202,18 @@ AlterPublicationTables(AlterPublicationStmt *stmt, HeapTuple tup,
}
/* And drop them. */
- PublicationDropTables(pubid, delrels, true);
+ PublicationDropRelations(pubid, delrels, true);
/*
* Don't bother calculating the difference for adding, we'll catch and
* skip existing ones when doing catalog update.
*/
- PublicationAddTables(pubid, rels, true, stmt);
+ PublicationAddRelations(pubid, rels, true, stmt);
- CloseTableList(delrels);
+ CloseRelationList(delrels);
}
- CloseTableList(rels);
+ CloseRelationList(rels);
}
/*
@@ -1085,7 +1223,8 @@ AlterPublicationTables(AlterPublicationStmt *stmt, HeapTuple tup,
*/
static void
AlterPublicationSchemas(AlterPublicationStmt *stmt,
- HeapTuple tup, List *schemaidlist)
+ HeapTuple tup, List *schemaidlist,
+ char objectType)
{
Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
@@ -1107,20 +1246,20 @@ AlterPublicationSchemas(AlterPublicationStmt *stmt,
List *rels;
List *reloids;
- reloids = GetPublicationRelations(pubform->oid, PUBLICATION_PART_ROOT);
+ reloids = GetPublicationRelations(pubform->oid, objectType, PUBLICATION_PART_ROOT);
rels = OpenRelIdList(reloids);
CheckObjSchemaNotAlreadyInPublication(rels, schemaidlist,
PUBLICATIONOBJ_TABLES_IN_SCHEMA);
- CloseTableList(rels);
- PublicationAddSchemas(pubform->oid, schemaidlist, false, stmt);
+ CloseRelationList(rels);
+ PublicationAddSchemas(pubform->oid, schemaidlist, objectType, false, stmt);
}
else if (stmt->action == AP_DropObjects)
- PublicationDropSchemas(pubform->oid, schemaidlist, false);
+ PublicationDropSchemas(pubform->oid, schemaidlist, objectType, false);
else /* AP_SetObjects */
{
- List *oldschemaids = GetPublicationSchemas(pubform->oid);
+ List *oldschemaids = GetPublicationSchemas(pubform->oid, objectType);
List *delschemas = NIL;
/* Identify which schemas should be dropped */
@@ -1133,13 +1272,13 @@ AlterPublicationSchemas(AlterPublicationStmt *stmt,
LockSchemaList(delschemas);
/* And drop them */
- PublicationDropSchemas(pubform->oid, delschemas, true);
+ PublicationDropSchemas(pubform->oid, delschemas, objectType, true);
/*
* Don't bother calculating the difference for adding, we'll catch and
* skip existing ones when doing catalog update.
*/
- PublicationAddSchemas(pubform->oid, schemaidlist, true, stmt);
+ PublicationAddSchemas(pubform->oid, schemaidlist, objectType, true, stmt);
}
}
@@ -1149,12 +1288,13 @@ AlterPublicationSchemas(AlterPublicationStmt *stmt,
*/
static void
CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
- List *tables, List *schemaidlist)
+ List *tables, List *tables_schemaidlist,
+ List *sequences, List *sequences_schemaidlist)
{
Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
if ((stmt->action == AP_AddObjects || stmt->action == AP_SetObjects) &&
- schemaidlist && !superuser())
+ (tables_schemaidlist || sequences_schemaidlist) && !superuser())
ereport(ERROR,
(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
errmsg("must be superuser to add or set schemas")));
@@ -1163,13 +1303,24 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
* Check that user is allowed to manipulate the publication tables in
* schema
*/
- if (schemaidlist && pubform->puballtables)
+ if (tables_schemaidlist && pubform->puballtables)
ereport(ERROR,
(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
errmsg("publication \"%s\" is defined as FOR ALL TABLES",
NameStr(pubform->pubname)),
errdetail("Tables from schema cannot be added to, dropped from, or set on FOR ALL TABLES publications.")));
+ /*
+ * Check that user is allowed to manipulate the publication sequences in
+ * schema
+ */
+ if (sequences_schemaidlist && pubform->puballsequences)
+ ereport(ERROR,
+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("publication \"%s\" is defined as FOR ALL SEQUENCES",
+ NameStr(pubform->pubname)),
+ errdetail("Sequences from schema cannot be added to, dropped from, or set on FOR ALL SEQUENCES publications.")));
+
/* Check that user is allowed to manipulate the publication tables. */
if (tables && pubform->puballtables)
ereport(ERROR,
@@ -1177,6 +1328,107 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
errmsg("publication \"%s\" is defined as FOR ALL TABLES",
NameStr(pubform->pubname)),
errdetail("Tables cannot be added to or dropped from FOR ALL TABLES publications.")));
+
+ /* Check that user is allowed to manipulate the publication sequences. */
+ if (sequences && pubform->puballsequences)
+ ereport(ERROR,
+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("publication \"%s\" is defined as FOR ALL SEQUENCES",
+ NameStr(pubform->pubname)),
+ errdetail("Sequences cannot be added to or dropped from FOR ALL SEQUENCES publications.")));
+}
+
+/*
+ * Add or remove sequence to/from publication.
+ */
+static void
+AlterPublicationSequences(AlterPublicationStmt *stmt, HeapTuple tup,
+ List *sequences, List *schemaidlist)
+{
+ List *rels = NIL;
+ Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
+ Oid pubid = pubform->oid;
+
+ /*
+ * It is quite possible that for the SET case the user has not specified any
+ * sequences, in which case we need to remove all the existing sequences.
+ */
+ if (!sequences && stmt->action != AP_SetObjects)
+ return;
+
+ rels = OpenRelationList(sequences, PUB_OBJTYPE_SEQUENCE);
+
+ if (stmt->action == AP_AddObjects)
+ {
+ List *schemas = NIL;
+
+ /*
+ * Check if the relation is member of the existing schema in the
+ * publication or member of the schema list specified.
+ */
+ schemas = list_concat_copy(schemaidlist,
+ GetPublicationSchemas(pubid,
+ PUB_OBJTYPE_SEQUENCE));
+ CheckObjSchemaNotAlreadyInPublication(rels, schemas,
+ PUBLICATIONOBJ_SEQUENCE);
+ PublicationAddRelations(pubid, rels, false, stmt);
+ }
+ else if (stmt->action == AP_DropObjects)
+ PublicationDropRelations(pubid, rels, false);
+ else /* AP_SetObjects */
+ {
+ List *oldrelids = GetPublicationRelations(pubid,
+ PUB_OBJTYPE_SEQUENCE,
+ PUBLICATION_PART_ROOT);
+ List *delrels = NIL;
+ ListCell *oldlc;
+
+ CheckObjSchemaNotAlreadyInPublication(rels, schemaidlist,
+ PUBLICATIONOBJ_SEQUENCE);
+
+ /* Calculate which relations to drop. */
+ foreach(oldlc, oldrelids)
+ {
+ Oid oldrelid = lfirst_oid(oldlc);
+ ListCell *newlc;
+ PublicationRelInfo *oldrel;
+ bool found = false;
+
+ foreach(newlc, rels)
+ {
+ PublicationRelInfo *newpubrel;
+
+ newpubrel = (PublicationRelInfo *) lfirst(newlc);
+ if (RelationGetRelid(newpubrel->relation) == oldrelid)
+ {
+ found = true;
+ break;
+ }
+ }
+ /* Not yet in the list, open it and add to the list */
+ if (!found)
+ {
+ oldrel = palloc(sizeof(PublicationRelInfo));
+ oldrel->whereClause = NULL;
+ oldrel->relation = table_open(oldrelid,
+ ShareUpdateExclusiveLock);
+ delrels = lappend(delrels, oldrel);
+ }
+ }
+
+ /* And drop them. */
+ PublicationDropRelations(pubid, delrels, true);
+
+ /*
+ * Don't bother calculating the difference for adding, we'll catch and
+ * skip existing ones when doing catalog update.
+ */
+ PublicationAddRelations(pubid, rels, true, stmt);
+
+ CloseRelationList(delrels);
+ }
+
+ CloseRelationList(rels);
}
/*
@@ -1214,14 +1466,22 @@ AlterPublication(ParseState *pstate, AlterPublicationStmt *stmt)
AlterPublicationOptions(pstate, stmt, rel, tup);
else
{
- List *relations = NIL;
+ List *tables = NIL;
+ List *sequences = NIL;
+ List *tables_schemaidlist = NIL;
+ List *sequences_schemaidlist = NIL;
List *schemaidlist = NIL;
Oid pubid = pubform->oid;
- ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
+ ObjectsInPublicationToOids(stmt->pubobjects, pstate,
+ &tables, &sequences,
+ &tables_schemaidlist,
+ &sequences_schemaidlist,
&schemaidlist);
- CheckAlterPublication(stmt, tup, relations, schemaidlist);
+ CheckAlterPublication(stmt, tup,
+ tables, tables_schemaidlist,
+ sequences, sequences_schemaidlist);
heap_freetuple(tup);
@@ -1249,9 +1509,16 @@ AlterPublication(ParseState *pstate, AlterPublicationStmt *stmt)
errmsg("publication \"%s\" does not exist",
stmt->pubname));
- AlterPublicationTables(stmt, tup, relations, schemaidlist,
+ AlterPublicationTables(stmt, tup, tables, tables_schemaidlist,
pstate->p_sourcetext);
- AlterPublicationSchemas(stmt, tup, schemaidlist);
+
+ AlterPublicationSequences(stmt, tup, sequences, sequences_schemaidlist);
+
+ AlterPublicationSchemas(stmt, tup, tables_schemaidlist,
+ PUB_OBJTYPE_TABLE);
+
+ AlterPublicationSchemas(stmt, tup, sequences_schemaidlist,
+ PUB_OBJTYPE_SEQUENCE);
}
/* Cleanup. */
@@ -1319,7 +1586,7 @@ RemovePublicationById(Oid pubid)
pubform = (Form_pg_publication) GETSTRUCT(tup);
/* Invalidate relcache so that publication info is rebuilt. */
- if (pubform->puballtables)
+ if (pubform->puballtables || pubform->puballsequences)
CacheInvalidateRelcacheAll();
CatalogTupleDelete(rel, &tup->t_self);
@@ -1355,6 +1622,7 @@ RemovePublicationSchemaById(Oid psoid)
* partitions.
*/
schemaRels = GetSchemaPublicationRelations(pubsch->pnnspid,
+ pubsch->pntype,
PUBLICATION_PART_ALL);
InvalidatePublicationRels(schemaRels);
@@ -1397,29 +1665,45 @@ OpenRelIdList(List *relids)
* add them to a publication.
*/
static List *
-OpenTableList(List *tables)
+OpenRelationList(List *rels, char objectType)
{
List *relids = NIL;
- List *rels = NIL;
+ List *result = NIL;
ListCell *lc;
List *relids_with_rf = NIL;
/*
* Open, share-lock, and check all the explicitly-specified relations
*/
- foreach(lc, tables)
+ foreach(lc, rels)
{
PublicationTable *t = lfirst_node(PublicationTable, lc);
bool recurse = t->relation->inh;
Relation rel;
Oid myrelid;
PublicationRelInfo *pub_rel;
+ char myrelkind;
/* Allow query cancel in case this takes a long time */
CHECK_FOR_INTERRUPTS();
rel = table_openrv(t->relation, ShareUpdateExclusiveLock);
myrelid = RelationGetRelid(rel);
+ myrelkind = get_rel_relkind(myrelid);
+
+ /*
+ * Make sure the relkind matches the expected object type. A mismatch
+ * happens e.g. when adding a sequence using ADD TABLE, or a table
+ * using ADD SEQUENCE.
+ *
+ * XXX We let through unsupported object types (views etc.). Those are
+ * caught later in check_publication_add_relation. A bit weird.
+ *
+ * FIXME Use a better error message.
+ */
+ if (pub_get_object_type_for_relkind(myrelkind) != PUB_OBJTYPE_UNSUPPORTED &&
+ pub_get_object_type_for_relkind(myrelkind) != objectType)
+ elog(ERROR, "object type mismatch");
/*
* Filter out duplicates if user specifies "foo, foo".
@@ -1444,7 +1728,7 @@ OpenTableList(List *tables)
pub_rel = palloc(sizeof(PublicationRelInfo));
pub_rel->relation = rel;
pub_rel->whereClause = t->whereClause;
- rels = lappend(rels, pub_rel);
+ result = lappend(result, pub_rel);
relids = lappend_oid(relids, myrelid);
if (t->whereClause)
@@ -1498,7 +1782,7 @@ OpenTableList(List *tables)
pub_rel->relation = rel;
/* child inherits WHERE clause from parent */
pub_rel->whereClause = t->whereClause;
- rels = lappend(rels, pub_rel);
+ result = lappend(result, pub_rel);
relids = lappend_oid(relids, childrelid);
if (t->whereClause)
@@ -1510,14 +1794,14 @@ OpenTableList(List *tables)
list_free(relids);
list_free(relids_with_rf);
- return rels;
+ return result;
}
/*
* Close all relations in the list.
*/
static void
-CloseTableList(List *rels)
+CloseRelationList(List *rels)
{
ListCell *lc;
@@ -1565,12 +1849,12 @@ LockSchemaList(List *schemalist)
* Add listed tables to the publication.
*/
static void
-PublicationAddTables(Oid pubid, List *rels, bool if_not_exists,
+PublicationAddRelations(Oid pubid, List *rels, bool if_not_exists,
AlterPublicationStmt *stmt)
{
ListCell *lc;
- Assert(!stmt || !stmt->for_all_tables);
+ Assert(!stmt || !stmt->for_all_objects);
foreach(lc, rels)
{
@@ -1599,7 +1883,7 @@ PublicationAddTables(Oid pubid, List *rels, bool if_not_exists,
* Remove listed tables from the publication.
*/
static void
-PublicationDropTables(Oid pubid, List *rels, bool missing_ok)
+PublicationDropRelations(Oid pubid, List *rels, bool missing_ok)
{
ObjectAddress obj;
ListCell *lc;
@@ -1639,19 +1923,19 @@ PublicationDropTables(Oid pubid, List *rels, bool missing_ok)
* Add listed schemas to the publication.
*/
static void
-PublicationAddSchemas(Oid pubid, List *schemas, bool if_not_exists,
- AlterPublicationStmt *stmt)
+PublicationAddSchemas(Oid pubid, List *schemas, char objectType,
+ bool if_not_exists, AlterPublicationStmt *stmt)
{
ListCell *lc;
- Assert(!stmt || !stmt->for_all_tables);
+ Assert(!stmt || !stmt->for_all_objects);
foreach(lc, schemas)
{
Oid schemaid = lfirst_oid(lc);
ObjectAddress obj;
- obj = publication_add_schema(pubid, schemaid, if_not_exists);
+ obj = publication_add_schema(pubid, schemaid, objectType, if_not_exists);
if (stmt)
{
EventTriggerCollectSimpleCommand(obj, InvalidObjectAddress,
@@ -1667,7 +1951,7 @@ PublicationAddSchemas(Oid pubid, List *schemas, bool if_not_exists,
* Remove listed schemas from the publication.
*/
static void
-PublicationDropSchemas(Oid pubid, List *schemas, bool missing_ok)
+PublicationDropSchemas(Oid pubid, List *schemas, char objectType, bool missing_ok)
{
ObjectAddress obj;
ListCell *lc;
@@ -1677,10 +1961,11 @@ PublicationDropSchemas(Oid pubid, List *schemas, bool missing_ok)
{
Oid schemaid = lfirst_oid(lc);
- psid = GetSysCacheOid2(PUBLICATIONNAMESPACEMAP,
+ psid = GetSysCacheOid3(PUBLICATIONNAMESPACEMAP,
Anum_pg_publication_namespace_oid,
ObjectIdGetDatum(schemaid),
- ObjectIdGetDatum(pubid));
+ ObjectIdGetDatum(pubid),
+ CharGetDatum(objectType));
if (!OidIsValid(psid))
{
if (missing_ok)
@@ -1735,6 +2020,13 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
NameStr(form->pubname)),
errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
+ if (form->puballsequences && !superuser_arg(newOwnerId))
+ ereport(ERROR,
+ (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+ errmsg("permission denied to change owner of publication \"%s\"",
+ NameStr(form->pubname)),
+ errhint("The owner of a FOR ALL SEQUENCES publication must be a superuser.")));
+
if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
ereport(ERROR,
(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
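To illustrate the publicationcmds.c changes above (object names are made up),
sequences can be added to a publication directly, and the new 'sequence'
publish option can be toggled like the existing ones:

CREATE SEQUENCE s;
CREATE PUBLICATION p FOR SEQUENCE s;
ALTER PUBLICATION p SET (publish = 'insert, update, delete, sequence');
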
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index ab592ce2f15..f3641345b32 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -336,6 +336,85 @@ ResetSequence(Oid seq_relid)
relation_close(seq_rel, NoLock);
}
+/*
+ * Set a sequence to a specified internal state.
+ *
+ * The change is made transactionally, so that on failure of the current
+ * transaction, the sequence will be restored to its previous state.
+ * We do that by creating a whole new relfilenode for the sequence; so this
+ * works much like the rewriting forms of ALTER TABLE.
+ *
+ * Caller is assumed to have acquired AccessExclusiveLock on the sequence,
+ * which must not be released until end of transaction. Caller is also
+ * responsible for permissions checking.
+ */
+void
+SetSequence(Oid seq_relid, int64 last_value, int64 log_cnt, bool is_called)
+{
+ Relation seq_rel;
+ SeqTable elm;
+ Form_pg_sequence_data seq;
+ Buffer buf;
+ HeapTupleData seqdatatuple;
+ HeapTuple tuple;
+
+ /*
+ * Read the old sequence. This does a bit more work than really
+ * necessary, but it's simple, and we do want to double-check that it's
+ * indeed a sequence.
+ */
+ init_sequence(seq_relid, &elm, &seq_rel);
+ (void) read_seq_tuple(seq_rel, &buf, &seqdatatuple);
+
+ /*
+ * Copy the existing sequence tuple.
+ */
+ tuple = heap_copytuple(&seqdatatuple);
+
+ /* Now we're done with the old page */
+ UnlockReleaseBuffer(buf);
+
+ /*
+ * Modify the copied tuple to execute the restart (compare the RESTART
+ * action in AlterSequence)
+ */
+ seq = (Form_pg_sequence_data) GETSTRUCT(tuple);
+ seq->last_value = last_value;
+ seq->is_called = is_called;
+ seq->log_cnt = log_cnt;
+
+ /*
+ * Create a new storage file for the sequence.
+ */
+ RelationSetNewRelfilenode(seq_rel, seq_rel->rd_rel->relpersistence);
+
+ /*
+ * Ensure sequence's relfrozenxid is at 0, since it won't contain any
+ * unfrozen XIDs. Same with relminmxid, since a sequence will never
+ * contain multixacts.
+ */
+ Assert(seq_rel->rd_rel->relfrozenxid == InvalidTransactionId);
+ Assert(seq_rel->rd_rel->relminmxid == InvalidMultiXactId);
+
+ /*
+ * Insert the modified tuple into the new storage file.
+ *
+ * XXX Maybe this should also use created=true, just like the other places
+ * calling fill_seq_with_data. That's probably needed for correct cascading
+ * replication.
+ *
+ * XXX That'd mean all fill_seq_with_data callers use created=true, making
+ * the parameter unnecessary.
+ */
+ fill_seq_with_data(seq_rel, tuple);
+
+ /* Clear local cache so that we don't think we have cached numbers */
+ /* Note that we do not change the currval() state */
+ elm->cached = elm->last;
+
+ relation_close(seq_rel, NoLock);
+}
+
/*
* Initialize a sequence's relation with the specified tuple as content
*/
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 3922658bbca..0824c79bf8c 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -87,6 +87,7 @@ typedef struct SubOpts
} SubOpts;
static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
static void check_duplicates_in_publist(List *publist, Datum *datums);
static List *merge_publications(List *oldpublist, List *newpublist, bool addpub, const char *subname);
static void ReportSlotConnectionError(List *rstates, Oid subid, char *slotname, char *err);
@@ -510,9 +511,9 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
{
char *err;
WalReceiverConn *wrconn;
- List *tables;
+ List *relations;
ListCell *lc;
- char table_state;
+ char sync_state;
/* Try to connect to the publisher. */
wrconn = walrcv_connect(conninfo, true, stmt->subname, &err);
@@ -527,14 +528,17 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
* Set sync state based on if we were asked to do data copy or
* not.
*/
- table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
+ sync_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
/*
- * Get the table list from publisher and build local table status
- * info.
+ * Get the table and sequence list from publisher and build
+ * local relation sync status info.
*/
- tables = fetch_table_list(wrconn, publications);
- foreach(lc, tables)
+ relations = fetch_table_list(wrconn, publications);
+ relations = list_concat(relations,
+ fetch_sequence_list(wrconn, publications));
+
+ foreach(lc, relations)
{
RangeVar *rv = (RangeVar *) lfirst(lc);
Oid relid;
@@ -545,7 +549,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
CheckSubscriptionRelkind(get_rel_relkind(relid),
rv->schemaname, rv->relname);
- AddSubscriptionRelState(subid, relid, table_state,
+ AddSubscriptionRelState(subid, relid, sync_state,
InvalidXLogRecPtr);
}
@@ -571,12 +575,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
*
* Note that if tables were specified but copy_data is false
* then it is safe to enable two_phase up-front because those
- * tables are already initially in READY state. When the
- * subscription has no tables, we leave the twophase state as
- * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
+ * relations are already initially in READY state. When the
+ * subscription has no relations, we leave the twophase state
+ * as PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
* PUBLICATION to work.
*/
- if (opts.twophase && !opts.copy_data && tables != NIL)
+ if (opts.twophase && !opts.copy_data && relations != NIL)
twophase_enabled = true;
walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -646,8 +650,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data)
PG_TRY();
{
- /* Get the table list from publisher. */
+ /* Get the list of relations from publisher. */
pubrel_names = fetch_table_list(wrconn, sub->publications);
+ pubrel_names = list_concat(pubrel_names,
+ fetch_sequence_list(wrconn, sub->publications));
/* Get local table list. */
subrel_states = GetSubscriptionRelations(sub->oid);
@@ -1639,6 +1645,75 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
return tablelist;
}
+/*
+ * Get the list of sequences that belong to the specified publications on
+ * the publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+ WalRcvExecResult *res;
+ StringInfoData cmd;
+ TupleTableSlot *slot;
+ Oid tableRow[2] = {TEXTOID, TEXTOID};
+ ListCell *lc;
+ bool first;
+ List *tablelist = NIL;
+
+ Assert(list_length(publications) > 0);
+
+ initStringInfo(&cmd);
+ appendStringInfoString(&cmd, "SELECT DISTINCT s.schemaname, s.sequencename\n"
+ " FROM pg_catalog.pg_publication_sequences s\n"
+ " WHERE s.pubname IN (");
+ first = true;
+ foreach(lc, publications)
+ {
+ char *pubname = strVal(lfirst(lc));
+
+ if (first)
+ first = false;
+ else
+ appendStringInfoString(&cmd, ", ");
+
+ appendStringInfoString(&cmd, quote_literal_cstr(pubname));
+ }
+ appendStringInfoChar(&cmd, ')');
+
+ res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+ pfree(cmd.data);
+
+ if (res->status != WALRCV_OK_TUPLES)
+ ereport(ERROR,
+ (errmsg("could not receive list of replicated sequences from the publisher: %s",
+ res->err)));
+
+ /* Process sequences. */
+ slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+ while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+ {
+ char *nspname;
+ char *relname;
+ bool isnull;
+ RangeVar *rv;
+
+ nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+ Assert(!isnull);
+ relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+ Assert(!isnull);
+
+ rv = makeRangeVar(nspname, relname, -1);
+ tablelist = lappend(tablelist, rv);
+
+ ExecClearTuple(slot);
+ }
+ ExecDropSingleTupleTableSlot(slot);
+
+ walrcv_clear_result(res);
+
+ return tablelist;
+}
+
/*
* This is to report the connection failure while dropping replication slots.
* Here, we report the WARNING for all tablesync slots so that user can drop
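On the subscriber side nothing new is needed syntactically; creating a
subscription to a publication containing sequences makes fetch_sequence_list()
above pick them up and register them for sync (connection string and names
below are placeholders):

CREATE SUBSCRIPTION seq_sub
  CONNECTION 'host=publisher dbname=test'
  PUBLICATION seq_pub;
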
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index dc5872f988c..d74be1a6a74 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -41,6 +41,7 @@
#include "catalog/pg_inherits.h"
#include "catalog/pg_namespace.h"
#include "catalog/pg_opclass.h"
+#include "catalog/pg_publication_namespace.h"
#include "catalog/pg_statistic_ext.h"
#include "catalog/pg_tablespace.h"
#include "catalog/pg_trigger.h"
@@ -16358,11 +16359,14 @@ AlterTableNamespace(AlterObjectSchemaStmt *stmt, Oid *oldschema)
* Check that setting the relation to a different schema won't result in a
* publication having both a schema and the same schema's table, as this
* is not supported.
+ *
+ * XXX The blocks are almost exactly the same, so maybe combine them into
+ * a single block.
*/
if (stmt->objectType == OBJECT_TABLE)
{
ListCell *lc;
- List *schemaPubids = GetSchemaPublications(nspOid);
+ List *schemaPubids = GetSchemaPublications(nspOid, PUB_OBJTYPE_TABLE);
List *relPubids = GetRelationPublications(RelationGetRelid(rel));
foreach(lc, relPubids)
@@ -16380,6 +16384,27 @@ AlterTableNamespace(AlterObjectSchemaStmt *stmt, Oid *oldschema)
get_publication_name(pubid, false)));
}
}
+ else if (stmt->objectType == OBJECT_SEQUENCE)
+ {
+ ListCell *lc;
+ List *schemaPubids = GetSchemaPublications(nspOid, PUB_OBJTYPE_SEQUENCE);
+ List *relPubids = GetRelationPublications(RelationGetRelid(rel));
+
+ foreach(lc, relPubids)
+ {
+ Oid pubid = lfirst_oid(lc);
+
+ if (list_member_oid(schemaPubids, pubid))
+ ereport(ERROR,
+ errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ errmsg("cannot move sequence \"%s\" to schema \"%s\"",
+ RelationGetRelationName(rel), stmt->newschema),
+ errdetail("The schema \"%s\" and same schema's sequence \"%s\" cannot be part of the same publication \"%s\".",
+ stmt->newschema,
+ RelationGetRelationName(rel),
+ get_publication_name(pubid, false)));
+ }
+ }
/* common checks on switching namespaces */
CheckSetNamespace(oldNspOid, nspOid);
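A rough example of the check added to AlterTableNamespace (all names invented):

CREATE PUBLICATION p FOR ALL SEQUENCES IN SCHEMA s1, SEQUENCE public.seq;

ALTER SEQUENCE public.seq SET SCHEMA s1;
-- fails: schema s1 and its sequence would both end up in publication p
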
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index 13328141e23..0df7cf58747 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -636,7 +636,9 @@ void
CheckSubscriptionRelkind(char relkind, const char *nspname,
const char *relname)
{
- if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+ if (relkind != RELKIND_RELATION &&
+ relkind != RELKIND_PARTITIONED_TABLE &&
+ relkind != RELKIND_SEQUENCE)
ereport(ERROR,
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index d4f8455a2bd..55f720a88f4 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -4862,7 +4862,7 @@ _copyCreatePublicationStmt(const CreatePublicationStmt *from)
COPY_STRING_FIELD(pubname);
COPY_NODE_FIELD(options);
COPY_NODE_FIELD(pubobjects);
- COPY_SCALAR_FIELD(for_all_tables);
+ COPY_NODE_FIELD(for_all_objects);
return newnode;
}
@@ -4875,7 +4875,7 @@ _copyAlterPublicationStmt(const AlterPublicationStmt *from)
COPY_STRING_FIELD(pubname);
COPY_NODE_FIELD(options);
COPY_NODE_FIELD(pubobjects);
- COPY_SCALAR_FIELD(for_all_tables);
+ COPY_NODE_FIELD(for_all_objects);
COPY_SCALAR_FIELD(action);
return newnode;
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index f1002afe7a0..82562eb9b87 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -2333,7 +2333,7 @@ _equalCreatePublicationStmt(const CreatePublicationStmt *a,
COMPARE_STRING_FIELD(pubname);
COMPARE_NODE_FIELD(options);
COMPARE_NODE_FIELD(pubobjects);
- COMPARE_SCALAR_FIELD(for_all_tables);
+ COMPARE_NODE_FIELD(for_all_objects);
return true;
}
@@ -2345,7 +2345,7 @@ _equalAlterPublicationStmt(const AlterPublicationStmt *a,
COMPARE_STRING_FIELD(pubname);
COMPARE_NODE_FIELD(options);
COMPARE_NODE_FIELD(pubobjects);
- COMPARE_SCALAR_FIELD(for_all_tables);
+ COMPARE_NODE_FIELD(for_all_objects);
COMPARE_SCALAR_FIELD(action);
return true;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index a03b33b53bd..515361cbda9 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -446,7 +446,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
transform_element_list transform_type_list
TriggerTransitions TriggerReferencing
vacuum_relation_list opt_vacuum_relation_list
- drop_option_list pub_obj_list
+ drop_option_list pub_obj_list pub_obj_type_list
%type <node> opt_routine_body
%type <groupclause> group_clause
@@ -575,6 +575,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
%type <node> var_value zone_value
%type <rolespec> auth_ident RoleSpec opt_granted_by
%type <publicationobjectspec> PublicationObjSpec
+%type <node> pub_obj_type
%type <keyword> unreserved_keyword type_func_name_keyword
%type <keyword> col_name_keyword reserved_keyword
@@ -9701,12 +9702,9 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
*
* CREATE PUBLICATION FOR ALL TABLES [WITH options]
*
- * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
- *
- * pub_obj is one of:
+ * CREATE PUBLICATION FOR ALL SEQUENCES [WITH options]
*
- * TABLE table [, ...]
- * ALL TABLES IN SCHEMA schema [, ...]
+ * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
*
*****************************************************************************/
@@ -9718,12 +9716,12 @@ CreatePublicationStmt:
n->options = $4;
$$ = (Node *)n;
}
- | CREATE PUBLICATION name FOR ALL TABLES opt_definition
+ | CREATE PUBLICATION name FOR ALL pub_obj_type_list opt_definition
{
CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
n->pubname = $3;
n->options = $7;
- n->for_all_tables = true;
+ n->for_all_objects = $6;
$$ = (Node *)n;
}
| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -9772,6 +9770,26 @@ PublicationObjSpec:
$$->pubobjtype = PUBLICATIONOBJ_TABLES_IN_CUR_SCHEMA;
$$->location = @5;
}
+ | SEQUENCE relation_expr
+ {
+ $$ = makeNode(PublicationObjSpec);
+ $$->pubobjtype = PUBLICATIONOBJ_SEQUENCE;
+ $$->pubtable = makeNode(PublicationTable);
+ $$->pubtable->relation = $2;
+ }
+ | ALL SEQUENCES IN_P SCHEMA ColId
+ {
+ $$ = makeNode(PublicationObjSpec);
+ $$->pubobjtype = PUBLICATIONOBJ_SEQUENCES_IN_SCHEMA;
+ $$->name = $5;
+ $$->location = @5;
+ }
+ | ALL SEQUENCES IN_P SCHEMA CURRENT_SCHEMA
+ {
+ $$ = makeNode(PublicationObjSpec);
+ $$->pubobjtype = PUBLICATIONOBJ_SEQUENCES_IN_CUR_SCHEMA;
+ $$->location = @5;
+ }
| ColId OptWhereClause
{
$$ = makeNode(PublicationObjSpec);
@@ -9826,6 +9844,19 @@ pub_obj_list: PublicationObjSpec
{ $$ = lappend($1, $3); }
;
+pub_obj_type: TABLES
+ { $$ = (Node *) makeString("tables"); }
+ | SEQUENCES
+ { $$ = (Node *) makeString("sequences"); }
+ ;
+
+pub_obj_type_list: pub_obj_type
+ { $$ = list_make1($1); }
+ | pub_obj_type_list ',' pub_obj_type
+ { $$ = lappend($1, $3); }
+ ;
+
+
/*****************************************************************************
*
* ALTER PUBLICATION name SET ( options )
@@ -9836,11 +9867,6 @@ pub_obj_list: PublicationObjSpec
*
* ALTER PUBLICATION name SET pub_obj [, ...]
*
- * pub_obj is one of:
- *
- * TABLE table_name [, ...]
- * ALL TABLES IN SCHEMA schema_name [, ...]
- *
*****************************************************************************/
AlterPublicationStmt:
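For illustration, the productions added above are meant to accept statements
along these lines (publication and object names here are made up, not taken
from the patch's tests):

CREATE PUBLICATION seq_pub FOR ALL SEQUENCES;
CREATE PUBLICATION everything FOR ALL TABLES, SEQUENCES;
CREATE PUBLICATION mixed_pub FOR TABLE t1, SEQUENCE s1, ALL SEQUENCES IN SCHEMA public;
ALTER PUBLICATION mixed_pub ADD SEQUENCE s2;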
diff --git a/src/backend/replication/logical/proto.c b/src/backend/replication/logical/proto.c
index c9b0eeefd7e..3dbe85d61a2 100644
--- a/src/backend/replication/logical/proto.c
+++ b/src/backend/replication/logical/proto.c
@@ -648,6 +648,56 @@ logicalrep_write_message(StringInfo out, TransactionId xid, XLogRecPtr lsn,
pq_sendbytes(out, message, sz);
}
+/*
+ * Write SEQUENCE to stream
+ */
+void
+logicalrep_write_sequence(StringInfo out, Relation rel, TransactionId xid,
+ XLogRecPtr lsn, bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ uint8 flags = 0;
+ char *relname;
+
+ pq_sendbyte(out, LOGICAL_REP_MSG_SEQUENCE);
+
+ /* transaction ID (if not valid, we're not streaming) */
+ if (TransactionIdIsValid(xid))
+ pq_sendint32(out, xid);
+
+ pq_sendint8(out, flags);
+ pq_sendint64(out, lsn);
+
+ logicalrep_write_namespace(out, RelationGetNamespace(rel));
+ relname = RelationGetRelationName(rel);
+ pq_sendstring(out, relname);
+
+ pq_sendint8(out, transactional);
+ pq_sendint64(out, last_value);
+ pq_sendint64(out, log_cnt);
+ pq_sendint8(out, is_called);
+}
+
+/*
+ * Read SEQUENCE from the stream.
+ */
+void
+logicalrep_read_sequence(StringInfo in, LogicalRepSequence *seqdata)
+{
+ /* XXX skipping flags and lsn */
+ pq_getmsgint(in, 1);
+ pq_getmsgint64(in);
+
+ /* Read relation name from stream */
+ seqdata->nspname = pstrdup(logicalrep_read_namespace(in));
+ seqdata->seqname = pstrdup(pq_getmsgstring(in));
+
+ seqdata->transactional = pq_getmsgint(in, 1);
+ seqdata->last_value = pq_getmsgint64(in);
+ seqdata->log_cnt = pq_getmsgint64(in);
+ seqdata->is_called = pq_getmsgint(in, 1);
+}
+
/*
* Write relation description to the output stream.
*/
@@ -1203,6 +1253,8 @@ logicalrep_message_type(LogicalRepMsgType action)
return "STREAM ABORT";
case LOGICAL_REP_MSG_STREAM_PREPARE:
return "STREAM PREPARE";
+ case LOGICAL_REP_MSG_SEQUENCE:
+ return "SEQUENCE";
}
elog(ERROR, "invalid logical replication message type \"%c\"", action);
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 1659964571c..033ce642de3 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -100,6 +100,7 @@
#include "catalog/pg_subscription_rel.h"
#include "catalog/pg_type.h"
#include "commands/copy.h"
+#include "commands/sequence.h"
#include "miscadmin.h"
#include "parser/parse_relation.h"
#include "pgstat.h"
@@ -359,6 +360,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
*
* If the synchronization position is reached (SYNCDONE), then the table can
* be marked as READY and is no longer tracked.
+ *
+ * XXX Maybe rename this to "_relations_" to cover both tables and sequences.
*/
static void
process_syncing_tables_for_apply(XLogRecPtr current_lsn)
@@ -999,6 +1002,94 @@ copy_table(Relation rel)
logicalrep_rel_close(relmapentry, NoLock);
}
+/*
+ * Fetch sequence data (current state) from the remote node.
+ */
+static void
+fetch_sequence_data(char *nspname, char *relname,
+ int64 *last_value, int64 *log_cnt, bool *is_called)
+{
+ WalRcvExecResult *res;
+ StringInfoData cmd;
+ TupleTableSlot *slot;
+ Oid tableRow[3] = {INT8OID, INT8OID, BOOLOID};
+
+ initStringInfo(&cmd);
+ appendStringInfo(&cmd, "SELECT last_value, log_cnt, is_called\n"
+ " FROM %s", quote_qualified_identifier(nspname, relname));
+
+ res = walrcv_exec(LogRepWorkerWalRcvConn, cmd.data, 3, tableRow);
+ pfree(cmd.data);
+
+ if (res->status != WALRCV_OK_TUPLES)
+ ereport(ERROR,
+ (errmsg("could not receive list of replicated tables from the publisher: %s",
+ res->err)));
+
+ /* Process the sequence. */
+ slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+ while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+ {
+ bool isnull;
+
+ *last_value = DatumGetInt64(slot_getattr(slot, 1, &isnull));
+ Assert(!isnull);
+
+ *log_cnt = DatumGetInt64(slot_getattr(slot, 2, &isnull));
+ Assert(!isnull);
+
+ *is_called = DatumGetBool(slot_getattr(slot, 3, &isnull));
+ Assert(!isnull);
+
+ ExecClearTuple(slot);
+ }
+ ExecDropSingleTupleTableSlot(slot);
+
+ walrcv_clear_result(res);
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Caller is responsible for locking the local relation.
+ */
+static void
+copy_sequence(Relation rel)
+{
+ LogicalRepRelMapEntry *relmapentry;
+ LogicalRepRelation lrel;
+ List *qual = NIL;
+ StringInfoData cmd;
+ int64 last_value = 0,
+ log_cnt = 0;
+ bool is_called = false;
+
+ /* Get the publisher relation info. */
+ fetch_remote_table_info(get_namespace_name(RelationGetNamespace(rel)),
+ RelationGetRelationName(rel), &lrel, &qual);
+
+ /* sequences don't have row filters */
+ Assert(!qual);
+
+ /* Put the relation into relmap. */
+ logicalrep_relmap_update(&lrel);
+
+ /* Map the publisher relation to local one. */
+ relmapentry = logicalrep_rel_open(lrel.remoteid, NoLock);
+ Assert(rel == relmapentry->localrel);
+
+ /* Start copy on the publisher. */
+ initStringInfo(&cmd);
+
+ Assert(lrel.relkind == RELKIND_SEQUENCE);
+
+ fetch_sequence_data(lrel.nspname, lrel.relname, &last_value, &log_cnt, &is_called);
+
+ SetSequence(RelationGetRelid(rel), last_value, log_cnt, is_called);
+
+ logicalrep_rel_close(relmapentry, NoLock);
+}
+
/*
* Determine the tablesync slot name.
*
@@ -1260,10 +1351,21 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
originname)));
}
- /* Now do the initial data copy */
- PushActiveSnapshot(GetTransactionSnapshot());
- copy_table(rel);
- PopActiveSnapshot();
+ /* Do the right action depending on the relation kind. */
+ if (get_rel_relkind(RelationGetRelid(rel)) == RELKIND_SEQUENCE)
+ {
+ /* Now do the initial sequence copy */
+ PushActiveSnapshot(GetTransactionSnapshot());
+ copy_sequence(rel);
+ PopActiveSnapshot();
+ }
+ else
+ {
+ /* Now do the initial data copy */
+ PushActiveSnapshot(GetTransactionSnapshot());
+ copy_table(rel);
+ PopActiveSnapshot();
+ }
res = walrcv_exec(LogRepWorkerWalRcvConn, "COMMIT", 0, NULL);
if (res->status != WALRCV_OK_COMMAND)
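To sketch the flow here: for a sequence, the sync worker runs a query like the
one built in fetch_sequence_data() on the publisher and applies the result
locally via SetSequence(). For a hypothetical sequence public.s the exchange
might look like this (the values are invented for illustration):

SELECT last_value, log_cnt, is_called FROM public.s;

 last_value | log_cnt | is_called
------------+---------+-----------
        132 |      31 | t
(1 row)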
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 03e069c7cdd..a67e61697ed 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -142,6 +142,7 @@
#include "catalog/pg_subscription.h"
#include "catalog/pg_subscription_rel.h"
#include "catalog/pg_tablespace.h"
+#include "commands/sequence.h"
#include "commands/tablecmds.h"
#include "commands/tablespace.h"
#include "commands/trigger.h"
@@ -1097,6 +1098,57 @@ apply_handle_origin(StringInfo s)
errmsg_internal("ORIGIN message sent out of order")));
}
+/*
+ * Handle SEQUENCE message.
+ */
+static void
+apply_handle_sequence(StringInfo s)
+{
+ LogicalRepSequence seq;
+ Oid relid;
+
+ if (handle_streamed_transaction(LOGICAL_REP_MSG_SEQUENCE, s))
+ return;
+
+ logicalrep_read_sequence(s, &seq);
+
+ /*
+ * Transactional sequence updates must be part of a remote transaction.
+ * Non-transactional updates must not be, and there should be no local
+ * transaction in progress for them either.
+ */
+ Assert((!seq.transactional) || in_remote_transaction);
+ Assert(!(!seq.transactional && in_remote_transaction));
+ Assert(!(!seq.transactional && IsTransactionState()));
+
+ /*
+ * Make sure we're in a transaction (needed by SetSequence). For
+ * non-transactional updates no transaction is in progress yet, so we
+ * start a new one here and commit it at the end.
+ */
+ if (!IsTransactionState())
+ {
+ StartTransactionCommand();
+ maybe_reread_subscription();
+ }
+
+ relid = RangeVarGetRelid(makeRangeVar(seq.nspname,
+ seq.seqname, -1),
+ RowExclusiveLock, false);
+
+ /* lock the sequence in AccessExclusiveLock mode, as expected by SetSequence */
+ LockRelationOid(relid, AccessExclusiveLock);
+
+ /* apply the sequence change */
+ SetSequence(relid, seq.last_value, seq.log_cnt, seq.is_called);
+
+ /*
+ * Commit the transaction started above (we only do this when not in a
+ * remote transaction, i.e. for non-transactional sequence updates).
+ */
+ if (!in_remote_transaction)
+ CommitTransactionCommand();
+}
+
/*
* Handle STREAM START message.
*/
@@ -2425,6 +2477,10 @@ apply_dispatch(StringInfo s)
*/
break;
+ case LOGICAL_REP_MSG_SEQUENCE:
+ apply_handle_sequence(s);
+ return;
+
case LOGICAL_REP_MSG_STREAM_START:
apply_handle_stream_start(s);
break;
diff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c
index 5fddab3a3d4..4cdc698cbb3 100644
--- a/src/backend/replication/pgoutput/pgoutput.c
+++ b/src/backend/replication/pgoutput/pgoutput.c
@@ -15,6 +15,7 @@
#include "access/tupconvert.h"
#include "catalog/partition.h"
#include "catalog/pg_publication.h"
+#include "catalog/pg_publication_namespace.h"
#include "catalog/pg_publication_rel.h"
#include "commands/defrem.h"
#include "executor/executor.h"
@@ -53,6 +54,10 @@ static void pgoutput_message(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn, XLogRecPtr message_lsn,
bool transactional, const char *prefix,
Size sz, const char *message);
+static void pgoutput_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation relation, bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called);
static bool pgoutput_origin_filter(LogicalDecodingContext *ctx,
RepOriginId origin_id);
static void pgoutput_begin_prepare_txn(LogicalDecodingContext *ctx,
@@ -208,6 +213,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->change_cb = pgoutput_change;
cb->truncate_cb = pgoutput_truncate;
cb->message_cb = pgoutput_message;
+ cb->sequence_cb = pgoutput_sequence;
cb->commit_cb = pgoutput_commit_txn;
cb->begin_prepare_cb = pgoutput_begin_prepare_txn;
@@ -224,6 +230,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->stream_commit_cb = pgoutput_stream_commit;
cb->stream_change_cb = pgoutput_change;
cb->stream_message_cb = pgoutput_message;
+ cb->stream_sequence_cb = pgoutput_sequence;
cb->stream_truncate_cb = pgoutput_truncate;
/* transaction streaming - two-phase commit */
cb->stream_prepare_cb = pgoutput_stream_prepare_txn;
@@ -237,6 +244,7 @@ parse_output_parameters(List *options, PGOutputData *data)
bool publication_names_given = false;
bool binary_option_given = false;
bool messages_option_given = false;
+ bool sequences_option_given = false;
bool streaming_given = false;
bool two_phase_option_given = false;
@@ -244,6 +252,7 @@ parse_output_parameters(List *options, PGOutputData *data)
data->streaming = false;
data->messages = false;
data->two_phase = false;
+ data->sequences = true;
foreach(lc, options)
{
@@ -312,6 +321,16 @@ parse_output_parameters(List *options, PGOutputData *data)
data->messages = defGetBoolean(defel);
}
+ else if (strcmp(defel->defname, "sequences") == 0)
+ {
+ if (sequences_option_given)
+ ereport(ERROR,
+ (errcode(ERRCODE_SYNTAX_ERROR),
+ errmsg("conflicting or redundant options")));
+ sequences_option_given = true;
+
+ data->sequences = defGetBoolean(defel);
+ }
else if (strcmp(defel->defname, "streaming") == 0)
{
if (streaming_given)
@@ -1440,6 +1459,51 @@ pgoutput_message(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
OutputPluginWrite(ctx, true);
}
+static void
+pgoutput_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation relation, bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ PGOutputData *data = (PGOutputData *) ctx->output_plugin_private;
+ TransactionId xid = InvalidTransactionId;
+ RelationSyncEntry *relentry;
+
+ if (!data->sequences)
+ return;
+
+ if (!is_publishable_relation(relation))
+ return;
+
+ /*
+ * Remember the xid for the sequence change in streaming mode. See
+ * pgoutput_change.
+ */
+ if (in_streaming)
+ xid = txn->xid;
+
+ relentry = get_rel_sync_entry(data, relation);
+
+ /*
+ * First check the sequence filter.
+ *
+ * We handle just REORDER_BUFFER_CHANGE_SEQUENCE here.
+ */
+ if (!relentry->pubactions.pubsequence)
+ return;
+
+ OutputPluginPrepareWrite(ctx, true);
+ logicalrep_write_sequence(ctx->out,
+ relation,
+ xid,
+ sequence_lsn,
+ transactional,
+ last_value,
+ log_cnt,
+ is_called);
+ OutputPluginWrite(ctx, true);
+}
+
/*
* Currently we always forward.
*/
@@ -1725,7 +1789,8 @@ get_rel_sync_entry(PGOutputData *data, Relation relation)
entry->schema_sent = false;
entry->streamed_txns = NIL;
entry->pubactions.pubinsert = entry->pubactions.pubupdate =
- entry->pubactions.pubdelete = entry->pubactions.pubtruncate = false;
+ entry->pubactions.pubdelete = entry->pubactions.pubtruncate =
+ entry->pubactions.pubsequence = false;
entry->new_slot = NULL;
entry->old_slot = NULL;
memset(entry->exprstate, 0, sizeof(entry->exprstate));
@@ -1739,13 +1804,13 @@ get_rel_sync_entry(PGOutputData *data, Relation relation)
{
Oid schemaId = get_rel_namespace(relid);
List *pubids = GetRelationPublications(relid);
-
+ char objectType = pub_get_object_type_for_relkind(get_rel_relkind(relid));
/*
* We don't acquire a lock on the namespace system table as we build
* the cache entry using a historic snapshot and all the later changes
* are absorbed while decoding WAL.
*/
- List *schemaPubids = GetSchemaPublications(schemaId);
+ List *schemaPubids = GetSchemaPublications(schemaId, objectType);
ListCell *lc;
Oid publish_as_relid = relid;
int publish_ancestor_level = 0;
@@ -1780,6 +1845,7 @@ get_rel_sync_entry(PGOutputData *data, Relation relation)
entry->pubactions.pubupdate = false;
entry->pubactions.pubdelete = false;
entry->pubactions.pubtruncate = false;
+ entry->pubactions.pubsequence = false;
/*
* Tuple slots cleanups. (Will be rebuilt later if needed).
@@ -1826,9 +1892,11 @@ get_rel_sync_entry(PGOutputData *data, Relation relation)
/*
* If this is a FOR ALL TABLES publication, pick the partition root
- * and set the ancestor level accordingly.
+ * and set the ancestor level accordingly. If this is a FOR ALL
+ * SEQUENCES publication, we publish it too but we don't need to
+ * pick the partition root etc.
*/
- if (pub->alltables)
+ if (pub->alltables || pub->allsequences)
{
publish = true;
if (pub->pubviaroot && am_partition)
@@ -1889,6 +1957,7 @@ get_rel_sync_entry(PGOutputData *data, Relation relation)
entry->pubactions.pubupdate |= pub->pubactions.pubupdate;
entry->pubactions.pubdelete |= pub->pubactions.pubdelete;
entry->pubactions.pubtruncate |= pub->pubactions.pubtruncate;
+ entry->pubactions.pubsequence |= pub->pubactions.pubsequence;
/*
* We want to publish the changes as the top-most ancestor
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index fccffce5729..b9f4305754e 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -56,6 +56,7 @@
#include "catalog/pg_opclass.h"
#include "catalog/pg_proc.h"
#include "catalog/pg_publication.h"
+#include "catalog/pg_publication_namespace.h"
#include "catalog/pg_rewrite.h"
#include "catalog/pg_shseclabel.h"
#include "catalog/pg_statistic_ext.h"
@@ -5543,6 +5544,8 @@ RelationBuildPublicationDesc(Relation relation, PublicationDesc *pubdesc)
Oid schemaid;
List *ancestors = NIL;
Oid relid = RelationGetRelid(relation);
+ char relkind = relation->rd_rel->relkind;
+ char objType;
/*
* If not publishable, it publishes no actions. (pgoutput_change() will
@@ -5569,8 +5572,15 @@ RelationBuildPublicationDesc(Relation relation, PublicationDesc *pubdesc)
/* Fetch the publication membership info. */
puboids = GetRelationPublications(relid);
schemaid = RelationGetNamespace(relation);
- puboids = list_concat_unique_oid(puboids, GetSchemaPublications(schemaid));
+ objType = pub_get_object_type_for_relkind(relkind);
+ puboids = list_concat_unique_oid(puboids,
+ GetSchemaPublications(schemaid, objType));
+
+ /*
+ * If this is a partition (and thus a table), look up all ancestors and
+ * track their publications too.
+ */
if (relation->rd_rel->relispartition)
{
/* Add publications that the ancestors are in too. */
@@ -5582,12 +5592,23 @@ RelationBuildPublicationDesc(Relation relation, PublicationDesc *pubdesc)
puboids = list_concat_unique_oid(puboids,
GetRelationPublications(ancestor));
+
+ /* include all publications publishing schema of all ancestors */
schemaid = get_rel_namespace(ancestor);
puboids = list_concat_unique_oid(puboids,
- GetSchemaPublications(schemaid));
+ GetSchemaPublications(schemaid,
+ PUB_OBJTYPE_TABLE));
}
}
- puboids = list_concat_unique_oid(puboids, GetAllTablesPublications());
+
+ /*
+ * Consider also FOR ALL TABLES and FOR ALL SEQUENCES publications,
+ * depending on the relkind of the relation.
+ */
+ if (relation->rd_rel->relkind == RELKIND_SEQUENCE)
+ puboids = list_concat_unique_oid(puboids, GetAllSequencesPublications());
+ else
+ puboids = list_concat_unique_oid(puboids, GetAllTablesPublications());
foreach(lc, puboids)
{
@@ -5606,6 +5627,7 @@ RelationBuildPublicationDesc(Relation relation, PublicationDesc *pubdesc)
pubdesc->pubactions.pubupdate |= pubform->pubupdate;
pubdesc->pubactions.pubdelete |= pubform->pubdelete;
pubdesc->pubactions.pubtruncate |= pubform->pubtruncate;
+ pubdesc->pubactions.pubsequence |= pubform->pubsequence;
/*
* Check if all columns referenced in the filter expression are part of
diff --git a/src/backend/utils/cache/syscache.c b/src/backend/utils/cache/syscache.c
index f4e7819f1e2..a675877d195 100644
--- a/src/backend/utils/cache/syscache.c
+++ b/src/backend/utils/cache/syscache.c
@@ -630,12 +630,12 @@ static const struct cachedesc cacheinfo[] = {
64
},
{PublicationNamespaceRelationId, /* PUBLICATIONNAMESPACEMAP */
- PublicationNamespacePnnspidPnpubidIndexId,
- 2,
+ PublicationNamespacePnnspidPnpubidPntypeIndexId,
+ 3,
{
Anum_pg_publication_namespace_pnnspid,
Anum_pg_publication_namespace_pnpubid,
- 0,
+ Anum_pg_publication_namespace_pntype,
0
},
64
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 725cd2e4ebc..ddc8567779f 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -3814,10 +3814,12 @@ getPublications(Archive *fout, int *numPublications)
int i_pubname;
int i_pubowner;
int i_puballtables;
+ int i_puballsequences;
int i_pubinsert;
int i_pubupdate;
int i_pubdelete;
int i_pubtruncate;
+ int i_pubsequence;
int i_pubviaroot;
int i,
ntups;
@@ -3833,23 +3835,29 @@ getPublications(Archive *fout, int *numPublications)
resetPQExpBuffer(query);
/* Get the publications. */
- if (fout->remoteVersion >= 130000)
+ if (fout->remoteVersion >= 150000)
+ appendPQExpBuffer(query,
+ "SELECT p.tableoid, p.oid, p.pubname, "
+ "p.pubowner, "
+ "p.puballtables, p.puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubsequence, p.pubviaroot "
+ "FROM pg_publication p");
+ else if (fout->remoteVersion >= 130000)
appendPQExpBuffer(query,
"SELECT p.tableoid, p.oid, p.pubname, "
"p.pubowner, "
- "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+ "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubsequence, p.pubviaroot "
"FROM pg_publication p");
else if (fout->remoteVersion >= 110000)
appendPQExpBuffer(query,
"SELECT p.tableoid, p.oid, p.pubname, "
"p.pubowner, "
- "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
+ "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubsequence, false AS pubviaroot "
"FROM pg_publication p");
else
appendPQExpBuffer(query,
"SELECT p.tableoid, p.oid, p.pubname, "
"p.pubowner, "
- "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
+ "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubsequence, false AS pubviaroot "
"FROM pg_publication p");
res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
@@ -3861,10 +3869,12 @@ getPublications(Archive *fout, int *numPublications)
i_pubname = PQfnumber(res, "pubname");
i_pubowner = PQfnumber(res, "pubowner");
i_puballtables = PQfnumber(res, "puballtables");
+ i_puballsequences = PQfnumber(res, "puballsequences");
i_pubinsert = PQfnumber(res, "pubinsert");
i_pubupdate = PQfnumber(res, "pubupdate");
i_pubdelete = PQfnumber(res, "pubdelete");
i_pubtruncate = PQfnumber(res, "pubtruncate");
+ i_pubsequence = PQfnumber(res, "pubsequence");
i_pubviaroot = PQfnumber(res, "pubviaroot");
pubinfo = pg_malloc(ntups * sizeof(PublicationInfo));
@@ -3880,6 +3890,8 @@ getPublications(Archive *fout, int *numPublications)
pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
pubinfo[i].puballtables =
(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+ pubinfo[i].puballsequences =
+ (strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
pubinfo[i].pubinsert =
(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
pubinfo[i].pubupdate =
@@ -3888,6 +3900,8 @@ getPublications(Archive *fout, int *numPublications)
(strcmp(PQgetvalue(res, i, i_pubdelete), "t") == 0);
pubinfo[i].pubtruncate =
(strcmp(PQgetvalue(res, i, i_pubtruncate), "t") == 0);
+ pubinfo[i].pubsequence =
+ (strcmp(PQgetvalue(res, i, i_pubsequence), "t") == 0);
pubinfo[i].pubviaroot =
(strcmp(PQgetvalue(res, i, i_pubviaroot), "t") == 0);
@@ -3933,6 +3947,9 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
if (pubinfo->puballtables)
appendPQExpBufferStr(query, " FOR ALL TABLES");
+ if (pubinfo->puballsequences)
+ appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
+
appendPQExpBufferStr(query, " WITH (publish = '");
if (pubinfo->pubinsert)
{
@@ -3967,6 +3984,15 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
first = false;
}
+ if (pubinfo->pubsequence)
+ {
+ if (!first)
+ appendPQExpBufferStr(query, ", ");
+
+ appendPQExpBufferStr(query, "sequence");
+ first = false;
+ }
+
appendPQExpBufferStr(query, "'");
if (pubinfo->pubviaroot)
@@ -4013,6 +4039,7 @@ getPublicationNamespaces(Archive *fout)
int i_oid;
int i_pnpubid;
int i_pnnspid;
+ int i_pntype;
int i,
j,
ntups;
@@ -4024,7 +4051,7 @@ getPublicationNamespaces(Archive *fout)
/* Collect all publication membership info. */
appendPQExpBufferStr(query,
- "SELECT tableoid, oid, pnpubid, pnnspid "
+ "SELECT tableoid, oid, pnpubid, pnnspid, pntype "
"FROM pg_catalog.pg_publication_namespace");
res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
@@ -4034,6 +4061,7 @@ getPublicationNamespaces(Archive *fout)
i_oid = PQfnumber(res, "oid");
i_pnpubid = PQfnumber(res, "pnpubid");
i_pnnspid = PQfnumber(res, "pnnspid");
+ i_pntype = PQfnumber(res, "pntype");
/* this allocation may be more than we need */
pubsinfo = pg_malloc(ntups * sizeof(PublicationSchemaInfo));
@@ -4043,6 +4071,7 @@ getPublicationNamespaces(Archive *fout)
{
Oid pnpubid = atooid(PQgetvalue(res, i, i_pnpubid));
Oid pnnspid = atooid(PQgetvalue(res, i, i_pnnspid));
+ char pntype = PQgetvalue(res, i, i_pntype)[0];
PublicationInfo *pubinfo;
NamespaceInfo *nspinfo;
@@ -4074,6 +4103,7 @@ getPublicationNamespaces(Archive *fout)
pubsinfo[j].dobj.name = nspinfo->dobj.name;
pubsinfo[j].publication = pubinfo;
pubsinfo[j].pubschema = nspinfo;
+ pubsinfo[j].pubtype = pntype;
/* Decide whether we want to dump it */
selectDumpablePublicationObject(&(pubsinfo[j].dobj), fout);
@@ -4207,7 +4237,11 @@ dumpPublicationNamespace(Archive *fout, const PublicationSchemaInfo *pubsinfo)
query = createPQExpBuffer();
appendPQExpBuffer(query, "ALTER PUBLICATION %s ", fmtId(pubinfo->dobj.name));
- appendPQExpBuffer(query, "ADD ALL TABLES IN SCHEMA %s;\n", fmtId(schemainfo->dobj.name));
+
+ if (pubsinfo->pubtype == 't')
+ appendPQExpBuffer(query, "ADD ALL TABLES IN SCHEMA %s;\n", fmtId(schemainfo->dobj.name));
+ else
+ appendPQExpBuffer(query, "ADD ALL SEQUENCES IN SCHEMA %s;\n", fmtId(schemainfo->dobj.name));
/*
* There is no point in creating drop query as the drop is done by schema
@@ -4240,6 +4274,7 @@ dumpPublicationTable(Archive *fout, const PublicationRelInfo *pubrinfo)
TableInfo *tbinfo = pubrinfo->pubtable;
PQExpBuffer query;
char *tag;
+ char *description;
/* Do nothing in data-only dump */
if (dopt->dataOnly)
@@ -4249,10 +4284,22 @@ dumpPublicationTable(Archive *fout, const PublicationRelInfo *pubrinfo)
query = createPQExpBuffer();
- appendPQExpBuffer(query, "ALTER PUBLICATION %s ADD TABLE ONLY",
- fmtId(pubinfo->dobj.name));
+ if (tbinfo->relkind == RELKIND_SEQUENCE)
+ {
+ appendPQExpBuffer(query, "ALTER PUBLICATION %s ADD SEQUENCE",
+ fmtId(pubinfo->dobj.name));
+ description = "PUBLICATION SEQUENCE";
+ }
+ else
+ {
+ appendPQExpBuffer(query, "ALTER PUBLICATION %s ADD TABLE ONLY",
+ fmtId(pubinfo->dobj.name));
+ description = "PUBLICATION TABLE";
+ }
+
appendPQExpBuffer(query, " %s",
fmtQualifiedDumpable(tbinfo));
+
if (pubrinfo->pubrelqual)
{
/*
@@ -4275,7 +4322,7 @@ dumpPublicationTable(Archive *fout, const PublicationRelInfo *pubrinfo)
ARCHIVE_OPTS(.tag = tag,
.namespace = tbinfo->dobj.namespace->dobj.name,
.owner = pubinfo->rolname,
- .description = "PUBLICATION TABLE",
+ .description = description,
.section = SECTION_POST_DATA,
.createStmt = query->data));
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 772dc0cf7a2..893725d121f 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -615,10 +615,12 @@ typedef struct _PublicationInfo
DumpableObject dobj;
const char *rolname;
bool puballtables;
+ bool puballsequences;
bool pubinsert;
bool pubupdate;
bool pubdelete;
bool pubtruncate;
+ bool pubsequence;
bool pubviaroot;
} PublicationInfo;
@@ -643,6 +645,7 @@ typedef struct _PublicationSchemaInfo
DumpableObject dobj;
PublicationInfo *publication;
NamespaceInfo *pubschema;
+ char pubtype;
} PublicationSchemaInfo;
/*
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index fd1052e5db8..19f908f6006 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -2358,7 +2358,7 @@ my %tests = (
create_order => 50,
create_sql => 'CREATE PUBLICATION pub1;',
regexp => qr/^
- \QCREATE PUBLICATION pub1 WITH (publish = 'insert, update, delete, truncate');\E
+ \QCREATE PUBLICATION pub1 WITH (publish = 'insert, update, delete, truncate, sequence');\E
/xm,
like => { %full_runs, section_post_data => 1, },
},
@@ -2378,16 +2378,27 @@ my %tests = (
create_order => 50,
create_sql => 'CREATE PUBLICATION pub3;',
regexp => qr/^
- \QCREATE PUBLICATION pub3 WITH (publish = 'insert, update, delete, truncate');\E
+ \QCREATE PUBLICATION pub3 WITH (publish = 'insert, update, delete, truncate, sequence');\E
/xm,
like => { %full_runs, section_post_data => 1, },
},
'CREATE PUBLICATION pub4' => {
create_order => 50,
- create_sql => 'CREATE PUBLICATION pub4;',
+ create_sql => 'CREATE PUBLICATION pub4
+ FOR ALL SEQUENCES
+ WITH (publish = \'\');',
regexp => qr/^
- \QCREATE PUBLICATION pub4 WITH (publish = 'insert, update, delete, truncate');\E
+ \QCREATE PUBLICATION pub4 FOR ALL SEQUENCES WITH (publish = '');\E
+ /xm,
+ like => { %full_runs, section_post_data => 1, },
+ },
+
+ 'CREATE PUBLICATION pub5' => {
+ create_order => 50,
+ create_sql => 'CREATE PUBLICATION pub5;',
+ regexp => qr/^
+ \QCREATE PUBLICATION pub5 WITH (publish = 'insert, update, delete, truncate, sequence');\E
/xm,
like => { %full_runs, section_post_data => 1, },
},
@@ -2474,6 +2485,27 @@ my %tests = (
unlike => { exclude_dump_test_schema => 1, },
},
+ 'ALTER PUBLICATION pub3 ADD ALL SEQUENCES IN SCHEMA dump_test' => {
+ create_order => 51,
+ create_sql =>
+ 'ALTER PUBLICATION pub3 ADD ALL SEQUENCES IN SCHEMA dump_test;',
+ regexp => qr/^
+ \QALTER PUBLICATION pub3 ADD ALL SEQUENCES IN SCHEMA dump_test;\E
+ /xm,
+ like => { %full_runs, section_post_data => 1, },
+ unlike => { exclude_dump_test_schema => 1, },
+ },
+
+ 'ALTER PUBLICATION pub3 ADD ALL SEQUENCES IN SCHEMA public' => {
+ create_order => 52,
+ create_sql =>
+ 'ALTER PUBLICATION pub3 ADD ALL SEQUENCES IN SCHEMA public;',
+ regexp => qr/^
+ \QALTER PUBLICATION pub3 ADD ALL SEQUENCES IN SCHEMA public;\E
+ /xm,
+ like => { %full_runs, section_post_data => 1, },
+ },
+
'CREATE SCHEMA public' => {
regexp => qr/^CREATE SCHEMA public;/m,
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 991bfc1546b..a0ed537f8eb 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1633,28 +1633,19 @@ describeOneTableDetails(const char *schemaname,
if (tableinfo.relkind == RELKIND_SEQUENCE)
{
PGresult *result = NULL;
- printQueryOpt myopt = pset.popt;
- char *footers[2] = {NULL, NULL};
if (pset.sversion >= 100000)
{
printfPQExpBuffer(&buf,
- "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
- " seqstart AS \"%s\",\n"
- " seqmin AS \"%s\",\n"
- " seqmax AS \"%s\",\n"
- " seqincrement AS \"%s\",\n"
- " CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
- " seqcache AS \"%s\"\n",
- gettext_noop("Type"),
- gettext_noop("Start"),
- gettext_noop("Minimum"),
- gettext_noop("Maximum"),
- gettext_noop("Increment"),
+ "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+ " seqstart,\n"
+ " seqmin,\n"
+ " seqmax,\n"
+ " seqincrement,\n"
+ " CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+ " seqcache\n",
gettext_noop("yes"),
- gettext_noop("no"),
- gettext_noop("Cycles?"),
- gettext_noop("Cache"));
+ gettext_noop("no"));
appendPQExpBuffer(&buf,
"FROM pg_catalog.pg_sequence\n"
"WHERE seqrelid = '%s';",
@@ -1663,22 +1654,15 @@ describeOneTableDetails(const char *schemaname,
else
{
printfPQExpBuffer(&buf,
- "SELECT 'bigint' AS \"%s\",\n"
- " start_value AS \"%s\",\n"
- " min_value AS \"%s\",\n"
- " max_value AS \"%s\",\n"
- " increment_by AS \"%s\",\n"
- " CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
- " cache_value AS \"%s\"\n",
- gettext_noop("Type"),
- gettext_noop("Start"),
- gettext_noop("Minimum"),
- gettext_noop("Maximum"),
- gettext_noop("Increment"),
+ "SELECT 'bigint',\n"
+ " start_value,\n"
+ " min_value,\n"
+ " max_value,\n"
+ " increment_by,\n"
+ " CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+ " cache_value\n",
gettext_noop("yes"),
- gettext_noop("no"),
- gettext_noop("Cycles?"),
- gettext_noop("Cache"));
+ gettext_noop("no"));
appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
/* must be separate because fmtId isn't reentrant */
appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1688,6 +1672,51 @@ describeOneTableDetails(const char *schemaname,
if (!res)
goto error_return;
+ numrows = PQntuples(res);
+
+ /*
+ * XXX reset to use expanded output for sequences (maybe we should
+ * keep this disabled, just like for tables?)
+ */
+ myopt.expanded = pset.popt.topt.expanded;
+
+ printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+ schemaname, relationname);
+
+ printTableInit(&cont, &myopt, title.data, 7, numrows);
+ printTableInitialized = true;
+
+ printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+ printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+ printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+ printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+ printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+ printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+ printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+ /* Generate table cells to be printed */
+ for (i = 0; i < numrows; i++)
+ {
+ /* Type */
+ printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+ /* Start */
+ printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+ /* Minimum */
+ printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+ /* Maximum */
+ printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+ /* Increment */
+ printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+ /* Cycles? */
+ printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+ /* Cache */
+ printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+ }
+
/* Footer information about a sequence */
/* Get the column that owns this sequence */
@@ -1721,29 +1750,63 @@ describeOneTableDetails(const char *schemaname,
switch (PQgetvalue(result, 0, 1)[0])
{
case 'a':
- footers[0] = psprintf(_("Owned by: %s"),
- PQgetvalue(result, 0, 0));
+ printTableAddFooter(&cont,
+ psprintf(_("Owned by: %s"),
+ PQgetvalue(result, 0, 0)));
break;
case 'i':
- footers[0] = psprintf(_("Sequence for identity column: %s"),
- PQgetvalue(result, 0, 0));
+ printTableAddFooter(&cont,
+ psprintf(_("Sequence for identity column: %s"),
+ PQgetvalue(result, 0, 0)));
break;
}
}
PQclear(result);
- printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
- schemaname, relationname);
+ /* print any publications */
+ if (pset.sversion >= 150000)
+ {
+ int tuples = 0;
- myopt.footers = footers;
- myopt.topt.default_footer = false;
- myopt.title = title.data;
- myopt.translate_header = true;
+ printfPQExpBuffer(&buf,
+ "SELECT pubname\n"
+ "FROM pg_catalog.pg_publication p\n"
+ " JOIN pg_catalog.pg_publication_namespace pn ON p.oid = pn.pnpubid\n"
+ " JOIN pg_catalog.pg_class pc ON pc.relnamespace = pn.pnnspid\n"
+ "WHERE pc.oid ='%s' and pn.pntype = 's' and pg_catalog.pg_relation_is_publishable('%s')\n"
+ "UNION\n"
+ "SELECT pubname\n"
+ "FROM pg_catalog.pg_publication p\n"
+ " JOIN pg_catalog.pg_publication_rel pr ON p.oid = pr.prpubid\n"
+ "WHERE pr.prrelid = '%s'\n"
+ "UNION\n"
+ "SELECT pubname\n"
+ "FROM pg_catalog.pg_publication p\n"
+ "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+ "ORDER BY 1;",
+ oid, oid, oid, oid);
- printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+ result = PSQLexec(buf.data);
+ if (!result)
+ goto error_return;
+ else
+ tuples = PQntuples(result);
+
+ if (tuples > 0)
+ printTableAddFooter(&cont, _("Publications:"));
+
+ /* Might be an empty set - that's ok */
+ for (i = 0; i < tuples; i++)
+ {
+ printfPQExpBuffer(&buf, " \"%s\"",
+ PQgetvalue(result, i, 0));
+
+ printTableAddFooter(&cont, buf.data);
+ }
+ PQclear(result);
+ }
- if (footers[0])
- free(footers[0]);
+ printTable(&cont, pset.queryFout, false, pset.logfile);
retval = true;
goto error_return; /* not an error, just return early */
@@ -1970,6 +2033,11 @@ describeOneTableDetails(const char *schemaname,
for (i = 0; i < cols; i++)
printTableAddHeader(&cont, headers[i], true, 'l');
+ res = PSQLexec(buf.data);
+ if (!res)
+ goto error_return;
+ numrows = PQntuples(res);
+
/* Generate table cells to be printed */
for (i = 0; i < numrows; i++)
{
@@ -2895,7 +2963,7 @@ describeOneTableDetails(const char *schemaname,
"FROM pg_catalog.pg_publication p\n"
" JOIN pg_catalog.pg_publication_namespace pn ON p.oid = pn.pnpubid\n"
" JOIN pg_catalog.pg_class pc ON pc.relnamespace = pn.pnnspid\n"
- "WHERE pc.oid ='%s' and pg_catalog.pg_relation_is_publishable('%s')\n"
+ "WHERE pc.oid ='%s' and pn.pntype = 't' and pg_catalog.pg_relation_is_publishable('%s')\n"
"UNION\n"
"SELECT pubname\n"
" , pg_get_expr(pr.prqual, c.oid)\n"
@@ -4785,7 +4853,7 @@ listSchemas(const char *pattern, bool verbose, bool showSystem)
int i;
printfPQExpBuffer(&buf,
- "SELECT pubname \n"
+ "SELECT pubname, (CASE WHEN pntype = 't' THEN 'tables' ELSE 'sequences' END) AS pubtype\n"
"FROM pg_catalog.pg_publication p\n"
" JOIN pg_catalog.pg_publication_namespace pn ON p.oid = pn.pnpubid\n"
" JOIN pg_catalog.pg_namespace n ON n.oid = pn.pnnspid \n"
@@ -4814,8 +4882,9 @@ listSchemas(const char *pattern, bool verbose, bool showSystem)
/* Might be an empty set - that's ok */
for (i = 0; i < pub_schema_tuples; i++)
{
- printfPQExpBuffer(&buf, " \"%s\"",
- PQgetvalue(result, i, 0));
+ printfPQExpBuffer(&buf, " \"%s\" (%s)",
+ PQgetvalue(result, i, 0),
+ PQgetvalue(result, i, 1));
footers[i + 1] = pg_strdup(buf.data);
}
@@ -5820,7 +5889,7 @@ listPublications(const char *pattern)
PQExpBufferData buf;
PGresult *res;
printQueryOpt myopt = pset.popt;
- static const bool translate_columns[] = {false, false, false, false, false, false, false, false};
+ static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
if (pset.sversion < 100000)
{
@@ -5834,23 +5903,45 @@ listPublications(const char *pattern)
initPQExpBuffer(&buf);
- printfPQExpBuffer(&buf,
- "SELECT pubname AS \"%s\",\n"
- " pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
- " puballtables AS \"%s\",\n"
- " pubinsert AS \"%s\",\n"
- " pubupdate AS \"%s\",\n"
- " pubdelete AS \"%s\"",
- gettext_noop("Name"),
- gettext_noop("Owner"),
- gettext_noop("All tables"),
- gettext_noop("Inserts"),
- gettext_noop("Updates"),
- gettext_noop("Deletes"));
+ if (pset.sversion >= 150000)
+ printfPQExpBuffer(&buf,
+ "SELECT pubname AS \"%s\",\n"
+ " pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
+ " puballtables AS \"%s\",\n"
+ " puballsequences AS \"%s\",\n"
+ " pubinsert AS \"%s\",\n"
+ " pubupdate AS \"%s\",\n"
+ " pubdelete AS \"%s\"",
+ gettext_noop("Name"),
+ gettext_noop("Owner"),
+ gettext_noop("All tables"),
+ gettext_noop("All sequences"),
+ gettext_noop("Inserts"),
+ gettext_noop("Updates"),
+ gettext_noop("Deletes"));
+ else
+ printfPQExpBuffer(&buf,
+ "SELECT pubname AS \"%s\",\n"
+ " pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
+ " puballtables AS \"%s\",\n"
+ " pubinsert AS \"%s\",\n"
+ " pubupdate AS \"%s\",\n"
+ " pubdelete AS \"%s\"",
+ gettext_noop("Name"),
+ gettext_noop("Owner"),
+ gettext_noop("All tables"),
+ gettext_noop("Inserts"),
+ gettext_noop("Updates"),
+ gettext_noop("Deletes"));
+
if (pset.sversion >= 110000)
appendPQExpBuffer(&buf,
",\n pubtruncate AS \"%s\"",
gettext_noop("Truncates"));
+ if (pset.sversion >= 150000)
+ appendPQExpBuffer(&buf,
+ ",\n pubsequence AS \"%s\"",
+ gettext_noop("Sequences"));
if (pset.sversion >= 130000)
appendPQExpBuffer(&buf,
",\n pubviaroot AS \"%s\"",
@@ -5936,6 +6027,7 @@ describePublications(const char *pattern)
PGresult *res;
bool has_pubtruncate;
bool has_pubviaroot;
+ bool has_pubsequence;
PQExpBufferData title;
printTableContent cont;
@@ -5952,6 +6044,7 @@ describePublications(const char *pattern)
has_pubtruncate = (pset.sversion >= 110000);
has_pubviaroot = (pset.sversion >= 130000);
+ has_pubsequence = (pset.sversion >= 150000);
initPQExpBuffer(&buf);
@@ -5959,12 +6052,17 @@ describePublications(const char *pattern)
"SELECT oid, pubname,\n"
" pg_catalog.pg_get_userbyid(pubowner) AS owner,\n"
" puballtables, pubinsert, pubupdate, pubdelete");
+
if (has_pubtruncate)
appendPQExpBufferStr(&buf,
", pubtruncate");
if (has_pubviaroot)
appendPQExpBufferStr(&buf,
", pubviaroot");
+ if (has_pubsequence)
+ appendPQExpBufferStr(&buf,
+ ", puballsequences, pubsequence");
+
appendPQExpBufferStr(&buf,
"\nFROM pg_catalog.pg_publication\n");
@@ -6005,6 +6103,7 @@ describePublications(const char *pattern)
char *pubid = PQgetvalue(res, i, 0);
char *pubname = PQgetvalue(res, i, 1);
bool puballtables = strcmp(PQgetvalue(res, i, 3), "t") == 0;
+ bool puballsequences = has_pubsequence &&
+ strcmp(PQgetvalue(res, i, 9), "t") == 0;
printTableOpt myopt = pset.popt.topt;
if (has_pubtruncate)
@@ -6012,29 +6111,43 @@ describePublications(const char *pattern)
if (has_pubviaroot)
ncols++;
+ /* sequence support adds two extra columns (puballsequences, pubsequence) */
+ if (has_pubsequence)
+ ncols += 2;
+
initPQExpBuffer(&title);
printfPQExpBuffer(&title, _("Publication %s"), pubname);
printTableInit(&cont, &myopt, title.data, ncols, nrows);
printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+ if (has_pubsequence)
+ printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
if (has_pubtruncate)
printTableAddHeader(&cont, gettext_noop("Truncates"), true, align);
+ if (has_pubsequence)
+ printTableAddHeader(&cont, gettext_noop("Sequences"), true, align);
if (has_pubviaroot)
printTableAddHeader(&cont, gettext_noop("Via root"), true, align);
- printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
- printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
- printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
- printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
- printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+ printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false); /* owner */
+ printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false); /* all tables */
+
+ if (has_pubsequence)
+ printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false); /* all sequences */
+
+ printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false); /* insert */
+ printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false); /* update */
+ printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false); /* delete */
if (has_pubtruncate)
- printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
+ printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false); /* truncate */
+ if (has_pubsequence)
+ printTableAddCell(&cont, PQgetvalue(res, i, 10), false, false); /* sequence */
if (has_pubviaroot)
- printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
+ printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false); /* via root */
if (!puballtables)
{
@@ -6054,6 +6167,7 @@ describePublications(const char *pattern)
"WHERE c.relnamespace = n.oid\n"
" AND c.oid = pr.prrelid\n"
" AND pr.prpubid = '%s'\n"
+ " AND c.relkind != 'S'\n" /* exclude sequences */
"ORDER BY 1,2", pubid);
if (!addFooterToPublicationDesc(&buf, "Tables:", false, &cont))
goto error_return;
@@ -6065,7 +6179,7 @@ describePublications(const char *pattern)
"SELECT n.nspname\n"
"FROM pg_catalog.pg_namespace n\n"
" JOIN pg_catalog.pg_publication_namespace pn ON n.oid = pn.pnnspid\n"
- "WHERE pn.pnpubid = '%s'\n"
+ "WHERE pn.pnpubid = '%s' AND pn.pntype = 't'\n"
"ORDER BY 1", pubid);
if (!addFooterToPublicationDesc(&buf, "Tables from schemas:",
true, &cont))
@@ -6073,6 +6187,37 @@ describePublications(const char *pattern)
}
}
+ if (!puballsequences)
+ {
+ /* Get the sequences for the specified publication */
+ printfPQExpBuffer(&buf,
+ "SELECT n.nspname, c.relname, NULL\n"
+ "FROM pg_catalog.pg_class c,\n"
+ " pg_catalog.pg_namespace n,\n"
+ " pg_catalog.pg_publication_rel pr\n"
+ "WHERE c.relnamespace = n.oid\n"
+ " AND c.oid = pr.prrelid\n"
+ " AND pr.prpubid = '%s'\n"
+ " AND c.relkind = 'S'\n" /* only sequences */
+ "ORDER BY 1,2", pubid);
+ if (!addFooterToPublicationDesc(&buf, "Sequences:", false, &cont))
+ goto error_return;
+
+ if (pset.sversion >= 150000)
+ {
+ /* Get the schemas for the specified publication */
+ printfPQExpBuffer(&buf,
+ "SELECT n.nspname\n"
+ "FROM pg_catalog.pg_namespace n\n"
+ " JOIN pg_catalog.pg_publication_namespace pn ON n.oid = pn.pnnspid\n"
+ "WHERE pn.pnpubid = '%s' AND pn.pntype = 's'\n"
+ "ORDER BY 1", pubid);
+ if (!addFooterToPublicationDesc(&buf, "Sequences from schemas:",
+ true, &cont))
+ goto error_return;
+ }
+ }
+
printTable(&cont, pset.queryFout, false, pset.logfile);
printTableCleanup(&cont);
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 183abcc2753..be9cfe0fbb7 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1815,11 +1815,15 @@ psql_completion(const char *text, int start, int end)
COMPLETE_WITH("ADD", "DROP", "OWNER TO", "RENAME TO", "SET");
/* ALTER PUBLICATION <name> ADD */
else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD"))
- COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE");
+ COMPLETE_WITH("ALL TABLES IN SCHEMA", "ALL SEQUENCES IN SCHEMA", "TABLE", "SEQUENCE");
else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD|SET", "TABLE") ||
(HeadMatches("ALTER", "PUBLICATION", MatchAny, "ADD|SET", "TABLE") &&
ends_with(prev_wd, ',')))
COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_tables);
+ else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD|SET", "SEQUENCE") ||
+ (HeadMatches("ALTER", "PUBLICATION", MatchAny, "ADD|SET", "SEQUENCE") &&
+ ends_with(prev_wd, ',')))
+ COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_sequences);
/*
* "ALTER PUBLICATION <name> SET TABLE <name> WHERE (" - complete with
* table attributes
@@ -1838,11 +1842,11 @@ psql_completion(const char *text, int start, int end)
COMPLETE_WITH(",");
/* ALTER PUBLICATION <name> DROP */
else if (Matches("ALTER", "PUBLICATION", MatchAny, "DROP"))
- COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE");
+ COMPLETE_WITH("ALL TABLES IN SCHEMA", "ALL SEQUENCES IN SCHEMA", "TABLE", "SEQUENCE");
/* ALTER PUBLICATION <name> SET */
else if (Matches("ALTER", "PUBLICATION", MatchAny, "SET"))
- COMPLETE_WITH("(", "ALL TABLES IN SCHEMA", "TABLE");
- else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD|DROP|SET", "ALL", "TABLES", "IN", "SCHEMA"))
+ COMPLETE_WITH("(", "ALL TABLES IN SCHEMA", "ALL SEQUENCES IN SCHEMA", "TABLE", "SEQUENCE");
+ else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD|DROP|SET", "ALL", "TABLES|SEQUENCES", "IN", "SCHEMA"))
COMPLETE_WITH_QUERY_PLUS(Query_for_list_of_schemas
" AND nspname NOT LIKE E'pg\\\\_%%'",
"CURRENT_SCHEMA");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index d8e8715ed1c..699bd0aa3e3 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -11540,6 +11540,11 @@
provolatile => 's', prorettype => 'oid', proargtypes => 'text',
proallargtypes => '{text,oid}', proargmodes => '{i,o}',
proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_tables' },
+{ oid => '8000', descr => 'get OIDs of sequences in a publication',
+ proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+ provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+ proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+ proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
{ oid => '6121',
descr => 'returns whether a relation can be part of a publication',
proname => 'pg_relation_is_publishable', provolatile => 's',
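Based on the signature above, pg_get_publication_sequences() is intended as
the sequence counterpart of pg_get_publication_tables(), so it should be
usable roughly like this (publication name invented):

SELECT relid::regclass FROM pg_get_publication_sequences('seq_pub');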
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index fe773cf9b7d..97f26208e1d 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
*/
bool puballtables;
+ /*
+ * indicates that this is special publication which should encompass all
+ * sequences in the database (except for the unlogged and temp ones)
+ */
+ bool puballsequences;
+
/* true if inserts are published */
bool pubinsert;
@@ -52,6 +58,9 @@ CATALOG(pg_publication,6104,PublicationRelationId)
/* true if truncates are published */
bool pubtruncate;
+ /* true if sequences are published */
+ bool pubsequence;
+
/* true if partition changes are published using root schema */
bool pubviaroot;
} FormData_pg_publication;
@@ -72,6 +81,7 @@ typedef struct PublicationActions
bool pubupdate;
bool pubdelete;
bool pubtruncate;
+ bool pubsequence;
} PublicationActions;
typedef struct PublicationDesc
@@ -92,6 +102,7 @@ typedef struct Publication
Oid oid;
char *name;
bool alltables;
+ bool allsequences;
bool pubviaroot;
PublicationActions pubactions;
} Publication;
@@ -122,14 +133,15 @@ typedef enum PublicationPartOpt
PUBLICATION_PART_ALL,
} PublicationPartOpt;
-extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
+extern List *GetPublicationRelations(Oid pubid, char objectType,
+ PublicationPartOpt pub_partopt);
extern List *GetAllTablesPublications(void);
extern List *GetAllTablesPublicationRelations(bool pubviaroot);
-extern List *GetPublicationSchemas(Oid pubid);
-extern List *GetSchemaPublications(Oid schemaid);
-extern List *GetSchemaPublicationRelations(Oid schemaid,
+extern List *GetPublicationSchemas(Oid pubid, char objectType);
+extern List *GetSchemaPublications(Oid schemaid, char objectType);
+extern List *GetSchemaPublicationRelations(Oid schemaid, char objectType,
PublicationPartOpt pub_partopt);
-extern List *GetAllSchemaPublicationRelations(Oid puboid,
+extern List *GetAllSchemaPublicationRelations(Oid puboid, char objectType,
PublicationPartOpt pub_partopt);
extern List *GetPubPartitionOptionRelations(List *result,
PublicationPartOpt pub_partopt,
@@ -137,11 +149,15 @@ extern List *GetPubPartitionOptionRelations(List *result,
extern Oid GetTopMostAncestorInPublication(Oid puboid, List *ancestors,
int *ancestor_level);
+extern List *GetAllSequencesPublications(void);
+extern List *GetAllSequencesPublicationRelations(void);
+
extern bool is_publishable_relation(Relation rel);
extern bool is_schema_publication(Oid pubid);
extern ObjectAddress publication_add_relation(Oid pubid, PublicationRelInfo *pri,
bool if_not_exists);
extern ObjectAddress publication_add_schema(Oid pubid, Oid schemaid,
+ char objectType,
bool if_not_exists);
extern Oid get_publication_oid(const char *pubname, bool missing_ok);
diff --git a/src/include/catalog/pg_publication_namespace.h b/src/include/catalog/pg_publication_namespace.h
index e4306da02e7..7340a1ec646 100644
--- a/src/include/catalog/pg_publication_namespace.h
+++ b/src/include/catalog/pg_publication_namespace.h
@@ -32,6 +32,7 @@ CATALOG(pg_publication_namespace,8901,PublicationNamespaceRelationId)
Oid oid; /* oid */
Oid pnpubid BKI_LOOKUP(pg_publication); /* Oid of the publication */
Oid pnnspid BKI_LOOKUP(pg_namespace); /* Oid of the schema */
+ char pntype; /* object type to include */
} FormData_pg_publication_namespace;
/* ----------------
@@ -42,6 +43,13 @@ CATALOG(pg_publication_namespace,8901,PublicationNamespaceRelationId)
typedef FormData_pg_publication_namespace *Form_pg_publication_namespace;
DECLARE_UNIQUE_INDEX_PKEY(pg_publication_namespace_oid_index, 8902, PublicationNamespaceObjectIndexId, on pg_publication_namespace using btree(oid oid_ops));
-DECLARE_UNIQUE_INDEX(pg_publication_namespace_pnnspid_pnpubid_index, 8903, PublicationNamespacePnnspidPnpubidIndexId, on pg_publication_namespace using btree(pnnspid oid_ops, pnpubid oid_ops));
+DECLARE_UNIQUE_INDEX(pg_publication_namespace_pnnspid_pnpubid_pntype_index, 8903, PublicationNamespacePnnspidPnpubidPntypeIndexId, on pg_publication_namespace using btree(pnnspid oid_ops, pnpubid oid_ops, pntype char_ops));
+
+/* object type to include from a schema, maps to relkind */
+#define PUB_OBJTYPE_TABLE 't' /* table (regular or partitioned) */
+#define PUB_OBJTYPE_SEQUENCE 's' /* sequence object */
+#define PUB_OBJTYPE_UNSUPPORTED 'u' /* used for non-replicated types */
+
+extern char pub_get_object_type_for_relkind(char relkind);
#endif /* PG_PUBLICATION_NAMESPACE_H */
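Since the unique index now includes pntype, a schema can be listed in a
publication once for tables and once for sequences. A hand-written catalog
query like this one shows which rows are which:

SELECT p.pubname, n.nspname, pn.pntype
  FROM pg_catalog.pg_publication_namespace pn
  JOIN pg_catalog.pg_publication p ON p.oid = pn.pnpubid
  JOIN pg_catalog.pg_namespace n ON n.oid = pn.pnnspid
 ORDER BY 1, 2;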
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9fecc41954e..cd4002be291 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -60,6 +60,7 @@ extern ObjectAddress DefineSequence(ParseState *pstate, CreateSeqStmt *stmt);
extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
extern void DeleteSequenceTuple(Oid relid);
extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid seq_relid, int64 last_value, int64 log_cnt, bool is_called);
extern void ResetSequenceCaches(void);
extern void seq_redo(XLogReaderState *rptr);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 1617702d9d6..12da287908f 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -3663,6 +3663,10 @@ typedef enum PublicationObjSpecType
PUBLICATIONOBJ_TABLES_IN_SCHEMA, /* All tables in schema */
PUBLICATIONOBJ_TABLES_IN_CUR_SCHEMA, /* All tables in first element of
* search_path */
+ PUBLICATIONOBJ_SEQUENCE, /* A sequence */
+ PUBLICATIONOBJ_SEQUENCES_IN_SCHEMA, /* All sequences in schema */
+ PUBLICATIONOBJ_SEQUENCES_IN_CUR_SCHEMA, /* All sequences in first element
+ * of search_path */
PUBLICATIONOBJ_CONTINUATION /* Continuation of previous type */
} PublicationObjSpecType;
@@ -3681,7 +3685,7 @@ typedef struct CreatePublicationStmt
char *pubname; /* Name of the publication */
List *options; /* List of DefElem nodes */
List *pubobjects; /* Optional list of publication objects */
- bool for_all_tables; /* Special publication for all tables in db */
+ List *for_all_objects; /* Special publication for all objects in db */
} CreatePublicationStmt;
typedef enum AlterPublicationAction
@@ -3704,7 +3708,7 @@ typedef struct AlterPublicationStmt
* objects.
*/
List *pubobjects; /* Optional list of publication objects */
- bool for_all_tables; /* Special publication for all tables in db */
+ List *for_all_objects; /* Special publication for all objects in db */
AlterPublicationAction action; /* What action to perform with the given
* objects */
} AlterPublicationStmt;
diff --git a/src/include/replication/logicalproto.h b/src/include/replication/logicalproto.h
index 4d2c881644a..415757d8a2d 100644
--- a/src/include/replication/logicalproto.h
+++ b/src/include/replication/logicalproto.h
@@ -61,6 +61,7 @@ typedef enum LogicalRepMsgType
LOGICAL_REP_MSG_RELATION = 'R',
LOGICAL_REP_MSG_TYPE = 'Y',
LOGICAL_REP_MSG_MESSAGE = 'M',
+ LOGICAL_REP_MSG_SEQUENCE = 'X', /* FIXME change */
LOGICAL_REP_MSG_BEGIN_PREPARE = 'b',
LOGICAL_REP_MSG_PREPARE = 'P',
LOGICAL_REP_MSG_COMMIT_PREPARED = 'K',
@@ -118,6 +119,18 @@ typedef struct LogicalRepTyp
char *typname; /* name of the remote type */
} LogicalRepTyp;
+/* Sequence info */
+typedef struct LogicalRepSequence
+{
+ Oid remoteid; /* unique id of the remote sequence */
+ char *nspname; /* schema name of remote sequence */
+ char *seqname; /* name of the remote sequence */
+ bool transactional;
+ int64 last_value;
+ int64 log_cnt;
+ bool is_called;
+} LogicalRepSequence;
+
/* Transaction info */
typedef struct LogicalRepBeginData
{
@@ -230,6 +243,12 @@ extern List *logicalrep_read_truncate(StringInfo in,
bool *cascade, bool *restart_seqs);
extern void logicalrep_write_message(StringInfo out, TransactionId xid, XLogRecPtr lsn,
bool transactional, const char *prefix, Size sz, const char *message);
+extern void logicalrep_write_sequence(StringInfo out, Relation rel,
+ TransactionId xid, XLogRecPtr lsn,
+ bool transactional,
+ int64 last_value, int64 log_cnt,
+ bool is_called);
+extern void logicalrep_read_sequence(StringInfo in, LogicalRepSequence *seqdata);
extern void logicalrep_write_rel(StringInfo out, TransactionId xid,
Relation rel);
extern LogicalRepRelation *logicalrep_read_rel(StringInfo in);
diff --git a/src/include/replication/pgoutput.h b/src/include/replication/pgoutput.h
index eafedd610a5..f4e9f35d09d 100644
--- a/src/include/replication/pgoutput.h
+++ b/src/include/replication/pgoutput.h
@@ -29,6 +29,7 @@ typedef struct PGOutputData
bool streaming;
bool messages;
bool two_phase;
+ bool sequences;
} PGOutputData;
#endif /* PGOUTPUT_H */
diff --git a/src/test/regress/expected/object_address.out b/src/test/regress/expected/object_address.out
index 4117fc27c9a..c95d44b3db9 100644
--- a/src/test/regress/expected/object_address.out
+++ b/src/test/regress/expected/object_address.out
@@ -46,6 +46,7 @@ CREATE TRANSFORM FOR int LANGUAGE SQL (
SET client_min_messages = 'ERROR';
CREATE PUBLICATION addr_pub FOR TABLE addr_nsp.gentable;
CREATE PUBLICATION addr_pub_schema FOR ALL TABLES IN SCHEMA addr_nsp;
+CREATE PUBLICATION addr_pub_schema2 FOR ALL SEQUENCES IN SCHEMA addr_nsp;
RESET client_min_messages;
CREATE SUBSCRIPTION regress_addr_sub CONNECTION '' PUBLICATION bar WITH (connect = false, slot_name = NONE);
WARNING: tables were not subscribed, you will have to run ALTER SUBSCRIPTION ... REFRESH PUBLICATION to subscribe the tables
@@ -428,7 +429,8 @@ WITH objects (type, name, args) AS (VALUES
('transform', '{int}', '{sql}'),
('access method', '{btree}', '{}'),
('publication', '{addr_pub}', '{}'),
- ('publication namespace', '{addr_nsp}', '{addr_pub_schema}'),
+ ('publication namespace', '{addr_nsp}', '{addr_pub_schema,t}'),
+ ('publication namespace', '{addr_nsp}', '{addr_pub_schema2,s}'),
('publication relation', '{addr_nsp, gentable}', '{addr_pub}'),
('subscription', '{regress_addr_sub}', '{}'),
('statistics object', '{addr_nsp, gentable_stat}', '{}')
@@ -492,8 +494,9 @@ SELECT (pg_identify_object(addr1.classid, addr1.objid, addr1.objsubid)).*,
subscription | | regress_addr_sub | regress_addr_sub | t
publication | | addr_pub | addr_pub | t
publication relation | | | addr_nsp.gentable in publication addr_pub | t
- publication namespace | | | addr_nsp in publication addr_pub_schema | t
-(50 rows)
+ publication namespace | | | addr_nsp in publication addr_pub_schema type t | t
+ publication namespace | | | addr_nsp in publication addr_pub_schema2 type s | t
+(51 rows)
---
--- Cleanup resources
@@ -506,6 +509,7 @@ drop cascades to server integer
drop cascades to user mapping for regress_addr_user on server integer
DROP PUBLICATION addr_pub;
DROP PUBLICATION addr_pub_schema;
+DROP PUBLICATION addr_pub_schema2;
DROP SUBSCRIPTION regress_addr_sub;
DROP SCHEMA addr_nsp CASCADE;
NOTICE: drop cascades to 14 other objects
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 4e191c120ac..58214ee742e 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -30,20 +30,20 @@ ERROR: conflicting or redundant options
LINE 1: ...ub_xxx WITH (publish_via_partition_root = 'true', publish_vi...
^
\dRp
- List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f | t | f | f | f | f
- testpub_default | regress_publication_user | f | f | t | f | f | f
+ List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f | f | t | f | f | f | f | f
+ testpub_default | regress_publication_user | f | f | f | t | f | f | f | f
(2 rows)
-ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
+ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete, sequence');
\dRp
- List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f | t | f | f | f | f
- testpub_default | regress_publication_user | f | t | t | t | f | f
+ List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f | f | t | f | f | f | f | f
+ testpub_default | regress_publication_user | f | f | t | t | t | f | t | f
(2 rows)
--- adding tables
@@ -61,6 +61,9 @@ CREATE TABLE testpub_tbl2 (id serial primary key, data text);
ALTER PUBLICATION testpub_foralltables ADD TABLE testpub_tbl2;
ERROR: publication "testpub_foralltables" is defined as FOR ALL TABLES
DETAIL: Tables cannot be added to or dropped from FOR ALL TABLES publications.
+-- fail - can't add a table using ADD SEQUENCE command
+ALTER PUBLICATION testpub_foralltables ADD SEQUENCE testpub_tbl2;
+ERROR: object type mismatch
-- fail - can't drop from all tables publication
ALTER PUBLICATION testpub_foralltables DROP TABLE testpub_tbl2;
ERROR: publication "testpub_foralltables" is defined as FOR ALL TABLES
@@ -87,10 +90,10 @@ RESET client_min_messages;
-- should be able to add schema to 'FOR TABLE' publication
ALTER PUBLICATION testpub_fortable ADD ALL TABLES IN SCHEMA pub_test;
\dRp+ testpub_fortable
- Publication testpub_fortable
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_fortable
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"public.testpub_tbl1"
Tables from schemas:
@@ -99,20 +102,20 @@ Tables from schemas:
-- should be able to drop schema from 'FOR TABLE' publication
ALTER PUBLICATION testpub_fortable DROP ALL TABLES IN SCHEMA pub_test;
\dRp+ testpub_fortable
- Publication testpub_fortable
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_fortable
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"public.testpub_tbl1"
-- should be able to set schema to 'FOR TABLE' publication
ALTER PUBLICATION testpub_fortable SET ALL TABLES IN SCHEMA pub_test;
\dRp+ testpub_fortable
- Publication testpub_fortable
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_fortable
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test"
@@ -134,10 +137,10 @@ ERROR: relation "testpub_nopk" is not part of the publication
-- should be able to set table to schema publication
ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
\dRp+ testpub_forschema
- Publication testpub_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"pub_test.testpub_nopk"
@@ -159,10 +162,10 @@ Publications:
"testpub_foralltables"
\dRp+ testpub_foralltables
- Publication testpub_foralltables
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | t | t | t | f | f | f
+ Publication testpub_foralltables
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | t | f | t | t | f | f | f | f
(1 row)
DROP TABLE testpub_tbl2;
@@ -174,24 +177,527 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
RESET client_min_messages;
\dRp+ testpub3
- Publication testpub3
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub3
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"public.testpub_tbl3"
"public.testpub_tbl3a"
\dRp+ testpub4
- Publication testpub4
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub4
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"public.testpub_tbl3"
DROP TABLE testpub_tbl3, testpub_tbl3a;
DROP PUBLICATION testpub3, testpub4;
+--- adding sequences
+CREATE SEQUENCE testpub_seq0;
+CREATE SEQUENCE pub_test.testpub_seq1;
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_forallsequences FOR ALL SEQUENCES WITH (publish = 'sequence');
+RESET client_min_messages;
+ALTER PUBLICATION testpub_forallsequences SET (publish = 'insert, sequence');
+CREATE SEQUENCE testpub_seq2;
+-- fail - can't add to for all sequences publication
+ALTER PUBLICATION testpub_forallsequences ADD SEQUENCE testpub_seq2;
+ERROR: publication "testpub_forallsequences" is defined as FOR ALL SEQUENCES
+DETAIL: Sequences cannot be added to or dropped from FOR ALL SEQUENCES publications.
+-- fail - can't drop from all sequences publication
+ALTER PUBLICATION testpub_forallsequences DROP SEQUENCE testpub_seq2;
+ERROR: publication "testpub_forallsequences" is defined as FOR ALL SEQUENCES
+DETAIL: Sequences cannot be added to or dropped from FOR ALL SEQUENCES publications.
+-- fail - can't set sequence on a FOR ALL SEQUENCES publication
+ALTER PUBLICATION testpub_forallsequences SET SEQUENCE pub_test.testpub_seq1;
+ERROR: publication "testpub_forallsequences" is defined as FOR ALL SEQUENCES
+DETAIL: Sequences cannot be added to or dropped from FOR ALL SEQUENCES publications.
+-- fail - can't add schema to 'FOR ALL SEQUENCES' publication
+ALTER PUBLICATION testpub_forallsequences ADD ALL SEQUENCES IN SCHEMA pub_test;
+ERROR: publication "testpub_forallsequences" is defined as FOR ALL SEQUENCES
+DETAIL: Sequences from schema cannot be added to, dropped from, or set on FOR ALL SEQUENCES publications.
+-- fail - can't drop schema from 'FOR ALL SEQUENCES' publication
+ALTER PUBLICATION testpub_forallsequences DROP ALL SEQUENCES IN SCHEMA pub_test;
+ERROR: publication "testpub_forallsequences" is defined as FOR ALL SEQUENCES
+DETAIL: Sequences from schema cannot be added to, dropped from, or set on FOR ALL SEQUENCES publications.
+-- fail - can't set schema to 'FOR ALL SEQUENCES' publication
+ALTER PUBLICATION testpub_forallsequences SET ALL SEQUENCES IN SCHEMA pub_test;
+ERROR: publication "testpub_forallsequences" is defined as FOR ALL SEQUENCES
+DETAIL: Sequences from schema cannot be added to, dropped from, or set on FOR ALL SEQUENCES publications.
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_forsequence FOR SEQUENCE testpub_seq0;
+RESET client_min_messages;
+-- should be able to add schema to 'FOR SEQUENCE' publication
+ALTER PUBLICATION testpub_forsequence ADD ALL SEQUENCES IN SCHEMA pub_test;
+\dRp+ testpub_forsequence
+ Publication testpub_forsequence
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Sequences:
+ "public.testpub_seq0"
+Sequences from schemas:
+ "pub_test"
+
+-- fail - can't add sequence from the schema we already added
+ALTER PUBLICATION testpub_forsequence ADD SEQUENCE pub_test.testpub_seq1;
+ERROR: cannot add relation "pub_test.testpub_seq1" to publication
+DETAIL: Sequence's schema "pub_test" is already part of the publication or part of the specified schema list.
+-- fail - can't add sequence using ADD TABLE command
+ALTER PUBLICATION testpub_forsequence ADD TABLE pub_test.testpub_seq1;
+ERROR: object type mismatch
+-- should be able to drop schema from 'FOR SEQUENCE' publication
+ALTER PUBLICATION testpub_forsequence DROP ALL SEQUENCES IN SCHEMA pub_test;
+\dRp+ testpub_forsequence
+ Publication testpub_forsequence
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Sequences:
+ "public.testpub_seq0"
+
+-- should be able to set schema to 'FOR SEQUENCE' publication
+ALTER PUBLICATION testpub_forsequence SET ALL SEQUENCES IN SCHEMA pub_test;
+\dRp+ testpub_forsequence
+ Publication testpub_forsequence
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Sequences from schemas:
+ "pub_test"
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_forschema FOR ALL SEQUENCES IN SCHEMA pub_test;
+RESET client_min_messages;
+-- fail - can't create publication with schema and sequence of the same schema
+CREATE PUBLICATION testpub_for_seq_schema FOR ALL SEQUENCES IN SCHEMA pub_test, SEQUENCE pub_test.testpub_seq1;
+ERROR: cannot add relation "pub_test.testpub_seq1" to publication
+DETAIL: Sequence's schema "pub_test" is already part of the publication or part of the specified schema list.
+-- fail - can't add a sequence of the same schema to the schema publication
+ALTER PUBLICATION testpub_forschema ADD SEQUENCE pub_test.testpub_seq1;
+ERROR: cannot add relation "pub_test.testpub_seq1" to publication
+DETAIL: Sequence's schema "pub_test" is already part of the publication or part of the specified schema list.
+-- fail - can't drop a sequence from the schema publication which isn't in the
+-- publication
+ALTER PUBLICATION testpub_forschema DROP SEQUENCE pub_test.testpub_seq1;
+ERROR: relation "testpub_seq1" is not part of the publication
+-- should be able to set sequence to schema publication
+ALTER PUBLICATION testpub_forschema SET SEQUENCE pub_test.testpub_seq1;
+\dRp+ testpub_forschema
+ Publication testpub_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Sequences:
+ "pub_test.testpub_seq1"
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'testpub_forallsequences';
+ pubname | puballtables | puballsequences
+-------------------------+--------------+-----------------
+ testpub_forallsequences | f | t
+(1 row)
+
+\d+ pub_test.testpub_seq1
+ Sequence "pub_test.testpub_seq1"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+Publications:
+ "testpub_forallsequences"
+ "testpub_forschema"
+ "testpub_forsequence"
+
+\dRp+ testpub_forallsequences
+ Publication testpub_forallsequences
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | t | t | f | f | f | t | f
+(1 row)
+
+DROP SEQUENCE testpub_seq0, pub_test.testpub_seq1, testpub_seq2;
+DROP PUBLICATION testpub_forallsequences, testpub_forsequence, testpub_forschema;
+-- Publication mixing tables and sequences
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_mix;
+RESET client_min_messages;
+CREATE SEQUENCE testpub_seq1;
+CREATE SEQUENCE pub_test.testpub_seq2;
+ALTER PUBLICATION testpub_mix ADD SEQUENCE testpub_seq1, TABLE testpub_tbl1;
+\dRp+ testpub_mix
+ Publication testpub_mix
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Tables:
+ "public.testpub_tbl1"
+Sequences:
+ "public.testpub_seq1"
+
+ALTER PUBLICATION testpub_mix ADD ALL SEQUENCES IN SCHEMA pub_test, ALL TABLES IN SCHEMA pub_test;
+\dRp+ testpub_mix
+ Publication testpub_mix
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Tables:
+ "public.testpub_tbl1"
+Tables from schemas:
+ "pub_test"
+Sequences:
+ "public.testpub_seq1"
+Sequences from schemas:
+ "pub_test"
+
+ALTER PUBLICATION testpub_mix DROP ALL SEQUENCES IN SCHEMA pub_test;
+\dRp+ testpub_mix
+ Publication testpub_mix
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Tables:
+ "public.testpub_tbl1"
+Tables from schemas:
+ "pub_test"
+Sequences:
+ "public.testpub_seq1"
+
+ALTER PUBLICATION testpub_mix DROP ALL TABLES IN SCHEMA pub_test;
+\dRp+ testpub_mix
+ Publication testpub_mix
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Tables:
+ "public.testpub_tbl1"
+Sequences:
+ "public.testpub_seq1"
+
+DROP PUBLICATION testpub_mix;
+DROP SEQUENCE testpub_seq1;
+DROP SEQUENCE pub_test.testpub_seq2;
+-- make sure we replicate only the correct relation type
+CREATE SCHEMA pub_test1;
+CREATE SEQUENCE pub_test1.test_seq1;
+CREATE TABLE pub_test1.test_tbl1 (a int primary key, b int);
+CREATE SCHEMA pub_test2;
+CREATE SEQUENCE pub_test2.test_seq2;
+CREATE TABLE pub_test2.test_tbl2 (a int primary key, b int);
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_schemas;
+RESET client_min_messages;
+-- add tables from one schema, sequences from the other
+ALTER PUBLICATION testpub_schemas ADD ALL TABLES IN SCHEMA pub_test2;
+ALTER PUBLICATION testpub_schemas ADD ALL SEQUENCES IN SCHEMA pub_test1;
+\dRp+ testpub_schemas
+ Publication testpub_schemas
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Tables from schemas:
+ "pub_test2"
+Sequences from schemas:
+ "pub_test1"
+
+\dn+ pub_test1
+ List of schemas
+ Name | Owner | Access privileges | Description
+-----------+--------------------------+-------------------+-------------
+ pub_test1 | regress_publication_user | |
+Publications:
+ "testpub_schemas" (sequences)
+
+\dn+ pub_test2
+ List of schemas
+ Name | Owner | Access privileges | Description
+-----------+--------------------------+-------------------+-------------
+ pub_test2 | regress_publication_user | |
+Publications:
+ "testpub_schemas" (tables)
+
+\d+ pub_test1.test_seq1;
+ Sequence "pub_test1.test_seq1"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test1.test_tbl1;
+ Table "pub_test1.test_tbl1"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a | integer | | not null | | plain | |
+ b | integer | | | | plain | |
+Indexes:
+ "test_tbl1_pkey" PRIMARY KEY, btree (a)
+
+\d+ pub_test2.test_seq2;
+ Sequence "pub_test2.test_seq2"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+
+\d+ pub_test2.test_tbl2;
+ Table "pub_test2.test_tbl2"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a | integer | | not null | | plain | |
+ b | integer | | | | plain | |
+Indexes:
+ "test_tbl2_pkey" PRIMARY KEY, btree (a)
+Publications:
+ "testpub_schemas"
+
+-- add the other object type from each schema
+ALTER PUBLICATION testpub_schemas ADD ALL TABLES IN SCHEMA pub_test1;
+ALTER PUBLICATION testpub_schemas ADD ALL SEQUENCES IN SCHEMA pub_test2;
+\dRp+ testpub_schemas
+ Publication testpub_schemas
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Tables from schemas:
+ "pub_test1"
+ "pub_test2"
+Sequences from schemas:
+ "pub_test1"
+ "pub_test2"
+
+\dn+ pub_test1
+ List of schemas
+ Name | Owner | Access privileges | Description
+-----------+--------------------------+-------------------+-------------
+ pub_test1 | regress_publication_user | |
+Publications:
+ "testpub_schemas" (sequences)
+ "testpub_schemas" (tables)
+
+\dn+ pub_test2
+ List of schemas
+ Name | Owner | Access privileges | Description
+-----------+--------------------------+-------------------+-------------
+ pub_test2 | regress_publication_user | |
+Publications:
+ "testpub_schemas" (tables)
+ "testpub_schemas" (sequences)
+
+\d+ pub_test1.test_seq1;
+ Sequence "pub_test1.test_seq1"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test1.test_tbl1;
+ Table "pub_test1.test_tbl1"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a | integer | | not null | | plain | |
+ b | integer | | | | plain | |
+Indexes:
+ "test_tbl1_pkey" PRIMARY KEY, btree (a)
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test2.test_seq2;
+ Sequence "pub_test2.test_seq2"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test2.test_tbl2;
+ Table "pub_test2.test_tbl2"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a | integer | | not null | | plain | |
+ b | integer | | | | plain | |
+Indexes:
+ "test_tbl2_pkey" PRIMARY KEY, btree (a)
+Publications:
+ "testpub_schemas"
+
+-- now drop the object type added first
+ALTER PUBLICATION testpub_schemas DROP ALL TABLES IN SCHEMA pub_test2;
+ALTER PUBLICATION testpub_schemas DROP ALL SEQUENCES IN SCHEMA pub_test1;
+\dRp+ testpub_schemas
+ Publication testpub_schemas
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Tables from schemas:
+ "pub_test1"
+Sequences from schemas:
+ "pub_test2"
+
+\dn+ pub_test1
+ List of schemas
+ Name | Owner | Access privileges | Description
+-----------+--------------------------+-------------------+-------------
+ pub_test1 | regress_publication_user | |
+Publications:
+ "testpub_schemas" (tables)
+
+\dn+ pub_test2
+ List of schemas
+ Name | Owner | Access privileges | Description
+-----------+--------------------------+-------------------+-------------
+ pub_test2 | regress_publication_user | |
+Publications:
+ "testpub_schemas" (sequences)
+
+\d+ pub_test1.test_seq1;
+ Sequence "pub_test1.test_seq1"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+
+\d+ pub_test1.test_tbl1;
+ Table "pub_test1.test_tbl1"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a | integer | | not null | | plain | |
+ b | integer | | | | plain | |
+Indexes:
+ "test_tbl1_pkey" PRIMARY KEY, btree (a)
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test2.test_seq2;
+ Sequence "pub_test2.test_seq2"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test2.test_tbl2;
+ Table "pub_test2.test_tbl2"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a | integer | | not null | | plain | |
+ b | integer | | | | plain | |
+Indexes:
+ "test_tbl2_pkey" PRIMARY KEY, btree (a)
+
+-- should fail (publication contains the whole schema)
+ALTER PUBLICATION testpub_schemas ADD TABLE pub_test1.test_tbl1;
+ERROR: cannot add relation "pub_test1.test_tbl1" to publication
+DETAIL: Table's schema "pub_test1" is already part of the publication or part of the specified schema list.
+ALTER PUBLICATION testpub_schemas ADD SEQUENCE pub_test2.test_seq2;
+ERROR: cannot add relation "pub_test2.test_seq2" to publication
+DETAIL: Sequence's schema "pub_test2" is already part of the publication or part of the specified schema list.
+-- should work (different schema)
+ALTER PUBLICATION testpub_schemas ADD TABLE pub_test2.test_tbl2;
+ALTER PUBLICATION testpub_schemas ADD SEQUENCE pub_test1.test_seq1;
+\dRp+ testpub_schemas
+ Publication testpub_schemas
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Tables:
+ "pub_test2.test_tbl2"
+Tables from schemas:
+ "pub_test1"
+Sequences:
+ "pub_test1.test_seq1"
+Sequences from schemas:
+ "pub_test2"
+
+\d+ pub_test1.test_seq1;
+ Sequence "pub_test1.test_seq1"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test1.test_tbl1;
+ Table "pub_test1.test_tbl1"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a | integer | | not null | | plain | |
+ b | integer | | | | plain | |
+Indexes:
+ "test_tbl1_pkey" PRIMARY KEY, btree (a)
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test2.test_seq2;
+ Sequence "pub_test2.test_seq2"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test2.test_tbl2;
+ Table "pub_test2.test_tbl2"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a | integer | | not null | | plain | |
+ b | integer | | | | plain | |
+Indexes:
+ "test_tbl2_pkey" PRIMARY KEY, btree (a)
+Publications:
+ "testpub_schemas"
+
+-- now drop the explicitly added objects again
+ALTER PUBLICATION testpub_schemas DROP TABLE pub_test2.test_tbl2;
+ALTER PUBLICATION testpub_schemas DROP SEQUENCE pub_test1.test_seq1;
+\dRp+ testpub_schemas
+ Publication testpub_schemas
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Tables from schemas:
+ "pub_test1"
+Sequences from schemas:
+ "pub_test2"
+
+\d+ pub_test1.test_seq1;
+ Sequence "pub_test1.test_seq1"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+
+\d+ pub_test1.test_tbl1;
+ Table "pub_test1.test_tbl1"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a | integer | | not null | | plain | |
+ b | integer | | | | plain | |
+Indexes:
+ "test_tbl1_pkey" PRIMARY KEY, btree (a)
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test2.test_seq2;
+ Sequence "pub_test2.test_seq2"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test2.test_tbl2;
+ Table "pub_test2.test_tbl2"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a | integer | | not null | | plain | |
+ b | integer | | | | plain | |
+Indexes:
+ "test_tbl2_pkey" PRIMARY KEY, btree (a)
+
+DROP PUBLICATION testpub_schemas;
+DROP TABLE pub_test1.test_tbl1, pub_test2.test_tbl2;
+DROP SEQUENCE pub_test1.test_seq1, pub_test2.test_seq2;
+DROP SCHEMA pub_test1, pub_test2;
-- Tests for partitioned tables
SET client_min_messages = 'ERROR';
CREATE PUBLICATION testpub_forparted;
@@ -207,10 +713,10 @@ UPDATE testpub_parted1 SET a = 1;
-- only parent is listed as being in publication, not the partition
ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
\dRp+ testpub_forparted
- Publication testpub_forparted
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_forparted
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"public.testpub_parted"
@@ -223,10 +729,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
UPDATE testpub_parted1 SET a = 1;
ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
\dRp+ testpub_forparted
- Publication testpub_forparted
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | t
+ Publication testpub_forparted
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | t
Tables:
"public.testpub_parted"
@@ -255,10 +761,10 @@ SET client_min_messages = 'ERROR';
CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
RESET client_min_messages;
\dRp+ testpub5
- Publication testpub5
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | f | f | f | f
+ Publication testpub5
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | f | f | f | f | f
Tables:
"public.testpub_rf_tbl1"
"public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -271,10 +777,10 @@ Tables:
ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
\dRp+ testpub5
- Publication testpub5
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | f | f | f | f
+ Publication testpub5
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | f | f | f | f | f
Tables:
"public.testpub_rf_tbl1"
"public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -290,10 +796,10 @@ Publications:
ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
\dRp+ testpub5
- Publication testpub5
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | f | f | f | f
+ Publication testpub5
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | f | f | f | f | f
Tables:
"public.testpub_rf_tbl1"
"public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -301,10 +807,10 @@ Tables:
-- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
\dRp+ testpub5
- Publication testpub5
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | f | f | f | f
+ Publication testpub5
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | f | f | f | f | f
Tables:
"public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
@@ -337,10 +843,10 @@ SET client_min_messages = 'ERROR';
CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
RESET client_min_messages;
\dRp+ testpub_syntax1
- Publication testpub_syntax1
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | f | f | f | f
+ Publication testpub_syntax1
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | f | f | f | f | f
Tables:
"public.testpub_rf_tbl1"
"public.testpub_rf_tbl3" WHERE (e < 999)
@@ -350,10 +856,10 @@ SET client_min_messages = 'ERROR';
CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
RESET client_min_messages;
\dRp+ testpub_syntax2
- Publication testpub_syntax2
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | f | f | f | f
+ Publication testpub_syntax2
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | f | f | f | f | f
Tables:
"public.testpub_rf_tbl1"
"testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -658,10 +1164,10 @@ ERROR: relation "testpub_tbl1" is already member of publication "testpub_fortbl
CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
ERROR: publication "testpub_fortbl" already exists
\dRp+ testpub_fortbl
- Publication testpub_fortbl
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_fortbl
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"pub_test.testpub_nopk"
"public.testpub_tbl1"
@@ -699,10 +1205,10 @@ Publications:
"testpub_fortbl"
\dRp+ testpub_default
- Publication testpub_default
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | f | f
+ Publication testpub_default
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | f | t | f
Tables:
"pub_test.testpub_nopk"
"public.testpub_tbl1"
@@ -780,10 +1286,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
DROP TABLE testpub_parted;
DROP TABLE testpub_tbl1;
\dRp+ testpub_default
- Publication testpub_default
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | f | f
+ Publication testpub_default
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | f | t | f
(1 row)
-- fail - must be owner of publication
@@ -793,20 +1299,20 @@ ERROR: must be owner of publication testpub_default
RESET ROLE;
ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
\dRp testpub_foo
- List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
--------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpub_foo | regress_publication_user | f | t | t | t | f | f
+ List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ testpub_foo | regress_publication_user | f | f | t | t | t | f | t | f
(1 row)
-- rename back to keep the rest simple
ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
\dRp testpub_default
- List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
------------------+---------------------------+------------+---------+---------+---------+-----------+----------
- testpub_default | regress_publication_user2 | f | t | t | t | f | f
+ List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ testpub_default | regress_publication_user2 | f | f | t | t | t | f | t | f
(1 row)
-- adding schemas and tables
@@ -822,19 +1328,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
SET client_min_messages = 'ERROR';
CREATE PUBLICATION testpub1_forschema FOR ALL TABLES IN SCHEMA pub_test1;
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
CREATE PUBLICATION testpub2_forschema FOR ALL TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
\dRp+ testpub2_forschema
- Publication testpub2_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub2_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
"pub_test2"
@@ -848,44 +1354,44 @@ CREATE PUBLICATION testpub6_forschema FOR ALL TABLES IN SCHEMA "CURRENT_SCHEMA",
CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
RESET client_min_messages;
\dRp+ testpub3_forschema
- Publication testpub3_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub3_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"public"
\dRp+ testpub4_forschema
- Publication testpub4_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub4_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"CURRENT_SCHEMA"
\dRp+ testpub5_forschema
- Publication testpub5_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub5_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"CURRENT_SCHEMA"
"public"
\dRp+ testpub6_forschema
- Publication testpub6_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub6_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"CURRENT_SCHEMA"
"public"
\dRp+ testpub_fortable
- Publication testpub_fortable
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_fortable
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"CURRENT_SCHEMA.CURRENT_SCHEMA"
@@ -919,10 +1425,10 @@ ERROR: schema "testpub_view" does not exist
-- dropping the schema should reflect the change in publication
DROP SCHEMA pub_test3;
\dRp+ testpub2_forschema
- Publication testpub2_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub2_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
"pub_test2"
@@ -930,20 +1436,20 @@ Tables from schemas:
-- renaming the schema should reflect the change in publication
ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
\dRp+ testpub2_forschema
- Publication testpub2_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub2_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1_renamed"
"pub_test2"
ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
\dRp+ testpub2_forschema
- Publication testpub2_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub2_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
"pub_test2"
@@ -951,10 +1457,10 @@ Tables from schemas:
-- alter publication add schema
ALTER PUBLICATION testpub1_forschema ADD ALL TABLES IN SCHEMA pub_test2;
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
"pub_test2"
@@ -963,10 +1469,10 @@ Tables from schemas:
ALTER PUBLICATION testpub1_forschema ADD ALL TABLES IN SCHEMA non_existent_schema;
ERROR: schema "non_existent_schema" does not exist
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
"pub_test2"
@@ -975,10 +1481,10 @@ Tables from schemas:
ALTER PUBLICATION testpub1_forschema ADD ALL TABLES IN SCHEMA pub_test1;
ERROR: schema "pub_test1" is already member of publication "testpub1_forschema"
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
"pub_test2"
@@ -986,10 +1492,10 @@ Tables from schemas:
-- alter publication drop schema
ALTER PUBLICATION testpub1_forschema DROP ALL TABLES IN SCHEMA pub_test2;
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
@@ -997,10 +1503,10 @@ Tables from schemas:
ALTER PUBLICATION testpub1_forschema DROP ALL TABLES IN SCHEMA pub_test2;
ERROR: tables from schema "pub_test2" are not part of the publication
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
@@ -1008,29 +1514,29 @@ Tables from schemas:
ALTER PUBLICATION testpub1_forschema DROP ALL TABLES IN SCHEMA non_existent_schema;
ERROR: schema "non_existent_schema" does not exist
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
-- drop all schemas
ALTER PUBLICATION testpub1_forschema DROP ALL TABLES IN SCHEMA pub_test1;
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
(1 row)
-- alter publication set multiple schema
ALTER PUBLICATION testpub1_forschema SET ALL TABLES IN SCHEMA pub_test1, pub_test2;
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
"pub_test2"
@@ -1039,10 +1545,10 @@ Tables from schemas:
ALTER PUBLICATION testpub1_forschema SET ALL TABLES IN SCHEMA non_existent_schema;
ERROR: schema "non_existent_schema" does not exist
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
"pub_test2"
@@ -1051,10 +1557,10 @@ Tables from schemas:
-- removing the duplicate schemas
ALTER PUBLICATION testpub1_forschema SET ALL TABLES IN SCHEMA pub_test1, pub_test1;
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
@@ -1124,18 +1630,18 @@ SET client_min_messages = 'ERROR';
CREATE PUBLICATION testpub3_forschema;
RESET client_min_messages;
\dRp+ testpub3_forschema
- Publication testpub3_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub3_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
(1 row)
ALTER PUBLICATION testpub3_forschema SET ALL TABLES IN SCHEMA pub_test1;
\dRp+ testpub3_forschema
- Publication testpub3_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub3_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
@@ -1145,20 +1651,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR ALL TABLES IN SCHEMA pub_test1
CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, ALL TABLES IN SCHEMA pub_test1;
RESET client_min_messages;
\dRp+ testpub_forschema_fortable
- Publication testpub_forschema_fortable
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_forschema_fortable
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"pub_test2.tbl1"
Tables from schemas:
"pub_test1"
\dRp+ testpub_fortable_forschema
- Publication testpub_fortable_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_fortable_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"pub_test2.tbl1"
Tables from schemas:
@@ -1202,40 +1708,85 @@ CREATE SCHEMA sch1;
CREATE SCHEMA sch2;
CREATE TABLE sch1.tbl1 (a int) PARTITION BY RANGE(a);
CREATE TABLE sch2.tbl1_part1 PARTITION OF sch1.tbl1 FOR VALUES FROM (1) to (10);
+CREATE SEQUENCE sch1.seq1;
+CREATE SEQUENCE sch2.seq2;
-- Schema publication that does not include the schema that has the parent table
CREATE PUBLICATION pub FOR ALL TABLES IN SCHEMA sch2 WITH (PUBLISH_VIA_PARTITION_ROOT=1);
+ALTER PUBLICATION pub ADD ALL SEQUENCES IN SCHEMA sch2;
SELECT * FROM pg_publication_tables;
pubname | schemaname | tablename
---------+------------+------------
pub | sch2 | tbl1_part1
(1 row)
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch2 | seq2
+(1 row)
+
DROP PUBLICATION pub;
-- Table publication that does not include the parent table
CREATE PUBLICATION pub FOR TABLE sch2.tbl1_part1 WITH (PUBLISH_VIA_PARTITION_ROOT=1);
+ALTER PUBLICATION pub ADD SEQUENCE sch2.seq2;
SELECT * FROM pg_publication_tables;
pubname | schemaname | tablename
---------+------------+------------
pub | sch2 | tbl1_part1
(1 row)
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch2 | seq2
+(1 row)
+
-- Table publication that includes both the parent table and the child table
ALTER PUBLICATION pub ADD TABLE sch1.tbl1;
+ALTER PUBLICATION pub ADD SEQUENCE sch1.seq1;
SELECT * FROM pg_publication_tables;
pubname | schemaname | tablename
---------+------------+-----------
pub | sch1 | tbl1
(1 row)
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch1 | seq1
+ pub | sch2 | seq2
+(2 rows)
+
DROP PUBLICATION pub;
-- Schema publication that does not include the schema that has the parent table
CREATE PUBLICATION pub FOR ALL TABLES IN SCHEMA sch2 WITH (PUBLISH_VIA_PARTITION_ROOT=0);
+ALTER PUBLICATION pub ADD SEQUENCE sch1.seq1;
SELECT * FROM pg_publication_tables;
pubname | schemaname | tablename
---------+------------+------------
pub | sch2 | tbl1_part1
(1 row)
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch1 | seq1
+(1 row)
+
+DROP PUBLICATION pub;
+-- Sequence publication
+CREATE PUBLICATION pub FOR SEQUENCE sch2.seq2;
+SELECT * FROM pg_publication_tables;
+ pubname | schemaname | tablename
+---------+------------+-----------
+(0 rows)
+
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch2 | seq2
+(1 row)
+
DROP PUBLICATION pub;
-- Table publication that does not include the parent table
CREATE PUBLICATION pub FOR TABLE sch2.tbl1_part1 WITH (PUBLISH_VIA_PARTITION_ROOT=0);
@@ -1245,14 +1796,26 @@ SELECT * FROM pg_publication_tables;
pub | sch2 | tbl1_part1
(1 row)
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+(0 rows)
+
-- Table publication that includes both the parent table and the child table
ALTER PUBLICATION pub ADD TABLE sch1.tbl1;
+ALTER PUBLICATION pub ADD ALL SEQUENCES IN SCHEMA sch2;
SELECT * FROM pg_publication_tables;
pubname | schemaname | tablename
---------+------------+------------
pub | sch2 | tbl1_part1
(1 row)
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch2 | seq2
+(1 row)
+
DROP PUBLICATION pub;
DROP TABLE sch2.tbl1_part1;
DROP TABLE sch1.tbl1;
@@ -1268,9 +1831,81 @@ SELECT * FROM pg_publication_tables;
pub | sch1 | tbl1
(1 row)
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+(0 rows)
+
+DROP PUBLICATION pub;
+-- Sequence publication
+CREATE PUBLICATION pub FOR SEQUENCE sch2.seq2;
+SELECT * FROM pg_publication_tables;
+ pubname | schemaname | tablename
+---------+------------+-----------
+(0 rows)
+
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch2 | seq2
+(1 row)
+
+DROP PUBLICATION pub;
+-- Schema publication
+CREATE PUBLICATION pub FOR ALL SEQUENCES IN SCHEMA sch2;
+SELECT * FROM pg_publication_tables;
+ pubname | schemaname | tablename
+---------+------------+-----------
+(0 rows)
+
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch2 | seq2
+(1 row)
+
+ALTER PUBLICATION pub ADD SEQUENCE sch1.seq1;
+SELECT * FROM pg_publication_tables;
+ pubname | schemaname | tablename
+---------+------------+-----------
+(0 rows)
+
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch1 | seq1
+ pub | sch2 | seq2
+(2 rows)
+
+ALTER PUBLICATION pub DROP SEQUENCE sch1.seq1;
+SELECT * FROM pg_publication_tables;
+ pubname | schemaname | tablename
+---------+------------+-----------
+(0 rows)
+
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch2 | seq2
+(1 row)
+
+ALTER PUBLICATION pub ADD ALL SEQUENCES IN SCHEMA sch1;
+SELECT * FROM pg_publication_tables;
+ pubname | schemaname | tablename
+---------+------------+-----------
+(0 rows)
+
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch1 | seq1
+ pub | sch2 | seq2
+(2 rows)
+
RESET client_min_messages;
DROP PUBLICATION pub;
DROP TABLE sch1.tbl1;
+DROP SEQUENCE sch1.seq1, sch2.seq2;
DROP SCHEMA sch1 cascade;
DROP SCHEMA sch2 cascade;
RESET SESSION AUTHORIZATION;
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index ac468568a1a..ef328d356bb 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1429,6 +1429,14 @@ pg_prepared_xacts| SELECT p.transaction,
FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+ n.nspname AS schemaname,
+ c.relname AS sequencename
+ FROM pg_publication p,
+ LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+ (pg_class c
+ JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+ WHERE (c.oid = gps.relid);
pg_publication_tables| SELECT p.pubname,
n.nspname AS schemaname,
c.relname AS tablename
diff --git a/src/test/regress/sql/object_address.sql b/src/test/regress/sql/object_address.sql
index acd0468a9d9..9d8323468d6 100644
--- a/src/test/regress/sql/object_address.sql
+++ b/src/test/regress/sql/object_address.sql
@@ -49,6 +49,7 @@ CREATE TRANSFORM FOR int LANGUAGE SQL (
SET client_min_messages = 'ERROR';
CREATE PUBLICATION addr_pub FOR TABLE addr_nsp.gentable;
CREATE PUBLICATION addr_pub_schema FOR ALL TABLES IN SCHEMA addr_nsp;
+CREATE PUBLICATION addr_pub_schema2 FOR ALL SEQUENCES IN SCHEMA addr_nsp;
RESET client_min_messages;
CREATE SUBSCRIPTION regress_addr_sub CONNECTION '' PUBLICATION bar WITH (connect = false, slot_name = NONE);
CREATE STATISTICS addr_nsp.gentable_stat ON a, b FROM addr_nsp.gentable;
@@ -198,7 +199,8 @@ WITH objects (type, name, args) AS (VALUES
('transform', '{int}', '{sql}'),
('access method', '{btree}', '{}'),
('publication', '{addr_pub}', '{}'),
- ('publication namespace', '{addr_nsp}', '{addr_pub_schema}'),
+ ('publication namespace', '{addr_nsp}', '{addr_pub_schema,t}'),
+ ('publication namespace', '{addr_nsp}', '{addr_pub_schema2,s}'),
('publication relation', '{addr_nsp, gentable}', '{addr_pub}'),
('subscription', '{regress_addr_sub}', '{}'),
('statistics object', '{addr_nsp, gentable_stat}', '{}')
@@ -218,6 +220,7 @@ SELECT (pg_identify_object(addr1.classid, addr1.objid, addr1.objsubid)).*,
DROP FOREIGN DATA WRAPPER addr_fdw CASCADE;
DROP PUBLICATION addr_pub;
DROP PUBLICATION addr_pub_schema;
+DROP PUBLICATION addr_pub_schema2;
DROP SUBSCRIPTION regress_addr_sub;
DROP SCHEMA addr_nsp CASCADE;
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index 5457c56b33f..c195e75c6f0 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -27,7 +27,7 @@ CREATE PUBLICATION testpub_xxx WITH (publish_via_partition_root = 'true', publis
\dRp
-ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
+ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete, sequence');
\dRp
@@ -46,6 +46,8 @@ ALTER PUBLICATION testpub_foralltables SET (publish = 'insert, update');
CREATE TABLE testpub_tbl2 (id serial primary key, data text);
-- fail - can't add to for all tables publication
ALTER PUBLICATION testpub_foralltables ADD TABLE testpub_tbl2;
+-- fail - can't add a table using ADD SEQUENCE command
+ALTER PUBLICATION testpub_foralltables ADD SEQUENCE testpub_tbl2;
-- fail - can't drop from all tables publication
ALTER PUBLICATION testpub_foralltables DROP TABLE testpub_tbl2;
-- fail - can't add to for all tables publication
@@ -104,6 +106,183 @@ RESET client_min_messages;
DROP TABLE testpub_tbl3, testpub_tbl3a;
DROP PUBLICATION testpub3, testpub4;
+--- adding sequences
+CREATE SEQUENCE testpub_seq0;
+CREATE SEQUENCE pub_test.testpub_seq1;
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_forallsequences FOR ALL SEQUENCES WITH (publish = 'sequence');
+RESET client_min_messages;
+ALTER PUBLICATION testpub_forallsequences SET (publish = 'insert, sequence');
+
+CREATE SEQUENCE testpub_seq2;
+-- fail - can't add to for all sequences publication
+ALTER PUBLICATION testpub_forallsequences ADD SEQUENCE testpub_seq2;
+-- fail - can't drop from all sequences publication
+ALTER PUBLICATION testpub_forallsequences DROP SEQUENCE testpub_seq2;
+-- fail - can't add to for all sequences publication
+ALTER PUBLICATION testpub_forallsequences SET SEQUENCE pub_test.testpub_seq1;
+
+-- fail - can't add schema to 'FOR ALL SEQUENCES' publication
+ALTER PUBLICATION testpub_forallsequences ADD ALL SEQUENCES IN SCHEMA pub_test;
+-- fail - can't drop schema from 'FOR ALL SEQUENCES' publication
+ALTER PUBLICATION testpub_forallsequences DROP ALL SEQUENCES IN SCHEMA pub_test;
+-- fail - can't set schema to 'FOR ALL SEQUENCES' publication
+ALTER PUBLICATION testpub_forallsequences SET ALL SEQUENCES IN SCHEMA pub_test;
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_forsequence FOR SEQUENCE testpub_seq0;
+RESET client_min_messages;
+-- should be able to add schema to 'FOR SEQUENCE' publication
+ALTER PUBLICATION testpub_forsequence ADD ALL SEQUENCES IN SCHEMA pub_test;
+\dRp+ testpub_forsequence
+-- fail - can't add sequence from the schema we already added
+ALTER PUBLICATION testpub_forsequence ADD SEQUENCE pub_test.testpub_seq1;
+-- fail - can't add sequence using ADD TABLE command
+ALTER PUBLICATION testpub_forsequence ADD TABLE pub_test.testpub_seq1;
+-- should be able to drop schema from 'FOR SEQUENCE' publication
+ALTER PUBLICATION testpub_forsequence DROP ALL SEQUENCES IN SCHEMA pub_test;
+\dRp+ testpub_forsequence
+-- should be able to set schema to 'FOR SEQUENCE' publication
+ALTER PUBLICATION testpub_forsequence SET ALL SEQUENCES IN SCHEMA pub_test;
+\dRp+ testpub_forsequence
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_forschema FOR ALL SEQUENCES IN SCHEMA pub_test;
+RESET client_min_messages;
+-- fail - can't create publication with schema and sequence of the same schema
+CREATE PUBLICATION testpub_for_seq_schema FOR ALL SEQUENCES IN SCHEMA pub_test, SEQUENCE pub_test.testpub_seq1;
+-- fail - can't add a sequence of the same schema to the schema publication
+ALTER PUBLICATION testpub_forschema ADD SEQUENCE pub_test.testpub_seq1;
+-- fail - can't drop a sequence from the schema publication which isn't in the
+-- publication
+ALTER PUBLICATION testpub_forschema DROP SEQUENCE pub_test.testpub_seq1;
+-- should be able to set sequence to schema publication
+ALTER PUBLICATION testpub_forschema SET SEQUENCE pub_test.testpub_seq1;
+\dRp+ testpub_forschema
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'testpub_forallsequences';
+\d+ pub_test.testpub_seq1
+\dRp+ testpub_forallsequences
+DROP SEQUENCE testpub_seq0, pub_test.testpub_seq1, testpub_seq2;
+DROP PUBLICATION testpub_forallsequences, testpub_forsequence, testpub_forschema;
+
+-- Publication mixing tables and sequences
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_mix;
+RESET client_min_messages;
+
+CREATE SEQUENCE testpub_seq1;
+CREATE SEQUENCE pub_test.testpub_seq2;
+
+ALTER PUBLICATION testpub_mix ADD SEQUENCE testpub_seq1, TABLE testpub_tbl1;
+\dRp+ testpub_mix
+
+ALTER PUBLICATION testpub_mix ADD ALL SEQUENCES IN SCHEMA pub_test, ALL TABLES IN SCHEMA pub_test;
+\dRp+ testpub_mix
+
+ALTER PUBLICATION testpub_mix DROP ALL SEQUENCES IN SCHEMA pub_test;
+\dRp+ testpub_mix
+
+ALTER PUBLICATION testpub_mix DROP ALL TABLES IN SCHEMA pub_test;
+\dRp+ testpub_mix
+
+DROP PUBLICATION testpub_mix;
+DROP SEQUENCE testpub_seq1;
+DROP SEQUENCE pub_test.testpub_seq2;
+
+
+-- make sure we replicate only the correct relation type
+CREATE SCHEMA pub_test1;
+CREATE SEQUENCE pub_test1.test_seq1;
+CREATE TABLE pub_test1.test_tbl1 (a int primary key, b int);
+
+CREATE SCHEMA pub_test2;
+CREATE SEQUENCE pub_test2.test_seq2;
+CREATE TABLE pub_test2.test_tbl2 (a int primary key, b int);
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_schemas;
+RESET client_min_messages;
+
+-- add tables from one schema, sequences from the other
+ALTER PUBLICATION testpub_schemas ADD ALL TABLES IN SCHEMA pub_test2;
+ALTER PUBLICATION testpub_schemas ADD ALL SEQUENCES IN SCHEMA pub_test1;
+
+\dRp+ testpub_schemas
+
+\dn+ pub_test1
+\dn+ pub_test2
+
+\d+ pub_test1.test_seq1;
+\d+ pub_test1.test_tbl1;
+
+\d+ pub_test2.test_seq2;
+\d+ pub_test2.test_tbl2;
+
+-- add the other object type from each schema
+ALTER PUBLICATION testpub_schemas ADD ALL TABLES IN SCHEMA pub_test1;
+ALTER PUBLICATION testpub_schemas ADD ALL SEQUENCES IN SCHEMA pub_test2;
+
+\dRp+ testpub_schemas
+
+\dn+ pub_test1
+\dn+ pub_test2
+
+\d+ pub_test1.test_seq1;
+\d+ pub_test1.test_tbl1;
+
+\d+ pub_test2.test_seq2;
+\d+ pub_test2.test_tbl2;
+
+-- now drop the object type added first
+ALTER PUBLICATION testpub_schemas DROP ALL TABLES IN SCHEMA pub_test2;
+ALTER PUBLICATION testpub_schemas DROP ALL SEQUENCES IN SCHEMA pub_test1;
+
+\dRp+ testpub_schemas
+
+\dn+ pub_test1
+\dn+ pub_test2
+
+\d+ pub_test1.test_seq1;
+\d+ pub_test1.test_tbl1;
+
+\d+ pub_test2.test_seq2;
+\d+ pub_test2.test_tbl2;
+
+-- should fail (publication contains the whole schema)
+ALTER PUBLICATION testpub_schemas ADD TABLE pub_test1.test_tbl1;
+ALTER PUBLICATION testpub_schemas ADD SEQUENCE pub_test2.test_seq2;
+
+-- should work (different schema)
+ALTER PUBLICATION testpub_schemas ADD TABLE pub_test2.test_tbl2;
+ALTER PUBLICATION testpub_schemas ADD SEQUENCE pub_test1.test_seq1;
+
+\dRp+ testpub_schemas
+
+\d+ pub_test1.test_seq1;
+\d+ pub_test1.test_tbl1;
+
+\d+ pub_test2.test_seq2;
+\d+ pub_test2.test_tbl2;
+
+-- now drop the explicitly added objects again
+ALTER PUBLICATION testpub_schemas DROP TABLE pub_test2.test_tbl2;
+ALTER PUBLICATION testpub_schemas DROP SEQUENCE pub_test1.test_seq1;
+
+\dRp+ testpub_schemas
+
+\d+ pub_test1.test_seq1;
+\d+ pub_test1.test_tbl1;
+
+\d+ pub_test2.test_seq2;
+\d+ pub_test2.test_tbl2;
+
+DROP PUBLICATION testpub_schemas;
+DROP TABLE pub_test1.test_tbl1, pub_test2.test_tbl2;
+DROP SEQUENCE pub_test1.test_seq1, pub_test2.test_seq2;
+DROP SCHEMA pub_test1, pub_test2;
+
-- Tests for partitioned tables
SET client_min_messages = 'ERROR';
CREATE PUBLICATION testpub_forparted;
@@ -717,32 +896,51 @@ CREATE SCHEMA sch1;
CREATE SCHEMA sch2;
CREATE TABLE sch1.tbl1 (a int) PARTITION BY RANGE(a);
CREATE TABLE sch2.tbl1_part1 PARTITION OF sch1.tbl1 FOR VALUES FROM (1) to (10);
+CREATE SEQUENCE sch1.seq1;
+CREATE SEQUENCE sch2.seq2;
-- Schema publication that does not include the schema that has the parent table
CREATE PUBLICATION pub FOR ALL TABLES IN SCHEMA sch2 WITH (PUBLISH_VIA_PARTITION_ROOT=1);
+ALTER PUBLICATION pub ADD ALL SEQUENCES IN SCHEMA sch2;
SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
DROP PUBLICATION pub;
-- Table publication that does not include the parent table
CREATE PUBLICATION pub FOR TABLE sch2.tbl1_part1 WITH (PUBLISH_VIA_PARTITION_ROOT=1);
+ALTER PUBLICATION pub ADD SEQUENCE sch2.seq2;
SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
-- Table publication that includes both the parent table and the child table
ALTER PUBLICATION pub ADD TABLE sch1.tbl1;
+ALTER PUBLICATION pub ADD SEQUENCE sch1.seq1;
SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
DROP PUBLICATION pub;
-- Schema publication that does not include the schema that has the parent table
CREATE PUBLICATION pub FOR ALL TABLES IN SCHEMA sch2 WITH (PUBLISH_VIA_PARTITION_ROOT=0);
+ALTER PUBLICATION pub ADD SEQUENCE sch1.seq1;
+SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
+
+DROP PUBLICATION pub;
+-- Sequence publication
+CREATE PUBLICATION pub FOR SEQUENCE sch2.seq2;
SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
DROP PUBLICATION pub;
-- Table publication that does not include the parent table
CREATE PUBLICATION pub FOR TABLE sch2.tbl1_part1 WITH (PUBLISH_VIA_PARTITION_ROOT=0);
SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
-- Table publication that includes both the parent table and the child table
ALTER PUBLICATION pub ADD TABLE sch1.tbl1;
+ALTER PUBLICATION pub ADD ALL SEQUENCES IN SCHEMA sch2;
SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
DROP PUBLICATION pub;
DROP TABLE sch2.tbl1_part1;
@@ -755,10 +953,36 @@ CREATE TABLE sch1.tbl1_part3 (a int) PARTITION BY RANGE(a);
ALTER TABLE sch1.tbl1 ATTACH PARTITION sch1.tbl1_part3 FOR VALUES FROM (20) to (30);
CREATE PUBLICATION pub FOR ALL TABLES IN SCHEMA sch1 WITH (PUBLISH_VIA_PARTITION_ROOT=1);
SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
+
+DROP PUBLICATION pub;
+-- Sequence publication
+CREATE PUBLICATION pub FOR SEQUENCE sch2.seq2;
+SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
+
+DROP PUBLICATION pub;
+-- Schema publication
+CREATE PUBLICATION pub FOR ALL SEQUENCES IN SCHEMA sch2;
+SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
+
+ALTER PUBLICATION pub ADD SEQUENCE sch1.seq1;
+SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
+
+ALTER PUBLICATION pub DROP SEQUENCE sch1.seq1;
+SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
+
+ALTER PUBLICATION pub ADD ALL SEQUENCES IN SCHEMA sch1;
+SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
RESET client_min_messages;
DROP PUBLICATION pub;
DROP TABLE sch1.tbl1;
+DROP SEQUENCE sch1.seq1, sch2.seq2;
DROP SCHEMA sch1 cascade;
DROP SCHEMA sch2 cascade;
diff --git a/src/test/subscription/t/030_sequences.pl b/src/test/subscription/t/030_sequences.pl
new file mode 100644
index 00000000000..9ae3c03d7d1
--- /dev/null
+++ b/src/test/subscription/t/030_sequences.pl
@@ -0,0 +1,202 @@
+
+# Copyright (c) 2021, PostgreSQL Global Development Group
+
+# This tests that sequences are replicated correctly by logical replication
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->start;
+
+# Create subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Create some preexisting content on publisher
+my $ddl = qq(
+ CREATE TABLE seq_test (v BIGINT);
+ CREATE SEQUENCE s;
+);
+
+# Setup structure on the publisher
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Create the same structure on the subscriber, and an extra sequence that
+# we'll create on the publisher later
+$ddl = qq(
+ CREATE TABLE seq_test (v BIGINT);
+ CREATE SEQUENCE s;
+ CREATE SEQUENCE s2;
+);
+
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Setup logical replication
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+ "CREATE PUBLICATION seq_pub");
+
+$node_publisher->safe_psql('postgres',
+ "ALTER PUBLICATION seq_pub ADD SEQUENCE s");
+
+$node_subscriber->safe_psql('postgres',
+ "CREATE SUBSCRIPTION seq_sub CONNECTION '$publisher_connstr' PUBLICATION seq_pub"
+);
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Wait for initial sync to finish as well
+my $synced_query =
+ "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('s', 'r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+ or die "Timed out while waiting for subscriber to synchronize data";
+
+# Insert initial test data
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ -- generate a number of values using the sequence
+ INSERT INTO seq_test SELECT nextval('s') FROM generate_series(1,100);
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+my $result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s;
+));
+
+is( $result, '132|0|t',
+ 'initial test data replicated');
+
+
+# advance the sequence in a rolled-back transaction - the rollback
+# does not wait for the replication, so we could see any intermediate state;
+# do something else after the test, to ensure we wait for everything
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ INSERT INTO seq_test SELECT nextval('s') FROM generate_series(1,100);
+ ROLLBACK;
+ INSERT INTO seq_test VALUES (-1);
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s;
+));
+
+is( $result, '231|0|t',
+ 'advance sequence in rolled-back transaction');
+
+
+# create a new sequence and roll it back - should not be replicated, due to
+# the transactional behavior
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ CREATE SEQUENCE s2;
+ ALTER PUBLICATION seq_pub ADD SEQUENCE s2;
+ INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '1|0|f',
+ 'create new sequence and roll it back');
+
+
+# create a new sequence, advance it in a rolled-back transaction, but commit
+# the create - the advance should be replicated nevertheless
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ CREATE SEQUENCE s2;
+ ALTER PUBLICATION seq_pub ADD SEQUENCE s2;
+ SAVEPOINT sp1;
+ INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK TO sp1;
+ COMMIT;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Wait for sync of the second sequence we just added to finish
+$synced_query =
+ "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('s', 'r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+ or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '132|0|t',
+ 'create sequence, advance it in rolled-back transaction, but commit the create');
+
+
+# advance the new sequence in a transaction, and roll it back - the rollback
+# does not wait for the replication, so we could see any intermediate state;
+# do something else after the test, to ensure we wait for everything
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK;
+ INSERT INTO seq_test VALUES (-1);
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '231|0|t',
+ 'advance the new sequence in a transaction and roll it back');
+
+
+# advance the sequence in a subtransaction - the subtransaction gets rolled
+# back, but commit the main one - the changes should still be replicated
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ SAVEPOINT s1;
+ INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK TO s1;
+ COMMIT;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '330|0|t',
+ 'advance sequence in a subtransaction');
+
+
+done_testing();
--
2.34.1
On 20.03.22 23:55, Tomas Vondra wrote:
Attached is a rebased patch, addressing most of the remaining issues.
This looks okay to me, if the two FIXMEs are addressed. Remember to
also update protocol.sgml if you change LOGICAL_REP_MSG_SEQUENCE.
On 3/21/22 14:05, Peter Eisentraut wrote:
On 20.03.22 23:55, Tomas Vondra wrote:
Attached is a rebased patch, addressing most of the remaining issues.
This looks okay to me, if the two FIXMEs are addressed. Remember to
also update protocol.sgml if you change LOGICAL_REP_MSG_SEQUENCE.
Thanks. Do we want to use a different constant for the sequence message?
I've used 'X' for the WIP patch, but maybe there's a better value?
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 21.03.22 22:54, Tomas Vondra wrote:
On 3/21/22 14:05, Peter Eisentraut wrote:
On 20.03.22 23:55, Tomas Vondra wrote:
Attached is a rebased patch, addressing most of the remaining issues.
This looks okay to me, if the two FIXMEs are addressed. Remember to
also update protocol.sgml if you change LOGICAL_REP_MSG_SEQUENCE.
Thanks. Do we want to use a different constant for the sequence message?
I've used 'X' for the WIP patch, but maybe there's a better value?
I would do small 's'. Alternatively, 'Q'/'q' is still available, too.
On Mon, Mar 21, 2022 at 4:25 AM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:
Hi,
Attached is a rebased patch, addressing most of the remaining issues.
It appears that on the apply side, the patch always creates a new
relfilenode irrespective of whether the sequence message is
transactional or not. Is it required to create a new relfilenode for
non-transactional messages? If not that could be costly?
--
With Regards,
Amit Kapila.
On 22. 3. 2022, at 13:09, Amit Kapila <amit.kapila16@gmail.com> wrote:
On Mon, Mar 21, 2022 at 4:25 AM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:
Hi,
Attached is a rebased patch, addressing most of the remaining issues.
It appears that on the apply side, the patch always creates a new
relfilenode irrespective of whether the sequence message is
transactional or not. Is it required to create a new relfilenode for
non-transactional messages? If not that could be costly?
That's a good catch, I think we should just write the page in the non-transactional case, no need to mess with relnodes.
Petr
On Tue, Mar 22, 2022 at 5:41 PM Petr Jelinek
<petr.jelinek@enterprisedb.com> wrote:
On 22. 3. 2022, at 13:09, Amit Kapila <amit.kapila16@gmail.com> wrote:
On Mon, Mar 21, 2022 at 4:25 AM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:
Hi,
Attached is a rebased patch, addressing most of the remaining issues.
It appears that on the apply side, the patch always creates a new
relfilenode irrespective of whether the sequence message is
transactional or not. Is it required to create a new relfilenode for
non-transactional messages? If not that could be costly?
That's a good catch, I think we should just write the page in the non-transactional case, no need to mess with relnodes.
What if the current node has also incremented from the existing
sequence? Basically, how will we deal with conflicts? It seems we will
overwrite the actions done on the existing node which means sequence
values can go back.
On looking a bit more closely, I think I see some more fundamental
problems here:
* Don't we need some syncing mechanism between apply worker and
sequence sync worker so that apply worker skips the sequence changes
till the sync worker is finished, otherwise, there is a risk of one
overriding the values of the other?
* Currently, the patch uses one sync worker per sequence. It seems to
be a waste of resources considering apart from one additional process,
we need origin/slot to sync each sequence.
* Don't we need explicit privilege checking before applying sequence
data as we do in commit a2ab9c06ea15fbcb2bfde570986a06b37f52bcca for
tables?
--
With Regards,
Amit Kapila.
On 23. 3. 2022, at 12:50, Amit Kapila <amit.kapila16@gmail.com> wrote:
On Tue, Mar 22, 2022 at 5:41 PM Petr Jelinek
<petr.jelinek@enterprisedb.com> wrote:
On 22. 3. 2022, at 13:09, Amit Kapila <amit.kapila16@gmail.com> wrote:
On Mon, Mar 21, 2022 at 4:25 AM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:
Hi,
Attached is a rebased patch, addressing most of the remaining issues.
It appears that on the apply side, the patch always creates a new
relfilenode irrespective of whether the sequence message is
transactional or not. Is it required to create a new relfilenode for
non-transactional messages? If not that could be costly?
That's a good catch, I think we should just write the page in the non-transactional case, no need to mess with relnodes.
What if the current node has also incremented from the existing
sequence? Basically, how will we deal with conflicts? It seems we will
overwrite the actions done on the existing node which means sequence
values can go back.
I think this is perfectly acceptable behavior, we are replicating state from upstream, not reconciling state on downstream.
You can't really use the builtin sequences to implement distributed sequence via replication. If user wants to write to both nodes they should not replicate the sequence value and instead offset the sequence on each node so they produce different ranges, that's quite common approach. One day we might want revisit adding support for custom sequence AMs.
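To make that concrete, here is a minimal sketch of the offsetting approach (illustrative object names, not part of the patch): give each node the same increment but a different start, so locally generated values interleave instead of colliding, and leave the sequence itself out of the publication.

  -- on node A: generates 1, 3, 5, ...
  CREATE SEQUENCE order_id_seq INCREMENT BY 2 START WITH 1;

  -- on node B: generates 2, 4, 6, ...
  CREATE SEQUENCE order_id_seq INCREMENT BY 2 START WITH 2;

  -- replicate only the table data; the sequence state stays local
  CREATE PUBLICATION orders_pub FOR TABLE orders;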
* Currently, the patch uses one sync worker per sequence. It seems to
be a waste of resources considering apart from one additional process,
we need origin/slot to sync each sequence.
This is indeed wasteful but not something that I'd consider blocker for the patch personally.
--
Petr
On 3/23/22 13:46, Petr Jelinek wrote:
On 23. 3. 2022, at 12:50, Amit Kapila <amit.kapila16@gmail.com> wrote:
On Tue, Mar 22, 2022 at 5:41 PM Petr Jelinek
<petr.jelinek@enterprisedb.com> wrote:
On 22. 3. 2022, at 13:09, Amit Kapila <amit.kapila16@gmail.com> wrote:
On Mon, Mar 21, 2022 at 4:25 AM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:
Hi,
Attached is a rebased patch, addressing most of the remaining issues.
It appears that on the apply side, the patch always creates a new
relfilenode irrespective of whether the sequence message is
transactional or not. Is it required to create a new relfilenode for
non-transactional messages? If not that could be costly?That's a good catch, I think we should just write the page in the
non-transactional case, no need to mess with relnodes.What if the current node has also incremented from the existing
sequence? Basically, how will we deal with conflicts? It seems we will
overwrite the actions done on the existing node which means sequence
values can go back.
I think this is perfectly acceptable behavior, we are replicating state
from upstream, not reconciling state on downstream.
You can't really use the builtin sequences to implement distributed
sequence via replication. If user wants to write to both nodes they
should not replicate the sequence value and instead offset the sequence
on each node so they produce different ranges, that's quite common
approach. One day we might want revisit adding support for custom
sequence AMs.
Exactly. Moreover it's about the same behavior as if you update table
data on the subscriber, and then an UPDATE gets replicated and
overwrites the local change.
Attached is a patch fixing the relfilenode issue - now we only allocate
a new relfilenode for the transactional case, and an in-place update
similar to a setval() otherwise. And thanks for noticing this.
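In other words (a sketch of the intended apply-side semantics, with example values, not the actual code path), a non-transactional sequence message is now applied roughly the way a subscriber-side setval() would behave - the local state is overwritten in place instead of being swapped in via a new relfilenode:

  -- mirror the decoded last_value / is_called on the subscriber
  -- (132 and true are illustrative values)
  SELECT setval('s', 132, true);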
* Currently, the patch uses one sync worker per sequence. It seems to
be a waste of resources considering apart from one additional process,
we need origin/slot to sync each sequence.
This is indeed wasteful but not something that I'd consider blocker for
the patch personally.
Right, and the same argument can be made for tablesync of tiny tables
(which a sequence essentially is). I'm sure there are ways to improve
this, but that can be done later.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Attachments:
0001-Add-support-for-decoding-sequences-to-built-20220323.patchtext/x-patch; charset=UTF-8; name=0001-Add-support-for-decoding-sequences-to-built-20220323.patchDownload
From a53fc6e4a0b132587a56f43750bf9f6cfc01ed6c Mon Sep 17 00:00:00 2001
From: Tomas Vondra <tomas.vondra@postgresql.org>
Date: Sat, 19 Mar 2022 20:15:06 +0100
Subject: [PATCH] Add support for decoding sequences to built-in replication
---
doc/src/sgml/catalogs.sgml | 81 ++
doc/src/sgml/protocol.sgml | 119 ++
doc/src/sgml/ref/alter_publication.sgml | 24 +-
doc/src/sgml/ref/alter_subscription.sgml | 3 +-
doc/src/sgml/ref/create_publication.sgml | 45 +-
src/backend/catalog/objectaddress.c | 44 +-
src/backend/catalog/pg_publication.c | 321 +++++-
src/backend/catalog/system_views.sql | 10 +
src/backend/commands/publicationcmds.c | 426 +++++--
src/backend/commands/sequence.c | 154 +++
src/backend/commands/subscriptioncmds.c | 101 +-
src/backend/commands/tablecmds.c | 27 +-
src/backend/executor/execReplication.c | 4 +-
src/backend/nodes/copyfuncs.c | 4 +-
src/backend/nodes/equalfuncs.c | 4 +-
src/backend/parser/gram.y | 52 +-
src/backend/replication/logical/proto.c | 52 +
src/backend/replication/logical/tablesync.c | 111 +-
src/backend/replication/logical/worker.c | 56 +
src/backend/replication/pgoutput/pgoutput.c | 79 +-
src/backend/utils/cache/relcache.c | 28 +-
src/backend/utils/cache/syscache.c | 6 +-
src/bin/pg_dump/pg_dump.c | 65 +-
src/bin/pg_dump/pg_dump.h | 3 +
src/bin/pg_dump/t/002_pg_dump.pl | 40 +-
src/bin/psql/describe.c | 287 +++--
src/bin/psql/tab-complete.c | 12 +-
src/include/catalog/pg_proc.dat | 5 +
src/include/catalog/pg_publication.h | 26 +-
.../catalog/pg_publication_namespace.h | 10 +-
src/include/commands/sequence.h | 1 +
src/include/nodes/parsenodes.h | 8 +-
src/include/replication/logicalproto.h | 19 +
src/include/replication/pgoutput.h | 1 +
src/test/regress/expected/object_address.out | 10 +-
src/test/regress/expected/publication.out | 1009 ++++++++++++++---
src/test/regress/expected/rules.out | 8 +
src/test/regress/sql/object_address.sql | 5 +-
src/test/regress/sql/publication.sql | 226 +++-
src/test/subscription/t/030_sequences.pl | 202 ++++
40 files changed, 3231 insertions(+), 457 deletions(-)
create mode 100644 src/test/subscription/t/030_sequences.pl
diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 2a8cd026649..b8c954a5547 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -6281,6 +6281,16 @@ SCRAM-SHA-256$<replaceable><iteration count></replaceable>:<replaceable>&l
Reference to schema
</para></entry>
</row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>pntype</structfield> <type>char</type>
+ </para>
+ <para>
+ Determines which object type (table or sequence) is included
+ from this schema.
+ </para></entry>
+ </row>
</tbody>
</tgroup>
</table>
@@ -9598,6 +9608,11 @@ SCRAM-SHA-256$<replaceable><iteration count></replaceable>:<replaceable>&l
<entry>prepared transactions</entry>
</row>
+ <row>
+ <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+ <entry>publications and their associated sequences</entry>
+ </row>
+
<row>
<entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
<entry>publications and their associated tables</entry>
@@ -11433,6 +11448,72 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
</sect1>
+ <sect1 id="view-pg-publication-sequences">
+ <title><structname>pg_publication_sequences</structname></title>
+
+ <indexterm zone="view-pg-publication-sequences">
+ <primary>pg_publication_sequences</primary>
+ </indexterm>
+
+ <para>
+ The view <structname>pg_publication_sequences</structname> provides
+ information about the mapping between publications and the sequences they
+ contain. Unlike the underlying catalog
+ <link linkend="catalog-pg-publication-rel"><structname>pg_publication_rel</structname></link>,
+ this view expands
+ publications defined as <literal>FOR ALL SEQUENCES</literal>, so for such
+ publications there will be a row for each eligible sequence.
+ </para>
+
+ <table>
+ <title><structname>pg_publication_sequences</structname> Columns</title>
+ <tgroup cols="1">
+ <thead>
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ Column Type
+ </para>
+ <para>
+ Description
+ </para></entry>
+ </row>
+ </thead>
+
+ <tbody>
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>pubname</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+ </para>
+ <para>
+ Name of publication
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>schemaname</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+ </para>
+ <para>
+ Name of schema containing sequence
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>sequencename</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+ </para>
+ <para>
+ Name of sequence
+ </para></entry>
+ </row>
+ </tbody>
+ </tgroup>
+ </table>
+ </sect1>
+
<sect1 id="view-pg-publication-tables">
<title><structname>pg_publication_tables</structname></title>
diff --git a/doc/src/sgml/protocol.sgml b/doc/src/sgml/protocol.sgml
index 719b947ef4e..c61c310e176 100644
--- a/doc/src/sgml/protocol.sgml
+++ b/doc/src/sgml/protocol.sgml
@@ -7065,6 +7065,125 @@ Relation
</listitem>
</varlistentry>
+<varlistentry id="protocol-logicalrep-message-formats-Sequence">
+<term>
+Sequence
+</term>
+<listitem>
+<para>
+
+<variablelist>
+<varlistentry>
+<term>
+ Byte1('X')
+</term>
+<listitem>
+<para>
+ Identifies the message as a sequence message.
+</para>
+</listitem>
+</varlistentry>
+<varlistentry>
+<term>
+ Int32 (TransactionId)
+</term>
+<listitem>
+<para>
+ Xid of the transaction (only present for streamed transactions).
+ This field is available since protocol version 2.
+</para>
+</listitem>
+</varlistentry>
+<varlistentry>
+<term>
+ Int8(0)
+</term>
+<listitem>
+<para>
+ Flags; currently unused.
+</para>
+</listitem>
+</varlistentry>
+<varlistentry>
+<term>
+ Int64 (XLogRecPtr)
+</term>
+<listitem>
+<para>
+ The LSN of the sequence increment.
+</para>
+</listitem>
+</varlistentry>
+<varlistentry>
+<term>
+ String
+</term>
+<listitem>
+<para>
+ Namespace (empty string for <literal>pg_catalog</literal>).
+</para>
+</listitem>
+</varlistentry>
+<varlistentry>
+<term>
+ String
+</term>
+<listitem>
+<para>
+ Relation name.
+</para>
+</listitem>
+</varlistentry>
+
+<varlistentry>
+<term>
+ Int8
+</term>
+<listitem>
+<para>
+ 1 if the sequence update is transactional, 0 otherwise.
+</para>
+</listitem>
+</varlistentry>
+
+<varlistentry>
+<term>
+ Int64
+</term>
+<listitem>
+<para>
+ <structfield>last_value</structfield> value of the sequence.
+</para>
+</listitem>
+</varlistentry>
+
+<varlistentry>
+<term>
+ Int64
+</term>
+<listitem>
+<para>
+ <structfield>log_cnt</structfield> value of the sequence.
+</para>
+</listitem>
+</varlistentry>
+
+<varlistentry>
+<term>
+ Int8
+</term>
+<listitem>
+<para>
+ <structfield>is_called</structfield> value of the sequence.
+</para>
+</listitem>
+</varlistentry>
+
+</variablelist>
+</para>
+</listitem>
+</varlistentry>
+
<varlistentry id="protocol-logicalrep-message-formats-Type">
<term>
Type
diff --git a/doc/src/sgml/ref/alter_publication.sgml b/doc/src/sgml/ref/alter_publication.sgml
index 32b75f6c78e..5dacb732b69 100644
--- a/doc/src/sgml/ref/alter_publication.sgml
+++ b/doc/src/sgml/ref/alter_publication.sgml
@@ -31,7 +31,9 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
<phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
+ SEQUENCE <replaceable class="parameter">sequence_name</replaceable> [, ... ]
ALL TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+ ALL SEQUENCES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
</synopsis>
</refsynopsisdiv>
@@ -58,7 +60,18 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
</para>
<para>
- The fourth variant of this command listed in the synopsis can change
+ The next three variants change which sequences are part of the publication.
+ The <literal>SET SEQUENCE</literal> clause will replace the list of sequences
+ in the publication with the specified one. The <literal>ADD SEQUENCE</literal>
+ and <literal>DROP SEQUENCE</literal> clauses will add and remove one or more
+ sequences from the publication. Note that adding sequences to a publication
+ that is already subscribed to will require an <literal>ALTER SUBSCRIPTION
+ ... REFRESH PUBLICATION</literal> action on the subscribing side in order
+ to become effective.
+ </para>
+
+ <para>
+ The seventh variant of this command listed in the synopsis can change
all of the publication properties specified in
<xref linkend="sql-createpublication"/>. Properties not mentioned in the
command retain their previous settings.
@@ -122,6 +135,15 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><replaceable class="parameter">sequence_name</replaceable></term>
+ <listitem>
+ <para>
+ Name of an existing sequence.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><replaceable class="parameter">schema_name</replaceable></term>
<listitem>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index ac2db249cbb..70c5c69be97 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -148,7 +148,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
<listitem>
<para>
Fetch missing table information from publisher. This will start
- replication of tables that were added to the subscribed-to publications
+ replication of tables and sequences that were added to the subscribed-to publications
since <command>CREATE SUBSCRIPTION</command> or
the last invocation of <command>REFRESH PUBLICATION</command>.
</para>
@@ -169,6 +169,7 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
<para>
Previously subscribed tables are not copied, even if a table's row
filter <literal>WHERE</literal> clause has since been modified.
+ Previously-subscribed sequences are not copied either.
</para>
</listitem>
</varlistentry>
diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index 4979b9b646d..f355549966d 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -23,13 +23,16 @@ PostgreSQL documentation
<synopsis>
CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
[ FOR ALL TABLES
+ | FOR ALL SEQUENCES
| FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
[ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
<phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
+ SEQUENCE <replaceable class="parameter">sequence_name</replaceable> [ * ] [, ... ]
ALL TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+ ALL SEQUENCES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
</synopsis>
</refsynopsisdiv>
@@ -107,27 +110,43 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><literal>FOR SEQUENCE</literal></term>
+ <listitem>
+ <para>
+ Specifies a list of sequences to add to the publication.
+ </para>
+
+ <para>
+ Specifying a sequence that is part of a schema specified by <literal>FOR
+ ALL SEQUENCES IN SCHEMA</literal> is not supported.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><literal>FOR ALL TABLES</literal></term>
+ <term><literal>FOR ALL SEQUENCES</literal></term>
<listitem>
<para>
- Marks the publication as one that replicates changes for all tables in
- the database, including tables created in the future.
+ Marks the publication as one that replicates changes for all tables/sequences in
+ the database, including tables/sequences created in the future.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term><literal>FOR ALL TABLES IN SCHEMA</literal></term>
+ <term><literal>FOR ALL SEQUENCES IN SCHEMA</literal></term>
<listitem>
<para>
- Marks the publication as one that replicates changes for all tables in
- the specified list of schemas, including tables created in the future.
+ Marks the publication as one that replicates changes for all sequences/tables in
+ the specified list of schemas, including sequences/tables created in the future.
</para>
<para>
- Specifying a schema along with a table which belongs to the specified
- schema using <literal>FOR TABLE</literal> is not supported.
+ Specifying a schema along with a sequence/table which belongs to the specified
+ schema using <literal>FOR SEQUENCE</literal>/<literal>FOR TABLE</literal> is not supported.
</para>
<para>
@@ -202,10 +221,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
<title>Notes</title>
<para>
- If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
- <literal>FOR ALL TABLES IN SCHEMA</literal> are not specified, then the
- publication starts out with an empty set of tables. That is useful if
- tables or schemas are to be added later.
+ If <literal>FOR TABLE</literal>, <literal>FOR SEQUENCE</literal>, etc. is
+ not specified, then the publication starts out with an empty set of tables
+ and sequences. That is useful if objects are to be added later.
</para>
<para>
@@ -220,10 +238,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
</para>
<para>
- To add a table to a publication, the invoking user must have ownership
- rights on the table. The <command>FOR ALL TABLES</command> and
- <command>FOR ALL TABLES IN SCHEMA</command> clauses require the invoking
- user to be a superuser.
+ To add a table or a sequence to a publication, the invoking user must
+ have ownership rights on the object. The <command>FOR ALL ...</command>
+ clauses require the invoking user to be a superuser.
</para>
<para>
diff --git a/src/backend/catalog/objectaddress.c b/src/backend/catalog/objectaddress.c
index d7ce063997a..3fd17ea64f0 100644
--- a/src/backend/catalog/objectaddress.c
+++ b/src/backend/catalog/objectaddress.c
@@ -1930,12 +1930,14 @@ get_object_address_publication_schema(List *object, bool missing_ok)
char *pubname;
char *schemaname;
Oid schemaid;
+ char *objtype;
ObjectAddressSet(address, PublicationNamespaceRelationId, InvalidOid);
/* Fetch schema name and publication name from input list */
schemaname = strVal(linitial(object));
pubname = strVal(lsecond(object));
+ objtype = strVal(lthird(object));
schemaid = get_namespace_oid(schemaname, missing_ok);
if (!OidIsValid(schemaid))
@@ -1948,10 +1950,12 @@ get_object_address_publication_schema(List *object, bool missing_ok)
/* Find the publication schema mapping in syscache */
address.objectId =
- GetSysCacheOid2(PUBLICATIONNAMESPACEMAP,
+ GetSysCacheOid3(PUBLICATIONNAMESPACEMAP,
Anum_pg_publication_namespace_oid,
ObjectIdGetDatum(schemaid),
- ObjectIdGetDatum(pub->oid));
+ ObjectIdGetDatum(pub->oid),
+ CharGetDatum(objtype[0]));
+
if (!OidIsValid(address.objectId) && !missing_ok)
ereport(ERROR,
(errcode(ERRCODE_UNDEFINED_OBJECT),
@@ -2232,7 +2236,6 @@ pg_get_object_address(PG_FUNCTION_ARGS)
case OBJECT_DOMCONSTRAINT:
case OBJECT_CAST:
case OBJECT_USER_MAPPING:
- case OBJECT_PUBLICATION_NAMESPACE:
case OBJECT_PUBLICATION_REL:
case OBJECT_DEFACL:
case OBJECT_TRANSFORM:
@@ -2257,6 +2260,7 @@ pg_get_object_address(PG_FUNCTION_ARGS)
/* fall through to check args length */
/* FALLTHROUGH */
case OBJECT_OPERATOR:
+ case OBJECT_PUBLICATION_NAMESPACE:
if (list_length(args) != 2)
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -2327,6 +2331,8 @@ pg_get_object_address(PG_FUNCTION_ARGS)
objnode = (Node *) list_make2(name, linitial(args));
break;
case OBJECT_PUBLICATION_NAMESPACE:
+ objnode = (Node *) list_make3(linitial(name), linitial(args), lsecond(args));
+ break;
case OBJECT_USER_MAPPING:
objnode = (Node *) list_make2(linitial(name), linitial(args));
break;
@@ -2881,11 +2887,12 @@ get_catalog_object_by_oid(Relation catalog, AttrNumber oidcol, Oid objectId)
*
* Get publication name and schema name from the object address into pubname and
* nspname. Both pubname and nspname are palloc'd strings which will be freed by
- * the caller.
+ * the caller. The last parameter specifies which object type is included from
+ * the schema.
*/
static bool
getPublicationSchemaInfo(const ObjectAddress *object, bool missing_ok,
- char **pubname, char **nspname)
+ char **pubname, char **nspname, char **objtype)
{
HeapTuple tup;
Form_pg_publication_namespace pnform;
@@ -2921,6 +2928,13 @@ getPublicationSchemaInfo(const ObjectAddress *object, bool missing_ok,
return false;
}
+ /*
+ * The type is always a single character, but we need to pass it as a string,
+ * so allocate two charaters and set the first one. The second one is \0.
+ */
+ *objtype = palloc0(2);
+ *objtype[0] = pnform->pntype;
+
ReleaseSysCache(tup);
return true;
}
@@ -3926,15 +3940,17 @@ getObjectDescription(const ObjectAddress *object, bool missing_ok)
{
char *pubname;
char *nspname;
+ char *objtype;
if (!getPublicationSchemaInfo(object, missing_ok,
- &pubname, &nspname))
+ &pubname, &nspname, &objtype))
break;
- appendStringInfo(&buffer, _("publication of schema %s in publication %s"),
- nspname, pubname);
+ appendStringInfo(&buffer, _("publication of schema %s in publication %s type %s"),
+ nspname, pubname, objtype);
pfree(pubname);
pfree(nspname);
+ pfree(objtype);
break;
}
@@ -5729,18 +5745,24 @@ getObjectIdentityParts(const ObjectAddress *object,
{
char *pubname;
char *nspname;
+ char *objtype;
if (!getPublicationSchemaInfo(object, missing_ok, &pubname,
- &nspname))
+ &nspname, &objtype))
break;
- appendStringInfo(&buffer, "%s in publication %s",
- nspname, pubname);
+ appendStringInfo(&buffer, "%s in publication %s type %s",
+ nspname, pubname, objtype);
if (objargs)
*objargs = list_make1(pubname);
else
pfree(pubname);
+ if (objargs)
+ *objargs = lappend(*objargs, objtype);
+ else
+ pfree(objtype);
+
if (objname)
*objname = list_make1(nspname);
else
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 789b895db89..144d1ada9e2 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -54,7 +54,8 @@ check_publication_add_relation(Relation targetrel)
{
/* Must be a regular or partitioned table */
if (RelationGetForm(targetrel)->relkind != RELKIND_RELATION &&
- RelationGetForm(targetrel)->relkind != RELKIND_PARTITIONED_TABLE)
+ RelationGetForm(targetrel)->relkind != RELKIND_PARTITIONED_TABLE &&
+ RelationGetForm(targetrel)->relkind != RELKIND_SEQUENCE)
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
errmsg("cannot add relation \"%s\" to publication",
@@ -131,7 +132,8 @@ static bool
is_publishable_class(Oid relid, Form_pg_class reltuple)
{
return (reltuple->relkind == RELKIND_RELATION ||
- reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+ reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+ reltuple->relkind == RELKIND_SEQUENCE) &&
!IsCatalogRelationOid(relid) &&
reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
relid >= FirstNormalObjectId;
@@ -176,6 +178,47 @@ filter_partitions(List *relids)
return result;
}
+/* Check the character is a valid object type for schema publication. */
+static void
+AssertObjectTypeValid(char objectType)
+{
+#ifdef USE_ASSERT_CHECKING
+ Assert(objectType == PUB_OBJTYPE_SEQUENCE || objectType == PUB_OBJTYPE_TABLE);
+#endif
+}
+
+/*
+ * Determine the publication object type corresponding to a given relkind.
+ */
+char
+pub_get_object_type_for_relkind(char relkind)
+{
+ /* sequence maps directly to sequence relkind */
+ if (relkind == RELKIND_SEQUENCE)
+ return PUB_OBJTYPE_SEQUENCE;
+
+ /* for table, we match either regular or partitioned table */
+ if (relkind == RELKIND_RELATION ||
+ relkind == RELKIND_PARTITIONED_TABLE)
+ return PUB_OBJTYPE_TABLE;
+
+ return PUB_OBJTYPE_UNSUPPORTED;
+}
+
+/*
+ * Determine if publication object type matches the relkind.
+ *
+ * Returns true if the relkind maps to the given publication object type,
+ * false otherwise.
+ */
+static bool
+pub_object_type_matches_relkind(char objectType, char relkind)
+{
+ AssertObjectTypeValid(objectType);
+
+ return (pub_get_object_type_for_relkind(relkind) == objectType);
+}
+
/*
* Another variant of this, taking a Relation.
*/
@@ -205,7 +248,7 @@ is_schema_publication(Oid pubid)
ObjectIdGetDatum(pubid));
scan = systable_beginscan(pubschsrel,
- PublicationNamespacePnnspidPnpubidIndexId,
+ PublicationNamespacePnnspidPnpubidPntypeIndexId,
true, NULL, 1, &scankey);
tup = systable_getnext(scan);
result = HeapTupleIsValid(tup);
@@ -313,7 +356,9 @@ GetTopMostAncestorInPublication(Oid puboid, List *ancestors, int *ancestor_level
}
else
{
- aschemaPubids = GetSchemaPublications(get_rel_namespace(ancestor));
+ /* we only search for ancestors of tables, so PUB_OBJTYPE_TABLE */
+ aschemaPubids = GetSchemaPublications(get_rel_namespace(ancestor),
+ PUB_OBJTYPE_TABLE);
if (list_member_oid(aschemaPubids, puboid))
{
topmost_relid = ancestor;
@@ -436,7 +481,7 @@ publication_add_relation(Oid pubid, PublicationRelInfo *pri,
* Insert new publication / schema mapping.
*/
ObjectAddress
-publication_add_schema(Oid pubid, Oid schemaid, bool if_not_exists)
+publication_add_schema(Oid pubid, Oid schemaid, char objectType, bool if_not_exists)
{
Relation rel;
HeapTuple tup;
@@ -448,6 +493,8 @@ publication_add_schema(Oid pubid, Oid schemaid, bool if_not_exists)
ObjectAddress myself,
referenced;
+ AssertObjectTypeValid(objectType);
+
rel = table_open(PublicationNamespaceRelationId, RowExclusiveLock);
/*
@@ -455,9 +502,10 @@ publication_add_schema(Oid pubid, Oid schemaid, bool if_not_exists)
* duplicates, it's here just to provide nicer error message in common
* case. The real protection is the unique key on the catalog.
*/
- if (SearchSysCacheExists2(PUBLICATIONNAMESPACEMAP,
+ if (SearchSysCacheExists3(PUBLICATIONNAMESPACEMAP,
ObjectIdGetDatum(schemaid),
- ObjectIdGetDatum(pubid)))
+ ObjectIdGetDatum(pubid),
+ CharGetDatum(objectType)))
{
table_close(rel, RowExclusiveLock);
@@ -483,6 +531,8 @@ publication_add_schema(Oid pubid, Oid schemaid, bool if_not_exists)
ObjectIdGetDatum(pubid);
values[Anum_pg_publication_namespace_pnnspid - 1] =
ObjectIdGetDatum(schemaid);
+ values[Anum_pg_publication_namespace_pntype - 1] =
+ CharGetDatum(objectType);
tup = heap_form_tuple(RelationGetDescr(rel), values, nulls);
@@ -508,7 +558,7 @@ publication_add_schema(Oid pubid, Oid schemaid, bool if_not_exists)
* publication_add_relation for why we need to consider all the
* partitions.
*/
- schemaRels = GetSchemaPublicationRelations(schemaid,
+ schemaRels = GetSchemaPublicationRelations(schemaid, objectType,
PUBLICATION_PART_ALL);
InvalidatePublicationRels(schemaRels);
@@ -542,11 +592,14 @@ GetRelationPublications(Oid relid)
/*
* Gets list of relation oids for a publication.
*
- * This should only be used FOR TABLE publications, the FOR ALL TABLES
- * should use GetAllTablesPublicationRelations().
+ * This should only be used FOR TABLE / FOR SEQUENCE publications; the FOR
+ * ALL TABLES / SEQUENCES ones should use GetAllTablesPublicationRelations()
+ * and GetAllSequencesPublicationRelations().
+ *
+ * XXX pub_partopt only matters for tables, not sequences.
*/
List *
-GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
+GetPublicationRelations(Oid pubid, char objectType, PublicationPartOpt pub_partopt)
{
List *result;
Relation pubrelsrel;
@@ -554,6 +607,8 @@ GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
SysScanDesc scan;
HeapTuple tup;
+ AssertObjectTypeValid(objectType);
+
/* Find all publications associated with the relation. */
pubrelsrel = table_open(PublicationRelRelationId, AccessShareLock);
@@ -568,11 +623,29 @@ GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
result = NIL;
while (HeapTupleIsValid(tup = systable_getnext(scan)))
{
+ char relkind;
Form_pg_publication_rel pubrel;
pubrel = (Form_pg_publication_rel) GETSTRUCT(tup);
- result = GetPubPartitionOptionRelations(result, pub_partopt,
- pubrel->prrelid);
+ relkind = get_rel_relkind(pubrel->prrelid);
+
+ /*
+ * If the relkind does not match the requested object type, ignore the
+ * relation. For example we might be interested only in sequences, so
+ * we ignore tables.
+ */
+ if (!pub_object_type_matches_relkind(objectType, relkind))
+ continue;
+
+ /*
+ * We don't have partitioned sequences, so just add them to the list.
+ * Otherwise consider adding all child relations, if requested.
+ */
+ if (relkind == RELKIND_SEQUENCE)
+ result = lappend_oid(result, pubrel->prrelid);
+ else
+ result = GetPubPartitionOptionRelations(result, pub_partopt,
+ pubrel->prrelid);
}
systable_endscan(scan);
@@ -622,6 +695,43 @@ GetAllTablesPublications(void)
return result;
}
+/*
+ * Gets list of publication oids for publications marked as FOR ALL SEQUENCES.
+ */
+List *
+GetAllSequencesPublications(void)
+{
+ List *result;
+ Relation rel;
+ ScanKeyData scankey;
+ SysScanDesc scan;
+ HeapTuple tup;
+
+ /* Find all publications that are marked as for all sequences. */
+ rel = table_open(PublicationRelationId, AccessShareLock);
+
+ ScanKeyInit(&scankey,
+ Anum_pg_publication_puballsequences,
+ BTEqualStrategyNumber, F_BOOLEQ,
+ BoolGetDatum(true));
+
+ scan = systable_beginscan(rel, InvalidOid, false,
+ NULL, 1, &scankey);
+
+ result = NIL;
+ while (HeapTupleIsValid(tup = systable_getnext(scan)))
+ {
+ Oid oid = ((Form_pg_publication) GETSTRUCT(tup))->oid;
+
+ result = lappend_oid(result, oid);
+ }
+
+ systable_endscan(scan);
+ table_close(rel, AccessShareLock);
+
+ return result;
+}
+
/*
* Gets list of all relation published by FOR ALL TABLES publication(s).
*
@@ -688,28 +798,38 @@ GetAllTablesPublicationRelations(bool pubviaroot)
/*
* Gets the list of schema oids for a publication.
*
- * This should only be used FOR ALL TABLES IN SCHEMA publications.
+ * This should only be used FOR ALL TABLES IN SCHEMA and FOR ALL SEQUENCES
+ * IN SCHEMA publications.
+ *
+ * 'objectType' determines whether to return the TABLE or the SEQUENCE schemas.
*/
List *
-GetPublicationSchemas(Oid pubid)
+GetPublicationSchemas(Oid pubid, char objectType)
{
List *result = NIL;
Relation pubschsrel;
- ScanKeyData scankey;
+ ScanKeyData scankey[2];
SysScanDesc scan;
HeapTuple tup;
+ AssertObjectTypeValid(objectType);
+
/* Find all schemas associated with the publication */
pubschsrel = table_open(PublicationNamespaceRelationId, AccessShareLock);
- ScanKeyInit(&scankey,
+ ScanKeyInit(&scankey[0],
Anum_pg_publication_namespace_pnpubid,
BTEqualStrategyNumber, F_OIDEQ,
ObjectIdGetDatum(pubid));
+ ScanKeyInit(&scankey[1],
+ Anum_pg_publication_namespace_pntype,
+ BTEqualStrategyNumber, F_CHAREQ,
+ CharGetDatum(objectType));
+
scan = systable_beginscan(pubschsrel,
- PublicationNamespacePnnspidPnpubidIndexId,
- true, NULL, 1, &scankey);
+ PublicationNamespacePnnspidPnpubidPntypeIndexId,
+ true, NULL, 2, scankey);
while (HeapTupleIsValid(tup = systable_getnext(scan)))
{
Form_pg_publication_namespace pubsch;
@@ -727,14 +847,26 @@ GetPublicationSchemas(Oid pubid)
/*
* Gets the list of publication oids associated with a specified schema.
+ *
+ * objectType specifies whether we're looking for schemas including tables or
+ * sequences.
+ *
+ * Note: relcache calls this for all object types, not just tables and
+ * sequences, which is why we also handle PUB_OBJTYPE_UNSUPPORTED here.
*/
List *
-GetSchemaPublications(Oid schemaid)
+GetSchemaPublications(Oid schemaid, char objectType)
{
List *result = NIL;
CatCList *pubschlist;
int i;
+ /* unsupported object type */
+ if (objectType == PUB_OBJTYPE_UNSUPPORTED)
+ return result;
+
+ AssertObjectTypeValid(objectType);
+
/* Find all publications associated with the schema */
pubschlist = SearchSysCacheList1(PUBLICATIONNAMESPACEMAP,
ObjectIdGetDatum(schemaid));
@@ -742,6 +874,11 @@ GetSchemaPublications(Oid schemaid)
{
HeapTuple tup = &pubschlist->members[i]->tuple;
Oid pubid = ((Form_pg_publication_namespace) GETSTRUCT(tup))->pnpubid;
+ char pntype = ((Form_pg_publication_namespace) GETSTRUCT(tup))->pntype;
+
+ /* Skip schemas publishing a different object type. */
+ if (pntype != objectType)
+ continue;
result = lappend_oid(result, pubid);
}
@@ -753,9 +890,13 @@ GetSchemaPublications(Oid schemaid)
/*
* Get the list of publishable relation oids for a specified schema.
+ *
+ * objectType specifies whether this is FOR ALL TABLES IN SCHEMA or FOR ALL
+ * SEQUENCES IN SCHEMA
*/
List *
-GetSchemaPublicationRelations(Oid schemaid, PublicationPartOpt pub_partopt)
+GetSchemaPublicationRelations(Oid schemaid, char objectType,
+ PublicationPartOpt pub_partopt)
{
Relation classRel;
ScanKeyData key[1];
@@ -764,6 +905,7 @@ GetSchemaPublicationRelations(Oid schemaid, PublicationPartOpt pub_partopt)
List *result = NIL;
Assert(OidIsValid(schemaid));
+ AssertObjectTypeValid(objectType);
classRel = table_open(RelationRelationId, AccessShareLock);
@@ -784,9 +926,16 @@ GetSchemaPublicationRelations(Oid schemaid, PublicationPartOpt pub_partopt)
continue;
relkind = get_rel_relkind(relid);
- if (relkind == RELKIND_RELATION)
- result = lappend_oid(result, relid);
- else if (relkind == RELKIND_PARTITIONED_TABLE)
+
+ /* Skip if the relkind does not match FOR ALL TABLES / SEQUENCES. */
+ if (!pub_object_type_matches_relkind(objectType, relkind))
+ continue;
+
+ /*
+ * If the object is a partitioned table, lookup all the child relations
+ * (if requested). Otherwise just add the object to the list.
+ */
+ if (relkind == RELKIND_PARTITIONED_TABLE)
{
List *partitionrels = NIL;
@@ -799,7 +948,11 @@ GetSchemaPublicationRelations(Oid schemaid, PublicationPartOpt pub_partopt)
pub_partopt,
relForm->oid);
result = list_concat_unique_oid(result, partitionrels);
+ continue;
}
+
+ /* non-partitioned tables and sequences */
+ result = lappend_oid(result, relid);
}
table_endscan(scan);
@@ -809,27 +962,67 @@ GetSchemaPublicationRelations(Oid schemaid, PublicationPartOpt pub_partopt)
/*
* Gets the list of all relations published by FOR ALL TABLES IN SCHEMA
- * publication.
+ * or FOR ALL SEQUENCES IN SCHEMA publication.
*/
List *
-GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
+GetAllSchemaPublicationRelations(Oid pubid, char objectType,
+ PublicationPartOpt pub_partopt)
{
List *result = NIL;
- List *pubschemalist = GetPublicationSchemas(pubid);
+ List *pubschemalist = GetPublicationSchemas(pubid, objectType);
ListCell *cell;
+ AssertObjectTypeValid(objectType);
+
foreach(cell, pubschemalist)
{
Oid schemaid = lfirst_oid(cell);
List *schemaRels = NIL;
- schemaRels = GetSchemaPublicationRelations(schemaid, pub_partopt);
+ schemaRels = GetSchemaPublicationRelations(schemaid, objectType,
+ pub_partopt);
result = list_concat(result, schemaRels);
}
return result;
}
+/*
+ * Gets list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+ Relation classRel;
+ ScanKeyData key[1];
+ TableScanDesc scan;
+ HeapTuple tuple;
+ List *result = NIL;
+
+ classRel = table_open(RelationRelationId, AccessShareLock);
+
+ ScanKeyInit(&key[0],
+ Anum_pg_class_relkind,
+ BTEqualStrategyNumber, F_CHAREQ,
+ CharGetDatum(RELKIND_SEQUENCE));
+
+ scan = table_beginscan_catalog(classRel, 1, key);
+
+ while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+ {
+ Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+ Oid relid = relForm->oid;
+
+ if (is_publishable_class(relid, relForm))
+ result = lappend_oid(result, relid);
+ }
+
+ table_endscan(scan);
+
+ table_close(classRel, AccessShareLock);
+ return result;
+}
+
/*
* Get publication using oid
*
@@ -852,10 +1045,12 @@ GetPublication(Oid pubid)
pub->oid = pubid;
pub->name = pstrdup(NameStr(pubform->pubname));
pub->alltables = pubform->puballtables;
+ pub->allsequences = pubform->puballsequences;
pub->pubactions.pubinsert = pubform->pubinsert;
pub->pubactions.pubupdate = pubform->pubupdate;
pub->pubactions.pubdelete = pubform->pubdelete;
pub->pubactions.pubtruncate = pubform->pubtruncate;
+ pub->pubactions.pubsequence = pubform->pubsequence;
pub->pubviaroot = pubform->pubviaroot;
ReleaseSysCache(tup);
@@ -966,10 +1161,12 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
*schemarelids;
relids = GetPublicationRelations(publication->oid,
+ PUB_OBJTYPE_TABLE,
publication->pubviaroot ?
PUBLICATION_PART_ROOT :
PUBLICATION_PART_LEAF);
schemarelids = GetAllSchemaPublicationRelations(publication->oid,
+ PUB_OBJTYPE_TABLE,
publication->pubviaroot ?
PUBLICATION_PART_ROOT :
PUBLICATION_PART_LEAF);
@@ -1005,3 +1202,71 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
SRF_RETURN_DONE(funcctx);
}
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+ FuncCallContext *funcctx;
+ char *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+ Publication *publication;
+ List *sequences;
+
+ /* stuff done only on the first call of the function */
+ if (SRF_IS_FIRSTCALL())
+ {
+ MemoryContext oldcontext;
+
+ /* create a function context for cross-call persistence */
+ funcctx = SRF_FIRSTCALL_INIT();
+
+ /* switch to memory context appropriate for multiple function calls */
+ oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+ publication = GetPublicationByName(pubname, false);
+
+ /*
+ * For FOR ALL SEQUENCES publications, simply fetch all sequences. Otherwise
+ * collect the sequences that were added to the publication directly or
+ * through a schema.
+ */
+ if (publication->allsequences)
+ sequences = GetAllSequencesPublicationRelations();
+ else
+ {
+ List *relids,
+ *schemarelids;
+
+ relids = GetPublicationRelations(publication->oid,
+ PUB_OBJTYPE_SEQUENCE,
+ publication->pubviaroot ?
+ PUBLICATION_PART_ROOT :
+ PUBLICATION_PART_LEAF);
+ schemarelids = GetAllSchemaPublicationRelations(publication->oid,
+ PUB_OBJTYPE_SEQUENCE,
+ publication->pubviaroot ?
+ PUBLICATION_PART_ROOT :
+ PUBLICATION_PART_LEAF);
+ sequences = list_concat_unique_oid(relids, schemarelids);
+ }
+
+ funcctx->user_fctx = (void *) sequences;
+
+ MemoryContextSwitchTo(oldcontext);
+ }
+
+ /* stuff done on every call of the function */
+ funcctx = SRF_PERCALL_SETUP();
+ sequences = (List *) funcctx->user_fctx;
+
+ if (funcctx->call_cntr < list_length(sequences))
+ {
+ Oid relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+ SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+ }
+
+ SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index bd48ee7bd25..9ac8e9a2998 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -374,6 +374,16 @@ CREATE VIEW pg_publication_tables AS
pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
WHERE C.oid = GPT.relid;
+CREATE VIEW pg_publication_sequences AS
+ SELECT
+ P.pubname AS pubname,
+ N.nspname AS schemaname,
+ C.relname AS sequencename
+ FROM pg_publication P,
+ LATERAL pg_get_publication_sequences(P.pubname) GPS,
+ pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+ WHERE C.oid = GPS.relid;
+
CREATE VIEW pg_locks AS
SELECT * FROM pg_lock_status() AS L;
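(A quick aside, not part of the patch: with the view above in place, inspecting
the sequences covered by a publication should look roughly like the sketch
below. The publication and sequence names are made up, and the FOR SEQUENCE
spelling relies on the grammar changes later in the patch.)

CREATE SEQUENCE public.s;
CREATE PUBLICATION seq_pub FOR SEQUENCE public.s;
SELECT * FROM pg_publication_sequences WHERE pubname = 'seq_pub';
-- expected to return a single row ('seq_pub', 'public', 's')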
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 1aad2e769cb..92505f4699f 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -16,6 +16,7 @@
#include "access/genam.h"
#include "access/htup_details.h"
+#include "access/relation.h"
#include "access/table.h"
#include "access/xact.h"
#include "catalog/catalog.h"
@@ -67,15 +68,17 @@ typedef struct rf_context
} rf_context;
static List *OpenRelIdList(List *relids);
-static List *OpenTableList(List *tables);
-static void CloseTableList(List *rels);
+static List *OpenRelationList(List *rels, char objectType);
+static void CloseRelationList(List *rels);
static void LockSchemaList(List *schemalist);
-static void PublicationAddTables(Oid pubid, List *rels, bool if_not_exists,
+static void PublicationAddRelations(Oid pubid, List *rels, bool if_not_exists,
AlterPublicationStmt *stmt);
-static void PublicationDropTables(Oid pubid, List *rels, bool missing_ok);
-static void PublicationAddSchemas(Oid pubid, List *schemas, bool if_not_exists,
- AlterPublicationStmt *stmt);
-static void PublicationDropSchemas(Oid pubid, List *schemas, bool missing_ok);
+static void PublicationDropRelations(Oid pubid, List *rels, bool missing_ok);
+static void PublicationAddSchemas(Oid pubid, List *schemas, char objectType,
+ bool if_not_exists, AlterPublicationStmt *stmt);
+static void PublicationDropSchemas(Oid pubid, List *schemas, char objectType,
+ bool missing_ok);
+
static void
parse_publication_options(ParseState *pstate,
@@ -95,6 +98,7 @@ parse_publication_options(ParseState *pstate,
pubactions->pubupdate = true;
pubactions->pubdelete = true;
pubactions->pubtruncate = true;
+ pubactions->pubsequence = true;
*publish_via_partition_root = false;
/* Parse options */
@@ -119,6 +123,7 @@ parse_publication_options(ParseState *pstate,
pubactions->pubupdate = false;
pubactions->pubdelete = false;
pubactions->pubtruncate = false;
+ pubactions->pubsequence = false;
*publish_given = true;
publish = defGetString(defel);
@@ -141,6 +146,8 @@ parse_publication_options(ParseState *pstate,
pubactions->pubdelete = true;
else if (strcmp(publish_opt, "truncate") == 0)
pubactions->pubtruncate = true;
+ else if (strcmp(publish_opt, "sequence") == 0)
+ pubactions->pubsequence = true;
else
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
@@ -167,7 +174,9 @@ parse_publication_options(ParseState *pstate,
*/
static void
ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
- List **rels, List **schemas)
+ List **tables, List **sequences,
+ List **tables_schemas, List **sequences_schemas,
+ List **schemas)
{
ListCell *cell;
PublicationObjSpec *pubobj;
@@ -185,12 +194,23 @@ ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
switch (pubobj->pubobjtype)
{
case PUBLICATIONOBJ_TABLE:
- *rels = lappend(*rels, pubobj->pubtable);
+ *tables = lappend(*tables, pubobj->pubtable);
+ break;
+ case PUBLICATIONOBJ_SEQUENCE:
+ *sequences = lappend(*sequences, pubobj->pubtable);
break;
case PUBLICATIONOBJ_TABLES_IN_SCHEMA:
schemaid = get_namespace_oid(pubobj->name, false);
/* Filter out duplicates if user specifies "sch1, sch1" */
+ *tables_schemas = list_append_unique_oid(*tables_schemas, schemaid);
+ *schemas = list_append_unique_oid(*schemas, schemaid);
+ break;
+ case PUBLICATIONOBJ_SEQUENCES_IN_SCHEMA:
+ schemaid = get_namespace_oid(pubobj->name, false);
+
+ /* Filter out duplicates if user specifies "sch1, sch1" */
+ *sequences_schemas = list_append_unique_oid(*sequences_schemas, schemaid);
*schemas = list_append_unique_oid(*schemas, schemaid);
break;
case PUBLICATIONOBJ_TABLES_IN_CUR_SCHEMA:
@@ -204,6 +224,21 @@ ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
list_free(search_path);
/* Filter out duplicates if user specifies "sch1, sch1" */
+ *tables_schemas = list_append_unique_oid(*tables_schemas, schemaid);
+ *schemas = list_append_unique_oid(*schemas, schemaid);
+ break;
+ case PUBLICATIONOBJ_SEQUENCES_IN_CUR_SCHEMA:
+ search_path = fetch_search_path(false);
+ if (search_path == NIL) /* nothing valid in search_path? */
+ ereport(ERROR,
+ errcode(ERRCODE_UNDEFINED_SCHEMA),
+ errmsg("no schema has been selected for CURRENT_SCHEMA"));
+
+ schemaid = linitial_oid(search_path);
+ list_free(search_path);
+
+ /* Filter out duplicates if user specifies "sch1, sch1" */
+ *sequences_schemas = list_append_unique_oid(*sequences_schemas, schemaid);
*schemas = list_append_unique_oid(*schemas, schemaid);
break;
default:
@@ -240,6 +275,14 @@ CheckObjSchemaNotAlreadyInPublication(List *rels, List *schemaidlist,
errdetail("Table \"%s\" in schema \"%s\" is already part of the publication, adding the same schema is not supported.",
RelationGetRelationName(rel),
get_namespace_name(relSchemaId)));
+ else if (checkobjtype == PUBLICATIONOBJ_SEQUENCES_IN_SCHEMA)
+ ereport(ERROR,
+ errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ errmsg("cannot add schema \"%s\" to publication",
+ get_namespace_name(relSchemaId)),
+ errdetail("SEquence \"%s\" in schema \"%s\" is already part of the publication, adding the same schema is not supported.",
+ RelationGetRelationName(rel),
+ get_namespace_name(relSchemaId)));
else if (checkobjtype == PUBLICATIONOBJ_TABLE)
ereport(ERROR,
errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -248,6 +291,14 @@ CheckObjSchemaNotAlreadyInPublication(List *rels, List *schemaidlist,
RelationGetRelationName(rel)),
errdetail("Table's schema \"%s\" is already part of the publication or part of the specified schema list.",
get_namespace_name(relSchemaId)));
+ else if (checkobjtype == PUBLICATIONOBJ_SEQUENCE)
+ ereport(ERROR,
+ errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ errmsg("cannot add relation \"%s.%s\" to publication",
+ get_namespace_name(relSchemaId),
+ RelationGetRelationName(rel)),
+ errdetail("Sequence's schema \"%s\" is already part of the publication or part of the specified schema list.",
+ get_namespace_name(relSchemaId)));
}
}
}
@@ -615,6 +666,7 @@ TransformPubWhereClauses(List *tables, const char *queryString,
ObjectAddress
CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
{
+ ListCell *lc;
Relation rel;
ObjectAddress myself;
Oid puboid;
@@ -626,9 +678,31 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
bool publish_via_partition_root_given;
bool publish_via_partition_root;
AclResult aclresult;
- List *relations = NIL;
+ List *tables = NIL;
+ List *sequences = NIL;
+ List *tables_schemaidlist = NIL;
+ List *sequences_schemaidlist = NIL;
List *schemaidlist = NIL;
+ bool for_all_tables = false;
+ bool for_all_sequences = false;
+
+ /*
+ * XXX Translate the list of object types (represented by strings) to bool
+ * flags. Not sure this is the best way, maybe it should be done in the
+ * grammar directly, somehow.
+ *
+ * XXX Should prevent duplicates, like FOR ALL TABLES, TABLES, TABLES?
+ */
+ foreach (lc, stmt->for_all_objects)
+ {
+ char *val = strVal(lfirst(lc));
+ if (strcmp(val, "tables") == 0)
+ for_all_tables = true;
+ else if (strcmp(val, "sequences") == 0)
+ for_all_sequences = true;
+ }
+
/* must have CREATE privilege on database */
aclresult = pg_database_aclcheck(MyDatabaseId, GetUserId(), ACL_CREATE);
if (aclresult != ACLCHECK_OK)
@@ -636,7 +710,7 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
get_database_name(MyDatabaseId));
/* FOR ALL TABLES requires superuser */
- if (stmt->for_all_tables && !superuser())
+ if (for_all_tables && !superuser())
ereport(ERROR,
(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
errmsg("must be superuser to create FOR ALL TABLES publication")));
@@ -672,7 +746,9 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
Anum_pg_publication_oid);
values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
values[Anum_pg_publication_puballtables - 1] =
- BoolGetDatum(stmt->for_all_tables);
+ BoolGetDatum(for_all_tables);
+ values[Anum_pg_publication_puballsequences - 1] =
+ BoolGetDatum(for_all_sequences);
values[Anum_pg_publication_pubinsert - 1] =
BoolGetDatum(pubactions.pubinsert);
values[Anum_pg_publication_pubupdate - 1] =
@@ -681,6 +757,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
BoolGetDatum(pubactions.pubdelete);
values[Anum_pg_publication_pubtruncate - 1] =
BoolGetDatum(pubactions.pubtruncate);
+ values[Anum_pg_publication_pubsequence - 1] =
+ BoolGetDatum(pubactions.pubsequence);
values[Anum_pg_publication_pubviaroot - 1] =
BoolGetDatum(publish_via_partition_root);
@@ -698,45 +776,84 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
CommandCounterIncrement();
/* Associate objects with the publication. */
- if (stmt->for_all_tables)
+ if (for_all_tables || for_all_sequences)
{
/* Invalidate relcache so that publication info is rebuilt. */
CacheInvalidateRelcacheAll();
}
- else
+
+ if (!for_all_tables || !for_all_sequences)
{
- ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
+ ObjectsInPublicationToOids(stmt->pubobjects, pstate,
+ &tables, &sequences,
+ &tables_schemaidlist,
+ &sequences_schemaidlist,
&schemaidlist);
/* FOR ALL TABLES IN SCHEMA requires superuser */
- if (list_length(schemaidlist) > 0 && !superuser())
+ if (list_length(tables_schemaidlist) > 0 && !superuser())
ereport(ERROR,
errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
errmsg("must be superuser to create FOR ALL TABLES IN SCHEMA publication"));
- if (list_length(relations) > 0)
+ /* FOR ALL SEQUENCES IN SCHEMA requires superuser */
+ if (list_length(sequences_schemaidlist) > 0 && !superuser())
+ ereport(ERROR,
+ errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+ errmsg("must be superuser to create FOR ALL SEQUENCES IN SCHEMA publication"));
+
+ /* tables added directly */
+ if (list_length(tables) > 0)
{
List *rels;
- rels = OpenTableList(relations);
- CheckObjSchemaNotAlreadyInPublication(rels, schemaidlist,
+ rels = OpenRelationList(tables, PUB_OBJTYPE_TABLE);
+ CheckObjSchemaNotAlreadyInPublication(rels, tables_schemaidlist,
PUBLICATIONOBJ_TABLE);
TransformPubWhereClauses(rels, pstate->p_sourcetext,
publish_via_partition_root);
- PublicationAddTables(puboid, rels, true, NULL);
- CloseTableList(rels);
+ PublicationAddRelations(puboid, rels, true, NULL);
+ CloseRelationList(rels);
}
- if (list_length(schemaidlist) > 0)
+ /* sequences added directly */
+ if (list_length(sequences) > 0)
+ {
+ List *rels;
+
+ rels = OpenRelationList(sequences, PUB_OBJTYPE_SEQUENCE);
+ CheckObjSchemaNotAlreadyInPublication(rels, sequences_schemaidlist,
+ PUBLICATIONOBJ_SEQUENCE);
+ PublicationAddRelations(puboid, rels, true, NULL);
+ CloseRelationList(rels);
+ }
+
+ /* tables added through a schema */
+ if (list_length(tables_schemaidlist) > 0)
{
/*
* Schema lock is held until the publication is created to prevent
* concurrent schema deletion.
*/
- LockSchemaList(schemaidlist);
- PublicationAddSchemas(puboid, schemaidlist, true, NULL);
+ LockSchemaList(tables_schemaidlist);
+ PublicationAddSchemas(puboid,
+ tables_schemaidlist, PUB_OBJTYPE_TABLE,
+ true, NULL);
+ }
+
+ /* sequences added through a schema */
+ if (list_length(sequences_schemaidlist) > 0)
+ {
+ /*
+ * Schema lock is held until the publication is created to prevent
+ * concurrent schema deletion.
+ */
+ LockSchemaList(sequences_schemaidlist);
+ PublicationAddSchemas(puboid,
+ sequences_schemaidlist, PUB_OBJTYPE_SEQUENCE,
+ true, NULL);
}
}
@@ -799,6 +916,7 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
AccessShareLock);
root_relids = GetPublicationRelations(pubform->oid,
+ PUB_OBJTYPE_TABLE,
PUBLICATION_PART_ROOT);
foreach(lc, root_relids)
@@ -857,6 +975,9 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
values[Anum_pg_publication_pubtruncate - 1] = BoolGetDatum(pubactions.pubtruncate);
replaces[Anum_pg_publication_pubtruncate - 1] = true;
+
+ values[Anum_pg_publication_pubsequence - 1] = BoolGetDatum(pubactions.pubsequence);
+ replaces[Anum_pg_publication_pubsequence - 1] = true;
}
if (publish_via_partition_root_given)
@@ -876,7 +997,7 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
pubform = (Form_pg_publication) GETSTRUCT(tup);
/* Invalidate the relcache. */
- if (pubform->puballtables)
+ if (pubform->puballtables || pubform->puballsequences)
{
CacheInvalidateRelcacheAll();
}
@@ -892,6 +1013,7 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
*/
if (root_relids == NIL)
relids = GetPublicationRelations(pubform->oid,
+ PUB_OBJTYPE_TABLE,
PUBLICATION_PART_ALL);
else
{
@@ -905,7 +1027,20 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
lfirst_oid(lc));
}
+ /* tables */
+ schemarelids = GetAllSchemaPublicationRelations(pubform->oid,
+ PUB_OBJTYPE_TABLE,
+ PUBLICATION_PART_ALL);
+ relids = list_concat_unique_oid(relids, schemarelids);
+
+ /* sequences */
+ relids = list_concat_unique_oid(relids,
+ GetPublicationRelations(pubform->oid,
+ PUB_OBJTYPE_SEQUENCE,
+ PUBLICATION_PART_ALL));
+
schemarelids = GetAllSchemaPublicationRelations(pubform->oid,
+ PUB_OBJTYPE_SEQUENCE,
PUBLICATION_PART_ALL);
relids = list_concat_unique_oid(relids, schemarelids);
@@ -960,7 +1095,7 @@ AlterPublicationTables(AlterPublicationStmt *stmt, HeapTuple tup,
if (!tables && stmt->action != AP_SetObjects)
return;
- rels = OpenTableList(tables);
+ rels = OpenRelationList(tables, PUB_OBJTYPE_TABLE);
if (stmt->action == AP_AddObjects)
{
@@ -970,19 +1105,22 @@ AlterPublicationTables(AlterPublicationStmt *stmt, HeapTuple tup,
* Check if the relation is member of the existing schema in the
* publication or member of the schema list specified.
*/
- schemas = list_concat_copy(schemaidlist, GetPublicationSchemas(pubid));
+ schemas = list_concat_copy(schemaidlist,
+ GetPublicationSchemas(pubid,
+ PUB_OBJTYPE_TABLE));
CheckObjSchemaNotAlreadyInPublication(rels, schemas,
PUBLICATIONOBJ_TABLE);
TransformPubWhereClauses(rels, queryString, pubform->pubviaroot);
- PublicationAddTables(pubid, rels, false, stmt);
+ PublicationAddRelations(pubid, rels, false, stmt);
}
else if (stmt->action == AP_DropObjects)
- PublicationDropTables(pubid, rels, false);
+ PublicationDropRelations(pubid, rels, false);
else /* AP_SetObjects */
{
List *oldrelids = GetPublicationRelations(pubid,
+ PUB_OBJTYPE_TABLE,
PUBLICATION_PART_ROOT);
List *delrels = NIL;
ListCell *oldlc;
@@ -1064,18 +1202,18 @@ AlterPublicationTables(AlterPublicationStmt *stmt, HeapTuple tup,
}
/* And drop them. */
- PublicationDropTables(pubid, delrels, true);
+ PublicationDropRelations(pubid, delrels, true);
/*
* Don't bother calculating the difference for adding, we'll catch and
* skip existing ones when doing catalog update.
*/
- PublicationAddTables(pubid, rels, true, stmt);
+ PublicationAddRelations(pubid, rels, true, stmt);
- CloseTableList(delrels);
+ CloseRelationList(delrels);
}
- CloseTableList(rels);
+ CloseRelationList(rels);
}
/*
@@ -1085,7 +1223,8 @@ AlterPublicationTables(AlterPublicationStmt *stmt, HeapTuple tup,
*/
static void
AlterPublicationSchemas(AlterPublicationStmt *stmt,
- HeapTuple tup, List *schemaidlist)
+ HeapTuple tup, List *schemaidlist,
+ char objectType)
{
Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
@@ -1107,20 +1246,20 @@ AlterPublicationSchemas(AlterPublicationStmt *stmt,
List *rels;
List *reloids;
- reloids = GetPublicationRelations(pubform->oid, PUBLICATION_PART_ROOT);
+ reloids = GetPublicationRelations(pubform->oid, objectType, PUBLICATION_PART_ROOT);
rels = OpenRelIdList(reloids);
CheckObjSchemaNotAlreadyInPublication(rels, schemaidlist,
PUBLICATIONOBJ_TABLES_IN_SCHEMA);
- CloseTableList(rels);
- PublicationAddSchemas(pubform->oid, schemaidlist, false, stmt);
+ CloseRelationList(rels);
+ PublicationAddSchemas(pubform->oid, schemaidlist, objectType, false, stmt);
}
else if (stmt->action == AP_DropObjects)
- PublicationDropSchemas(pubform->oid, schemaidlist, false);
+ PublicationDropSchemas(pubform->oid, schemaidlist, objectType, false);
else /* AP_SetObjects */
{
- List *oldschemaids = GetPublicationSchemas(pubform->oid);
+ List *oldschemaids = GetPublicationSchemas(pubform->oid, objectType);
List *delschemas = NIL;
/* Identify which schemas should be dropped */
@@ -1133,13 +1272,13 @@ AlterPublicationSchemas(AlterPublicationStmt *stmt,
LockSchemaList(delschemas);
/* And drop them */
- PublicationDropSchemas(pubform->oid, delschemas, true);
+ PublicationDropSchemas(pubform->oid, delschemas, objectType, true);
/*
* Don't bother calculating the difference for adding, we'll catch and
* skip existing ones when doing catalog update.
*/
- PublicationAddSchemas(pubform->oid, schemaidlist, true, stmt);
+ PublicationAddSchemas(pubform->oid, schemaidlist, objectType, true, stmt);
}
}
@@ -1149,12 +1288,13 @@ AlterPublicationSchemas(AlterPublicationStmt *stmt,
*/
static void
CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
- List *tables, List *schemaidlist)
+ List *tables, List *tables_schemaidlist,
+ List *sequences, List *sequences_schemaidlist)
{
Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
if ((stmt->action == AP_AddObjects || stmt->action == AP_SetObjects) &&
- schemaidlist && !superuser())
+ (tables_schemaidlist || sequences_schemaidlist) && !superuser())
ereport(ERROR,
(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
errmsg("must be superuser to add or set schemas")));
@@ -1163,13 +1303,24 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
* Check that user is allowed to manipulate the publication tables in
* schema
*/
- if (schemaidlist && pubform->puballtables)
+ if (tables_schemaidlist && pubform->puballtables)
ereport(ERROR,
(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
errmsg("publication \"%s\" is defined as FOR ALL TABLES",
NameStr(pubform->pubname)),
errdetail("Tables from schema cannot be added to, dropped from, or set on FOR ALL TABLES publications.")));
+ /*
+ * Check that user is allowed to manipulate the publication sequences in
+ * schema
+ */
+ if (sequences_schemaidlist && pubform->puballsequences)
+ ereport(ERROR,
+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("publication \"%s\" is defined as FOR ALL SEQUENCES",
+ NameStr(pubform->pubname)),
+ errdetail("Sequences from schema cannot be added to, dropped from, or set on FOR ALL SEQUENCES publications.")));
+
/* Check that user is allowed to manipulate the publication tables. */
if (tables && pubform->puballtables)
ereport(ERROR,
@@ -1177,6 +1328,107 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
errmsg("publication \"%s\" is defined as FOR ALL TABLES",
NameStr(pubform->pubname)),
errdetail("Tables cannot be added to or dropped from FOR ALL TABLES publications.")));
+
+ /* Check that user is allowed to manipulate the publication sequences. */
+ if (sequences && pubform->puballsequences)
+ ereport(ERROR,
+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("publication \"%s\" is defined as FOR ALL SEQUENCES",
+ NameStr(pubform->pubname)),
+ errdetail("Sequences cannot be added to or dropped from FOR ALL SEQUENCES publications.")));
+}
+
+/*
+ * Add or remove sequences to/from a publication.
+ */
+static void
+AlterPublicationSequences(AlterPublicationStmt *stmt, HeapTuple tup,
+ List *sequences, List *schemaidlist)
+{
+ List *rels = NIL;
+ Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
+ Oid pubid = pubform->oid;
+
+ /*
+ * It is quite possible that for the SET case the user has not specified
+ * any sequences, in which case we need to remove all the existing ones.
+ */
+ if (!sequences && stmt->action != AP_SetObjects)
+ return;
+
+ rels = OpenRelationList(sequences, PUB_OBJTYPE_SEQUENCE);
+
+ if (stmt->action == AP_AddObjects)
+ {
+ List *schemas = NIL;
+
+ /*
+ * Check if the relation is member of the existing schema in the
+ * publication or member of the schema list specified.
+ */
+ schemas = list_concat_copy(schemaidlist,
+ GetPublicationSchemas(pubid,
+ PUB_OBJTYPE_SEQUENCE));
+ CheckObjSchemaNotAlreadyInPublication(rels, schemas,
+ PUBLICATIONOBJ_SEQUENCE);
+ PublicationAddRelations(pubid, rels, false, stmt);
+ }
+ else if (stmt->action == AP_DropObjects)
+ PublicationDropRelations(pubid, rels, false);
+ else /* DEFELEM_SET */
+ {
+ List *oldrelids = GetPublicationRelations(pubid,
+ PUB_OBJTYPE_SEQUENCE,
+ PUBLICATION_PART_ROOT);
+ List *delrels = NIL;
+ ListCell *oldlc;
+
+ CheckObjSchemaNotAlreadyInPublication(rels, schemaidlist,
+ PUBLICATIONOBJ_SEQUENCE);
+
+ /* Calculate which relations to drop. */
+ foreach(oldlc, oldrelids)
+ {
+ Oid oldrelid = lfirst_oid(oldlc);
+ ListCell *newlc;
+ PublicationRelInfo *oldrel;
+ bool found = false;
+
+ foreach(newlc, rels)
+ {
+ PublicationRelInfo *newpubrel;
+
+ newpubrel = (PublicationRelInfo *) lfirst(newlc);
+ if (RelationGetRelid(newpubrel->relation) == oldrelid)
+ {
+ found = true;
+ break;
+ }
+ }
+ /* Not yet in the list, open it and add to the list */
+ if (!found)
+ {
+ oldrel = palloc(sizeof(PublicationRelInfo));
+ oldrel->whereClause = NULL;
+ oldrel->relation = table_open(oldrelid,
+ ShareUpdateExclusiveLock);
+ delrels = lappend(delrels, oldrel);
+ }
+ }
+
+ /* And drop them. */
+ PublicationDropRelations(pubid, delrels, true);
+
+ /*
+ * Don't bother calculating the difference for adding, we'll catch and
+ * skip existing ones when doing catalog update.
+ */
+ PublicationAddRelations(pubid, rels, true, stmt);
+
+ CloseRelationList(delrels);
+ }
+
+ CloseRelationList(rels);
}
/*
@@ -1214,14 +1466,22 @@ AlterPublication(ParseState *pstate, AlterPublicationStmt *stmt)
AlterPublicationOptions(pstate, stmt, rel, tup);
else
{
- List *relations = NIL;
+ List *tables = NIL;
+ List *sequences = NIL;
+ List *tables_schemaidlist = NIL;
+ List *sequences_schemaidlist = NIL;
List *schemaidlist = NIL;
Oid pubid = pubform->oid;
- ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
+ ObjectsInPublicationToOids(stmt->pubobjects, pstate,
+ &tables, &sequences,
+ &tables_schemaidlist,
+ &sequences_schemaidlist,
&schemaidlist);
- CheckAlterPublication(stmt, tup, relations, schemaidlist);
+ CheckAlterPublication(stmt, tup,
+ tables, tables_schemaidlist,
+ sequences, sequences_schemaidlist);
heap_freetuple(tup);
@@ -1249,9 +1509,16 @@ AlterPublication(ParseState *pstate, AlterPublicationStmt *stmt)
errmsg("publication \"%s\" does not exist",
stmt->pubname));
- AlterPublicationTables(stmt, tup, relations, schemaidlist,
+ AlterPublicationTables(stmt, tup, tables, tables_schemaidlist,
pstate->p_sourcetext);
- AlterPublicationSchemas(stmt, tup, schemaidlist);
+
+ AlterPublicationSequences(stmt, tup, sequences, sequences_schemaidlist);
+
+ AlterPublicationSchemas(stmt, tup, tables_schemaidlist,
+ PUB_OBJTYPE_TABLE);
+
+ AlterPublicationSchemas(stmt, tup, sequences_schemaidlist,
+ PUB_OBJTYPE_SEQUENCE);
}
/* Cleanup. */
@@ -1319,7 +1586,7 @@ RemovePublicationById(Oid pubid)
pubform = (Form_pg_publication) GETSTRUCT(tup);
/* Invalidate relcache so that publication info is rebuilt. */
- if (pubform->puballtables)
+ if (pubform->puballtables || pubform->puballsequences)
CacheInvalidateRelcacheAll();
CatalogTupleDelete(rel, &tup->t_self);
@@ -1355,6 +1622,7 @@ RemovePublicationSchemaById(Oid psoid)
* partitions.
*/
schemaRels = GetSchemaPublicationRelations(pubsch->pnnspid,
+ pubsch->pntype,
PUBLICATION_PART_ALL);
InvalidatePublicationRels(schemaRels);
@@ -1397,29 +1665,45 @@ OpenRelIdList(List *relids)
* add them to a publication.
*/
static List *
-OpenTableList(List *tables)
+OpenRelationList(List *rels, char objectType)
{
List *relids = NIL;
- List *rels = NIL;
+ List *result = NIL;
ListCell *lc;
List *relids_with_rf = NIL;
/*
* Open, share-lock, and check all the explicitly-specified relations
*/
- foreach(lc, tables)
+ foreach(lc, rels)
{
PublicationTable *t = lfirst_node(PublicationTable, lc);
bool recurse = t->relation->inh;
Relation rel;
Oid myrelid;
PublicationRelInfo *pub_rel;
+ char myrelkind;
/* Allow query cancel in case this takes a long time */
CHECK_FOR_INTERRUPTS();
rel = table_openrv(t->relation, ShareUpdateExclusiveLock);
myrelid = RelationGetRelid(rel);
+ myrelkind = get_rel_relkind(myrelid);
+
+ /*
+ * Make sure the relkind matches the expected object type. A mismatch
+ * happens e.g. when adding a sequence using ADD TABLE, or a table using
+ * ADD SEQUENCE.
+ *
+ * XXX We let through unsupported object types (views etc.). Those are
+ * caught later in check_publication_add_relation. A bit weird.
+ *
+ * FIXME Use a better error message.
+ */
+ if (pub_get_object_type_for_relkind(myrelkind) != PUB_OBJTYPE_UNSUPPORTED &&
+ pub_get_object_type_for_relkind(myrelkind) != objectType)
+ elog(ERROR, "object type mismatch");
/*
* Filter out duplicates if user specifies "foo, foo".
@@ -1444,7 +1728,7 @@ OpenTableList(List *tables)
pub_rel = palloc(sizeof(PublicationRelInfo));
pub_rel->relation = rel;
pub_rel->whereClause = t->whereClause;
- rels = lappend(rels, pub_rel);
+ result = lappend(result, pub_rel);
relids = lappend_oid(relids, myrelid);
if (t->whereClause)
@@ -1498,7 +1782,7 @@ OpenTableList(List *tables)
pub_rel->relation = rel;
/* child inherits WHERE clause from parent */
pub_rel->whereClause = t->whereClause;
- rels = lappend(rels, pub_rel);
+ result = lappend(result, pub_rel);
relids = lappend_oid(relids, childrelid);
if (t->whereClause)
@@ -1510,14 +1794,14 @@ OpenTableList(List *tables)
list_free(relids);
list_free(relids_with_rf);
- return rels;
+ return result;
}
/*
* Close all relations in the list.
*/
static void
-CloseTableList(List *rels)
+CloseRelationList(List *rels)
{
ListCell *lc;
@@ -1565,12 +1849,12 @@ LockSchemaList(List *schemalist)
* Add listed tables to the publication.
*/
static void
-PublicationAddTables(Oid pubid, List *rels, bool if_not_exists,
+PublicationAddRelations(Oid pubid, List *rels, bool if_not_exists,
AlterPublicationStmt *stmt)
{
ListCell *lc;
- Assert(!stmt || !stmt->for_all_tables);
+ Assert(!stmt || !stmt->for_all_objects);
foreach(lc, rels)
{
@@ -1599,7 +1883,7 @@ PublicationAddTables(Oid pubid, List *rels, bool if_not_exists,
* Remove listed tables from the publication.
*/
static void
-PublicationDropTables(Oid pubid, List *rels, bool missing_ok)
+PublicationDropRelations(Oid pubid, List *rels, bool missing_ok)
{
ObjectAddress obj;
ListCell *lc;
@@ -1639,19 +1923,19 @@ PublicationDropTables(Oid pubid, List *rels, bool missing_ok)
* Add listed schemas to the publication.
*/
static void
-PublicationAddSchemas(Oid pubid, List *schemas, bool if_not_exists,
- AlterPublicationStmt *stmt)
+PublicationAddSchemas(Oid pubid, List *schemas, char objectType,
+ bool if_not_exists, AlterPublicationStmt *stmt)
{
ListCell *lc;
- Assert(!stmt || !stmt->for_all_tables);
+ Assert(!stmt || !stmt->for_all_objects);
foreach(lc, schemas)
{
Oid schemaid = lfirst_oid(lc);
ObjectAddress obj;
- obj = publication_add_schema(pubid, schemaid, if_not_exists);
+ obj = publication_add_schema(pubid, schemaid, objectType, if_not_exists);
if (stmt)
{
EventTriggerCollectSimpleCommand(obj, InvalidObjectAddress,
@@ -1667,7 +1951,7 @@ PublicationAddSchemas(Oid pubid, List *schemas, bool if_not_exists,
* Remove listed schemas from the publication.
*/
static void
-PublicationDropSchemas(Oid pubid, List *schemas, bool missing_ok)
+PublicationDropSchemas(Oid pubid, List *schemas, char objectType, bool missing_ok)
{
ObjectAddress obj;
ListCell *lc;
@@ -1677,10 +1961,11 @@ PublicationDropSchemas(Oid pubid, List *schemas, bool missing_ok)
{
Oid schemaid = lfirst_oid(lc);
- psid = GetSysCacheOid2(PUBLICATIONNAMESPACEMAP,
+ psid = GetSysCacheOid3(PUBLICATIONNAMESPACEMAP,
Anum_pg_publication_namespace_oid,
ObjectIdGetDatum(schemaid),
- ObjectIdGetDatum(pubid));
+ ObjectIdGetDatum(pubid),
+ CharGetDatum(objectType));
if (!OidIsValid(psid))
{
if (missing_ok)
@@ -1735,6 +2020,13 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
NameStr(form->pubname)),
errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
+ if (form->puballsequences && !superuser_arg(newOwnerId))
+ ereport(ERROR,
+ (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+ errmsg("permission denied to change owner of publication \"%s\"",
+ NameStr(form->pubname)),
+ errhint("The owner of a FOR ALL SEQUENCES publication must be a superuser.")));
+
if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
ereport(ERROR,
(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
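(Another aside, not part of the patch: these are roughly the statements the
reworked command code is expected to handle. The exact spelling depends on the
grammar changes further down, and the object names are invented.)

CREATE PUBLICATION seq_pub2 FOR ALL SEQUENCES;
CREATE PUBLICATION seq_pub3 FOR ALL SEQUENCES IN SCHEMA myschema;
ALTER PUBLICATION seq_pub3 ADD SEQUENCE public.other_seq;
CREATE PUBLICATION seq_pub4
    WITH (publish = 'insert, update, delete, truncate, sequence');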
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index c13cada3bf1..717bb0b2aa9 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -336,6 +336,160 @@ ResetSequence(Oid seq_relid)
relation_close(seq_rel, NoLock);
}
+/*
+ * Update the sequence state by modifying the existing sequence data row.
+ *
+ * This keeps the same relfilenode, so the behavior is non-transactional.
+ */
+static void
+SetSequence_non_transactional(Oid seqrelid, int64 last_value, int64 log_cnt, bool is_called)
+{
+ SeqTable elm;
+ Relation seqrel;
+ Buffer buf;
+ HeapTupleData seqdatatuple;
+ Form_pg_sequence_data seq;
+
+ /* open and lock sequence */
+ init_sequence(seqrelid, &elm, &seqrel);
+
+ /* lock page's buffer and read tuple */
+ seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+
+ /* check the comment above nextval_internal()'s equivalent call. */
+ if (RelationNeedsWAL(seqrel))
+ {
+ GetTopTransactionId();
+
+ if (XLogLogicalInfoActive())
+ GetCurrentTransactionId();
+ }
+
+ /* ready to change the on-disk (or really, in-buffer) tuple */
+ START_CRIT_SECTION();
+
+ seq->last_value = last_value;
+ seq->is_called = is_called;
+ seq->log_cnt = log_cnt;
+
+ MarkBufferDirty(buf);
+
+ /* XLOG stuff */
+ if (RelationNeedsWAL(seqrel))
+ {
+ xl_seq_rec xlrec;
+ XLogRecPtr recptr;
+ Page page = BufferGetPage(buf);
+
+ XLogBeginInsert();
+ XLogRegisterBuffer(0, buf, REGBUF_WILL_INIT);
+
+ xlrec.node = seqrel->rd_node;
+ xlrec.created = false;
+
+ XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
+ XLogRegisterData((char *) seqdatatuple.t_data, seqdatatuple.t_len);
+
+ recptr = XLogInsert(RM_SEQ_ID, XLOG_SEQ_LOG);
+
+ PageSetLSN(page, recptr);
+ }
+
+ END_CRIT_SECTION();
+
+ UnlockReleaseBuffer(buf);
+
+ /* Clear local cache so that we don't think we have cached numbers */
+ /* Note that we do not change the currval() state */
+ elm->cached = elm->last;
+
+ relation_close(seqrel, NoLock);
+}
+
+/*
+ * Update the sequence state by creating a new relfilenode.
+ *
+ * This creates a new relfilenode, to allow transactional behavior.
+ */
+static void
+SetSequence_transactional(Oid seq_relid, int64 last_value, int64 log_cnt, bool is_called)
+{
+ SeqTable elm;
+ Relation seqrel;
+ Buffer buf;
+ HeapTupleData seqdatatuple;
+ Form_pg_sequence_data seq;
+ HeapTuple tuple;
+
+ /* open and lock sequence */
+ init_sequence(seq_relid, &elm, &seqrel);
+
+ /* lock page's buffer and read tuple */
+ seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+
+ /* Copy the existing sequence tuple. */
+ tuple = heap_copytuple(&seqdatatuple);
+
+ /* Now we're done with the old page */
+ UnlockReleaseBuffer(buf);
+
+ /*
+ * Modify the copied tuple to update the sequence state (similar to what
+ * ResetSequence does).
+ */
+ seq = (Form_pg_sequence_data) GETSTRUCT(tuple);
+ seq->last_value = last_value;
+ seq->is_called = is_called;
+ seq->log_cnt = log_cnt;
+
+ /*
+ * Create a new storage file for the sequence - this is needed for the
+ * transactional behavior.
+ */
+ RelationSetNewRelfilenode(seqrel, seqrel->rd_rel->relpersistence);
+
+ /*
+ * Ensure sequence's relfrozenxid is at 0, since it won't contain any
+ * unfrozen XIDs. Same with relminmxid, since a sequence will never
+ * contain multixacts.
+ */
+ Assert(seqrel->rd_rel->relfrozenxid == InvalidTransactionId);
+ Assert(seqrel->rd_rel->relminmxid == InvalidMultiXactId);
+
+ /*
+ * Insert the modified tuple into the new storage file. This does all the
+ * necessary WAL-logging etc.
+ */
+ fill_seq_with_data(seqrel, tuple);
+
+ /* Clear local cache so that we don't think we have cached numbers */
+ /* Note that we do not change the currval() state */
+ elm->cached = elm->last;
+
+ relation_close(seqrel, NoLock);
+}
+
+/*
+ * Set a sequence to a specified internal state.
+ *
+ * With 'transactional' set, the change is made by creating a whole new
+ * relfilenode for the sequence (much like the rewriting forms of ALTER TABLE),
+ * so that on failure of the current transaction the sequence is restored to
+ * its previous state. Otherwise the existing tuple is updated in place.
+ *
+ * Caller is assumed to have acquired AccessExclusiveLock on the sequence,
+ * which must not be released until end of transaction. Caller is also
+ * responsible for permissions checking.
+ */
+void
+SetSequence(Oid seq_relid, bool transactional, int64 last_value, int64 log_cnt, bool is_called)
+{
+ if (transactional)
+ SetSequence_transactional(seq_relid, last_value, log_cnt, is_called);
+ else
+ SetSequence_non_transactional(seq_relid, last_value, log_cnt, is_called);
+}
+
/*
* Initialize a sequence's relation with the specified tuple as content
*/
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index e16f04626de..14ee492a9ba 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -90,6 +90,7 @@ typedef struct SubOpts
} SubOpts;
static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
static void check_duplicates_in_publist(List *publist, Datum *datums);
static List *merge_publications(List *oldpublist, List *newpublist, bool addpub, const char *subname);
static void ReportSlotConnectionError(List *rstates, Oid subid, char *slotname, char *err);
@@ -541,9 +542,9 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
{
char *err;
WalReceiverConn *wrconn;
- List *tables;
+ List *relations;
ListCell *lc;
- char table_state;
+ char sync_state;
/* Try to connect to the publisher. */
wrconn = walrcv_connect(conninfo, true, stmt->subname, &err);
@@ -558,14 +559,17 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
* Set sync state based on if we were asked to do data copy or
* not.
*/
- table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
+ sync_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
/*
- * Get the table list from publisher and build local table status
- * info.
+ * Get the table and sequence list from publisher and build
+ * local relation sync status info.
*/
- tables = fetch_table_list(wrconn, publications);
- foreach(lc, tables)
+ relations = fetch_table_list(wrconn, publications);
+ relations = list_concat(relations,
+ fetch_sequence_list(wrconn, publications));
+
+ foreach(lc, relations)
{
RangeVar *rv = (RangeVar *) lfirst(lc);
Oid relid;
@@ -576,7 +580,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
CheckSubscriptionRelkind(get_rel_relkind(relid),
rv->schemaname, rv->relname);
- AddSubscriptionRelState(subid, relid, table_state,
+ AddSubscriptionRelState(subid, relid, sync_state,
InvalidXLogRecPtr);
}
@@ -602,12 +606,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
*
* Note that if tables were specified but copy_data is false
* then it is safe to enable two_phase up-front because those
- * tables are already initially in READY state. When the
- * subscription has no tables, we leave the twophase state as
- * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
+ * relations are already initially in READY state. When the
+ * subscription has no relations, we leave the twophase state
+ * as PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
* PUBLICATION to work.
*/
- if (opts.twophase && !opts.copy_data && tables != NIL)
+ if (opts.twophase && !opts.copy_data && relations != NIL)
twophase_enabled = true;
walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -677,8 +681,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data)
PG_TRY();
{
- /* Get the table list from publisher. */
+ /* Get the list of relations from publisher. */
pubrel_names = fetch_table_list(wrconn, sub->publications);
+ pubrel_names = list_concat(pubrel_names,
+ fetch_sequence_list(wrconn, sub->publications));
/* Get local table list. */
subrel_states = GetSubscriptionRelations(sub->oid);
@@ -1712,6 +1718,75 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
return tablelist;
}
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+ WalRcvExecResult *res;
+ StringInfoData cmd;
+ TupleTableSlot *slot;
+ Oid tableRow[2] = {TEXTOID, TEXTOID};
+ ListCell *lc;
+ bool first;
+ List *tablelist = NIL;
+
+ Assert(list_length(publications) > 0);
+
+ initStringInfo(&cmd);
+ appendStringInfoString(&cmd, "SELECT DISTINCT s.schemaname, s.sequencename\n"
+ " FROM pg_catalog.pg_publication_sequences s\n"
+ " WHERE s.pubname IN (");
+ first = true;
+ foreach(lc, publications)
+ {
+ char *pubname = strVal(lfirst(lc));
+
+ if (first)
+ first = false;
+ else
+ appendStringInfoString(&cmd, ", ");
+
+ appendStringInfoString(&cmd, quote_literal_cstr(pubname));
+ }
+ appendStringInfoChar(&cmd, ')');
+
+ res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+ pfree(cmd.data);
+
+ if (res->status != WALRCV_OK_TUPLES)
+ ereport(ERROR,
+ (errmsg("could not receive list of replicated sequences from the publisher: %s",
+ res->err)));
+
+ /* Process sequences. */
+ slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+ while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+ {
+ char *nspname;
+ char *relname;
+ bool isnull;
+ RangeVar *rv;
+
+ nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+ Assert(!isnull);
+ relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+ Assert(!isnull);
+
+ rv = makeRangeVar(nspname, relname, -1);
+ tablelist = lappend(tablelist, rv);
+
+ ExecClearTuple(slot);
+ }
+ ExecDropSingleTupleTableSlot(slot);
+
+ walrcv_clear_result(res);
+
+ return tablelist;
+}
+
/*
* This is to report the connection failure while dropping replication slots.
* Here, we report the WARNING for all tablesync slots so that user can drop
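(For reference, not part of the patch: given a subscription using publications
p1 and p2, fetch_sequence_list above ends up sending a query of this shape to
the publisher; the publication names are only an example.)

SELECT DISTINCT s.schemaname, s.sequencename
  FROM pg_catalog.pg_publication_sequences s
 WHERE s.pubname IN ('p1', 'p2');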
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index 80faae985e9..38bf572e1f6 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -42,6 +42,7 @@
#include "catalog/pg_inherits.h"
#include "catalog/pg_namespace.h"
#include "catalog/pg_opclass.h"
+#include "catalog/pg_publication_namespace.h"
#include "catalog/pg_statistic_ext.h"
#include "catalog/pg_tablespace.h"
#include "catalog/pg_trigger.h"
@@ -16381,11 +16382,14 @@ AlterTableNamespace(AlterObjectSchemaStmt *stmt, Oid *oldschema)
* Check that setting the relation to a different schema won't result in a
* publication having both a schema and the same schema's table, as this
* is not supported.
+ *
+ * XXX The blocks are almost exactly the same, so maybe combine them into
+ * a single block.
*/
if (stmt->objectType == OBJECT_TABLE)
{
ListCell *lc;
- List *schemaPubids = GetSchemaPublications(nspOid);
+ List *schemaPubids = GetSchemaPublications(nspOid, PUB_OBJTYPE_TABLE);
List *relPubids = GetRelationPublications(RelationGetRelid(rel));
foreach(lc, relPubids)
@@ -16403,6 +16407,27 @@ AlterTableNamespace(AlterObjectSchemaStmt *stmt, Oid *oldschema)
get_publication_name(pubid, false)));
}
}
+ else if (stmt->objectType == OBJECT_SEQUENCE)
+ {
+ ListCell *lc;
+ List *schemaPubids = GetSchemaPublications(nspOid, PUB_OBJTYPE_SEQUENCE);
+ List *relPubids = GetRelationPublications(RelationGetRelid(rel));
+
+ foreach(lc, relPubids)
+ {
+ Oid pubid = lfirst_oid(lc);
+
+ if (list_member_oid(schemaPubids, pubid))
+ ereport(ERROR,
+ errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ errmsg("cannot move sequence \"%s\" to schema \"%s\"",
+ RelationGetRelationName(rel), stmt->newschema),
+ errdetail("The schema \"%s\" and same schema's sequence \"%s\" cannot be part of the same publication \"%s\".",
+ stmt->newschema,
+ RelationGetRelationName(rel),
+ get_publication_name(pubid, false)));
+ }
+ }
/* common checks on switching namespaces */
CheckSetNamespace(oldNspOid, nspOid);
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index 13328141e23..0df7cf58747 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -636,7 +636,9 @@ void
CheckSubscriptionRelkind(char relkind, const char *nspname,
const char *relname)
{
- if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+ if (relkind != RELKIND_RELATION &&
+ relkind != RELKIND_PARTITIONED_TABLE &&
+ relkind != RELKIND_SEQUENCE)
ereport(ERROR,
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index d4f8455a2bd..55f720a88f4 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -4862,7 +4862,7 @@ _copyCreatePublicationStmt(const CreatePublicationStmt *from)
COPY_STRING_FIELD(pubname);
COPY_NODE_FIELD(options);
COPY_NODE_FIELD(pubobjects);
- COPY_SCALAR_FIELD(for_all_tables);
+ COPY_NODE_FIELD(for_all_objects);
return newnode;
}
@@ -4875,7 +4875,7 @@ _copyAlterPublicationStmt(const AlterPublicationStmt *from)
COPY_STRING_FIELD(pubname);
COPY_NODE_FIELD(options);
COPY_NODE_FIELD(pubobjects);
- COPY_SCALAR_FIELD(for_all_tables);
+ COPY_NODE_FIELD(for_all_objects);
COPY_SCALAR_FIELD(action);
return newnode;
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index f1002afe7a0..82562eb9b87 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -2333,7 +2333,7 @@ _equalCreatePublicationStmt(const CreatePublicationStmt *a,
COMPARE_STRING_FIELD(pubname);
COMPARE_NODE_FIELD(options);
COMPARE_NODE_FIELD(pubobjects);
- COMPARE_SCALAR_FIELD(for_all_tables);
+ COMPARE_NODE_FIELD(for_all_objects);
return true;
}
@@ -2345,7 +2345,7 @@ _equalAlterPublicationStmt(const AlterPublicationStmt *a,
COMPARE_STRING_FIELD(pubname);
COMPARE_NODE_FIELD(options);
COMPARE_NODE_FIELD(pubobjects);
- COMPARE_SCALAR_FIELD(for_all_tables);
+ COMPARE_NODE_FIELD(for_all_objects);
COMPARE_SCALAR_FIELD(action);
return true;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 0036c2f9e2d..e327bc735fb 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -446,7 +446,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
transform_element_list transform_type_list
TriggerTransitions TriggerReferencing
vacuum_relation_list opt_vacuum_relation_list
- drop_option_list pub_obj_list
+ drop_option_list pub_obj_list pub_obj_type_list
%type <node> opt_routine_body
%type <groupclause> group_clause
@@ -575,6 +575,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
%type <node> var_value zone_value
%type <rolespec> auth_ident RoleSpec opt_granted_by
%type <publicationobjectspec> PublicationObjSpec
+%type <node> pub_obj_type
%type <keyword> unreserved_keyword type_func_name_keyword
%type <keyword> col_name_keyword reserved_keyword
@@ -9701,12 +9702,9 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
*
* CREATE PUBLICATION FOR ALL TABLES [WITH options]
*
- * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
- *
- * pub_obj is one of:
+ * CREATE PUBLICATION FOR ALL SEQUENCES [WITH options]
*
- * TABLE table [, ...]
- * ALL TABLES IN SCHEMA schema [, ...]
+ * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
*
*****************************************************************************/
@@ -9718,12 +9716,12 @@ CreatePublicationStmt:
n->options = $4;
$$ = (Node *)n;
}
- | CREATE PUBLICATION name FOR ALL TABLES opt_definition
+ | CREATE PUBLICATION name FOR ALL pub_obj_type_list opt_definition
{
CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
n->pubname = $3;
n->options = $7;
- n->for_all_tables = true;
+ n->for_all_objects = $6;
$$ = (Node *)n;
}
| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -9772,6 +9770,26 @@ PublicationObjSpec:
$$->pubobjtype = PUBLICATIONOBJ_TABLES_IN_CUR_SCHEMA;
$$->location = @5;
}
+ | SEQUENCE relation_expr
+ {
+ $$ = makeNode(PublicationObjSpec);
+ $$->pubobjtype = PUBLICATIONOBJ_SEQUENCE;
+ $$->pubtable = makeNode(PublicationTable);
+ $$->pubtable->relation = $2;
+ }
+ | ALL SEQUENCES IN_P SCHEMA ColId
+ {
+ $$ = makeNode(PublicationObjSpec);
+ $$->pubobjtype = PUBLICATIONOBJ_SEQUENCES_IN_SCHEMA;
+ $$->name = $5;
+ $$->location = @5;
+ }
+ | ALL SEQUENCES IN_P SCHEMA CURRENT_SCHEMA
+ {
+ $$ = makeNode(PublicationObjSpec);
+ $$->pubobjtype = PUBLICATIONOBJ_SEQUENCES_IN_CUR_SCHEMA;
+ $$->location = @5;
+ }
| ColId OptWhereClause
{
$$ = makeNode(PublicationObjSpec);
@@ -9826,6 +9844,19 @@ pub_obj_list: PublicationObjSpec
{ $$ = lappend($1, $3); }
;
+pub_obj_type: TABLES
+ { $$ = (Node *) makeString("tables"); }
+ | SEQUENCES
+ { $$ = (Node *) makeString("sequences"); }
+ ;
+
+pub_obj_type_list: pub_obj_type
+ { $$ = list_make1($1); }
+ | pub_obj_type_list ',' pub_obj_type
+ { $$ = lappend($1, $3); }
+ ;
+
+
/*****************************************************************************
*
* ALTER PUBLICATION name SET ( options )
@@ -9836,11 +9867,6 @@ pub_obj_list: PublicationObjSpec
*
* ALTER PUBLICATION name SET pub_obj [, ...]
*
- * pub_obj is one of:
- *
- * TABLE table_name [, ...]
- * ALL TABLES IN SCHEMA schema_name [, ...]
- *
*****************************************************************************/
AlterPublicationStmt:
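
[Note, for easier review: the new grammar above is meant to accept statements
along these lines - a sketch of the intended syntax only, with made-up object
names:

CREATE PUBLICATION seq_pub FOR ALL SEQUENCES;
CREATE PUBLICATION all_pub FOR ALL TABLES, SEQUENCES;
CREATE PUBLICATION mixed_pub FOR TABLE t1, SEQUENCE s1,
    ALL SEQUENCES IN SCHEMA public;
ALTER PUBLICATION mixed_pub ADD SEQUENCE s2;
]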
diff --git a/src/backend/replication/logical/proto.c b/src/backend/replication/logical/proto.c
index c9b0eeefd7e..3dbe85d61a2 100644
--- a/src/backend/replication/logical/proto.c
+++ b/src/backend/replication/logical/proto.c
@@ -648,6 +648,56 @@ logicalrep_write_message(StringInfo out, TransactionId xid, XLogRecPtr lsn,
pq_sendbytes(out, message, sz);
}
+/*
+ * Write SEQUENCE to stream
+ */
+void
+logicalrep_write_sequence(StringInfo out, Relation rel, TransactionId xid,
+ XLogRecPtr lsn, bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ uint8 flags = 0;
+ char *relname;
+
+ pq_sendbyte(out, LOGICAL_REP_MSG_SEQUENCE);
+
+ /* transaction ID (if not valid, we're not streaming) */
+ if (TransactionIdIsValid(xid))
+ pq_sendint32(out, xid);
+
+ pq_sendint8(out, flags);
+ pq_sendint64(out, lsn);
+
+ logicalrep_write_namespace(out, RelationGetNamespace(rel));
+ relname = RelationGetRelationName(rel);
+ pq_sendstring(out, relname);
+
+ pq_sendint8(out, transactional);
+ pq_sendint64(out, last_value);
+ pq_sendint64(out, log_cnt);
+ pq_sendint8(out, is_called);
+}
+
+/*
+ * Read SEQUENCE from the stream.
+ */
+void
+logicalrep_read_sequence(StringInfo in, LogicalRepSequence *seqdata)
+{
+ /* XXX skipping flags and lsn */
+ pq_getmsgint(in, 1);
+ pq_getmsgint64(in);
+
+ /* Read relation name from stream */
+ seqdata->nspname = pstrdup(logicalrep_read_namespace(in));
+ seqdata->seqname = pstrdup(pq_getmsgstring(in));
+
+ seqdata->transactional = pq_getmsgint(in, 1);
+ seqdata->last_value = pq_getmsgint64(in);
+ seqdata->log_cnt = pq_getmsgint64(in);
+ seqdata->is_called = pq_getmsgint(in, 1);
+}
+
/*
* Write relation description to the output stream.
*/
@@ -1203,6 +1253,8 @@ logicalrep_message_type(LogicalRepMsgType action)
return "STREAM ABORT";
case LOGICAL_REP_MSG_STREAM_PREPARE:
return "STREAM PREPARE";
+ case LOGICAL_REP_MSG_SEQUENCE:
+ return "SEQUENCE";
}
elog(ERROR, "invalid logical replication message type \"%c\"", action);
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 1659964571c..36ecc5d74c4 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -100,6 +100,7 @@
#include "catalog/pg_subscription_rel.h"
#include "catalog/pg_type.h"
#include "commands/copy.h"
+#include "commands/sequence.h"
#include "miscadmin.h"
#include "parser/parse_relation.h"
#include "pgstat.h"
@@ -359,6 +360,8 @@ process_syncing_tables_for_sync(XLogRecPtr current_lsn)
*
* If the synchronization position is reached (SYNCDONE), then the table can
* be marked as READY and is no longer tracked.
+ *
+ * XXX Maybe rename this to "_relations_" to cover both tables and sequences.
*/
static void
process_syncing_tables_for_apply(XLogRecPtr current_lsn)
@@ -999,6 +1002,95 @@ copy_table(Relation rel)
logicalrep_rel_close(relmapentry, NoLock);
}
+/*
+ * Fetch sequence data (current state) from the remote node.
+ */
+static void
+fetch_sequence_data(char *nspname, char *relname,
+ int64 *last_value, int64 *log_cnt, bool *is_called)
+{
+ WalRcvExecResult *res;
+ StringInfoData cmd;
+ TupleTableSlot *slot;
+ Oid tableRow[3] = {INT8OID, INT8OID, BOOLOID};
+
+ initStringInfo(&cmd);
+ appendStringInfo(&cmd, "SELECT last_value, log_cnt, is_called\n"
+ " FROM %s", quote_qualified_identifier(nspname, relname));
+
+ res = walrcv_exec(LogRepWorkerWalRcvConn, cmd.data, 3, tableRow);
+ pfree(cmd.data);
+
+ if (res->status != WALRCV_OK_TUPLES)
+ ereport(ERROR,
+ (errmsg("could not receive list of replicated tables from the publisher: %s",
+ res->err)));
+
+ /* Process the sequence. */
+ slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+ while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+ {
+ bool isnull;
+
+ *last_value = DatumGetInt64(slot_getattr(slot, 1, &isnull));
+ Assert(!isnull);
+
+ *log_cnt = DatumGetInt64(slot_getattr(slot, 2, &isnull));
+ Assert(!isnull);
+
+ *is_called = DatumGetBool(slot_getattr(slot, 3, &isnull));
+ Assert(!isnull);
+
+ ExecClearTuple(slot);
+ }
+ ExecDropSingleTupleTableSlot(slot);
+
+ walrcv_clear_result(res);
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Caller is responsible for locking the local relation.
+ */
+static void
+copy_sequence(Relation rel)
+{
+ LogicalRepRelMapEntry *relmapentry;
+ LogicalRepRelation lrel;
+ List *qual = NIL;
+ StringInfoData cmd;
+ int64 last_value = 0,
+ log_cnt = 0;
+ bool is_called = false;
+
+ /* Get the publisher relation info. */
+ fetch_remote_table_info(get_namespace_name(RelationGetNamespace(rel)),
+ RelationGetRelationName(rel), &lrel, &qual);
+
+ /* sequences don't have row filters */
+ Assert(!qual);
+
+ /* Put the relation into relmap. */
+ logicalrep_relmap_update(&lrel);
+
+ /* Map the publisher relation to local one. */
+ relmapentry = logicalrep_rel_open(lrel.remoteid, NoLock);
+ Assert(rel == relmapentry->localrel);
+
+ /* Fetch the current sequence state from the publisher. */
+ initStringInfo(&cmd);
+
+ Assert(lrel.relkind == RELKIND_SEQUENCE);
+
+ fetch_sequence_data(lrel.nspname, lrel.relname, &last_value, &log_cnt, &is_called);
+
+ /* tablesync sets the sequence in a non-transactional way */
+ SetSequence(RelationGetRelid(rel), false, last_value, log_cnt, is_called);
+
+ logicalrep_rel_close(relmapentry, NoLock);
+}
+
/*
* Determine the tablesync slot name.
*
@@ -1260,10 +1352,21 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
originname)));
}
- /* Now do the initial data copy */
- PushActiveSnapshot(GetTransactionSnapshot());
- copy_table(rel);
- PopActiveSnapshot();
+ /* Do the right action depending on the relation kind. */
+ if (get_rel_relkind(RelationGetRelid(rel)) == RELKIND_SEQUENCE)
+ {
+ /* Now do the initial sequence copy */
+ PushActiveSnapshot(GetTransactionSnapshot());
+ copy_sequence(rel);
+ PopActiveSnapshot();
+ }
+ else
+ {
+ /* Now do the initial data copy */
+ PushActiveSnapshot(GetTransactionSnapshot());
+ copy_table(rel);
+ PopActiveSnapshot();
+ }
res = walrcv_exec(LogRepWorkerWalRcvConn, "COMMIT", 0, NULL);
if (res->status != WALRCV_OK_COMMAND)
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 82dcffc2db8..f3868b3e1f8 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -143,6 +143,7 @@
#include "catalog/pg_subscription.h"
#include "catalog/pg_subscription_rel.h"
#include "catalog/pg_tablespace.h"
+#include "commands/sequence.h"
#include "commands/tablecmds.h"
#include "commands/tablespace.h"
#include "commands/trigger.h"
@@ -1143,6 +1144,57 @@ apply_handle_origin(StringInfo s)
errmsg_internal("ORIGIN message sent out of order")));
}
+/*
+ * Handle SEQUENCE message.
+ */
+static void
+apply_handle_sequence(StringInfo s)
+{
+ LogicalRepSequence seq;
+ Oid relid;
+
+ if (handle_streamed_transaction(LOGICAL_REP_MSG_SEQUENCE, s))
+ return;
+
+ logicalrep_read_sequence(s, &seq);
+
+ /*
+ * Transactional sequence updates must be part of a remote transaction;
+ * non-transactional ones must not be, and no local transaction may be open.
+ */
+ Assert((!seq.transactional) || in_remote_transaction);
+ Assert(!(!seq.transactional && in_remote_transaction));
+ Assert(!(!seq.transactional && IsTransactionState()));
+
+ /*
+ * Make sure we're in a transaction (needed by SetSequence). For
+ * non-transactional updates we're guaranteed to start a new one,
+ * and we'll commit it at the end.
+ */
+ if (!IsTransactionState())
+ {
+ StartTransactionCommand();
+ maybe_reread_subscription();
+ }
+
+ relid = RangeVarGetRelid(makeRangeVar(seq.nspname,
+ seq.seqname, -1),
+ RowExclusiveLock, false);
+
+ /* lock the sequence in AccessExclusiveLock, as expected by SetSequence */
+ LockRelationOid(relid, AccessExclusiveLock);
+
+ /* apply the sequence change */
+ SetSequence(relid, seq.transactional, seq.last_value, seq.log_cnt, seq.is_called);
+
+ /*
+ * Commit the per-stream transaction (we only do this when not in a
+ * remote transaction, i.e. for non-transactional sequence updates).
+ */
+ if (!in_remote_transaction)
+ CommitTransactionCommand();
+}
+
/*
* Handle STREAM START message.
*/
@@ -2511,6 +2563,10 @@ apply_dispatch(StringInfo s)
*/
break;
+ case LOGICAL_REP_MSG_SEQUENCE:
+ apply_handle_sequence(s);
+ return;
+
case LOGICAL_REP_MSG_STREAM_START:
apply_handle_stream_start(s);
break;
diff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c
index 5fddab3a3d4..4cdc698cbb3 100644
--- a/src/backend/replication/pgoutput/pgoutput.c
+++ b/src/backend/replication/pgoutput/pgoutput.c
@@ -15,6 +15,7 @@
#include "access/tupconvert.h"
#include "catalog/partition.h"
#include "catalog/pg_publication.h"
+#include "catalog/pg_publication_namespace.h"
#include "catalog/pg_publication_rel.h"
#include "commands/defrem.h"
#include "executor/executor.h"
@@ -53,6 +54,10 @@ static void pgoutput_message(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn, XLogRecPtr message_lsn,
bool transactional, const char *prefix,
Size sz, const char *message);
+static void pgoutput_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation relation, bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called);
static bool pgoutput_origin_filter(LogicalDecodingContext *ctx,
RepOriginId origin_id);
static void pgoutput_begin_prepare_txn(LogicalDecodingContext *ctx,
@@ -208,6 +213,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->change_cb = pgoutput_change;
cb->truncate_cb = pgoutput_truncate;
cb->message_cb = pgoutput_message;
+ cb->sequence_cb = pgoutput_sequence;
cb->commit_cb = pgoutput_commit_txn;
cb->begin_prepare_cb = pgoutput_begin_prepare_txn;
@@ -224,6 +230,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->stream_commit_cb = pgoutput_stream_commit;
cb->stream_change_cb = pgoutput_change;
cb->stream_message_cb = pgoutput_message;
+ cb->stream_sequence_cb = pgoutput_sequence;
cb->stream_truncate_cb = pgoutput_truncate;
/* transaction streaming - two-phase commit */
cb->stream_prepare_cb = pgoutput_stream_prepare_txn;
@@ -237,6 +244,7 @@ parse_output_parameters(List *options, PGOutputData *data)
bool publication_names_given = false;
bool binary_option_given = false;
bool messages_option_given = false;
+ bool sequences_option_given = false;
bool streaming_given = false;
bool two_phase_option_given = false;
@@ -244,6 +252,7 @@ parse_output_parameters(List *options, PGOutputData *data)
data->streaming = false;
data->messages = false;
data->two_phase = false;
+ data->sequences = true;
foreach(lc, options)
{
@@ -312,6 +321,16 @@ parse_output_parameters(List *options, PGOutputData *data)
data->messages = defGetBoolean(defel);
}
+ else if (strcmp(defel->defname, "sequences") == 0)
+ {
+ if (sequences_option_given)
+ ereport(ERROR,
+ (errcode(ERRCODE_SYNTAX_ERROR),
+ errmsg("conflicting or redundant options")));
+ sequences_option_given = true;
+
+ data->sequences = defGetBoolean(defel);
+ }
else if (strcmp(defel->defname, "streaming") == 0)
{
if (streaming_given)
@@ -1440,6 +1459,51 @@ pgoutput_message(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
OutputPluginWrite(ctx, true);
}
+static void
+pgoutput_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation relation, bool transactional,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ PGOutputData *data = (PGOutputData *) ctx->output_plugin_private;
+ TransactionId xid = InvalidTransactionId;
+ RelationSyncEntry *relentry;
+
+ if (!data->sequences)
+ return;
+
+ if (!is_publishable_relation(relation))
+ return;
+
+ /*
+ * Remember the xid for the sequence change in streaming mode. See
+ * pgoutput_change.
+ */
+ if (in_streaming)
+ xid = txn->xid;
+
+ relentry = get_rel_sync_entry(data, relation);
+
+ /*
+ * First check the sequence filter.
+ *
+ * We handle just REORDER_BUFFER_CHANGE_SEQUENCE here.
+ */
+ if (!relentry->pubactions.pubsequence)
+ return;
+
+ OutputPluginPrepareWrite(ctx, true);
+ logicalrep_write_sequence(ctx->out,
+ relation,
+ xid,
+ sequence_lsn,
+ transactional,
+ last_value,
+ log_cnt,
+ is_called);
+ OutputPluginWrite(ctx, true);
+}
+
/*
* Currently we always forward.
*/
@@ -1725,7 +1789,8 @@ get_rel_sync_entry(PGOutputData *data, Relation relation)
entry->schema_sent = false;
entry->streamed_txns = NIL;
entry->pubactions.pubinsert = entry->pubactions.pubupdate =
- entry->pubactions.pubdelete = entry->pubactions.pubtruncate = false;
+ entry->pubactions.pubdelete = entry->pubactions.pubtruncate =
+ entry->pubactions.pubsequence = false;
entry->new_slot = NULL;
entry->old_slot = NULL;
memset(entry->exprstate, 0, sizeof(entry->exprstate));
@@ -1739,13 +1804,13 @@ get_rel_sync_entry(PGOutputData *data, Relation relation)
{
Oid schemaId = get_rel_namespace(relid);
List *pubids = GetRelationPublications(relid);
-
+ char objectType = pub_get_object_type_for_relkind(get_rel_relkind(relid));
/*
* We don't acquire a lock on the namespace system table as we build
* the cache entry using a historic snapshot and all the later changes
* are absorbed while decoding WAL.
*/
- List *schemaPubids = GetSchemaPublications(schemaId);
+ List *schemaPubids = GetSchemaPublications(schemaId, objectType);
ListCell *lc;
Oid publish_as_relid = relid;
int publish_ancestor_level = 0;
@@ -1780,6 +1845,7 @@ get_rel_sync_entry(PGOutputData *data, Relation relation)
entry->pubactions.pubupdate = false;
entry->pubactions.pubdelete = false;
entry->pubactions.pubtruncate = false;
+ entry->pubactions.pubsequence = false;
/*
* Tuple slots cleanups. (Will be rebuilt later if needed).
@@ -1826,9 +1892,11 @@ get_rel_sync_entry(PGOutputData *data, Relation relation)
/*
* If this is a FOR ALL TABLES publication, pick the partition root
- * and set the ancestor level accordingly.
+ * and set the ancestor level accordingly. If this is a FOR ALL
+ * SEQUENCES publication, we publish it too but we don't need to
+ * pick the partition root etc.
*/
- if (pub->alltables)
+ if (pub->alltables || pub->allsequences)
{
publish = true;
if (pub->pubviaroot && am_partition)
@@ -1889,6 +1957,7 @@ get_rel_sync_entry(PGOutputData *data, Relation relation)
entry->pubactions.pubupdate |= pub->pubactions.pubupdate;
entry->pubactions.pubdelete |= pub->pubactions.pubdelete;
entry->pubactions.pubtruncate |= pub->pubactions.pubtruncate;
+ entry->pubactions.pubsequence |= pub->pubactions.pubsequence;
/*
* We want to publish the changes as the top-most ancestor
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index fbd11883e17..9391467960b 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -56,6 +56,7 @@
#include "catalog/pg_opclass.h"
#include "catalog/pg_proc.h"
#include "catalog/pg_publication.h"
+#include "catalog/pg_publication_namespace.h"
#include "catalog/pg_rewrite.h"
#include "catalog/pg_shseclabel.h"
#include "catalog/pg_statistic_ext.h"
@@ -5562,6 +5563,8 @@ RelationBuildPublicationDesc(Relation relation, PublicationDesc *pubdesc)
Oid schemaid;
List *ancestors = NIL;
Oid relid = RelationGetRelid(relation);
+ char relkind = relation->rd_rel->relkind;
+ char objType;
/*
* If not publishable, it publishes no actions. (pgoutput_change() will
@@ -5588,8 +5591,15 @@ RelationBuildPublicationDesc(Relation relation, PublicationDesc *pubdesc)
/* Fetch the publication membership info. */
puboids = GetRelationPublications(relid);
schemaid = RelationGetNamespace(relation);
- puboids = list_concat_unique_oid(puboids, GetSchemaPublications(schemaid));
+ objType = pub_get_object_type_for_relkind(relkind);
+ puboids = list_concat_unique_oid(puboids,
+ GetSchemaPublications(schemaid, objType));
+
+ /*
+ * If this is a partition (and thus a table), look up all ancestors and
+ * track all their publications too.
+ */
if (relation->rd_rel->relispartition)
{
/* Add publications that the ancestors are in too. */
@@ -5601,12 +5611,23 @@ RelationBuildPublicationDesc(Relation relation, PublicationDesc *pubdesc)
puboids = list_concat_unique_oid(puboids,
GetRelationPublications(ancestor));
+
+ /* include all publications publishing schema of all ancestors */
schemaid = get_rel_namespace(ancestor);
puboids = list_concat_unique_oid(puboids,
- GetSchemaPublications(schemaid));
+ GetSchemaPublications(schemaid,
+ PUB_OBJTYPE_TABLE));
}
}
- puboids = list_concat_unique_oid(puboids, GetAllTablesPublications());
+
+ /*
+ * Consider also FOR ALL TABLES and FOR ALL SEQUENCES publications,
+ * depending on the relkind of the relation.
+ */
+ if (relation->rd_rel->relkind == RELKIND_SEQUENCE)
+ puboids = list_concat_unique_oid(puboids, GetAllSequencesPublications());
+ else
+ puboids = list_concat_unique_oid(puboids, GetAllTablesPublications());
foreach(lc, puboids)
{
@@ -5625,6 +5646,7 @@ RelationBuildPublicationDesc(Relation relation, PublicationDesc *pubdesc)
pubdesc->pubactions.pubupdate |= pubform->pubupdate;
pubdesc->pubactions.pubdelete |= pubform->pubdelete;
pubdesc->pubactions.pubtruncate |= pubform->pubtruncate;
+ pubdesc->pubactions.pubsequence |= pubform->pubsequence;
/*
* Check if all columns referenced in the filter expression are part of
diff --git a/src/backend/utils/cache/syscache.c b/src/backend/utils/cache/syscache.c
index f4e7819f1e2..a675877d195 100644
--- a/src/backend/utils/cache/syscache.c
+++ b/src/backend/utils/cache/syscache.c
@@ -630,12 +630,12 @@ static const struct cachedesc cacheinfo[] = {
64
},
{PublicationNamespaceRelationId, /* PUBLICATIONNAMESPACEMAP */
- PublicationNamespacePnnspidPnpubidIndexId,
- 2,
+ PublicationNamespacePnnspidPnpubidPntypeIndexId,
+ 3,
{
Anum_pg_publication_namespace_pnnspid,
Anum_pg_publication_namespace_pnpubid,
- 0,
+ Anum_pg_publication_namespace_pntype,
0
},
64
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index e5816c4ccea..00629f08362 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -3814,10 +3814,12 @@ getPublications(Archive *fout, int *numPublications)
int i_pubname;
int i_pubowner;
int i_puballtables;
+ int i_puballsequences;
int i_pubinsert;
int i_pubupdate;
int i_pubdelete;
int i_pubtruncate;
+ int i_pubsequence;
int i_pubviaroot;
int i,
ntups;
@@ -3833,23 +3835,29 @@ getPublications(Archive *fout, int *numPublications)
resetPQExpBuffer(query);
/* Get the publications. */
- if (fout->remoteVersion >= 130000)
+ if (fout->remoteVersion >= 150000)
+ appendPQExpBuffer(query,
+ "SELECT p.tableoid, p.oid, p.pubname, "
+ "p.pubowner, "
+ "p.puballtables, p.puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubsequence, p.pubviaroot "
+ "FROM pg_publication p");
+ else if (fout->remoteVersion >= 130000)
appendPQExpBuffer(query,
"SELECT p.tableoid, p.oid, p.pubname, "
"p.pubowner, "
- "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+ "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubsequence, p.pubviaroot "
"FROM pg_publication p");
else if (fout->remoteVersion >= 110000)
appendPQExpBuffer(query,
"SELECT p.tableoid, p.oid, p.pubname, "
"p.pubowner, "
- "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
+ "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubsequence, false AS pubviaroot "
"FROM pg_publication p");
else
appendPQExpBuffer(query,
"SELECT p.tableoid, p.oid, p.pubname, "
"p.pubowner, "
- "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
+ "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubsequence, false AS pubviaroot "
"FROM pg_publication p");
res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
@@ -3861,10 +3869,12 @@ getPublications(Archive *fout, int *numPublications)
i_pubname = PQfnumber(res, "pubname");
i_pubowner = PQfnumber(res, "pubowner");
i_puballtables = PQfnumber(res, "puballtables");
+ i_puballsequences = PQfnumber(res, "puballsequences");
i_pubinsert = PQfnumber(res, "pubinsert");
i_pubupdate = PQfnumber(res, "pubupdate");
i_pubdelete = PQfnumber(res, "pubdelete");
i_pubtruncate = PQfnumber(res, "pubtruncate");
+ i_pubsequence = PQfnumber(res, "pubsequence");
i_pubviaroot = PQfnumber(res, "pubviaroot");
pubinfo = pg_malloc(ntups * sizeof(PublicationInfo));
@@ -3880,6 +3890,8 @@ getPublications(Archive *fout, int *numPublications)
pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
pubinfo[i].puballtables =
(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+ pubinfo[i].puballsequences =
+ (strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
pubinfo[i].pubinsert =
(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
pubinfo[i].pubupdate =
@@ -3888,6 +3900,8 @@ getPublications(Archive *fout, int *numPublications)
(strcmp(PQgetvalue(res, i, i_pubdelete), "t") == 0);
pubinfo[i].pubtruncate =
(strcmp(PQgetvalue(res, i, i_pubtruncate), "t") == 0);
+ pubinfo[i].pubsequence =
+ (strcmp(PQgetvalue(res, i, i_pubsequence), "t") == 0);
pubinfo[i].pubviaroot =
(strcmp(PQgetvalue(res, i, i_pubviaroot), "t") == 0);
@@ -3933,6 +3947,9 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
if (pubinfo->puballtables)
appendPQExpBufferStr(query, " FOR ALL TABLES");
+ if (pubinfo->puballsequences)
+ appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
+
appendPQExpBufferStr(query, " WITH (publish = '");
if (pubinfo->pubinsert)
{
@@ -3967,6 +3984,15 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
first = false;
}
+ if (pubinfo->pubsequence)
+ {
+ if (!first)
+ appendPQExpBufferStr(query, ", ");
+
+ appendPQExpBufferStr(query, "sequence");
+ first = false;
+ }
+
appendPQExpBufferStr(query, "'");
if (pubinfo->pubviaroot)
@@ -4013,6 +4039,7 @@ getPublicationNamespaces(Archive *fout)
int i_oid;
int i_pnpubid;
int i_pnnspid;
+ int i_pntype;
int i,
j,
ntups;
@@ -4024,7 +4051,7 @@ getPublicationNamespaces(Archive *fout)
/* Collect all publication membership info. */
appendPQExpBufferStr(query,
- "SELECT tableoid, oid, pnpubid, pnnspid "
+ "SELECT tableoid, oid, pnpubid, pnnspid, pntype "
"FROM pg_catalog.pg_publication_namespace");
res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
@@ -4034,6 +4061,7 @@ getPublicationNamespaces(Archive *fout)
i_oid = PQfnumber(res, "oid");
i_pnpubid = PQfnumber(res, "pnpubid");
i_pnnspid = PQfnumber(res, "pnnspid");
+ i_pntype = PQfnumber(res, "pntype");
/* this allocation may be more than we need */
pubsinfo = pg_malloc(ntups * sizeof(PublicationSchemaInfo));
@@ -4043,6 +4071,7 @@ getPublicationNamespaces(Archive *fout)
{
Oid pnpubid = atooid(PQgetvalue(res, i, i_pnpubid));
Oid pnnspid = atooid(PQgetvalue(res, i, i_pnnspid));
+ char pntype = PQgetvalue(res, i, i_pntype)[0];
PublicationInfo *pubinfo;
NamespaceInfo *nspinfo;
@@ -4074,6 +4103,7 @@ getPublicationNamespaces(Archive *fout)
pubsinfo[j].dobj.name = nspinfo->dobj.name;
pubsinfo[j].publication = pubinfo;
pubsinfo[j].pubschema = nspinfo;
+ pubsinfo[j].pubtype = pntype;
/* Decide whether we want to dump it */
selectDumpablePublicationObject(&(pubsinfo[j].dobj), fout);
@@ -4207,7 +4237,11 @@ dumpPublicationNamespace(Archive *fout, const PublicationSchemaInfo *pubsinfo)
query = createPQExpBuffer();
appendPQExpBuffer(query, "ALTER PUBLICATION %s ", fmtId(pubinfo->dobj.name));
- appendPQExpBuffer(query, "ADD ALL TABLES IN SCHEMA %s;\n", fmtId(schemainfo->dobj.name));
+
+ if (pubsinfo->pubtype == 't')
+ appendPQExpBuffer(query, "ADD ALL TABLES IN SCHEMA %s;\n", fmtId(schemainfo->dobj.name));
+ else
+ appendPQExpBuffer(query, "ADD ALL SEQUENCES IN SCHEMA %s;\n", fmtId(schemainfo->dobj.name));
/*
* There is no point in creating drop query as the drop is done by schema
@@ -4240,6 +4274,7 @@ dumpPublicationTable(Archive *fout, const PublicationRelInfo *pubrinfo)
TableInfo *tbinfo = pubrinfo->pubtable;
PQExpBuffer query;
char *tag;
+ char *description;
/* Do nothing in data-only dump */
if (dopt->dataOnly)
@@ -4249,10 +4284,22 @@ dumpPublicationTable(Archive *fout, const PublicationRelInfo *pubrinfo)
query = createPQExpBuffer();
- appendPQExpBuffer(query, "ALTER PUBLICATION %s ADD TABLE ONLY",
- fmtId(pubinfo->dobj.name));
+ if (tbinfo->relkind == RELKIND_SEQUENCE)
+ {
+ appendPQExpBuffer(query, "ALTER PUBLICATION %s ADD SEQUENCE",
+ fmtId(pubinfo->dobj.name));
+ description = "PUBLICATION SEQUENCE";
+ }
+ else
+ {
+ appendPQExpBuffer(query, "ALTER PUBLICATION %s ADD TABLE ONLY",
+ fmtId(pubinfo->dobj.name));
+ description = "PUBLICATION TABLE";
+ }
+
appendPQExpBuffer(query, " %s",
fmtQualifiedDumpable(tbinfo));
+
if (pubrinfo->pubrelqual)
{
/*
@@ -4275,7 +4322,7 @@ dumpPublicationTable(Archive *fout, const PublicationRelInfo *pubrinfo)
ARCHIVE_OPTS(.tag = tag,
.namespace = tbinfo->dobj.namespace->dobj.name,
.owner = pubinfo->rolname,
- .description = "PUBLICATION TABLE",
+ .description = description,
.section = SECTION_POST_DATA,
.createStmt = query->data));
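
[Note: with the pg_dump changes above, dumping publications that include
sequences should produce commands roughly like these - a sketch only, not
actual pg_dump output, object names invented:

CREATE PUBLICATION seq_pub FOR ALL SEQUENCES WITH (publish = 'insert, update, delete, truncate, sequence');
ALTER PUBLICATION mixed_pub ADD ALL SEQUENCES IN SCHEMA dump_test;
ALTER PUBLICATION mixed_pub ADD SEQUENCE public.s1;
]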
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 772dc0cf7a2..893725d121f 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -615,10 +615,12 @@ typedef struct _PublicationInfo
DumpableObject dobj;
const char *rolname;
bool puballtables;
+ bool puballsequences;
bool pubinsert;
bool pubupdate;
bool pubdelete;
bool pubtruncate;
+ bool pubsequence;
bool pubviaroot;
} PublicationInfo;
@@ -643,6 +645,7 @@ typedef struct _PublicationSchemaInfo
DumpableObject dobj;
PublicationInfo *publication;
NamespaceInfo *pubschema;
+ char pubtype;
} PublicationSchemaInfo;
/*
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index fd1052e5db8..19f908f6006 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -2358,7 +2358,7 @@ my %tests = (
create_order => 50,
create_sql => 'CREATE PUBLICATION pub1;',
regexp => qr/^
- \QCREATE PUBLICATION pub1 WITH (publish = 'insert, update, delete, truncate');\E
+ \QCREATE PUBLICATION pub1 WITH (publish = 'insert, update, delete, truncate, sequence');\E
/xm,
like => { %full_runs, section_post_data => 1, },
},
@@ -2378,16 +2378,27 @@ my %tests = (
create_order => 50,
create_sql => 'CREATE PUBLICATION pub3;',
regexp => qr/^
- \QCREATE PUBLICATION pub3 WITH (publish = 'insert, update, delete, truncate');\E
+ \QCREATE PUBLICATION pub3 WITH (publish = 'insert, update, delete, truncate, sequence');\E
/xm,
like => { %full_runs, section_post_data => 1, },
},
'CREATE PUBLICATION pub4' => {
create_order => 50,
- create_sql => 'CREATE PUBLICATION pub4;',
+ create_sql => 'CREATE PUBLICATION pub4
+ FOR ALL SEQUENCES
+ WITH (publish = \'\');',
regexp => qr/^
- \QCREATE PUBLICATION pub4 WITH (publish = 'insert, update, delete, truncate');\E
+ \QCREATE PUBLICATION pub4 FOR ALL SEQUENCES WITH (publish = '');\E
+ /xm,
+ like => { %full_runs, section_post_data => 1, },
+ },
+
+ 'CREATE PUBLICATION pub5' => {
+ create_order => 50,
+ create_sql => 'CREATE PUBLICATION pub5;',
+ regexp => qr/^
+ \QCREATE PUBLICATION pub5 WITH (publish = 'insert, update, delete, truncate, sequence');\E
/xm,
like => { %full_runs, section_post_data => 1, },
},
@@ -2474,6 +2485,27 @@ my %tests = (
unlike => { exclude_dump_test_schema => 1, },
},
+ 'ALTER PUBLICATION pub3 ADD ALL SEQUENCES IN SCHEMA dump_test' => {
+ create_order => 51,
+ create_sql =>
+ 'ALTER PUBLICATION pub3 ADD ALL SEQUENCES IN SCHEMA dump_test;',
+ regexp => qr/^
+ \QALTER PUBLICATION pub3 ADD ALL SEQUENCES IN SCHEMA dump_test;\E
+ /xm,
+ like => { %full_runs, section_post_data => 1, },
+ unlike => { exclude_dump_test_schema => 1, },
+ },
+
+ 'ALTER PUBLICATION pub3 ADD ALL SEQUENCES IN SCHEMA public' => {
+ create_order => 52,
+ create_sql =>
+ 'ALTER PUBLICATION pub3 ADD ALL SEQUENCES IN SCHEMA public;',
+ regexp => qr/^
+ \QALTER PUBLICATION pub3 ADD ALL SEQUENCES IN SCHEMA public;\E
+ /xm,
+ like => { %full_runs, section_post_data => 1, },
+ },
+
'CREATE SCHEMA public' => {
regexp => qr/^CREATE SCHEMA public;/m,
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 714097cad1b..b8a532a45f7 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1633,28 +1633,19 @@ describeOneTableDetails(const char *schemaname,
if (tableinfo.relkind == RELKIND_SEQUENCE)
{
PGresult *result = NULL;
- printQueryOpt myopt = pset.popt;
- char *footers[2] = {NULL, NULL};
if (pset.sversion >= 100000)
{
printfPQExpBuffer(&buf,
- "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
- " seqstart AS \"%s\",\n"
- " seqmin AS \"%s\",\n"
- " seqmax AS \"%s\",\n"
- " seqincrement AS \"%s\",\n"
- " CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
- " seqcache AS \"%s\"\n",
- gettext_noop("Type"),
- gettext_noop("Start"),
- gettext_noop("Minimum"),
- gettext_noop("Maximum"),
- gettext_noop("Increment"),
+ "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+ " seqstart,\n"
+ " seqmin,\n"
+ " seqmax,\n"
+ " seqincrement,\n"
+ " CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+ " seqcache\n",
gettext_noop("yes"),
- gettext_noop("no"),
- gettext_noop("Cycles?"),
- gettext_noop("Cache"));
+ gettext_noop("no"));
appendPQExpBuffer(&buf,
"FROM pg_catalog.pg_sequence\n"
"WHERE seqrelid = '%s';",
@@ -1663,22 +1654,15 @@ describeOneTableDetails(const char *schemaname,
else
{
printfPQExpBuffer(&buf,
- "SELECT 'bigint' AS \"%s\",\n"
- " start_value AS \"%s\",\n"
- " min_value AS \"%s\",\n"
- " max_value AS \"%s\",\n"
- " increment_by AS \"%s\",\n"
- " CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
- " cache_value AS \"%s\"\n",
- gettext_noop("Type"),
- gettext_noop("Start"),
- gettext_noop("Minimum"),
- gettext_noop("Maximum"),
- gettext_noop("Increment"),
+ "SELECT 'bigint',\n"
+ " start_value,\n"
+ " min_value,\n"
+ " max_value,\n"
+ " increment_by,\n"
+ " CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+ " cache_value\n",
gettext_noop("yes"),
- gettext_noop("no"),
- gettext_noop("Cycles?"),
- gettext_noop("Cache"));
+ gettext_noop("no"));
appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
/* must be separate because fmtId isn't reentrant */
appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1688,6 +1672,51 @@ describeOneTableDetails(const char *schemaname,
if (!res)
goto error_return;
+ numrows = PQntuples(res);
+
+ /* XXX reset to use expanded output for sequences (maybe we should
+ * keep this disabled, just like for tables?) */
+ myopt.expanded = pset.popt.topt.expanded;
+
+ printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+ schemaname, relationname);
+
+ printTableInit(&cont, &myopt, title.data, 7, numrows);
+ printTableInitialized = true;
+
+ printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+ printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+ printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+ printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+ printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+ printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+ printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+ /* Generate table cells to be printed */
+ for (i = 0; i < numrows; i++)
+ {
+ /* Type */
+ printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+ /* Start */
+ printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+ /* Minimum */
+ printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+ /* Maximum */
+ printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+ /* Increment */
+ printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+ /* Cycles? */
+ printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+ /* Cache */
+ printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+ }
+
/* Footer information about a sequence */
/* Get the column that owns this sequence */
@@ -1721,29 +1750,63 @@ describeOneTableDetails(const char *schemaname,
switch (PQgetvalue(result, 0, 1)[0])
{
case 'a':
- footers[0] = psprintf(_("Owned by: %s"),
- PQgetvalue(result, 0, 0));
+ printTableAddFooter(&cont,
+ psprintf(_("Owned by: %s"),
+ PQgetvalue(result, 0, 0)));
break;
case 'i':
- footers[0] = psprintf(_("Sequence for identity column: %s"),
- PQgetvalue(result, 0, 0));
+ printTableAddFooter(&cont,
+ psprintf(_("Sequence for identity column: %s"),
+ PQgetvalue(result, 0, 0)));
break;
}
}
PQclear(result);
- printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
- schemaname, relationname);
+ /* print any publications */
+ if (pset.sversion >= 150000)
+ {
+ int tuples = 0;
- myopt.footers = footers;
- myopt.topt.default_footer = false;
- myopt.title = title.data;
- myopt.translate_header = true;
+ printfPQExpBuffer(&buf,
+ "SELECT pubname\n"
+ "FROM pg_catalog.pg_publication p\n"
+ " JOIN pg_catalog.pg_publication_namespace pn ON p.oid = pn.pnpubid\n"
+ " JOIN pg_catalog.pg_class pc ON pc.relnamespace = pn.pnnspid\n"
+ "WHERE pc.oid ='%s' and pn.pntype = 's' and pg_catalog.pg_relation_is_publishable('%s')\n"
+ "UNION\n"
+ "SELECT pubname\n"
+ "FROM pg_catalog.pg_publication p\n"
+ " JOIN pg_catalog.pg_publication_rel pr ON p.oid = pr.prpubid\n"
+ "WHERE pr.prrelid = '%s'\n"
+ "UNION\n"
+ "SELECT pubname\n"
+ "FROM pg_catalog.pg_publication p\n"
+ "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+ "ORDER BY 1;",
+ oid, oid, oid, oid);
- printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+ result = PSQLexec(buf.data);
+ if (!result)
+ goto error_return;
+ else
+ tuples = PQntuples(result);
+
+ if (tuples > 0)
+ printTableAddFooter(&cont, _("Publications:"));
+
+ /* Might be an empty set - that's ok */
+ for (i = 0; i < tuples; i++)
+ {
+ printfPQExpBuffer(&buf, " \"%s\"",
+ PQgetvalue(result, i, 0));
+
+ printTableAddFooter(&cont, buf.data);
+ }
+ PQclear(result);
+ }
- if (footers[0])
- free(footers[0]);
+ printTable(&cont, pset.queryFout, false, pset.logfile);
retval = true;
goto error_return; /* not an error, just return early */
@@ -1970,6 +2033,11 @@ describeOneTableDetails(const char *schemaname,
for (i = 0; i < cols; i++)
printTableAddHeader(&cont, headers[i], true, 'l');
+ res = PSQLexec(buf.data);
+ if (!res)
+ goto error_return;
+ numrows = PQntuples(res);
+
/* Generate table cells to be printed */
for (i = 0; i < numrows; i++)
{
@@ -2895,7 +2963,7 @@ describeOneTableDetails(const char *schemaname,
"FROM pg_catalog.pg_publication p\n"
" JOIN pg_catalog.pg_publication_namespace pn ON p.oid = pn.pnpubid\n"
" JOIN pg_catalog.pg_class pc ON pc.relnamespace = pn.pnnspid\n"
- "WHERE pc.oid ='%s' and pg_catalog.pg_relation_is_publishable('%s')\n"
+ "WHERE pc.oid ='%s' and pn.pntype = 't' and pg_catalog.pg_relation_is_publishable('%s')\n"
"UNION\n"
"SELECT pubname\n"
" , pg_get_expr(pr.prqual, c.oid)\n"
@@ -4785,7 +4853,7 @@ listSchemas(const char *pattern, bool verbose, bool showSystem)
int i;
printfPQExpBuffer(&buf,
- "SELECT pubname \n"
+ "SELECT pubname, (CASE WHEN pntype = 't' THEN 'tables' ELSE 'sequences' END) AS pubtype\n"
"FROM pg_catalog.pg_publication p\n"
" JOIN pg_catalog.pg_publication_namespace pn ON p.oid = pn.pnpubid\n"
" JOIN pg_catalog.pg_namespace n ON n.oid = pn.pnnspid \n"
@@ -4814,8 +4882,9 @@ listSchemas(const char *pattern, bool verbose, bool showSystem)
/* Might be an empty set - that's ok */
for (i = 0; i < pub_schema_tuples; i++)
{
- printfPQExpBuffer(&buf, " \"%s\"",
- PQgetvalue(result, i, 0));
+ printfPQExpBuffer(&buf, " \"%s\" (%s)",
+ PQgetvalue(result, i, 0),
+ PQgetvalue(result, i, 1));
footers[i + 1] = pg_strdup(buf.data);
}
@@ -5820,7 +5889,7 @@ listPublications(const char *pattern)
PQExpBufferData buf;
PGresult *res;
printQueryOpt myopt = pset.popt;
- static const bool translate_columns[] = {false, false, false, false, false, false, false, false};
+ static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
if (pset.sversion < 100000)
{
@@ -5834,23 +5903,45 @@ listPublications(const char *pattern)
initPQExpBuffer(&buf);
- printfPQExpBuffer(&buf,
- "SELECT pubname AS \"%s\",\n"
- " pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
- " puballtables AS \"%s\",\n"
- " pubinsert AS \"%s\",\n"
- " pubupdate AS \"%s\",\n"
- " pubdelete AS \"%s\"",
- gettext_noop("Name"),
- gettext_noop("Owner"),
- gettext_noop("All tables"),
- gettext_noop("Inserts"),
- gettext_noop("Updates"),
- gettext_noop("Deletes"));
+ if (pset.sversion >= 150000)
+ printfPQExpBuffer(&buf,
+ "SELECT pubname AS \"%s\",\n"
+ " pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
+ " puballtables AS \"%s\",\n"
+ " puballsequences AS \"%s\",\n"
+ " pubinsert AS \"%s\",\n"
+ " pubupdate AS \"%s\",\n"
+ " pubdelete AS \"%s\"",
+ gettext_noop("Name"),
+ gettext_noop("Owner"),
+ gettext_noop("All tables"),
+ gettext_noop("All sequences"),
+ gettext_noop("Inserts"),
+ gettext_noop("Updates"),
+ gettext_noop("Deletes"));
+ else
+ printfPQExpBuffer(&buf,
+ "SELECT pubname AS \"%s\",\n"
+ " pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
+ " puballtables AS \"%s\",\n"
+ " pubinsert AS \"%s\",\n"
+ " pubupdate AS \"%s\",\n"
+ " pubdelete AS \"%s\"",
+ gettext_noop("Name"),
+ gettext_noop("Owner"),
+ gettext_noop("All tables"),
+ gettext_noop("Inserts"),
+ gettext_noop("Updates"),
+ gettext_noop("Deletes"));
+
if (pset.sversion >= 110000)
appendPQExpBuffer(&buf,
",\n pubtruncate AS \"%s\"",
gettext_noop("Truncates"));
+ if (pset.sversion >= 150000)
+ appendPQExpBuffer(&buf,
+ ",\n pubsequence AS \"%s\"",
+ gettext_noop("Sequences"));
if (pset.sversion >= 130000)
appendPQExpBuffer(&buf,
",\n pubviaroot AS \"%s\"",
@@ -5936,6 +6027,7 @@ describePublications(const char *pattern)
PGresult *res;
bool has_pubtruncate;
bool has_pubviaroot;
+ bool has_pubsequence;
PQExpBufferData title;
printTableContent cont;
@@ -5952,6 +6044,7 @@ describePublications(const char *pattern)
has_pubtruncate = (pset.sversion >= 110000);
has_pubviaroot = (pset.sversion >= 130000);
+ has_pubsequence = (pset.sversion >= 150000);
initPQExpBuffer(&buf);
@@ -5959,12 +6052,17 @@ describePublications(const char *pattern)
"SELECT oid, pubname,\n"
" pg_catalog.pg_get_userbyid(pubowner) AS owner,\n"
" puballtables, pubinsert, pubupdate, pubdelete");
+
if (has_pubtruncate)
appendPQExpBufferStr(&buf,
", pubtruncate");
if (has_pubviaroot)
appendPQExpBufferStr(&buf,
", pubviaroot");
+ if (has_pubsequence)
+ appendPQExpBufferStr(&buf,
+ ", puballsequences, pubsequence");
+
appendPQExpBufferStr(&buf,
"\nFROM pg_catalog.pg_publication\n");
@@ -6005,6 +6103,7 @@ describePublications(const char *pattern)
char *pubid = PQgetvalue(res, i, 0);
char *pubname = PQgetvalue(res, i, 1);
bool puballtables = strcmp(PQgetvalue(res, i, 3), "t") == 0;
+ bool puballsequences = has_pubsequence && strcmp(PQgetvalue(res, i, 9), "t") == 0;
printTableOpt myopt = pset.popt.topt;
if (has_pubtruncate)
@@ -6012,29 +6111,43 @@ describePublications(const char *pattern)
if (has_pubviaroot)
ncols++;
+ /* sequences add two extra columns (puballsequences, pubsequence) */
+ if (has_pubsequence)
+ ncols += 2;
+
initPQExpBuffer(&title);
printfPQExpBuffer(&title, _("Publication %s"), pubname);
printTableInit(&cont, &myopt, title.data, ncols, nrows);
printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+ if (has_pubsequence)
+ printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
if (has_pubtruncate)
printTableAddHeader(&cont, gettext_noop("Truncates"), true, align);
+ if (has_pubsequence)
+ printTableAddHeader(&cont, gettext_noop("Sequences"), true, align);
if (has_pubviaroot)
printTableAddHeader(&cont, gettext_noop("Via root"), true, align);
- printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
- printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
- printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
- printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
- printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+ printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false); /* owner */
+ printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false); /* all tables */
+
+ if (has_pubsequence)
+ printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false); /* all sequences */
+
+ printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false); /* insert */
+ printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false); /* update */
+ printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false); /* delete */
if (has_pubtruncate)
- printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
+ printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false); /* truncate */
+ if (has_pubsequence)
+ printTableAddCell(&cont, PQgetvalue(res, i, 10), false, false); /* sequence */
if (has_pubviaroot)
- printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
+ printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false); /* via root */
if (!puballtables)
{
@@ -6054,6 +6167,7 @@ describePublications(const char *pattern)
"WHERE c.relnamespace = n.oid\n"
" AND c.oid = pr.prrelid\n"
" AND pr.prpubid = '%s'\n"
+ " AND c.relkind != 'S'\n" /* exclude sequences */
"ORDER BY 1,2", pubid);
if (!addFooterToPublicationDesc(&buf, "Tables:", false, &cont))
goto error_return;
@@ -6065,7 +6179,7 @@ describePublications(const char *pattern)
"SELECT n.nspname\n"
"FROM pg_catalog.pg_namespace n\n"
" JOIN pg_catalog.pg_publication_namespace pn ON n.oid = pn.pnnspid\n"
- "WHERE pn.pnpubid = '%s'\n"
+ "WHERE pn.pnpubid = '%s' AND pn.pntype = 't'\n"
"ORDER BY 1", pubid);
if (!addFooterToPublicationDesc(&buf, "Tables from schemas:",
true, &cont))
@@ -6073,6 +6187,37 @@ describePublications(const char *pattern)
}
}
+ if (!puballsequences)
+ {
+ /* Get the sequences for the specified publication */
+ printfPQExpBuffer(&buf,
+ "SELECT n.nspname, c.relname, NULL\n"
+ "FROM pg_catalog.pg_class c,\n"
+ " pg_catalog.pg_namespace n,\n"
+ " pg_catalog.pg_publication_rel pr\n"
+ "WHERE c.relnamespace = n.oid\n"
+ " AND c.oid = pr.prrelid\n"
+ " AND pr.prpubid = '%s'\n"
+ " AND c.relkind = 'S'\n" /* only sequences */
+ "ORDER BY 1,2", pubid);
+ if (!addFooterToPublicationDesc(&buf, "Sequences:", false, &cont))
+ goto error_return;
+
+ if (pset.sversion >= 150000)
+ {
+ /* Get the schemas for the specified publication */
+ printfPQExpBuffer(&buf,
+ "SELECT n.nspname\n"
+ "FROM pg_catalog.pg_namespace n\n"
+ " JOIN pg_catalog.pg_publication_namespace pn ON n.oid = pn.pnnspid\n"
+ "WHERE pn.pnpubid = '%s' AND pn.pntype = 's'\n"
+ "ORDER BY 1", pubid);
+ if (!addFooterToPublicationDesc(&buf, "Sequences from schemas:",
+ true, &cont))
+ goto error_return;
+ }
+ }
+
printTable(&cont, pset.queryFout, false, pset.logfile);
printTableCleanup(&cont);
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 5c064595a97..e59bd8302d6 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1815,11 +1815,15 @@ psql_completion(const char *text, int start, int end)
COMPLETE_WITH("ADD", "DROP", "OWNER TO", "RENAME TO", "SET");
/* ALTER PUBLICATION <name> ADD */
else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD"))
- COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE");
+ COMPLETE_WITH("ALL TABLES IN SCHEMA", "ALL SEQUENCES IN SCHEMA", "TABLE", "SEQUENCE");
else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD|SET", "TABLE") ||
(HeadMatches("ALTER", "PUBLICATION", MatchAny, "ADD|SET", "TABLE") &&
ends_with(prev_wd, ',')))
COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_tables);
+ else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD|SET", "SEQUENCE") ||
+ (HeadMatches("ALTER", "PUBLICATION", MatchAny, "ADD|SET", "SEQUENCE") &&
+ ends_with(prev_wd, ',')))
+ COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_sequences);
/*
* "ALTER PUBLICATION <name> SET TABLE <name> WHERE (" - complete with
* table attributes
@@ -1838,11 +1842,11 @@ psql_completion(const char *text, int start, int end)
COMPLETE_WITH(",");
/* ALTER PUBLICATION <name> DROP */
else if (Matches("ALTER", "PUBLICATION", MatchAny, "DROP"))
- COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE");
+ COMPLETE_WITH("ALL TABLES IN SCHEMA", "ALL SEQUENCES IN SCHEMA", "TABLE", "SEQUENCE");
/* ALTER PUBLICATION <name> SET */
else if (Matches("ALTER", "PUBLICATION", MatchAny, "SET"))
- COMPLETE_WITH("(", "ALL TABLES IN SCHEMA", "TABLE");
- else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD|DROP|SET", "ALL", "TABLES", "IN", "SCHEMA"))
+ COMPLETE_WITH("(", "ALL TABLES IN SCHEMA", "ALL SEQUENCES IN SCHEMA", "TABLE", "SEQUENCE");
+ else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD|DROP|SET", "ALL", "TABLES|SEQUENCES", "IN", "SCHEMA"))
COMPLETE_WITH_QUERY_PLUS(Query_for_list_of_schemas
" AND nspname NOT LIKE E'pg\\\\_%%'",
"CURRENT_SCHEMA");
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index d8e8715ed1c..699bd0aa3e3 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -11540,6 +11540,11 @@
provolatile => 's', prorettype => 'oid', proargtypes => 'text',
proallargtypes => '{text,oid}', proargmodes => '{i,o}',
proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_tables' },
+{ oid => '8000', descr => 'get OIDs of sequences in a publication',
+ proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+ provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+ proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+ proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
{ oid => '6121',
descr => 'returns whether a relation can be part of a publication',
proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index fe773cf9b7d..97f26208e1d 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
*/
bool puballtables;
+ /*
+ * indicates that this is a special publication that should encompass all
+ * sequences in the database (except for unlogged and temporary ones)
+ */
+ bool puballsequences;
+
/* true if inserts are published */
bool pubinsert;
@@ -52,6 +58,9 @@ CATALOG(pg_publication,6104,PublicationRelationId)
/* true if truncates are published */
bool pubtruncate;
+ /* true if sequences are published */
+ bool pubsequence;
+
/* true if partition changes are published using root schema */
bool pubviaroot;
} FormData_pg_publication;
@@ -72,6 +81,7 @@ typedef struct PublicationActions
bool pubupdate;
bool pubdelete;
bool pubtruncate;
+ bool pubsequence;
} PublicationActions;
typedef struct PublicationDesc
@@ -92,6 +102,7 @@ typedef struct Publication
Oid oid;
char *name;
bool alltables;
+ bool allsequences;
bool pubviaroot;
PublicationActions pubactions;
} Publication;
@@ -122,14 +133,15 @@ typedef enum PublicationPartOpt
PUBLICATION_PART_ALL,
} PublicationPartOpt;
-extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
+extern List *GetPublicationRelations(Oid pubid, char objectType,
+ PublicationPartOpt pub_partopt);
extern List *GetAllTablesPublications(void);
extern List *GetAllTablesPublicationRelations(bool pubviaroot);
-extern List *GetPublicationSchemas(Oid pubid);
-extern List *GetSchemaPublications(Oid schemaid);
-extern List *GetSchemaPublicationRelations(Oid schemaid,
+extern List *GetPublicationSchemas(Oid pubid, char objectType);
+extern List *GetSchemaPublications(Oid schemaid, char objectType);
+extern List *GetSchemaPublicationRelations(Oid schemaid, char objectType,
PublicationPartOpt pub_partopt);
-extern List *GetAllSchemaPublicationRelations(Oid puboid,
+extern List *GetAllSchemaPublicationRelations(Oid puboid, char objectType,
PublicationPartOpt pub_partopt);
extern List *GetPubPartitionOptionRelations(List *result,
PublicationPartOpt pub_partopt,
@@ -137,11 +149,15 @@ extern List *GetPubPartitionOptionRelations(List *result,
extern Oid GetTopMostAncestorInPublication(Oid puboid, List *ancestors,
int *ancestor_level);
+extern List *GetAllSequencesPublications(void);
+extern List *GetAllSequencesPublicationRelations(void);
+
extern bool is_publishable_relation(Relation rel);
extern bool is_schema_publication(Oid pubid);
extern ObjectAddress publication_add_relation(Oid pubid, PublicationRelInfo *pri,
bool if_not_exists);
extern ObjectAddress publication_add_schema(Oid pubid, Oid schemaid,
+ char objectType,
bool if_not_exists);
extern Oid get_publication_oid(const char *pubname, bool missing_ok);
diff --git a/src/include/catalog/pg_publication_namespace.h b/src/include/catalog/pg_publication_namespace.h
index e4306da02e7..7340a1ec646 100644
--- a/src/include/catalog/pg_publication_namespace.h
+++ b/src/include/catalog/pg_publication_namespace.h
@@ -32,6 +32,7 @@ CATALOG(pg_publication_namespace,8901,PublicationNamespaceRelationId)
Oid oid; /* oid */
Oid pnpubid BKI_LOOKUP(pg_publication); /* Oid of the publication */
Oid pnnspid BKI_LOOKUP(pg_namespace); /* Oid of the schema */
+ char pntype; /* object type to include */
} FormData_pg_publication_namespace;
/* ----------------
@@ -42,6 +43,13 @@ CATALOG(pg_publication_namespace,8901,PublicationNamespaceRelationId)
typedef FormData_pg_publication_namespace *Form_pg_publication_namespace;
DECLARE_UNIQUE_INDEX_PKEY(pg_publication_namespace_oid_index, 8902, PublicationNamespaceObjectIndexId, on pg_publication_namespace using btree(oid oid_ops));
-DECLARE_UNIQUE_INDEX(pg_publication_namespace_pnnspid_pnpubid_index, 8903, PublicationNamespacePnnspidPnpubidIndexId, on pg_publication_namespace using btree(pnnspid oid_ops, pnpubid oid_ops));
+DECLARE_UNIQUE_INDEX(pg_publication_namespace_pnnspid_pnpubid_pntype_index, 8903, PublicationNamespacePnnspidPnpubidPntypeIndexId, on pg_publication_namespace using btree(pnnspid oid_ops, pnpubid oid_ops, pntype char_ops));
+
+/* object type to include from a schema, maps to relkind */
+#define PUB_OBJTYPE_TABLE 't' /* table (regular or partitioned) */
+#define PUB_OBJTYPE_SEQUENCE 's' /* sequence object */
+#define PUB_OBJTYPE_UNSUPPORTED 'u' /* used for non-replicated types */
+
+extern char pub_get_object_type_for_relkind(char relkind);
#endif /* PG_PUBLICATION_NAMESPACE_H */
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9fecc41954e..5bab90db8e0 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -60,6 +60,7 @@ extern ObjectAddress DefineSequence(ParseState *pstate, CreateSeqStmt *stmt);
extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
extern void DeleteSequenceTuple(Oid relid);
extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid seq_relid, bool transactional, int64 last_value, int64 log_cnt, bool is_called);
extern void ResetSequenceCaches(void);
extern void seq_redo(XLogReaderState *rptr);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 2f618cb2292..cb1fcc0ee31 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -3665,6 +3665,10 @@ typedef enum PublicationObjSpecType
PUBLICATIONOBJ_TABLES_IN_SCHEMA, /* All tables in schema */
PUBLICATIONOBJ_TABLES_IN_CUR_SCHEMA, /* All tables in first element of
* search_path */
+ PUBLICATIONOBJ_SEQUENCE, /* Sequence type */
+ PUBLICATIONOBJ_SEQUENCES_IN_SCHEMA, /* Sequences in schema type */
+ PUBLICATIONOBJ_SEQUENCES_IN_CUR_SCHEMA, /* All sequences in first element
+ * of search_path */
PUBLICATIONOBJ_CONTINUATION /* Continuation of previous type */
} PublicationObjSpecType;
@@ -3683,7 +3687,7 @@ typedef struct CreatePublicationStmt
char *pubname; /* Name of the publication */
List *options; /* List of DefElem nodes */
List *pubobjects; /* Optional list of publication objects */
- bool for_all_tables; /* Special publication for all tables in db */
+ List *for_all_objects; /* Special publication for all objects in db */
} CreatePublicationStmt;
typedef enum AlterPublicationAction
@@ -3706,7 +3710,7 @@ typedef struct AlterPublicationStmt
* objects.
*/
List *pubobjects; /* Optional list of publication objects */
- bool for_all_tables; /* Special publication for all tables in db */
+ List *for_all_objects; /* Special publication for all objects in db */
AlterPublicationAction action; /* What action to perform with the given
* objects */
} AlterPublicationStmt;
diff --git a/src/include/replication/logicalproto.h b/src/include/replication/logicalproto.h
index 4d2c881644a..415757d8a2d 100644
--- a/src/include/replication/logicalproto.h
+++ b/src/include/replication/logicalproto.h
@@ -61,6 +61,7 @@ typedef enum LogicalRepMsgType
LOGICAL_REP_MSG_RELATION = 'R',
LOGICAL_REP_MSG_TYPE = 'Y',
LOGICAL_REP_MSG_MESSAGE = 'M',
+ LOGICAL_REP_MSG_SEQUENCE = 'X', /* FIXME change */
LOGICAL_REP_MSG_BEGIN_PREPARE = 'b',
LOGICAL_REP_MSG_PREPARE = 'P',
LOGICAL_REP_MSG_COMMIT_PREPARED = 'K',
@@ -118,6 +119,18 @@ typedef struct LogicalRepTyp
char *typname; /* name of the remote type */
} LogicalRepTyp;
+/* Sequence info */
+typedef struct LogicalRepSequence
+{
+ Oid remoteid; /* unique id of the remote sequence */
+ char *nspname; /* schema name of remote sequence */
+ char *seqname; /* name of the remote sequence */
+ bool transactional;
+ int64 last_value;
+ int64 log_cnt;
+ bool is_called;
+} LogicalRepSequence;
+
/* Transaction info */
typedef struct LogicalRepBeginData
{
@@ -230,6 +243,12 @@ extern List *logicalrep_read_truncate(StringInfo in,
bool *cascade, bool *restart_seqs);
extern void logicalrep_write_message(StringInfo out, TransactionId xid, XLogRecPtr lsn,
bool transactional, const char *prefix, Size sz, const char *message);
+extern void logicalrep_write_sequence(StringInfo out, Relation rel,
+ TransactionId xid, XLogRecPtr lsn,
+ bool transactional,
+ int64 last_value, int64 log_cnt,
+ bool is_called);
+extern void logicalrep_read_sequence(StringInfo in, LogicalRepSequence *seqdata);
extern void logicalrep_write_rel(StringInfo out, TransactionId xid,
Relation rel);
extern LogicalRepRelation *logicalrep_read_rel(StringInfo in);
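
For poking at the decoded sequence changes without setting up a subscription, the usual test_decoding SQL interface should do; a minimal sketch (the slot and sequence names are just examples, and the exact formatting of the output lines is whatever the patched test_decoding prints):

SELECT pg_create_logical_replication_slot('seq_slot', 'test_decoding');
CREATE SEQUENCE s;
SELECT nextval('s') FROM generate_series(1,100) g(i);
-- the decoded sequence increments appear in the slot's change stream
SELECT data FROM pg_logical_slot_get_changes('seq_slot', NULL, NULL);
SELECT pg_drop_replication_slot('seq_slot');
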
diff --git a/src/include/replication/pgoutput.h b/src/include/replication/pgoutput.h
index eafedd610a5..f4e9f35d09d 100644
--- a/src/include/replication/pgoutput.h
+++ b/src/include/replication/pgoutput.h
@@ -29,6 +29,7 @@ typedef struct PGOutputData
bool streaming;
bool messages;
bool two_phase;
+ bool sequences;
} PGOutputData;
#endif /* PGOUTPUT_H */
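
A minimal sketch of the publication side for the new pgoutput "sequences" setting, using only syntax that also appears in the regression tests below (names are examples; the subscription side is omitted and simply needs to subscribe to a publication that publishes 'sequence'):

CREATE SEQUENCE s;
CREATE PUBLICATION pub_seq FOR SEQUENCE s WITH (publish = 'insert, sequence');
-- increments of "s" are now candidates for logical decoding / replication
SELECT nextval('s') FROM generate_series(1,100) g(i);
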
diff --git a/src/test/regress/expected/object_address.out b/src/test/regress/expected/object_address.out
index 4117fc27c9a..c95d44b3db9 100644
--- a/src/test/regress/expected/object_address.out
+++ b/src/test/regress/expected/object_address.out
@@ -46,6 +46,7 @@ CREATE TRANSFORM FOR int LANGUAGE SQL (
SET client_min_messages = 'ERROR';
CREATE PUBLICATION addr_pub FOR TABLE addr_nsp.gentable;
CREATE PUBLICATION addr_pub_schema FOR ALL TABLES IN SCHEMA addr_nsp;
+CREATE PUBLICATION addr_pub_schema2 FOR ALL SEQUENCES IN SCHEMA addr_nsp;
RESET client_min_messages;
CREATE SUBSCRIPTION regress_addr_sub CONNECTION '' PUBLICATION bar WITH (connect = false, slot_name = NONE);
WARNING: tables were not subscribed, you will have to run ALTER SUBSCRIPTION ... REFRESH PUBLICATION to subscribe the tables
@@ -428,7 +429,8 @@ WITH objects (type, name, args) AS (VALUES
('transform', '{int}', '{sql}'),
('access method', '{btree}', '{}'),
('publication', '{addr_pub}', '{}'),
- ('publication namespace', '{addr_nsp}', '{addr_pub_schema}'),
+ ('publication namespace', '{addr_nsp}', '{addr_pub_schema,t}'),
+ ('publication namespace', '{addr_nsp}', '{addr_pub_schema2,s}'),
('publication relation', '{addr_nsp, gentable}', '{addr_pub}'),
('subscription', '{regress_addr_sub}', '{}'),
('statistics object', '{addr_nsp, gentable_stat}', '{}')
@@ -492,8 +494,9 @@ SELECT (pg_identify_object(addr1.classid, addr1.objid, addr1.objsubid)).*,
subscription | | regress_addr_sub | regress_addr_sub | t
publication | | addr_pub | addr_pub | t
publication relation | | | addr_nsp.gentable in publication addr_pub | t
- publication namespace | | | addr_nsp in publication addr_pub_schema | t
-(50 rows)
+ publication namespace | | | addr_nsp in publication addr_pub_schema type t | t
+ publication namespace | | | addr_nsp in publication addr_pub_schema2 type s | t
+(51 rows)
---
--- Cleanup resources
@@ -506,6 +509,7 @@ drop cascades to server integer
drop cascades to user mapping for regress_addr_user on server integer
DROP PUBLICATION addr_pub;
DROP PUBLICATION addr_pub_schema;
+DROP PUBLICATION addr_pub_schema2;
DROP SUBSCRIPTION regress_addr_sub;
DROP SCHEMA addr_nsp CASCADE;
NOTICE: drop cascades to 14 other objects
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 4e191c120ac..58214ee742e 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -30,20 +30,20 @@ ERROR: conflicting or redundant options
LINE 1: ...ub_xxx WITH (publish_via_partition_root = 'true', publish_vi...
^
\dRp
- List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f | t | f | f | f | f
- testpub_default | regress_publication_user | f | f | t | f | f | f
+ List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f | f | t | f | f | f | f | f
+ testpub_default | regress_publication_user | f | f | f | t | f | f | f | f
(2 rows)
-ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
+ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete, sequence');
\dRp
- List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f | t | f | f | f | f
- testpub_default | regress_publication_user | f | t | t | t | f | f
+ List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f | f | t | f | f | f | f | f
+ testpub_default | regress_publication_user | f | f | t | t | t | f | t | f
(2 rows)
--- adding tables
@@ -61,6 +61,9 @@ CREATE TABLE testpub_tbl2 (id serial primary key, data text);
ALTER PUBLICATION testpub_foralltables ADD TABLE testpub_tbl2;
ERROR: publication "testpub_foralltables" is defined as FOR ALL TABLES
DETAIL: Tables cannot be added to or dropped from FOR ALL TABLES publications.
+-- fail - can't add a table using ADD SEQUENCE command
+ALTER PUBLICATION testpub_foralltables ADD SEQUENCE testpub_tbl2;
+ERROR: object type mismatch
-- fail - can't drop from all tables publication
ALTER PUBLICATION testpub_foralltables DROP TABLE testpub_tbl2;
ERROR: publication "testpub_foralltables" is defined as FOR ALL TABLES
@@ -87,10 +90,10 @@ RESET client_min_messages;
-- should be able to add schema to 'FOR TABLE' publication
ALTER PUBLICATION testpub_fortable ADD ALL TABLES IN SCHEMA pub_test;
\dRp+ testpub_fortable
- Publication testpub_fortable
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_fortable
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"public.testpub_tbl1"
Tables from schemas:
@@ -99,20 +102,20 @@ Tables from schemas:
-- should be able to drop schema from 'FOR TABLE' publication
ALTER PUBLICATION testpub_fortable DROP ALL TABLES IN SCHEMA pub_test;
\dRp+ testpub_fortable
- Publication testpub_fortable
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_fortable
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"public.testpub_tbl1"
-- should be able to set schema to 'FOR TABLE' publication
ALTER PUBLICATION testpub_fortable SET ALL TABLES IN SCHEMA pub_test;
\dRp+ testpub_fortable
- Publication testpub_fortable
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_fortable
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test"
@@ -134,10 +137,10 @@ ERROR: relation "testpub_nopk" is not part of the publication
-- should be able to set table to schema publication
ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
\dRp+ testpub_forschema
- Publication testpub_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"pub_test.testpub_nopk"
@@ -159,10 +162,10 @@ Publications:
"testpub_foralltables"
\dRp+ testpub_foralltables
- Publication testpub_foralltables
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | t | t | t | f | f | f
+ Publication testpub_foralltables
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | t | f | t | t | f | f | f | f
(1 row)
DROP TABLE testpub_tbl2;
@@ -174,24 +177,527 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
RESET client_min_messages;
\dRp+ testpub3
- Publication testpub3
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub3
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"public.testpub_tbl3"
"public.testpub_tbl3a"
\dRp+ testpub4
- Publication testpub4
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub4
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"public.testpub_tbl3"
DROP TABLE testpub_tbl3, testpub_tbl3a;
DROP PUBLICATION testpub3, testpub4;
+--- adding sequences
+CREATE SEQUENCE testpub_seq0;
+CREATE SEQUENCE pub_test.testpub_seq1;
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_forallsequences FOR ALL SEQUENCES WITH (publish = 'sequence');
+RESET client_min_messages;
+ALTER PUBLICATION testpub_forallsequences SET (publish = 'insert, sequence');
+CREATE SEQUENCE testpub_seq2;
+-- fail - can't add to for all sequences publication
+ALTER PUBLICATION testpub_forallsequences ADD SEQUENCE testpub_seq2;
+ERROR: publication "testpub_forallsequences" is defined as FOR ALL SEQUENCES
+DETAIL: Sequences cannot be added to or dropped from FOR ALL SEQUENCES publications.
+-- fail - can't drop from all sequences publication
+ALTER PUBLICATION testpub_forallsequences DROP SEQUENCE testpub_seq2;
+ERROR: publication "testpub_forallsequences" is defined as FOR ALL SEQUENCES
+DETAIL: Sequences cannot be added to or dropped from FOR ALL SEQUENCES publications.
+-- fail - can't add to for all sequences publication
+ALTER PUBLICATION testpub_forallsequences SET SEQUENCE pub_test.testpub_seq1;
+ERROR: publication "testpub_forallsequences" is defined as FOR ALL SEQUENCES
+DETAIL: Sequences cannot be added to or dropped from FOR ALL SEQUENCES publications.
+-- fail - can't add schema to 'FOR ALL SEQUENCES' publication
+ALTER PUBLICATION testpub_forallsequences ADD ALL SEQUENCES IN SCHEMA pub_test;
+ERROR: publication "testpub_forallsequences" is defined as FOR ALL SEQUENCES
+DETAIL: Sequences from schema cannot be added to, dropped from, or set on FOR ALL SEQUENCES publications.
+-- fail - can't drop schema from 'FOR ALL SEQUENCES' publication
+ALTER PUBLICATION testpub_forallsequences DROP ALL SEQUENCES IN SCHEMA pub_test;
+ERROR: publication "testpub_forallsequences" is defined as FOR ALL SEQUENCES
+DETAIL: Sequences from schema cannot be added to, dropped from, or set on FOR ALL SEQUENCES publications.
+-- fail - can't set schema to 'FOR ALL SEQUENCES' publication
+ALTER PUBLICATION testpub_forallsequences SET ALL SEQUENCES IN SCHEMA pub_test;
+ERROR: publication "testpub_forallsequences" is defined as FOR ALL SEQUENCES
+DETAIL: Sequences from schema cannot be added to, dropped from, or set on FOR ALL SEQUENCES publications.
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_forsequence FOR SEQUENCE testpub_seq0;
+RESET client_min_messages;
+-- should be able to add schema to 'FOR SEQUENCE' publication
+ALTER PUBLICATION testpub_forsequence ADD ALL SEQUENCES IN SCHEMA pub_test;
+\dRp+ testpub_forsequence
+ Publication testpub_forsequence
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Sequences:
+ "public.testpub_seq0"
+Sequences from schemas:
+ "pub_test"
+
+-- fail - can't add sequence from the schema we already added
+ALTER PUBLICATION testpub_forsequence ADD SEQUENCE pub_test.testpub_seq1;
+ERROR: cannot add relation "pub_test.testpub_seq1" to publication
+DETAIL: Sequence's schema "pub_test" is already part of the publication or part of the specified schema list.
+-- fail - can't add sequence using ADD TABLE command
+ALTER PUBLICATION testpub_forsequence ADD TABLE pub_test.testpub_seq1;
+ERROR: object type mismatch
+-- should be able to drop schema from 'FOR SEQUENCE' publication
+ALTER PUBLICATION testpub_forsequence DROP ALL SEQUENCES IN SCHEMA pub_test;
+\dRp+ testpub_forsequence
+ Publication testpub_forsequence
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Sequences:
+ "public.testpub_seq0"
+
+-- should be able to set schema to 'FOR SEQUENCE' publication
+ALTER PUBLICATION testpub_forsequence SET ALL SEQUENCES IN SCHEMA pub_test;
+\dRp+ testpub_forsequence
+ Publication testpub_forsequence
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Sequences from schemas:
+ "pub_test"
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_forschema FOR ALL SEQUENCES IN SCHEMA pub_test;
+RESET client_min_messages;
+-- fail - can't create publication with schema and sequence of the same schema
+CREATE PUBLICATION testpub_for_seq_schema FOR ALL SEQUENCES IN SCHEMA pub_test, SEQUENCE pub_test.testpub_seq1;
+ERROR: cannot add relation "pub_test.testpub_seq1" to publication
+DETAIL: Sequence's schema "pub_test" is already part of the publication or part of the specified schema list.
+-- fail - can't add a sequence of the same schema to the schema publication
+ALTER PUBLICATION testpub_forschema ADD SEQUENCE pub_test.testpub_seq1;
+ERROR: cannot add relation "pub_test.testpub_seq1" to publication
+DETAIL: Sequence's schema "pub_test" is already part of the publication or part of the specified schema list.
+-- fail - can't drop a sequence from the schema publication which isn't in the
+-- publication
+ALTER PUBLICATION testpub_forschema DROP SEQUENCE pub_test.testpub_seq1;
+ERROR: relation "testpub_seq1" is not part of the publication
+-- should be able to set sequence to schema publication
+ALTER PUBLICATION testpub_forschema SET SEQUENCE pub_test.testpub_seq1;
+\dRp+ testpub_forschema
+ Publication testpub_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Sequences:
+ "pub_test.testpub_seq1"
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'testpub_forallsequences';
+ pubname | puballtables | puballsequences
+-------------------------+--------------+-----------------
+ testpub_forallsequences | f | t
+(1 row)
+
+\d+ pub_test.testpub_seq1
+ Sequence "pub_test.testpub_seq1"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+Publications:
+ "testpub_forallsequences"
+ "testpub_forschema"
+ "testpub_forsequence"
+
+\dRp+ testpub_forallsequences
+ Publication testpub_forallsequences
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | t | t | f | f | f | t | f
+(1 row)
+
+DROP SEQUENCE testpub_seq0, pub_test.testpub_seq1, testpub_seq2;
+DROP PUBLICATION testpub_forallsequences, testpub_forsequence, testpub_forschema;
+-- Publication mixing tables and sequences
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_mix;
+RESET client_min_messages;
+CREATE SEQUENCE testpub_seq1;
+CREATE SEQUENCE pub_test.testpub_seq2;
+ALTER PUBLICATION testpub_mix ADD SEQUENCE testpub_seq1, TABLE testpub_tbl1;
+\dRp+ testpub_mix
+ Publication testpub_mix
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Tables:
+ "public.testpub_tbl1"
+Sequences:
+ "public.testpub_seq1"
+
+ALTER PUBLICATION testpub_mix ADD ALL SEQUENCES IN SCHEMA pub_test, ALL TABLES IN SCHEMA pub_test;
+\dRp+ testpub_mix
+ Publication testpub_mix
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Tables:
+ "public.testpub_tbl1"
+Tables from schemas:
+ "pub_test"
+Sequences:
+ "public.testpub_seq1"
+Sequences from schemas:
+ "pub_test"
+
+ALTER PUBLICATION testpub_mix DROP ALL SEQUENCES IN SCHEMA pub_test;
+\dRp+ testpub_mix
+ Publication testpub_mix
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Tables:
+ "public.testpub_tbl1"
+Tables from schemas:
+ "pub_test"
+Sequences:
+ "public.testpub_seq1"
+
+ALTER PUBLICATION testpub_mix DROP ALL TABLES IN SCHEMA pub_test;
+\dRp+ testpub_mix
+ Publication testpub_mix
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Tables:
+ "public.testpub_tbl1"
+Sequences:
+ "public.testpub_seq1"
+
+DROP PUBLICATION testpub_mix;
+DROP SEQUENCE testpub_seq1;
+DROP SEQUENCE pub_test.testpub_seq2;
+-- make sure we replicate only the correct relation type
+CREATE SCHEMA pub_test1;
+CREATE SEQUENCE pub_test1.test_seq1;
+CREATE TABLE pub_test1.test_tbl1 (a int primary key, b int);
+CREATE SCHEMA pub_test2;
+CREATE SEQUENCE pub_test2.test_seq2;
+CREATE TABLE pub_test2.test_tbl2 (a int primary key, b int);
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_schemas;
+RESET client_min_messages;
+-- add tables from one schema, sequences from the other
+ALTER PUBLICATION testpub_schemas ADD ALL TABLES IN SCHEMA pub_test2;
+ALTER PUBLICATION testpub_schemas ADD ALL SEQUENCES IN SCHEMA pub_test1;
+\dRp+ testpub_schemas
+ Publication testpub_schemas
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Tables from schemas:
+ "pub_test2"
+Sequences from schemas:
+ "pub_test1"
+
+\dn+ pub_test1
+ List of schemas
+ Name | Owner | Access privileges | Description
+-----------+--------------------------+-------------------+-------------
+ pub_test1 | regress_publication_user | |
+Publications:
+ "testpub_schemas" (sequences)
+
+\dn+ pub_test2
+ List of schemas
+ Name | Owner | Access privileges | Description
+-----------+--------------------------+-------------------+-------------
+ pub_test2 | regress_publication_user | |
+Publications:
+ "testpub_schemas" (tables)
+
+\d+ pub_test1.test_seq1;
+ Sequence "pub_test1.test_seq1"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test1.test_tbl1;
+ Table "pub_test1.test_tbl1"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a | integer | | not null | | plain | |
+ b | integer | | | | plain | |
+Indexes:
+ "test_tbl1_pkey" PRIMARY KEY, btree (a)
+
+\d+ pub_test2.test_seq2;
+ Sequence "pub_test2.test_seq2"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+
+\d+ pub_test2.test_tbl2;
+ Table "pub_test2.test_tbl2"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a | integer | | not null | | plain | |
+ b | integer | | | | plain | |
+Indexes:
+ "test_tbl2_pkey" PRIMARY KEY, btree (a)
+Publications:
+ "testpub_schemas"
+
+-- add the other object type from each schema
+ALTER PUBLICATION testpub_schemas ADD ALL TABLES IN SCHEMA pub_test1;
+ALTER PUBLICATION testpub_schemas ADD ALL SEQUENCES IN SCHEMA pub_test2;
+\dRp+ testpub_schemas
+ Publication testpub_schemas
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Tables from schemas:
+ "pub_test1"
+ "pub_test2"
+Sequences from schemas:
+ "pub_test1"
+ "pub_test2"
+
+\dn+ pub_test1
+ List of schemas
+ Name | Owner | Access privileges | Description
+-----------+--------------------------+-------------------+-------------
+ pub_test1 | regress_publication_user | |
+Publications:
+ "testpub_schemas" (sequences)
+ "testpub_schemas" (tables)
+
+\dn+ pub_test2
+ List of schemas
+ Name | Owner | Access privileges | Description
+-----------+--------------------------+-------------------+-------------
+ pub_test2 | regress_publication_user | |
+Publications:
+ "testpub_schemas" (tables)
+ "testpub_schemas" (sequences)
+
+\d+ pub_test1.test_seq1;
+ Sequence "pub_test1.test_seq1"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test1.test_tbl1;
+ Table "pub_test1.test_tbl1"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a | integer | | not null | | plain | |
+ b | integer | | | | plain | |
+Indexes:
+ "test_tbl1_pkey" PRIMARY KEY, btree (a)
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test2.test_seq2;
+ Sequence "pub_test2.test_seq2"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test2.test_tbl2;
+ Table "pub_test2.test_tbl2"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a | integer | | not null | | plain | |
+ b | integer | | | | plain | |
+Indexes:
+ "test_tbl2_pkey" PRIMARY KEY, btree (a)
+Publications:
+ "testpub_schemas"
+
+-- now drop the object type added first
+ALTER PUBLICATION testpub_schemas DROP ALL TABLES IN SCHEMA pub_test2;
+ALTER PUBLICATION testpub_schemas DROP ALL SEQUENCES IN SCHEMA pub_test1;
+\dRp+ testpub_schemas
+ Publication testpub_schemas
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Tables from schemas:
+ "pub_test1"
+Sequences from schemas:
+ "pub_test2"
+
+\dn+ pub_test1
+ List of schemas
+ Name | Owner | Access privileges | Description
+-----------+--------------------------+-------------------+-------------
+ pub_test1 | regress_publication_user | |
+Publications:
+ "testpub_schemas" (tables)
+
+\dn+ pub_test2
+ List of schemas
+ Name | Owner | Access privileges | Description
+-----------+--------------------------+-------------------+-------------
+ pub_test2 | regress_publication_user | |
+Publications:
+ "testpub_schemas" (sequences)
+
+\d+ pub_test1.test_seq1;
+ Sequence "pub_test1.test_seq1"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+
+\d+ pub_test1.test_tbl1;
+ Table "pub_test1.test_tbl1"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a | integer | | not null | | plain | |
+ b | integer | | | | plain | |
+Indexes:
+ "test_tbl1_pkey" PRIMARY KEY, btree (a)
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test2.test_seq2;
+ Sequence "pub_test2.test_seq2"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test2.test_tbl2;
+ Table "pub_test2.test_tbl2"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a | integer | | not null | | plain | |
+ b | integer | | | | plain | |
+Indexes:
+ "test_tbl2_pkey" PRIMARY KEY, btree (a)
+
+-- should fail (publication contains the whole schema)
+ALTER PUBLICATION testpub_schemas ADD TABLE pub_test1.test_tbl1;
+ERROR: cannot add relation "pub_test1.test_tbl1" to publication
+DETAIL: Table's schema "pub_test1" is already part of the publication or part of the specified schema list.
+ALTER PUBLICATION testpub_schemas ADD SEQUENCE pub_test2.test_seq2;
+ERROR: cannot add relation "pub_test2.test_seq2" to publication
+DETAIL: Sequence's schema "pub_test2" is already part of the publication or part of the specified schema list.
+-- should work (different schema)
+ALTER PUBLICATION testpub_schemas ADD TABLE pub_test2.test_tbl2;
+ALTER PUBLICATION testpub_schemas ADD SEQUENCE pub_test1.test_seq1;
+\dRp+ testpub_schemas
+ Publication testpub_schemas
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Tables:
+ "pub_test2.test_tbl2"
+Tables from schemas:
+ "pub_test1"
+Sequences:
+ "pub_test1.test_seq1"
+Sequences from schemas:
+ "pub_test2"
+
+\d+ pub_test1.test_seq1;
+ Sequence "pub_test1.test_seq1"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test1.test_tbl1;
+ Table "pub_test1.test_tbl1"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a | integer | | not null | | plain | |
+ b | integer | | | | plain | |
+Indexes:
+ "test_tbl1_pkey" PRIMARY KEY, btree (a)
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test2.test_seq2;
+ Sequence "pub_test2.test_seq2"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test2.test_tbl2;
+ Table "pub_test2.test_tbl2"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a | integer | | not null | | plain | |
+ b | integer | | | | plain | |
+Indexes:
+ "test_tbl2_pkey" PRIMARY KEY, btree (a)
+Publications:
+ "testpub_schemas"
+
+-- now drop the explicitly added objects again
+ALTER PUBLICATION testpub_schemas DROP TABLE pub_test2.test_tbl2;
+ALTER PUBLICATION testpub_schemas DROP SEQUENCE pub_test1.test_seq1;
+\dRp+ testpub_schemas
+ Publication testpub_schemas
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Tables from schemas:
+ "pub_test1"
+Sequences from schemas:
+ "pub_test2"
+
+\d+ pub_test1.test_seq1;
+ Sequence "pub_test1.test_seq1"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+
+\d+ pub_test1.test_tbl1;
+ Table "pub_test1.test_tbl1"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a | integer | | not null | | plain | |
+ b | integer | | | | plain | |
+Indexes:
+ "test_tbl1_pkey" PRIMARY KEY, btree (a)
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test2.test_seq2;
+ Sequence "pub_test2.test_seq2"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test2.test_tbl2;
+ Table "pub_test2.test_tbl2"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a | integer | | not null | | plain | |
+ b | integer | | | | plain | |
+Indexes:
+ "test_tbl2_pkey" PRIMARY KEY, btree (a)
+
+DROP PUBLICATION testpub_schemas;
+DROP TABLE pub_test1.test_tbl1, pub_test2.test_tbl2;
+DROP SEQUENCE pub_test1.test_seq1, pub_test2.test_seq2;
+DROP SCHEMA pub_test1, pub_test2;
-- Tests for partitioned tables
SET client_min_messages = 'ERROR';
CREATE PUBLICATION testpub_forparted;
@@ -207,10 +713,10 @@ UPDATE testpub_parted1 SET a = 1;
-- only parent is listed as being in publication, not the partition
ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
\dRp+ testpub_forparted
- Publication testpub_forparted
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_forparted
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"public.testpub_parted"
@@ -223,10 +729,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
UPDATE testpub_parted1 SET a = 1;
ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
\dRp+ testpub_forparted
- Publication testpub_forparted
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | t
+ Publication testpub_forparted
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | t
Tables:
"public.testpub_parted"
@@ -255,10 +761,10 @@ SET client_min_messages = 'ERROR';
CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
RESET client_min_messages;
\dRp+ testpub5
- Publication testpub5
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | f | f | f | f
+ Publication testpub5
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | f | f | f | f | f
Tables:
"public.testpub_rf_tbl1"
"public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -271,10 +777,10 @@ Tables:
ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
\dRp+ testpub5
- Publication testpub5
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | f | f | f | f
+ Publication testpub5
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | f | f | f | f | f
Tables:
"public.testpub_rf_tbl1"
"public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -290,10 +796,10 @@ Publications:
ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
\dRp+ testpub5
- Publication testpub5
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | f | f | f | f
+ Publication testpub5
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | f | f | f | f | f
Tables:
"public.testpub_rf_tbl1"
"public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -301,10 +807,10 @@ Tables:
-- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
\dRp+ testpub5
- Publication testpub5
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | f | f | f | f
+ Publication testpub5
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | f | f | f | f | f
Tables:
"public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
@@ -337,10 +843,10 @@ SET client_min_messages = 'ERROR';
CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
RESET client_min_messages;
\dRp+ testpub_syntax1
- Publication testpub_syntax1
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | f | f | f | f
+ Publication testpub_syntax1
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | f | f | f | f | f
Tables:
"public.testpub_rf_tbl1"
"public.testpub_rf_tbl3" WHERE (e < 999)
@@ -350,10 +856,10 @@ SET client_min_messages = 'ERROR';
CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
RESET client_min_messages;
\dRp+ testpub_syntax2
- Publication testpub_syntax2
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | f | f | f | f
+ Publication testpub_syntax2
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | f | f | f | f | f
Tables:
"public.testpub_rf_tbl1"
"testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -658,10 +1164,10 @@ ERROR: relation "testpub_tbl1" is already member of publication "testpub_fortbl
CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
ERROR: publication "testpub_fortbl" already exists
\dRp+ testpub_fortbl
- Publication testpub_fortbl
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_fortbl
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"pub_test.testpub_nopk"
"public.testpub_tbl1"
@@ -699,10 +1205,10 @@ Publications:
"testpub_fortbl"
\dRp+ testpub_default
- Publication testpub_default
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | f | f
+ Publication testpub_default
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | f | t | f
Tables:
"pub_test.testpub_nopk"
"public.testpub_tbl1"
@@ -780,10 +1286,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
DROP TABLE testpub_parted;
DROP TABLE testpub_tbl1;
\dRp+ testpub_default
- Publication testpub_default
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | f | f
+ Publication testpub_default
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | f | t | f
(1 row)
-- fail - must be owner of publication
@@ -793,20 +1299,20 @@ ERROR: must be owner of publication testpub_default
RESET ROLE;
ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
\dRp testpub_foo
- List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
--------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpub_foo | regress_publication_user | f | t | t | t | f | f
+ List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ testpub_foo | regress_publication_user | f | f | t | t | t | f | t | f
(1 row)
-- rename back to keep the rest simple
ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
\dRp testpub_default
- List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
------------------+---------------------------+------------+---------+---------+---------+-----------+----------
- testpub_default | regress_publication_user2 | f | t | t | t | f | f
+ List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ testpub_default | regress_publication_user2 | f | f | t | t | t | f | t | f
(1 row)
-- adding schemas and tables
@@ -822,19 +1328,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
SET client_min_messages = 'ERROR';
CREATE PUBLICATION testpub1_forschema FOR ALL TABLES IN SCHEMA pub_test1;
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
CREATE PUBLICATION testpub2_forschema FOR ALL TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
\dRp+ testpub2_forschema
- Publication testpub2_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub2_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
"pub_test2"
@@ -848,44 +1354,44 @@ CREATE PUBLICATION testpub6_forschema FOR ALL TABLES IN SCHEMA "CURRENT_SCHEMA",
CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
RESET client_min_messages;
\dRp+ testpub3_forschema
- Publication testpub3_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub3_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"public"
\dRp+ testpub4_forschema
- Publication testpub4_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub4_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"CURRENT_SCHEMA"
\dRp+ testpub5_forschema
- Publication testpub5_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub5_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"CURRENT_SCHEMA"
"public"
\dRp+ testpub6_forschema
- Publication testpub6_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub6_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"CURRENT_SCHEMA"
"public"
\dRp+ testpub_fortable
- Publication testpub_fortable
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_fortable
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"CURRENT_SCHEMA.CURRENT_SCHEMA"
@@ -919,10 +1425,10 @@ ERROR: schema "testpub_view" does not exist
-- dropping the schema should reflect the change in publication
DROP SCHEMA pub_test3;
\dRp+ testpub2_forschema
- Publication testpub2_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub2_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
"pub_test2"
@@ -930,20 +1436,20 @@ Tables from schemas:
-- renaming the schema should reflect the change in publication
ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
\dRp+ testpub2_forschema
- Publication testpub2_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub2_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1_renamed"
"pub_test2"
ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
\dRp+ testpub2_forschema
- Publication testpub2_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub2_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
"pub_test2"
@@ -951,10 +1457,10 @@ Tables from schemas:
-- alter publication add schema
ALTER PUBLICATION testpub1_forschema ADD ALL TABLES IN SCHEMA pub_test2;
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
"pub_test2"
@@ -963,10 +1469,10 @@ Tables from schemas:
ALTER PUBLICATION testpub1_forschema ADD ALL TABLES IN SCHEMA non_existent_schema;
ERROR: schema "non_existent_schema" does not exist
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
"pub_test2"
@@ -975,10 +1481,10 @@ Tables from schemas:
ALTER PUBLICATION testpub1_forschema ADD ALL TABLES IN SCHEMA pub_test1;
ERROR: schema "pub_test1" is already member of publication "testpub1_forschema"
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
"pub_test2"
@@ -986,10 +1492,10 @@ Tables from schemas:
-- alter publication drop schema
ALTER PUBLICATION testpub1_forschema DROP ALL TABLES IN SCHEMA pub_test2;
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
@@ -997,10 +1503,10 @@ Tables from schemas:
ALTER PUBLICATION testpub1_forschema DROP ALL TABLES IN SCHEMA pub_test2;
ERROR: tables from schema "pub_test2" are not part of the publication
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
@@ -1008,29 +1514,29 @@ Tables from schemas:
ALTER PUBLICATION testpub1_forschema DROP ALL TABLES IN SCHEMA non_existent_schema;
ERROR: schema "non_existent_schema" does not exist
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
-- drop all schemas
ALTER PUBLICATION testpub1_forschema DROP ALL TABLES IN SCHEMA pub_test1;
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
(1 row)
-- alter publication set multiple schema
ALTER PUBLICATION testpub1_forschema SET ALL TABLES IN SCHEMA pub_test1, pub_test2;
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
"pub_test2"
@@ -1039,10 +1545,10 @@ Tables from schemas:
ALTER PUBLICATION testpub1_forschema SET ALL TABLES IN SCHEMA non_existent_schema;
ERROR: schema "non_existent_schema" does not exist
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
"pub_test2"
@@ -1051,10 +1557,10 @@ Tables from schemas:
-- removing the duplicate schemas
ALTER PUBLICATION testpub1_forschema SET ALL TABLES IN SCHEMA pub_test1, pub_test1;
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
@@ -1124,18 +1630,18 @@ SET client_min_messages = 'ERROR';
CREATE PUBLICATION testpub3_forschema;
RESET client_min_messages;
\dRp+ testpub3_forschema
- Publication testpub3_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub3_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
(1 row)
ALTER PUBLICATION testpub3_forschema SET ALL TABLES IN SCHEMA pub_test1;
\dRp+ testpub3_forschema
- Publication testpub3_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub3_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
@@ -1145,20 +1651,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR ALL TABLES IN SCHEMA pub_test1
CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, ALL TABLES IN SCHEMA pub_test1;
RESET client_min_messages;
\dRp+ testpub_forschema_fortable
- Publication testpub_forschema_fortable
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_forschema_fortable
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"pub_test2.tbl1"
Tables from schemas:
"pub_test1"
\dRp+ testpub_fortable_forschema
- Publication testpub_fortable_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_fortable_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"pub_test2.tbl1"
Tables from schemas:
@@ -1202,40 +1708,85 @@ CREATE SCHEMA sch1;
CREATE SCHEMA sch2;
CREATE TABLE sch1.tbl1 (a int) PARTITION BY RANGE(a);
CREATE TABLE sch2.tbl1_part1 PARTITION OF sch1.tbl1 FOR VALUES FROM (1) to (10);
+CREATE SEQUENCE sch1.seq1;
+CREATE SEQUENCE sch2.seq2;
-- Schema publication that does not include the schema that has the parent table
CREATE PUBLICATION pub FOR ALL TABLES IN SCHEMA sch2 WITH (PUBLISH_VIA_PARTITION_ROOT=1);
+ALTER PUBLICATION pub ADD ALL SEQUENCES IN SCHEMA sch2;
SELECT * FROM pg_publication_tables;
pubname | schemaname | tablename
---------+------------+------------
pub | sch2 | tbl1_part1
(1 row)
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch2 | seq2
+(1 row)
+
DROP PUBLICATION pub;
-- Table publication that does not include the parent table
CREATE PUBLICATION pub FOR TABLE sch2.tbl1_part1 WITH (PUBLISH_VIA_PARTITION_ROOT=1);
+ALTER PUBLICATION pub ADD SEQUENCE sch2.seq2;
SELECT * FROM pg_publication_tables;
pubname | schemaname | tablename
---------+------------+------------
pub | sch2 | tbl1_part1
(1 row)
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch2 | seq2
+(1 row)
+
-- Table publication that includes both the parent table and the child table
ALTER PUBLICATION pub ADD TABLE sch1.tbl1;
+ALTER PUBLICATION pub ADD SEQUENCE sch1.seq1;
SELECT * FROM pg_publication_tables;
pubname | schemaname | tablename
---------+------------+-----------
pub | sch1 | tbl1
(1 row)
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch1 | seq1
+ pub | sch2 | seq2
+(2 rows)
+
DROP PUBLICATION pub;
-- Schema publication that does not include the schema that has the parent table
CREATE PUBLICATION pub FOR ALL TABLES IN SCHEMA sch2 WITH (PUBLISH_VIA_PARTITION_ROOT=0);
+ALTER PUBLICATION pub ADD SEQUENCE sch1.seq1;
SELECT * FROM pg_publication_tables;
pubname | schemaname | tablename
---------+------------+------------
pub | sch2 | tbl1_part1
(1 row)
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch1 | seq1
+(1 row)
+
+DROP PUBLICATION pub;
+-- Sequence publication
+CREATE PUBLICATION pub FOR SEQUENCE sch2.seq2;
+SELECT * FROM pg_publication_tables;
+ pubname | schemaname | tablename
+---------+------------+-----------
+(0 rows)
+
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch2 | seq2
+(1 row)
+
DROP PUBLICATION pub;
-- Table publication that does not include the parent table
CREATE PUBLICATION pub FOR TABLE sch2.tbl1_part1 WITH (PUBLISH_VIA_PARTITION_ROOT=0);
@@ -1245,14 +1796,26 @@ SELECT * FROM pg_publication_tables;
pub | sch2 | tbl1_part1
(1 row)
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+(0 rows)
+
-- Table publication that includes both the parent table and the child table
ALTER PUBLICATION pub ADD TABLE sch1.tbl1;
+ALTER PUBLICATION pub ADD ALL SEQUENCES IN SCHEMA sch2;
SELECT * FROM pg_publication_tables;
pubname | schemaname | tablename
---------+------------+------------
pub | sch2 | tbl1_part1
(1 row)
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch2 | seq2
+(1 row)
+
DROP PUBLICATION pub;
DROP TABLE sch2.tbl1_part1;
DROP TABLE sch1.tbl1;
@@ -1268,9 +1831,81 @@ SELECT * FROM pg_publication_tables;
pub | sch1 | tbl1
(1 row)
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+(0 rows)
+
+DROP PUBLICATION pub;
+-- Schema publication
+CREATE PUBLICATION pub FOR SEQUENCE sch2.seq2;
+SELECT * FROM pg_publication_tables;
+ pubname | schemaname | tablename
+---------+------------+-----------
+(0 rows)
+
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch2 | seq2
+(1 row)
+
+DROP PUBLICATION pub;
+-- Sequence publication
+CREATE PUBLICATION pub FOR ALL SEQUENCES IN SCHEMA sch2;
+SELECT * FROM pg_publication_tables;
+ pubname | schemaname | tablename
+---------+------------+-----------
+(0 rows)
+
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch2 | seq2
+(1 row)
+
+ALTER PUBLICATION pub ADD SEQUENCE sch1.seq1;
+SELECT * FROM pg_publication_tables;
+ pubname | schemaname | tablename
+---------+------------+-----------
+(0 rows)
+
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch1 | seq1
+ pub | sch2 | seq2
+(2 rows)
+
+ALTER PUBLICATION pub DROP SEQUENCE sch1.seq1;
+SELECT * FROM pg_publication_tables;
+ pubname | schemaname | tablename
+---------+------------+-----------
+(0 rows)
+
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch2 | seq2
+(1 row)
+
+ALTER PUBLICATION pub ADD ALL SEQUENCES IN SCHEMA sch1;
+SELECT * FROM pg_publication_tables;
+ pubname | schemaname | tablename
+---------+------------+-----------
+(0 rows)
+
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch1 | seq1
+ pub | sch2 | seq2
+(2 rows)
+
RESET client_min_messages;
DROP PUBLICATION pub;
DROP TABLE sch1.tbl1;
+DROP SEQUENCE sch1.seq1, sch2.seq2;
DROP SCHEMA sch1 cascade;
DROP SCHEMA sch2 cascade;
RESET SESSION AUTHORIZATION;
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 6cb6388880d..8a5b20e62cb 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1429,6 +1429,14 @@ pg_prepared_xacts| SELECT p.transaction,
FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+ n.nspname AS schemaname,
+ c.relname AS sequencename
+ FROM pg_publication p,
+ LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+ (pg_class c
+ JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+ WHERE (c.oid = gps.relid);
pg_publication_tables| SELECT p.pubname,
n.nspname AS schemaname,
c.relname AS tablename
diff --git a/src/test/regress/sql/object_address.sql b/src/test/regress/sql/object_address.sql
index acd0468a9d9..9d8323468d6 100644
--- a/src/test/regress/sql/object_address.sql
+++ b/src/test/regress/sql/object_address.sql
@@ -49,6 +49,7 @@ CREATE TRANSFORM FOR int LANGUAGE SQL (
SET client_min_messages = 'ERROR';
CREATE PUBLICATION addr_pub FOR TABLE addr_nsp.gentable;
CREATE PUBLICATION addr_pub_schema FOR ALL TABLES IN SCHEMA addr_nsp;
+CREATE PUBLICATION addr_pub_schema2 FOR ALL SEQUENCES IN SCHEMA addr_nsp;
RESET client_min_messages;
CREATE SUBSCRIPTION regress_addr_sub CONNECTION '' PUBLICATION bar WITH (connect = false, slot_name = NONE);
CREATE STATISTICS addr_nsp.gentable_stat ON a, b FROM addr_nsp.gentable;
@@ -198,7 +199,8 @@ WITH objects (type, name, args) AS (VALUES
('transform', '{int}', '{sql}'),
('access method', '{btree}', '{}'),
('publication', '{addr_pub}', '{}'),
- ('publication namespace', '{addr_nsp}', '{addr_pub_schema}'),
+ ('publication namespace', '{addr_nsp}', '{addr_pub_schema,t}'),
+ ('publication namespace', '{addr_nsp}', '{addr_pub_schema2,s}'),
('publication relation', '{addr_nsp, gentable}', '{addr_pub}'),
('subscription', '{regress_addr_sub}', '{}'),
('statistics object', '{addr_nsp, gentable_stat}', '{}')
@@ -218,6 +220,7 @@ SELECT (pg_identify_object(addr1.classid, addr1.objid, addr1.objsubid)).*,
DROP FOREIGN DATA WRAPPER addr_fdw CASCADE;
DROP PUBLICATION addr_pub;
DROP PUBLICATION addr_pub_schema;
+DROP PUBLICATION addr_pub_schema2;
DROP SUBSCRIPTION regress_addr_sub;
DROP SCHEMA addr_nsp CASCADE;
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index 5457c56b33f..c195e75c6f0 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -27,7 +27,7 @@ CREATE PUBLICATION testpub_xxx WITH (publish_via_partition_root = 'true', publis
\dRp
-ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
+ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete, sequence');
\dRp
@@ -46,6 +46,8 @@ ALTER PUBLICATION testpub_foralltables SET (publish = 'insert, update');
CREATE TABLE testpub_tbl2 (id serial primary key, data text);
-- fail - can't add to for all tables publication
ALTER PUBLICATION testpub_foralltables ADD TABLE testpub_tbl2;
+-- fail - can't add a table using ADD SEQUENCE command
+ALTER PUBLICATION testpub_foralltables ADD SEQUENCE testpub_tbl2;
-- fail - can't drop from all tables publication
ALTER PUBLICATION testpub_foralltables DROP TABLE testpub_tbl2;
-- fail - can't add to for all tables publication
@@ -104,6 +106,183 @@ RESET client_min_messages;
DROP TABLE testpub_tbl3, testpub_tbl3a;
DROP PUBLICATION testpub3, testpub4;
+--- adding sequences
+CREATE SEQUENCE testpub_seq0;
+CREATE SEQUENCE pub_test.testpub_seq1;
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_forallsequences FOR ALL SEQUENCES WITH (publish = 'sequence');
+RESET client_min_messages;
+ALTER PUBLICATION testpub_forallsequences SET (publish = 'insert, sequence');
+
+CREATE SEQUENCE testpub_seq2;
+-- fail - can't add to for all sequences publication
+ALTER PUBLICATION testpub_forallsequences ADD SEQUENCE testpub_seq2;
+-- fail - can't drop from all sequences publication
+ALTER PUBLICATION testpub_forallsequences DROP SEQUENCE testpub_seq2;
+-- fail - can't add to for all sequences publication
+ALTER PUBLICATION testpub_forallsequences SET SEQUENCE pub_test.testpub_seq1;
+
+-- fail - can't add schema to 'FOR ALL SEQUENCES' publication
+ALTER PUBLICATION testpub_forallsequences ADD ALL SEQUENCES IN SCHEMA pub_test;
+-- fail - can't drop schema from 'FOR ALL SEQUENCES' publication
+ALTER PUBLICATION testpub_forallsequences DROP ALL SEQUENCES IN SCHEMA pub_test;
+-- fail - can't set schema to 'FOR ALL SEQUENCES' publication
+ALTER PUBLICATION testpub_forallsequences SET ALL SEQUENCES IN SCHEMA pub_test;
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_forsequence FOR SEQUENCE testpub_seq0;
+RESET client_min_messages;
+-- should be able to add schema to 'FOR SEQUENCE' publication
+ALTER PUBLICATION testpub_forsequence ADD ALL SEQUENCES IN SCHEMA pub_test;
+\dRp+ testpub_forsequence
+-- fail - can't add sequence from the schema we already added
+ALTER PUBLICATION testpub_forsequence ADD SEQUENCE pub_test.testpub_seq1;
+-- fail - can't add sequence using ADD TABLE command
+ALTER PUBLICATION testpub_forsequence ADD TABLE pub_test.testpub_seq1;
+-- should be able to drop schema from 'FOR SEQUENCE' publication
+ALTER PUBLICATION testpub_forsequence DROP ALL SEQUENCES IN SCHEMA pub_test;
+\dRp+ testpub_forsequence
+-- should be able to set schema to 'FOR SEQUENCE' publication
+ALTER PUBLICATION testpub_forsequence SET ALL SEQUENCES IN SCHEMA pub_test;
+\dRp+ testpub_forsequence
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_forschema FOR ALL SEQUENCES IN SCHEMA pub_test;
+RESET client_min_messages;
+-- fail - can't create publication with schema and sequence of the same schema
+CREATE PUBLICATION testpub_for_seq_schema FOR ALL SEQUENCES IN SCHEMA pub_test, SEQUENCE pub_test.testpub_seq1;
+-- fail - can't add a sequence of the same schema to the schema publication
+ALTER PUBLICATION testpub_forschema ADD SEQUENCE pub_test.testpub_seq1;
+-- fail - can't drop a sequence from the schema publication which isn't in the
+-- publication
+ALTER PUBLICATION testpub_forschema DROP SEQUENCE pub_test.testpub_seq1;
+-- should be able to set sequence to schema publication
+ALTER PUBLICATION testpub_forschema SET SEQUENCE pub_test.testpub_seq1;
+\dRp+ testpub_forschema
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'testpub_forallsequences';
+\d+ pub_test.testpub_seq1
+\dRp+ testpub_forallsequences
+DROP SEQUENCE testpub_seq0, pub_test.testpub_seq1, testpub_seq2;
+DROP PUBLICATION testpub_forallsequences, testpub_forsequence, testpub_forschema;
+
+-- Publication mixing tables and sequences
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_mix;
+RESET client_min_messages;
+
+CREATE SEQUENCE testpub_seq1;
+CREATE SEQUENCE pub_test.testpub_seq2;
+
+ALTER PUBLICATION testpub_mix ADD SEQUENCE testpub_seq1, TABLE testpub_tbl1;
+\dRp+ testpub_mix
+
+ALTER PUBLICATION testpub_mix ADD ALL SEQUENCES IN SCHEMA pub_test, ALL TABLES IN SCHEMA pub_test;
+\dRp+ testpub_mix
+
+ALTER PUBLICATION testpub_mix DROP ALL SEQUENCES IN SCHEMA pub_test;
+\dRp+ testpub_mix
+
+ALTER PUBLICATION testpub_mix DROP ALL TABLES IN SCHEMA pub_test;
+\dRp+ testpub_mix
+
+DROP PUBLICATION testpub_mix;
+DROP SEQUENCE testpub_seq1;
+DROP SEQUENCE pub_test.testpub_seq2;
+
+
+-- make sure we replicate only the correct relation type
+CREATE SCHEMA pub_test1;
+CREATE SEQUENCE pub_test1.test_seq1;
+CREATE TABLE pub_test1.test_tbl1 (a int primary key, b int);
+
+CREATE SCHEMA pub_test2;
+CREATE SEQUENCE pub_test2.test_seq2;
+CREATE TABLE pub_test2.test_tbl2 (a int primary key, b int);
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_schemas;
+RESET client_min_messages;
+
+-- add tables from one schema, sequences from the other
+ALTER PUBLICATION testpub_schemas ADD ALL TABLES IN SCHEMA pub_test2;
+ALTER PUBLICATION testpub_schemas ADD ALL SEQUENCES IN SCHEMA pub_test1;
+
+\dRp+ testpub_schemas
+
+\dn+ pub_test1
+\dn+ pub_test2
+
+\d+ pub_test1.test_seq1;
+\d+ pub_test1.test_tbl1;
+
+\d+ pub_test2.test_seq2;
+\d+ pub_test2.test_tbl2;
+
+-- add the other object type from each schema
+ALTER PUBLICATION testpub_schemas ADD ALL TABLES IN SCHEMA pub_test1;
+ALTER PUBLICATION testpub_schemas ADD ALL SEQUENCES IN SCHEMA pub_test2;
+
+\dRp+ testpub_schemas
+
+\dn+ pub_test1
+\dn+ pub_test2
+
+\d+ pub_test1.test_seq1;
+\d+ pub_test1.test_tbl1;
+
+\d+ pub_test2.test_seq2;
+\d+ pub_test2.test_tbl2;
+
+-- now drop the object type added first
+ALTER PUBLICATION testpub_schemas DROP ALL TABLES IN SCHEMA pub_test2;
+ALTER PUBLICATION testpub_schemas DROP ALL SEQUENCES IN SCHEMA pub_test1;
+
+\dRp+ testpub_schemas
+
+\dn+ pub_test1
+\dn+ pub_test2
+
+\d+ pub_test1.test_seq1;
+\d+ pub_test1.test_tbl1;
+
+\d+ pub_test2.test_seq2;
+\d+ pub_test2.test_tbl2;
+
+-- should fail (publication contains the whole schema)
+ALTER PUBLICATION testpub_schemas ADD TABLE pub_test1.test_tbl1;
+ALTER PUBLICATION testpub_schemas ADD SEQUENCE pub_test2.test_seq2;
+
+-- should work (different schema)
+ALTER PUBLICATION testpub_schemas ADD TABLE pub_test2.test_tbl2;
+ALTER PUBLICATION testpub_schemas ADD SEQUENCE pub_test1.test_seq1;
+
+\dRp+ testpub_schemas
+
+\d+ pub_test1.test_seq1;
+\d+ pub_test1.test_tbl1;
+
+\d+ pub_test2.test_seq2;
+\d+ pub_test2.test_tbl2;
+
+-- now drop the explicitly added objects again
+ALTER PUBLICATION testpub_schemas DROP TABLE pub_test2.test_tbl2;
+ALTER PUBLICATION testpub_schemas DROP SEQUENCE pub_test1.test_seq1;
+
+\dRp+ testpub_schemas
+
+\d+ pub_test1.test_seq1;
+\d+ pub_test1.test_tbl1;
+
+\d+ pub_test2.test_seq2;
+\d+ pub_test2.test_tbl2;
+
+DROP PUBLICATION testpub_schemas;
+DROP TABLE pub_test1.test_tbl1, pub_test2.test_tbl2;
+DROP SEQUENCE pub_test1.test_seq1, pub_test2.test_seq2;
+DROP SCHEMA pub_test1, pub_test2;
+
-- Tests for partitioned tables
SET client_min_messages = 'ERROR';
CREATE PUBLICATION testpub_forparted;
@@ -717,32 +896,51 @@ CREATE SCHEMA sch1;
CREATE SCHEMA sch2;
CREATE TABLE sch1.tbl1 (a int) PARTITION BY RANGE(a);
CREATE TABLE sch2.tbl1_part1 PARTITION OF sch1.tbl1 FOR VALUES FROM (1) to (10);
+CREATE SEQUENCE sch1.seq1;
+CREATE SEQUENCE sch2.seq2;
-- Schema publication that does not include the schema that has the parent table
CREATE PUBLICATION pub FOR ALL TABLES IN SCHEMA sch2 WITH (PUBLISH_VIA_PARTITION_ROOT=1);
+ALTER PUBLICATION pub ADD ALL SEQUENCES IN SCHEMA sch2;
SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
DROP PUBLICATION pub;
-- Table publication that does not include the parent table
CREATE PUBLICATION pub FOR TABLE sch2.tbl1_part1 WITH (PUBLISH_VIA_PARTITION_ROOT=1);
+ALTER PUBLICATION pub ADD SEQUENCE sch2.seq2;
SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
-- Table publication that includes both the parent table and the child table
ALTER PUBLICATION pub ADD TABLE sch1.tbl1;
+ALTER PUBLICATION pub ADD SEQUENCE sch1.seq1;
SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
DROP PUBLICATION pub;
-- Schema publication that does not include the schema that has the parent table
CREATE PUBLICATION pub FOR ALL TABLES IN SCHEMA sch2 WITH (PUBLISH_VIA_PARTITION_ROOT=0);
+ALTER PUBLICATION pub ADD SEQUENCE sch1.seq1;
+SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
+
+DROP PUBLICATION pub;
+-- Sequence publication
+CREATE PUBLICATION pub FOR SEQUENCE sch2.seq2;
SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
DROP PUBLICATION pub;
-- Table publication that does not include the parent table
CREATE PUBLICATION pub FOR TABLE sch2.tbl1_part1 WITH (PUBLISH_VIA_PARTITION_ROOT=0);
SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
-- Table publication that includes both the parent table and the child table
ALTER PUBLICATION pub ADD TABLE sch1.tbl1;
+ALTER PUBLICATION pub ADD ALL SEQUENCES IN SCHEMA sch2;
SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
DROP PUBLICATION pub;
DROP TABLE sch2.tbl1_part1;
@@ -755,10 +953,36 @@ CREATE TABLE sch1.tbl1_part3 (a int) PARTITION BY RANGE(a);
ALTER TABLE sch1.tbl1 ATTACH PARTITION sch1.tbl1_part3 FOR VALUES FROM (20) to (30);
CREATE PUBLICATION pub FOR ALL TABLES IN SCHEMA sch1 WITH (PUBLISH_VIA_PARTITION_ROOT=1);
SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
+
+DROP PUBLICATION pub;
+-- Schema publication
+CREATE PUBLICATION pub FOR SEQUENCE sch2.seq2;
+SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
+
+DROP PUBLICATION pub;
+-- Sequence publication
+CREATE PUBLICATION pub FOR ALL SEQUENCES IN SCHEMA sch2;
+SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
+
+ALTER PUBLICATION pub ADD SEQUENCE sch1.seq1;
+SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
+
+ALTER PUBLICATION pub DROP SEQUENCE sch1.seq1;
+SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
+
+ALTER PUBLICATION pub ADD ALL SEQUENCES IN SCHEMA sch1;
+SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
RESET client_min_messages;
DROP PUBLICATION pub;
DROP TABLE sch1.tbl1;
+DROP SEQUENCE sch1.seq1, sch2.seq2;
DROP SCHEMA sch1 cascade;
DROP SCHEMA sch2 cascade;
diff --git a/src/test/subscription/t/030_sequences.pl b/src/test/subscription/t/030_sequences.pl
new file mode 100644
index 00000000000..9ae3c03d7d1
--- /dev/null
+++ b/src/test/subscription/t/030_sequences.pl
@@ -0,0 +1,202 @@
+
+# Copyright (c) 2021, PostgreSQL Global Development Group
+
+# This tests that sequences are replicated correctly by logical replication
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->start;
+
+# Create subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Create some preexisting content on publisher
+my $ddl = qq(
+ CREATE TABLE seq_test (v BIGINT);
+ CREATE SEQUENCE s;
+);
+
+# Setup structure on the publisher
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Create some the same structure on subscriber, and an extra sequence that
+# we'll create on the publisher later
+$ddl = qq(
+ CREATE TABLE seq_test (v BIGINT);
+ CREATE SEQUENCE s;
+ CREATE SEQUENCE s2;
+);
+
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Setup logical replication
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+ "CREATE PUBLICATION seq_pub");
+
+$node_publisher->safe_psql('postgres',
+ "ALTER PUBLICATION seq_pub ADD SEQUENCE s");
+
+$node_subscriber->safe_psql('postgres',
+ "CREATE SUBSCRIPTION seq_sub CONNECTION '$publisher_connstr' PUBLICATION seq_pub"
+);
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Wait for initial sync to finish as well
+my $synced_query =
+ "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('s', 'r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+ or die "Timed out while waiting for subscriber to synchronize data";
+
+# Insert initial test data
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ -- generate a number of values using the sequence
+ INSERT INTO seq_test SELECT nextval('s') FROM generate_series(1,100);
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+my $result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s;
+));
+
+is( $result, '132|0|t',
+ 'initial test data replicated');
+
+
+# advance the sequence in a rolled-back transaction - the rollback
+# does not wait for the replication, so we could see any intermediate state
+# so do something else after the test, to ensure we wait for everything
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ INSERT INTO seq_test SELECT nextval('s') FROM generate_series(1,100);
+ ROLLBACK;
+ INSERT INTO seq_test VALUES (-1);
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s;
+));
+
+is( $result, '231|0|t',
+ 'advance sequence in rolled-back transaction');
+
+
+# create a new sequence and roll it back - should not be replicated, due to
+# the transactional behavior
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ CREATE SEQUENCE s2;
+ ALTER PUBLICATION seq_pub ADD SEQUENCE s2;
+ INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '1|0|f',
+ 'create new sequence and roll it back');
+
+
+# create a new sequence, advance it in a rolled-back transaction, but commit
+# the create - the advance should be replicated nevertheless
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ CREATE SEQUENCE s2;
+ ALTER PUBLICATION seq_pub ADD SEQUENCE s2;
+ SAVEPOINT sp1;
+ INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK TO sp1;
+ COMMIT;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Wait for sync of the second sequence we just added to finish
+$synced_query =
+ "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('s', 'r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+ or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '132|0|t',
+ 'create sequence, advance it in rolled-back transaction, but commit the create');
+
+
+# advance the new sequence in a transaction, and roll it back - the rollback
+# does not wait for the replication, so we could see any intermediate state
+# so do something else after the test, to ensure we wait for everything
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK;
+ INSERT INTO seq_test VALUES (-1);
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '231|0|t',
+ 'advance the new sequence in a transaction and roll it back');
+
+
+# advance the sequence in a subtransaction - the subtransaction gets rolled
+# back, but commit the main one - the changes should still be replicated
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ SAVEPOINT s1;
+ INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK TO s1;
+ COMMIT;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '330|0|t',
+ 'advance sequence in a subtransaction');
+
+
+done_testing();
--
2.34.1
Awesome to see this get committed, thanks Tomas.
Is there anything left or shall I update the CF entry to committed?
Hi,
Pushed, after going through the patch once more, addressed the remaining
FIXMEs, corrected a couple places in the docs and comments, etc. Minor
tweaks, nothing important.
I've been thinking about the grammar a bit more after pushing, and I
realized that maybe it'd be better to handle the FOR ALL TABLES /
SEQUENCES clause as PublicationObjSpec, not as a separate/special case.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 3/24/22 22:52, Greg Stark wrote:
Awesome to see this get committed, thanks Tomas.
Is there anything left or shall I update the CF entry to committed?
Yeah, let's mark it as committed. I was waiting for some feedback from
the buildfarm - there are some failures, but they seem unrelated.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Fri, Mar 25, 2022 at 3:29 AM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:
Pushed.
Some of the comments given by me [1]/messages/by-id/CAA4eK1Jn-DttQ=4Pdh9YCe1w+zGbgC+0uR0sfg2RtkjiAPmB9g@mail.gmail.com don't seem to be addressed or
responded to. Let me try to say again for the ease of discussion:
* Don't we need some syncing mechanism between apply worker and
sequence sync worker so that apply worker skips the sequence changes
till the sync worker is finished, otherwise, there is a risk of one
overriding the values of the other? See how we take care of this for a
table in should_apply_changes_for_rel() and its callers. If we don't
do this for sequences for some reason then probably a comment
somewhere is required.
* Don't we need explicit privilege checking before applying sequence
data as we do in commit a2ab9c06ea15fbcb2bfde570986a06b37f52bcca for
tables?
Few new comments:
=================
1. A simple test like the below crashes for me:
postgres=# create sequence s1;
CREATE SEQUENCE
postgres=# create sequence s2;
CREATE SEQUENCE
postgres=# create publication pub1 for sequence s1, s2;
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
2. In apply_handle_sequence() do we need AccessExclusiveLock for
non-transactional case?
3. In apply_handle_sequence(), I think for transactional case, we need
to skip the operation, if the skip lsn is set. See how we skip in
apply_handle_insert() and similar functions.
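I am thinking of something like this at the start of the transactional
path (untested, just to illustrate; paraphrasing the pattern used at the
top of apply_handle_insert() from memory):
    /* bail out if this transaction's changes are being skipped */
    if (is_skipping_changes())
        return;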
[1]: /messages/by-id/CAA4eK1Jn-DttQ=4Pdh9YCe1w+zGbgC+0uR0sfg2RtkjiAPmB9g@mail.gmail.com
--
With Regards,
Amit Kapila.
On Fri, Mar 25, 2022 at 6:59 AM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:
Hi,
Pushed, after going through the patch once more, addressed the remaining
FIXMEs, corrected a couple places in the docs and comments, etc. Minor
tweaks, nothing important.
The commit updates tab-completion for ALTER PUBLICATION but seems not
to update for CREATE PUBLICATION. I've attached a patch for that.
Also, the commit adds a new pgoutput option "sequences":
+ else if (strcmp(defel->defname, "sequences") == 0)
+ {
+ if (sequences_option_given)
+ ereport(ERROR,
+ (errcode(ERRCODE_SYNTAX_ERROR),
+ errmsg("conflicting
or redundant options")));
+ sequences_option_given = true;
+
+ data->sequences = defGetBoolean(defel);
+ }
But as far as I read changes, there is no use of this option, and this
code is not tested. Can we remove it or is it for upcoming changes?
Regards,
--
Masahiko Sawada
EDB: https://www.enterprisedb.com/
Attachments:
tab_completion.patch
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index e59bd8302d..63bfdf11c6 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -2971,21 +2971,27 @@ psql_completion(const char *text, int start, int end)
/* CREATE PUBLICATION */
else if (Matches("CREATE", "PUBLICATION", MatchAny))
- COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR ALL TABLES IN SCHEMA", "WITH (");
+ COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR ALL TABLES IN SCHEMA",
+ "FOR SEQUENCE", "FOR ALL SEQUENCES", "FOR ALL SEQUENCES IN SCHEMA",
+ "WITH (");
else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
- COMPLETE_WITH("TABLE", "ALL TABLES", "ALL TABLES IN SCHEMA");
+ COMPLETE_WITH("TABLE", "ALL TABLES", "ALL TABLES IN SCHEMA",
+ "SEQUENCE", "ALL SEQUENCES", "ALL SEQUENCES IN SCHEMA");
else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
- COMPLETE_WITH("TABLES", "TABLES IN SCHEMA");
- else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+ COMPLETE_WITH("TABLES", "TABLES IN SCHEMA", "SEQUENCES", "SEQUENCES IN SCHEMA");
+ else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES|SEQUENCES"))
COMPLETE_WITH("IN SCHEMA", "WITH (");
- else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLE", MatchAny) && !ends_with(prev_wd, ','))
+ else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLE|SEQUENCE", MatchAny) && !ends_with(prev_wd, ','))
COMPLETE_WITH("WHERE (", "WITH (");
/* Complete "CREATE PUBLICATION <name> FOR TABLE" with "<table>, ..." */
else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLE"))
COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_tables);
+ /* Complete "CREATE PUBLICATION <name> FOR SEQUENCE" with "<sequence>, ..." */
+ else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "SEQUENCE"))
+ COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_sequences);
/*
- * "CREATE PUBLICATION <name> FOR TABLE <name> WHERE (" - complete with
+ * "CREATE PUBLICATION <name> FOR TABLE|SEQUENCE <name> WHERE (" - complete with
* table attributes
*/
else if (HeadMatches("CREATE", "PUBLICATION", MatchAny) && TailMatches("WHERE"))
@@ -2996,14 +3002,14 @@ psql_completion(const char *text, int start, int end)
COMPLETE_WITH(" WITH (");
/*
- * Complete "CREATE PUBLICATION <name> FOR ALL TABLES IN SCHEMA <schema>,
+ * Complete "CREATE PUBLICATION <name> FOR ALL TABLES|SEQUENCES IN SCHEMA <schema>,
* ..."
*/
- else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES", "IN", "SCHEMA"))
+ else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES|SEQUENCES", "IN", "SCHEMA"))
COMPLETE_WITH_QUERY_PLUS(Query_for_list_of_schemas
" AND nspname NOT LIKE E'pg\\\\_%%'",
"CURRENT_SCHEMA");
- else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES", "IN", "SCHEMA", MatchAny) && (!ends_with(prev_wd, ',')))
+ else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES|SEQUENCES", "IN", "SCHEMA", MatchAny) && (!ends_with(prev_wd, ',')))
COMPLETE_WITH("WITH (");
/* Complete "CREATE PUBLICATION <name> [...] WITH" */
else if (HeadMatches("CREATE", "PUBLICATION") && TailMatches("WITH", "("))
On 3/25/22 05:01, Amit Kapila wrote:
On Fri, Mar 25, 2022 at 3:29 AM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:
Pushed.
Some of the comments given by me [1] don't seem to be addressed or
responded to. Let me try to say again for the ease of discussion:
D'oh! I got distracted by Petr's response to that message, and missed
this part ...
* Don't we need some syncing mechanism between apply worker and
sequence sync worker so that apply worker skips the sequence changes
till the sync worker is finished, otherwise, there is a risk of one
overriding the values of the other? See how we take care of this for a
table in should_apply_changes_for_rel() and its callers. If we don't
do this for sequences for some reason then probably a comment
somewhere is required.
How would that happen? If we're effectively setting the sequence as a
side effect of inserting the data, then why should we even replicate the
sequence? We'll have the problem later too, no?
* Don't we need explicit privilege checking before applying sequence
data as we do in commit a2ab9c06ea15fbcb2bfde570986a06b37f52bcca for
tables?
So essentially something like TargetPrivilegesCheck in the worker? I
think you're probably right we need something like that.
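Roughly along these lines in apply_handle_sequence(), I guess (just a
sketch of what I have in mind, not what I plan to commit verbatim; the
ACL_UPDATE mode is an assumption, the exact mode needs some thought):
    AclResult   aclresult;
    /* check the subscription's apply user may modify the sequence */
    aclresult = pg_class_aclcheck(RelationGetRelid(rel), GetUserId(),
                                  ACL_UPDATE);
    if (aclresult != ACLCHECK_OK)
        aclcheck_error(aclresult, OBJECT_SEQUENCE,
                       RelationGetRelationName(rel));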
Few new comments:
=================
1. A simple test like the below crashes for me:
postgres=# create sequence s1;
CREATE SEQUENCE
postgres=# create sequence s2;
CREATE SEQUENCE
postgres=# create publication pub1 for sequence s1, s2;
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
Yeah, preprocess_pubobj_list seems to be a few bricks shy. I have a fix,
will push shortly.
2. In apply_handle_sequence() do we need AccessExclusiveLock for
non-transactional case?
Good catch. This lock was inherited from ResetSequence, but now that the
transactional case works differently, we probably don't need it.
3. In apply_handle_sequence(), I think for transactional case, we need
to skip the operation, if the skip lsn is set. See how we skip in
apply_handle_insert() and similar functions.
Right.
Thanks for these reports!
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 3/25/22 08:00, Masahiko Sawada wrote:
On Fri, Mar 25, 2022 at 6:59 AM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:
Hi,
Pushed, after going through the patch once more, addressed the remaining
FIXMEs, corrected a couple places in the docs and comments, etc. Minor
tweaks, nothing important.
The commit updates tab-completion for ALTER PUBLICATION but seems not
to update for CREATE PUBLICATION. I've attached a patch for that.
Thanks. I'm pretty sure the patch did that, but it likely got lost in
one of the rebases due to a conflict. Too bad we don't have tests for
tab-complete. Will fix.
Also, the commit adds a new pgoutput option "sequences":
+ else if (strcmp(defel->defname, "sequences") == 0)
+ {
+ if (sequences_option_given)
+ ereport(ERROR,
+ (errcode(ERRCODE_SYNTAX_ERROR),
+ errmsg("conflicting or redundant options")));
+ sequences_option_given = true;
+
+ data->sequences = defGetBoolean(defel);
+ }
But as far as I read changes, there is no use of this option, and this
code is not tested. Can we remove it or is it for upcoming changes?
pgoutput_sequence uses this
if (!data->sequences)
return;
This was inspired by what we do for logical messages, but maybe there's
an argument we don't need this, considering we have "sequence" action
and that a sequence has to be added to the publication. I don't think
there's any future patch relying on this (and it could add it back, if
needed).
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Fri, Mar 25, 2022 at 3:56 PM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:
On 3/25/22 05:01, Amit Kapila wrote:
On Fri, Mar 25, 2022 at 3:29 AM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:
Pushed.
Some of the comments given by me [1] don't seem to be addressed or
responded to. Let me try to say again for the ease of discussion:
D'oh! I got distracted by Petr's response to that message, and missed
this part ...
* Don't we need some syncing mechanism between apply worker and
sequence sync worker so that apply worker skips the sequence changes
till the sync worker is finished, otherwise, there is a risk of one
overriding the values of the other? See how we take care of this for a
table in should_apply_changes_for_rel() and its callers. If we don't
do this for sequences for some reason then probably a comment
somewhere is required.
How would that happen? If we're effectively setting the sequence as a
side effect of inserting the data, then why should we even replicate the
sequence?
I was talking just about sequence values here, considering that some
sequence is just replicating based on nextval. I think the problem is
that apply worker might override what copy has done if an apply worker
is behind the sequence sync worker as both can run in parallel. Let me
try to take some theoretical example to explain this:
Assume, at LSN 10000, the value of sequence s1 is 10. Then by LSN
12000, the value of s1 becomes 20. Now, say copy decides to copy the
sequence value till LSN 12000 which means it will make the value as 20
on the subscriber, now, in parallel, apply worker can process LSN
10000 and make it again 10. Apply worker might end up redoing all
sequence operations along with some transactional ones where we
recreate the file. I am not sure what exact problem it can lead to but
I think we don't need to redo the work.
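For reference, the gate we use for tables looks roughly like this
(paraphrasing should_apply_changes_for_rel() from memory, so please check
worker.c for the exact code):
    static bool
    should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
    {
        /* a sync worker only applies changes for its own relation */
        if (am_tablesync_worker())
            return MyLogicalRepWorker->relid == rel->localreloid;
        /* the apply worker skips the relation until its sync is done */
        return (rel->state == SUBREL_STATE_READY ||
                (rel->state == SUBREL_STATE_SYNCDONE &&
                 rel->statelsn <= remote_final_lsn));
    }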
We'll have the problem later too, no?
* Don't we need explicit privilege checking before applying sequence
data as we do in commit a2ab9c06ea15fbcb2bfde570986a06b37f52bcca for
tables?
So essentially something like TargetPrivilegesCheck in the worker?
Right.
Few more comments:
==================
1.
@@ -636,7 +704,7 @@ CreatePublication(ParseState *pstate,
CreatePublicationStmt *stmt)
get_database_name(MyDatabaseId));
/* FOR ALL TABLES requires superuser */
- if (stmt->for_all_tables && !superuser())
+ if (for_all_tables && !superuser())
ereport(ERROR,
(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
errmsg("must be superuser to create FOR ALL TABLES
publication")));
Don't we need a similar check for 'for_all_schema' publications?
2.
+<varlistentry>
+<term>
+ Int8
+</term>
+<listitem>
+<para>
+ 1 if the sequence update is transactions, 0 otherwise.
Shall we say transactional instead of transactions?
3.
+/*
+ * Determine object type given the object type set for a schema.
+ */
+char
+pub_get_object_type_for_relkind(char relkind)
Shouldn't it be 'relation' instead of 'schema' at the end of the sentence?
4.
@@ -1739,13 +1804,13 @@ get_rel_sync_entry(PGOutputData *data,
Relation relation)
{
Oid schemaId = get_rel_namespace(relid);
List *pubids = GetRelationPublications(relid);
-
+ char objectType =
pub_get_object_type_for_relkind(get_rel_relkind(relid));
A few lines after this we are again getting relkind which is not a big
deal but OTOH there doesn't seem to be a need to fetch the same thing
twice from the cache.
5.
+
+ /* Check that user is allowed to manipulate the publication tables. */
+ if (sequences && pubform->puballsequences)
/tables/sequences
6.
+apply_handle_sequence(StringInfo s)
{
...
+
+ relid = RangeVarGetRelid(makeRangeVar(seq.nspname,
+ seq.seqname, -1),
+ RowExclusiveLock, false);
...
}
As here, missing_ok is passed as false, so if the sequence doesn't exist,
it will give a message like: "ERROR: relation "public.s1" does not
exist", whereas for tables we give a slightly clearer message like:
"ERROR: logical replication target relation "public.t1" does not
exist". This is handled via logicalrep_rel_open().
--
With Regards,
Amit Kapila.
On 3/25/22 12:21, Amit Kapila wrote:
On Fri, Mar 25, 2022 at 3:56 PM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:
On 3/25/22 05:01, Amit Kapila wrote:
On Fri, Mar 25, 2022 at 3:29 AM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:
Pushed.
Some of the comments given by me [1] don't seem to be addressed or
responded to. Let me try to say again for the ease of discussion:
D'oh! I got distracted by Petr's response to that message, and missed
this part ...
* Don't we need some syncing mechanism between apply worker and
sequence sync worker so that apply worker skips the sequence changes
till the sync worker is finished, otherwise, there is a risk of one
overriding the values of the other? See how we take care of this for a
table in should_apply_changes_for_rel() and its callers. If we don't
do this for sequences for some reason then probably a comment
somewhere is required.
How would that happen? If we're effectively setting the sequence as a
side effect of inserting the data, then why should we even replicate the
sequence?
I was talking just about sequence values here, considering that some
sequence is just replicating based on nextval. I think the problem is
that apply worker might override what copy has done if an apply worker
is behind the sequence sync worker as both can run in parallel. Let me
try to take some theoretical example to explain this:Assume, at LSN 10000, the value of sequence s1 is 10. Then by LSN
12000, the value of s1 becomes 20. Now, say copy decides to copy the
sequence value till LSN 12000 which means it will make the value as 20
on the subscriber, now, in parallel, apply worker can process LSN
10000 and make it again 10. Apply worker might end up redoing all
sequence operations along with some transactional ones where we
recreate the file. I am not sure what exact problem it can lead to but
I think we don't need to redo the work.
We'll have the problem later too, no?
Ah, I was confused why this would be an issue for sequences and not for
plain tables, but now I realize should_apply_changes_for_rel() is not
called in apply_handle_sequence(). Yes, that's probably a thinko.
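I.e. apply_handle_sequence() probably needs something like this near the
top, assuming we route sequences through logicalrep_rel_open() so the sync
state is tracked (sketch only; the remoteid field on the decoded sequence
message is my assumption here, not necessarily what the protocol carries):
    /* skip the change until the sequence sync worker has finished,
     * mirroring what the table apply paths do */
    rel = logicalrep_rel_open(seq.remoteid, RowExclusiveLock);
    if (!should_apply_changes_for_rel(rel))
    {
        logicalrep_rel_close(rel, RowExclusiveLock);
        return;
    }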
* Don't we need explicit privilege checking before applying sequence
data as we do in commit a2ab9c06ea15fbcb2bfde570986a06b37f52bcca for
tables?
So essentially something like TargetPrivilegesCheck in the worker?
Right.
OK, will do.
Few more comments:
==================
1.
@@ -636,7 +704,7 @@ CreatePublication(ParseState *pstate,
CreatePublicationStmt *stmt)
get_database_name(MyDatabaseId));
/* FOR ALL TABLES requires superuser */
- if (stmt->for_all_tables && !superuser())
+ if (for_all_tables && !superuser())
ereport(ERROR,
(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
errmsg("must be superuser to create FOR ALL TABLES publication")));
Don't we need a similar check for 'for_all_schema' publications?
I think you mean "for_all_sequences", right?
2.
+<varlistentry>
+<term>
+ Int8
+</term>
+<listitem>
+<para>
+ 1 if the sequence update is transactions, 0 otherwise.
Shall we say transactional instead of transactions?
3.
+/*
+ * Determine object type given the object type set for a schema.
+ */
+char
+pub_get_object_type_for_relkind(char relkind)
Shouldn't it be 'relation' instead of 'schema' at the end of the sentence?
4.
@@ -1739,13 +1804,13 @@ get_rel_sync_entry(PGOutputData *data,
Relation relation)
{
Oid schemaId = get_rel_namespace(relid);
List *pubids = GetRelationPublications(relid);
-
+ char objectType =
pub_get_object_type_for_relkind(get_rel_relkind(relid));
A few lines after this we are again getting relkind which is not a big
deal but OTOH there doesn't seem to be a need to fetch the same thing
twice from the cache.
5.
+
+ /* Check that user is allowed to manipulate the publication tables. */
+ if (sequences && pubform->puballsequences)
/tables/sequences
6.
+apply_handle_sequence(StringInfo s)
{
...
+
+ relid = RangeVarGetRelid(makeRangeVar(seq.nspname,
+ seq.seqname, -1),
+ RowExclusiveLock, false);
...
}
As here, missing_ok is passed as false, so if the sequence doesn't exist,
it will give a message like: "ERROR: relation "public.s1" does not
exist", whereas for tables we give a slightly clearer message like:
"ERROR: logical replication target relation "public.t1" does not
exist". This is handled via logicalrep_rel_open().
Thanks, I'll look at rewording these comments and messages.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Fri, Mar 25, 2022 at 3:29 AM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:
Hi,
Pushed, after going through the patch once more, addressed the remaining
FIXMEs, corrected a couple places in the docs and comments, etc. Minor
tweaks, nothing important.
While rebasing patch [1], I had a couple of comments:
static void
ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
- List **rels, List **schemas)
+ List **tables, List **sequences,
+ List **tables_schemas, List **sequences_schemas,
+ List **schemas)
{
ListCell *cell;
PublicationObjSpec *pubobj;
@@ -185,12 +194,23 @@ ObjectsInPublicationToOids(List
*pubobjspec_list, ParseState *pstate,
switch (pubobj->pubobjtype)
{
case PUBLICATIONOBJ_TABLE:
- *rels = lappend(*rels, pubobj->pubtable);
+ *tables = lappend(*tables, pubobj->pubtable);
+ break;
+ case PUBLICATIONOBJ_SEQUENCE:
+ *sequences = lappend(*sequences, pubobj->pubtable);
break;
case PUBLICATIONOBJ_TABLES_IN_SCHEMA:
schemaid = get_namespace_oid(pubobj->name, false);
/* Filter out duplicates if user specifies "sch1, sch1" */
+ *tables_schemas = list_append_unique_oid(*tables_schemas, schemaid);
+ *schemas = list_append_unique_oid(*schemas, schemaid);
+ break;
Now that tables_schemas and sequences_schemas are being updated and used
in ObjectsInPublicationToOids, the schemas parameter is no longer used
after processing in ObjectsInPublicationToOids, so I felt we can remove
that parameter.
/* ALTER PUBLICATION <name> ADD */
else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD"))
- COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE");
+ COMPLETE_WITH("ALL TABLES IN SCHEMA", "ALL SEQUENCES IN SCHEMA",
"TABLE", "SEQUENCE");
Tab completion of ALTER PUBLICATION for ADD and DROP is the same, so we
could combine them.
Attached a patch for the same.
Thoughts?
[1]: /messages/by-id/CALDaNm3=JrucjhiiwsYQw5-PGtBHFONa6F7hhWCXMsGvh=tamA@mail.gmail.com
Regards,
Vignesh
Attachments:
0001-Removed-unused-parameter-from-ObjectsInPublicationTo.patch
From dc707ba93494e3ba0cffd8ab6e1fb1b7a1c0e70e Mon Sep 17 00:00:00 2001
From: Vigneshwaran C <vignesh21@gmail.com>
Date: Fri, 25 Mar 2022 19:49:40 +0530
Subject: [PATCH] Removed unused parameter from ObjectsInPublicationToOids.
Removed unused parameter from ObjectsInPublicationToOids.
---
src/backend/commands/publicationcmds.c | 15 +++------------
src/bin/psql/tab-complete.c | 5 +----
2 files changed, 4 insertions(+), 16 deletions(-)
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index c6437799c5..89298d7857 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -175,8 +175,7 @@ parse_publication_options(ParseState *pstate,
static void
ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
List **tables, List **sequences,
- List **tables_schemas, List **sequences_schemas,
- List **schemas)
+ List **tables_schemas, List **sequences_schemas)
{
ListCell *cell;
PublicationObjSpec *pubobj;
@@ -204,14 +203,12 @@ ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
/* Filter out duplicates if user specifies "sch1, sch1" */
*tables_schemas = list_append_unique_oid(*tables_schemas, schemaid);
- *schemas = list_append_unique_oid(*schemas, schemaid);
break;
case PUBLICATIONOBJ_SEQUENCES_IN_SCHEMA:
schemaid = get_namespace_oid(pubobj->name, false);
/* Filter out duplicates if user specifies "sch1, sch1" */
*sequences_schemas = list_append_unique_oid(*sequences_schemas, schemaid);
- *schemas = list_append_unique_oid(*schemas, schemaid);
break;
case PUBLICATIONOBJ_TABLES_IN_CUR_SCHEMA:
search_path = fetch_search_path(false);
@@ -225,7 +222,6 @@ ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
/* Filter out duplicates if user specifies "sch1, sch1" */
*tables_schemas = list_append_unique_oid(*tables_schemas, schemaid);
- *schemas = list_append_unique_oid(*schemas, schemaid);
break;
case PUBLICATIONOBJ_SEQUENCES_IN_CUR_SCHEMA:
search_path = fetch_search_path(false);
@@ -239,7 +235,6 @@ ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
/* Filter out duplicates if user specifies "sch1, sch1" */
*sequences_schemas = list_append_unique_oid(*sequences_schemas, schemaid);
- *schemas = list_append_unique_oid(*schemas, schemaid);
break;
default:
/* shouldn't happen */
@@ -679,7 +674,6 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
List *sequences = NIL;
List *tables_schemaidlist = NIL;
List *sequences_schemaidlist = NIL;
- List *schemaidlist = NIL;
bool for_all_tables = false;
bool for_all_sequences = false;
@@ -782,8 +776,7 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
ObjectsInPublicationToOids(stmt->pubobjects, pstate,
&tables, &sequences,
&tables_schemaidlist,
- &sequences_schemaidlist,
- &schemaidlist);
+ &sequences_schemaidlist);
/* FOR ALL TABLES IN SCHEMA requires superuser */
if (list_length(tables_schemaidlist) > 0 && !superuser())
@@ -1462,14 +1455,12 @@ AlterPublication(ParseState *pstate, AlterPublicationStmt *stmt)
List *sequences = NIL;
List *tables_schemaidlist = NIL;
List *sequences_schemaidlist = NIL;
- List *schemaidlist = NIL;
Oid pubid = pubform->oid;
ObjectsInPublicationToOids(stmt->pubobjects, pstate,
&tables, &sequences,
&tables_schemaidlist,
- &sequences_schemaidlist,
- &schemaidlist);
+ &sequences_schemaidlist);
CheckAlterPublication(stmt, tup,
tables, tables_schemaidlist,
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 63bfdf11c6..682d4fe18d 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1814,7 +1814,7 @@ psql_completion(const char *text, int start, int end)
else if (Matches("ALTER", "PUBLICATION", MatchAny))
COMPLETE_WITH("ADD", "DROP", "OWNER TO", "RENAME TO", "SET");
/* ALTER PUBLICATION <name> ADD */
- else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD"))
+ else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD|DROP"))
COMPLETE_WITH("ALL TABLES IN SCHEMA", "ALL SEQUENCES IN SCHEMA", "TABLE", "SEQUENCE");
else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD|SET", "TABLE") ||
(HeadMatches("ALTER", "PUBLICATION", MatchAny, "ADD|SET", "TABLE") &&
@@ -1840,9 +1840,6 @@ psql_completion(const char *text, int start, int end)
COMPLETE_WITH(",", "WHERE (");
else if (HeadMatches("ALTER", "PUBLICATION", MatchAny, "ADD|SET", "TABLE"))
COMPLETE_WITH(",");
- /* ALTER PUBLICATION <name> DROP */
- else if (Matches("ALTER", "PUBLICATION", MatchAny, "DROP"))
- COMPLETE_WITH("ALL TABLES IN SCHEMA", "ALL SEQUENCES IN SCHEMA", "TABLE", "SEQUENCE");
/* ALTER PUBLICATION <name> SET */
else if (Matches("ALTER", "PUBLICATION", MatchAny, "SET"))
COMPLETE_WITH("(", "ALL TABLES IN SCHEMA", "ALL SEQUENCES IN SCHEMA", "TABLE", "SEQUENCE");
--
2.32.0
On 3/25/22 12:59, Tomas Vondra wrote:
On 3/25/22 12:21, Amit Kapila wrote:
On Fri, Mar 25, 2022 at 3:56 PM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:
On 3/25/22 05:01, Amit Kapila wrote:
On Fri, Mar 25, 2022 at 3:29 AM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:
Pushed.
Some of the comments given by me [1] don't seem to be addressed or
responded to. Let me try to say again for the ease of discussion:
D'oh! I got distracted by Petr's response to that message, and missed
this part ...
* Don't we need some syncing mechanism between apply worker and
sequence sync worker so that apply worker skips the sequence changes
till the sync worker is finished, otherwise, there is a risk of one
overriding the values of the other? See how we take care of this for a
table in should_apply_changes_for_rel() and its callers. If we don't
do this for sequences for some reason then probably a comment
somewhere is required.
How would that happen? If we're effectively setting the sequence as a
side effect of inserting the data, then why should we even replicate the
sequence?
I was talking just about sequence values here, considering that some
sequence is just replicating based on nextval. I think the problem is
that apply worker might override what copy has done if an apply worker
is behind the sequence sync worker as both can run in parallel. Let me
try to take some theoretical example to explain this:
Assume, at LSN 10000, the value of sequence s1 is 10. Then by LSN
12000, the value of s1 becomes 20. Now, say copy decides to copy the
sequence value till LSN 12000 which means it will make the value as 20
on the subscriber, now, in parallel, apply worker can process LSN
10000 and make it again 10. Apply worker might end up redoing all
sequence operations along with some transactional ones where we
recreate the file. I am not sure what exact problem it can lead to but
I think we don't need to redo the work.
We'll have the problem later too, no?
Ah, I was confused why this would be an issue for sequences and not for
plain tables, but now I realize should_apply_changes_for_rel() is not called in
apply_handle_sequence. Yes, that's probably a thinko.
Hmm, so fixing this might be a bit trickier than I expected.
Firstly, currently we only send nspname/relname in the sequence message,
not the remote OID or schema. The idea was that for sequences we don't
really need schema info, so this seemed OK.
But should_apply_changes_for_rel() needs a LogicalRepRelMapEntry, and to
create/maintain those records we need to send the schema.
Attached is a WIP patch that does that.
Two places need more work, I think:
1) maybe_send_schema needs ReorderBufferChange, but we don't have that
for sequences, we only have TXN. I created a simple wrapper, but maybe
we should just tweak maybe_send_schema to use TXN.
2) The transaction handling in apply_handle_sequence is a bit confusing. The non-transactional
increments won't have any explicit commit later, so we can't just rely
on begin_replication_step/end_replication_step. But I want to try
spending a bit more time on this.
But there's a more serious issue, I think. So far, we allowed this:
BEGIN;
CREATE SEQUENCE s2;
ALTER PUBLICATION p ADD SEQUENCE s2;
INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);
COMMIT;
and the behavior was that we replicated the changes. But with the patch
applied, that no longer happens, because should_apply_changes_for_rel
says the change should not be applied.
And after thinking about this, I think that's correct - we can't apply
changes until ALTER SUBSCRIPTION ... REFRESH PUBLICATION gets executed,
and we can't do that until the transaction commits.
So I guess that's correct, and the current behavior is a bug.
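For context, the check that now rejects these changes looks roughly like
this (paraphrased from should_apply_changes_for_rel() in worker.c, the
details may differ slightly):

static bool
should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
{
    if (am_tablesync_worker())
        return MyLogicalRepWorker->relid == rel->localreloid;
    else
        return (rel->state == SUBREL_STATE_READY ||
                (rel->state == SUBREL_STATE_SYNCDONE &&
                 rel->statelsn <= remote_final_lsn));
}

A sequence added in a still-open transaction can't be READY (or SYNCDONE)
on the subscriber until ALTER SUBSCRIPTION ... REFRESH PUBLICATION runs and
the sync worker finishes, so skipping its changes matches how tables
already behave.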
For a while I was thinking that maybe this means we don't need the
transactional behavior at all, but I think we do - we have to handle
ALTER SEQUENCE cases that are transactional.
Does that make sense, Amit?
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Attachments:
sequences-tablesync-fix.patch (text/x-patch)
diff --git a/src/backend/replication/logical/proto.c b/src/backend/replication/logical/proto.c
index 3dbe85d61a..0ae0378191 100644
--- a/src/backend/replication/logical/proto.c
+++ b/src/backend/replication/logical/proto.c
@@ -657,7 +657,6 @@ logicalrep_write_sequence(StringInfo out, Relation rel, TransactionId xid,
int64 last_value, int64 log_cnt, bool is_called)
{
uint8 flags = 0;
- char *relname;
pq_sendbyte(out, LOGICAL_REP_MSG_SEQUENCE);
@@ -668,9 +667,8 @@ logicalrep_write_sequence(StringInfo out, Relation rel, TransactionId xid,
pq_sendint8(out, flags);
pq_sendint64(out, lsn);
- logicalrep_write_namespace(out, RelationGetNamespace(rel));
- relname = RelationGetRelationName(rel);
- pq_sendstring(out, relname);
+ /* OID as sequence identifier */
+ pq_sendint32(out, RelationGetRelid(rel));
pq_sendint8(out, transactional);
pq_sendint64(out, last_value);
@@ -688,9 +686,8 @@ logicalrep_read_sequence(StringInfo in, LogicalRepSequence *seqdata)
pq_getmsgint(in, 1);
pq_getmsgint64(in);
- /* Read relation name from stream */
- seqdata->nspname = pstrdup(logicalrep_read_namespace(in));
- seqdata->seqname = pstrdup(pq_getmsgstring(in));
+ /* Read relation OID from stream */
+ seqdata->remoteid = pq_getmsgint(in, 4);
seqdata->transactional = pq_getmsgint(in, 1);
seqdata->last_value = pq_getmsgint64(in);
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index f3868b3e1f..d2acb08996 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -1151,7 +1151,8 @@ static void
apply_handle_sequence(StringInfo s)
{
LogicalRepSequence seq;
- Oid relid;
+ LogicalRepRelMapEntry *rel;
+ LOCKMODE lockmode;
if (handle_streamed_transaction(LOGICAL_REP_MSG_SEQUENCE, s))
return;
@@ -1167,9 +1168,8 @@ apply_handle_sequence(StringInfo s)
Assert(!(!seq.transactional && IsTransactionState()));
/*
- * Make sure we're in a transaction (needed by SetSequence). For
- * non-transactional updates we're guaranteed to start a new one,
- * and we'll commit it at the end.
+ * Make sure there's a transaction (the non-transactional case may not
+ * have one yet).
*/
if (!IsTransactionState())
{
@@ -1177,15 +1177,38 @@ apply_handle_sequence(StringInfo s)
maybe_reread_subscription();
}
- relid = RangeVarGetRelid(makeRangeVar(seq.nspname,
- seq.seqname, -1),
- RowExclusiveLock, false);
+ /*
+ * Use the necessary lock mode. For transactional changes we need
+ * AccessExclusiveLock (just like ResetSequence), while for regular
+ * updates RowExclusiveLock is enough.
+ */
+ lockmode = (seq.transactional ? AccessExclusiveLock : RowExclusiveLock);
- /* lock the sequence in AccessExclusiveLock, as expected by SetSequence */
- LockRelationOid(relid, AccessExclusiveLock);
+ rel = logicalrep_rel_open(seq.remoteid, lockmode);
+
+ if (!should_apply_changes_for_rel(rel))
+ {
+ /*
+ * The relation can't become interesting in the middle of the
+ * transaction so it's safe to unlock it.
+ */
+ logicalrep_rel_close(rel, RowExclusiveLock);
+
+ /*
+ * In non-transactional case there won't be any other changes, so
+ * abort the whole transaction.
+ */
+ if (!seq.transactional)
+ AbortCurrentTransaction();
+
+ return;
+ }
/* apply the sequence change */
- SetSequence(relid, seq.transactional, seq.last_value, seq.log_cnt, seq.is_called);
+ SetSequence(RelationGetRelid(rel->localrel),
+ seq.transactional, seq.last_value, seq.log_cnt, seq.is_called);
+
+ logicalrep_rel_close(rel, NoLock);
/*
* Commit the per-stream transaction (we only do this when not in
diff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c
index 4cdc698cbb..366faf9564 100644
--- a/src/backend/replication/pgoutput/pgoutput.c
+++ b/src/backend/replication/pgoutput/pgoutput.c
@@ -568,9 +568,9 @@ pgoutput_rollback_prepared_txn(LogicalDecodingContext *ctx,
* done yet.
*/
static void
-maybe_send_schema(LogicalDecodingContext *ctx,
- ReorderBufferChange *change,
- Relation relation, RelationSyncEntry *relentry)
+maybe_send_schema_txn(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn,
+ Relation relation, RelationSyncEntry *relentry)
{
bool schema_sent;
TransactionId xid = InvalidTransactionId;
@@ -585,10 +585,10 @@ maybe_send_schema(LogicalDecodingContext *ctx,
* the write methods will not include it.
*/
if (in_streaming)
- xid = change->txn->xid;
+ xid = txn->xid;
- if (change->txn->toptxn)
- topxid = change->txn->toptxn->xid;
+ if (txn->toptxn)
+ topxid = txn->toptxn->xid;
else
topxid = xid;
@@ -634,6 +634,14 @@ maybe_send_schema(LogicalDecodingContext *ctx,
relentry->schema_sent = true;
}
+static void
+maybe_send_schema(LogicalDecodingContext *ctx,
+ ReorderBufferChange *change,
+ Relation relation, RelationSyncEntry *relentry)
+{
+ maybe_send_schema_txn(ctx, change->txn, relation, relentry);
+}
+
/*
* Sends a relation
*/
@@ -1492,6 +1500,8 @@ pgoutput_sequence(LogicalDecodingContext *ctx,
if (!relentry->pubactions.pubsequence)
return;
+ maybe_send_schema_txn(ctx, txn, relation, relentry);
+
OutputPluginPrepareWrite(ctx, true);
logicalrep_write_sequence(ctx->out,
relation,
diff --git a/src/include/replication/logicalproto.h b/src/include/replication/logicalproto.h
index fb86ca022d..b7ab68d9a2 100644
--- a/src/include/replication/logicalproto.h
+++ b/src/include/replication/logicalproto.h
@@ -122,9 +122,10 @@ typedef struct LogicalRepTyp
/* Sequence info */
typedef struct LogicalRepSequence
{
- Oid remoteid; /* unique id of the remote sequence */
- char *nspname; /* schema name of remote sequence */
- char *seqname; /* name of the remote sequence */
+ // Oid remoteid; /* unique id of the remote sequence */
+ LogicalRepRelId remoteid; /* unique id of the sequence */
+ // char *nspname; /* schema name of remote sequence */
+ // char *seqname; /* name of the remote sequence */
bool transactional;
int64 last_value;
int64 log_cnt;
On 3/25/22 15:34, vignesh C wrote:
On Fri, Mar 25, 2022 at 3:29 AM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:
Hi,
Pushed, after going through the patch once more, addressed the remaining
FIXMEs, corrected a couple places in the docs and comments, etc. Minor
tweaks, nothing important.
While rebasing patch [1] I found a couple of comments:
static void
ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
- List **rels, List **schemas)
+ List **tables, List **sequences,
+ List **tables_schemas, List **sequences_schemas,
+ List **schemas)
{
ListCell *cell;
PublicationObjSpec *pubobj;
@@ -185,12 +194,23 @@ ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
switch (pubobj->pubobjtype)
{
case PUBLICATIONOBJ_TABLE:
- *rels = lappend(*rels, pubobj->pubtable);
+ *tables = lappend(*tables, pubobj->pubtable);
+ break;
+ case PUBLICATIONOBJ_SEQUENCE:
+ *sequences = lappend(*sequences, pubobj->pubtable);
break;
case PUBLICATIONOBJ_TABLES_IN_SCHEMA:
schemaid = get_namespace_oid(pubobj->name, false);
/* Filter out duplicates if user specifies "sch1, sch1" */
+ *tables_schemas = list_append_unique_oid(*tables_schemas, schemaid);
+ *schemas = list_append_unique_oid(*schemas, schemaid);
+ break;
Now tables_schemas and sequence_schemas are being updated and used in
ObjectsInPublicationToOids, schema parameter is no longer being used
after processing in ObjectsInPublicationToOids, I felt we can remove
that parameter.
Thanks! That's a nice simplification, I'll get that pushed in a couple
minutes.
/* ALTER PUBLICATION <name> ADD */
else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD"))
- COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE");
+ COMPLETE_WITH("ALL TABLES IN SCHEMA", "ALL SEQUENCES IN SCHEMA", "TABLE", "SEQUENCE");
Tab completion of alter publication for ADD and DROP is the same, we
could combine it.
We could, but I find these combined rules harder to read, so I'll keep
the current tab-completion.
Attached a patch for the same.
Thoughts?
Thanks for taking a look! Appreciated.
[1] - /messages/by-id/CALDaNm3=JrucjhiiwsYQw5-PGtBHFONa6F7hhWCXMsGvh=tamA@mail.gmail.com
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Hi,
I've fixed most of the reported issues (or at least I think so), with
the exception of those in apply_handle_sequence function, i.e.:
1) properly coordinating with the tablesync worker
2) considering skip_lsn, skipping changes
3) missing privilege check, similar to TargetPrivilegesCheck
4) nicer error message if the sequence does not exist
The apply_handle_sequence stuff seems to be inter-related, so I plan to
deal with that in a single separate commit - the main part being the
tablesync coordination, per the fix I shared earlier today. But I need
time to think about that, I don't want to rush that.
Thanks for the feedback!
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Fri, Mar 25, 2022 at 10:20 PM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:
Hmm, so fixing this might be a bit trickier than I expected.
Firstly, currently we only send nspname/relname in the sequence message,
not the remote OID or schema. The idea was that for sequences we don't
really need schema info, so this seemed OK.But should_apply_changes_for_rel() needs LogicalRepRelMapEntry, and to
create/maintain that those records we need to send the schema.Attached is a WIP patch does that.
Two places need more work, I think:
1) maybe_send_schema needs ReorderBufferChange, but we don't have that
for sequences, we only have TXN. I created a simple wrapper, but maybe
we should just tweak maybe_send_schema to use TXN.2) The transaction handling in is a bit confusing. The non-transactional
increments won't have any explicit commit later, so we can't just rely
on begin_replication_step/end_replication_step. But I want to try
spending a bit more time on this.
I didn't understand what you want to say in point (2).
But there's a more serious issue, I think. So far, we allowed this:
BEGIN;
CREATE SEQUENCE s2;
ALTER PUBLICATION p ADD SEQUENCE s2;
INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);
COMMIT;and the behavior was that we replicated the changes. But with the patch
applied, that no longer happens, because should_apply_changes_for_rel
says the change should not be applied.And after thinking about this, I think that's correct - we can't apply
changes until ALTER SUBSCRIPTION ... REFRESH PUBLICATION gets executed,
and we can't do that until the transaction commits.So I guess that's correct, and the current behavior is a bug.
Yes, I also think that is a bug.
For a while I was thinking that maybe this means we don't need the
transactional behavior at all, but I think we do - we have to handle
ALTER SEQUENCE cases that are transactional.
I need some time to think about this. At all places, it is mentioned
as creating a sequence for transactional cases which at the very least
need some tweak.
--
With Regards,
Amit Kapila.
On 3/26/22 08:28, Amit Kapila wrote:
On Fri, Mar 25, 2022 at 10:20 PM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:Hmm, so fixing this might be a bit trickier than I expected.
Firstly, currently we only send nspname/relname in the sequence message,
not the remote OID or schema. The idea was that for sequences we don't
really need schema info, so this seemed OK.But should_apply_changes_for_rel() needs LogicalRepRelMapEntry, and to
create/maintain that those records we need to send the schema.Attached is a WIP patch does that.
Two places need more work, I think:
1) maybe_send_schema needs ReorderBufferChange, but we don't have that
for sequences, we only have TXN. I created a simple wrapper, but maybe
we should just tweak maybe_send_schema to use TXN.2) The transaction handling in is a bit confusing. The non-transactional
increments won't have any explicit commit later, so we can't just rely
on begin_replication_step/end_replication_step. But I want to try
spending a bit more time on this.I didn't understand what you want to say in point (2).
My point is that apply_handle_sequence() either needs to use the same
transaction handling as other apply methods, or start (and commit) a
separate transaction for the "transactional" case.
Which means we can't just use begin_replication_step/end_replication_step,
and the current code seems a bit complex. And I'm not sure it's quite
correct. So this place needs more work.
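To make the two options concrete, here's a minimal sketch of the split I
have in mind - not the actual patch, and the lookup/locking and the
SetSequence() call are elided:

if (!seq.transactional)
{
    /* standalone change: run it in its own short transaction */
    StartTransactionCommand();
    /* ... open the sequence, should_apply_changes_for_rel(), SetSequence() ... */
    CommitTransactionCommand();
}
else
{
    /* transactional change: behave like the other apply handlers */
    begin_replication_step();
    /* ... open the sequence, should_apply_changes_for_rel(), SetSequence() ... */
    end_replication_step();
    /* the eventual COMMIT / abort of the remote transaction decides its fate */
}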
But there's a more serious issue, I think. So far, we allowed this:
BEGIN;
CREATE SEQUENCE s2;
ALTER PUBLICATION p ADD SEQUENCE s2;
INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);
COMMIT;and the behavior was that we replicated the changes. But with the patch
applied, that no longer happens, because should_apply_changes_for_rel
says the change should not be applied.And after thinking about this, I think that's correct - we can't apply
changes until ALTER SUBSCRIPTION ... REFRESH PUBLICATION gets executed,
and we can't do that until the transaction commits.So I guess that's correct, and the current behavior is a bug.
Yes, I also think that is a bug.
OK
For a while I was thinking that maybe this means we don't need the
transactional behavior at all, but I think we do - we have to handle
ALTER SEQUENCE cases that are transactional.I need some time to think about this.
Understood.
At all places, it is mentioned
as creating a sequence for transactional cases which at the very least
need some tweak.
Which places?
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Sat, Mar 26, 2022 at 3:26 PM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:
On 3/26/22 08:28, Amit Kapila wrote:
2) The transaction handling in is a bit confusing. The non-transactional
increments won't have any explicit commit later, so we can't just rely
on begin_replication_step/end_replication_step. But I want to try
spending a bit more time on this.I didn't understand what you want to say in point (2).
My point is that handle_apply_sequence() either needs to use the same
transaction handling as other apply methods, or start (and commit) a
separate transaction for the "transactional" case.Which means we can't use the begin_replication_step/end_replication_step
We already call CommitTransactionCommand after end_replication_step at
a few places in that file, so, as there is no explicit commit in the
non-transactional case, we can probably call CommitTransactionCommand
for it as well.
and the current code seems a bit complex. And I'm not sure it's quite
correct. So this place needs more work.
Agreed.
For a while I was thinking that maybe this means we don't need the
transactional behavior at all, but I think we do - we have to handle
ALTER SEQUENCE cases that are transactional.I need some time to think about this.
While thinking about this, I think I see a problem with the
non-transactional handling of sequences. It seems that we will skip
sending non-transactional sequence change if it occurs before the
decoding has reached a consistent point but the surrounding commit
occurs after a consistent point is reached. In such cases, the
corresponding DMLs like inserts will be sent but sequence changes
won't be sent. For example (this scenario is based on
twophase_snapshot.spec),
Initial setup:
==============
Create table t1_seq(c1 int);
Create Sequence seq1;
Test Execution via multiple sessions (this test allows insert in
session-2 to happen before we reach a consistent point and commit
happens after a consistent point):
=======================================================================================================
Session-2:
Begin;
SELECT pg_current_xact_id();
Session-1:
SELECT 'init' FROM pg_create_logical_replication_slot('test_slot',
'test_decoding', false, true);
Session-3:
Begin;
SELECT pg_current_xact_id();
Session-2:
Commit;
Begin;
INSERT INTO t1_seq SELECT nextval('seq1') FROM generate_series(1,100);
Session-3:
Commit;
Session-2:
Commit 'foo'
Session-1:
SELECT data FROM pg_logical_slot_get_changes('test_slot', NULL, NULL,
'include-xids', 'false', 'skip-empty-xacts', '1');
data
----------------------------------------------
BEGIN
table public.t1_seq: INSERT: c1[integer]:1
table public.t1_seq: INSERT: c1[integer]:2
table public.t1_seq: INSERT: c1[integer]:3
table public.t1_seq: INSERT: c1[integer]:4
table public.t1_seq: INSERT: c1[integer]:5
table public.t1_seq: INSERT: c1[integer]:6
Now, if we normally try to decode such an insert, the result would be
something like:
data
------------------------------------------------------------------------------
sequence public.seq1: transactional:0 last_value: 33 log_cnt: 0 is_called:1
sequence public.seq1: transactional:0 last_value: 66 log_cnt: 0 is_called:1
sequence public.seq1: transactional:0 last_value: 99 log_cnt: 0 is_called:1
sequence public.seq1: transactional:0 last_value: 132 log_cnt: 0 is_called:1
BEGIN
table public.t1_seq: INSERT: c1[integer]:1
table public.t1_seq: INSERT: c1[integer]:2
table public.t1_seq: INSERT: c1[integer]:3
table public.t1_seq: INSERT: c1[integer]:4
table public.t1_seq: INSERT: c1[integer]:5
table public.t1_seq: INSERT: c1[integer]:6
This will create an inconsistent replica as sequence changes won't be
replicated. I thought about changing the snapshot handling of
non-transactional sequence changes to be similar to transactional ones, but
that also won't work, because it is only at commit that we decide whether
we can send the changes.
For the transactional case, as we are considering the create sequence
operation as transactional, we would unnecessarily queue them even
though that is not required. Basically, they don't need to be
considered transactional and we can simply ignore such messages like
other DDLs. But for that probably we need to distinguish Alter/Create
case which may or may not be straightforward. Now, queuing them is
probably harmless unless it causes the transaction to spill/stream.
I still couldn't think completely about cases where a mix of
transactional and non-transactional changes occur in the same
transaction as I think it somewhat depends on what we want to do about
the above cases.
At all places, it is mentioned
as creating a sequence for transactional cases which at the very least
need some tweak.Which places?
In comments like:
a. When decoding sequences, we differentiate between sequences created
in a (running) transaction and sequences created in other (already
committed) transactions.
b. ... But for new sequences, we need to handle them in a transactional way, ..
c. ... Change needs to be handled as transactional, because the
sequence was created in a transaction that is still running ...
It seems all these places indicate a scenario of creating a sequence
whereas we want to do transactional stuff mainly for Alter.
--
With Regards,
Amit Kapila.
On Sat, Mar 26, 2022 at 6:56 PM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:
On 3/26/22 08:28, Amit Kapila wrote:
On Fri, Mar 25, 2022 at 10:20 PM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:Hmm, so fixing this might be a bit trickier than I expected.
Firstly, currently we only send nspname/relname in the sequence message,
not the remote OID or schema. The idea was that for sequences we don't
really need schema info, so this seemed OK.But should_apply_changes_for_rel() needs LogicalRepRelMapEntry, and to
create/maintain that those records we need to send the schema.Attached is a WIP patch does that.
Two places need more work, I think:
1) maybe_send_schema needs ReorderBufferChange, but we don't have that
for sequences, we only have TXN. I created a simple wrapper, but maybe
we should just tweak maybe_send_schema to use TXN.2) The transaction handling in is a bit confusing. The non-transactional
increments won't have any explicit commit later, so we can't just rely
on begin_replication_step/end_replication_step. But I want to try
spending a bit more time on this.I didn't understand what you want to say in point (2).
My point is that handle_apply_sequence() either needs to use the same
transaction handling as other apply methods, or start (and commit) a
separate transaction for the "transactional" case.Which means we can't use the begin_replication_step/end_replication_step
and the current code seems a bit complex. And I'm not sure it's quite
correct. So this place needs more work.But there's a more serious issue, I think. So far, we allowed this:
BEGIN;
CREATE SEQUENCE s2;
ALTER PUBLICATION p ADD SEQUENCE s2;
INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);
COMMIT;and the behavior was that we replicated the changes. But with the patch
applied, that no longer happens, because should_apply_changes_for_rel
says the change should not be applied.And after thinking about this, I think that's correct - we can't apply
changes until ALTER SUBSCRIPTION ... REFRESH PUBLICATION gets executed,
and we can't do that until the transaction commits.So I guess that's correct, and the current behavior is a bug.
Yes, I also think that is a bug.
OK
I also think that this is a bug. Given this behavior is a bug and
newly-added sequence data should be replicated only after ALTER
SUBSCRIPTION ... REFRESH PUBLICATION, is there any case where the
sequence message applied on the subscriber is transactional?
Regards,
--
Masahiko Sawada
EDB: https://www.enterprisedb.com/
On Fri, Apr 1, 2022 at 10:22 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:
On Sat, Mar 26, 2022 at 6:56 PM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:But there's a more serious issue, I think. So far, we allowed this:
BEGIN;
CREATE SEQUENCE s2;
ALTER PUBLICATION p ADD SEQUENCE s2;
INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);
COMMIT;and the behavior was that we replicated the changes. But with the patch
applied, that no longer happens, because should_apply_changes_for_rel
says the change should not be applied.And after thinking about this, I think that's correct - we can't apply
changes until ALTER SUBSCRIPTION ... REFRESH PUBLICATION gets executed,
and we can't do that until the transaction commits.So I guess that's correct, and the current behavior is a bug.
Yes, I also think that is a bug.
OK
I also think that this is a bug. Given this behavior is a bug and
newly-added sequence data should be replicated only after ALTER
SUBSCRIPTION ... REFRESH PUBLICATION, is there any case where the
sequence message applied on the subscriber is transactional?
It could be required for Alter Sequence as that can also rewrite the
relfilenode. However, IIUC, I think there is a bigger problem with
non-transactional sequence implementation as that can cause
inconsistent replica. See the problem description and test case in my
previous email [1]/messages/by-id/CAA4eK1KAFdQEULk+4C=ieWA5UPSUtf_gtqKsFj9J9f2c=8hm4g@mail.gmail.com (While thinking about this, I think I see a problem
with the non-transactional handling of sequences....). Can you please
once check that and let me know if I am missing something there? If
not, then I think we may need to first think of a solution for
non-transactional sequence handling.
[1]: /messages/by-id/CAA4eK1KAFdQEULk+4C=ieWA5UPSUtf_gtqKsFj9J9f2c=8hm4g@mail.gmail.com
--
With Regards,
Amit Kapila.
On 3/28/22 07:29, Amit Kapila wrote:
...
While thinking about this, I think I see a problem with the
non-transactional handling of sequences. It seems that we will skip
sending non-transactional sequence change if it occurs before the
decoding has reached a consistent point but the surrounding commit
occurs after a consistent point is reached. In such cases, the
corresponding DMLs like inserts will be sent but sequence changes
won't be sent. For example (this scenario is based on
twophase_snapshot.spec),Initial setup:
==============
Create table t1_seq(c1 int);
Create Sequence seq1;Test Execution via multiple sessions (this test allows insert in
session-2 to happen before we reach a consistent point and commit
happens after a consistent point):
=======================================================================================================Session-2:
Begin;
SELECT pg_current_xact_id();Session-1:
SELECT 'init' FROM pg_create_logical_replication_slot('test_slot',
'test_decoding', false, true);Session-3:
Begin;
SELECT pg_current_xact_id();Session-2:
Commit;
Begin;
INSERT INTO t1_seq SELECT nextval('seq1') FROM generate_series(1,100);Session-3:
Commit;Session-2:
Commit 'foo'Session-1:
SELECT data FROM pg_logical_slot_get_changes('test_slot', NULL, NULL,
'include-xids', 'false', 'skip-empty-xacts', '1');data
----------------------------------------------
BEGIN
table public.t1_seq: INSERT: c1[integer]:1
table public.t1_seq: INSERT: c1[integer]:2
table public.t1_seq: INSERT: c1[integer]:3
table public.t1_seq: INSERT: c1[integer]:4
table public.t1_seq: INSERT: c1[integer]:5
table public.t1_seq: INSERT: c1[integer]:6Now, if we normally try to decode such an insert, the result would be
something like:
data
------------------------------------------------------------------------------
sequence public.seq1: transactional:0 last_value: 33 log_cnt: 0 is_called:1
sequence public.seq1: transactional:0 last_value: 66 log_cnt: 0 is_called:1
sequence public.seq1: transactional:0 last_value: 99 log_cnt: 0 is_called:1
sequence public.seq1: transactional:0 last_value: 132 log_cnt: 0 is_called:1
BEGIN
table public.t1_seq: INSERT: c1[integer]:1
table public.t1_seq: INSERT: c1[integer]:2
table public.t1_seq: INSERT: c1[integer]:3
table public.t1_seq: INSERT: c1[integer]:4
table public.t1_seq: INSERT: c1[integer]:5
table public.t1_seq: INSERT: c1[integer]:6This will create an inconsistent replica as sequence changes won't be
replicated.
Hmm, that's interesting. I wonder if it can actually happen, though.
Have you been able to reproduce that, somehow?
I thought about changing snapshot dealing of
non-transactional sequence changes similar to transactional ones but
that also won't work because it is only at commit we decide whether we
can send the changes.
I wonder if there's some earlier LSN (similar to the consistent point)
which might be useful for this.
Or maybe we should queue even the non-transactional changes, not
per-transaction but in a global list, and then at each commit either
discard inspect them (at that point we know the lowest LSN for all
transactions and the consistent point). Seems complex, though.
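Just to sketch the bookkeeping I have in mind (none of these names exist,
they're purely illustrative):

/* one entry per decoded non-transactional increment, kept in a global list */
typedef struct SequenceIncrement
{
    Oid             seqoid;      /* which sequence */
    TransactionId   xid;         /* transaction that produced it */
    XLogRecPtr      lsn;         /* LSN of the WAL record */
    int64           last_value;
    int64           log_cnt;
    bool            is_called;
} SequenceIncrement;

/*
 * At each commit we'd walk the list, send the entries that are visible
 * given the (now known) consistent point, and discard the rest.
 */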
For the transactional case, as we are considering the create sequence
operation as transactional, we would unnecessarily queue them even
though that is not required. Basically, they don't need to be
considered transactional and we can simply ignore such messages like
other DDLs. But for that probably we need to distinguish Alter/Create
case which may or may not be straightforward. Now, queuing them is
probably harmless unless it causes the transaction to spill/stream.
I'm not sure I follow. Why would we queue them unnecessarily?
Also, there's the bug with decoding changes in transactions that create
the sequence and add it to a publication. I think the agreement was that
this behavior was incorrect, we should not decode changes until the
subscription is refreshed. Doesn't that mean there can't be any CREATE case,
just ALTER?
I still couldn't think completely about cases where a mix of
transactional and non-transactional changes occur in the same
transaction as I think it somewhat depends on what we want to do about
the above cases.
Understood. I need to think about this too.
At all places, it is mentioned
as creating a sequence for transactional cases which at the very least
need some tweak.Which places?
In comments like:
a. When decoding sequences, we differentiate between sequences created
in a (running) transaction and sequences created in other (already
committed) transactions.
b. ... But for new sequences, we need to handle them in a transactional way, ..
c. ... Change needs to be handled as transactional, because the
sequence was created in a transaction that is still running ...It seems all these places indicate a scenario of creating a sequence
whereas we want to do transactional stuff mainly for Alter.
Right, I'll think about how to clarify the comments.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 4/1/22 17:02, Tomas Vondra wrote:
On 3/28/22 07:29, Amit Kapila wrote:
...
While thinking about this, I think I see a problem with the
non-transactional handling of sequences. It seems that we will skip
sending non-transactional sequence change if it occurs before the
decoding has reached a consistent point but the surrounding commit
occurs after a consistent point is reached. In such cases, the
corresponding DMLs like inserts will be sent but sequence changes
won't be sent. For example (this scenario is based on
twophase_snapshot.spec),Initial setup:
==============
Create table t1_seq(c1 int);
Create Sequence seq1;Test Execution via multiple sessions (this test allows insert in
session-2 to happen before we reach a consistent point and commit
happens after a consistent point):
=======================================================================================================Session-2:
Begin;
SELECT pg_current_xact_id();Session-1:
SELECT 'init' FROM pg_create_logical_replication_slot('test_slot',
'test_decoding', false, true);Session-3:
Begin;
SELECT pg_current_xact_id();Session-2:
Commit;
Begin;
INSERT INTO t1_seq SELECT nextval('seq1') FROM generate_series(1,100);Session-3:
Commit;Session-2:
Commit 'foo'Session-1:
SELECT data FROM pg_logical_slot_get_changes('test_slot', NULL, NULL,
'include-xids', 'false', 'skip-empty-xacts', '1');data
----------------------------------------------
BEGIN
table public.t1_seq: INSERT: c1[integer]:1
table public.t1_seq: INSERT: c1[integer]:2
table public.t1_seq: INSERT: c1[integer]:3
table public.t1_seq: INSERT: c1[integer]:4
table public.t1_seq: INSERT: c1[integer]:5
table public.t1_seq: INSERT: c1[integer]:6Now, if we normally try to decode such an insert, the result would be
something like:
data
------------------------------------------------------------------------------
sequence public.seq1: transactional:0 last_value: 33 log_cnt: 0 is_called:1
sequence public.seq1: transactional:0 last_value: 66 log_cnt: 0 is_called:1
sequence public.seq1: transactional:0 last_value: 99 log_cnt: 0 is_called:1
sequence public.seq1: transactional:0 last_value: 132 log_cnt: 0 is_called:1
BEGIN
table public.t1_seq: INSERT: c1[integer]:1
table public.t1_seq: INSERT: c1[integer]:2
table public.t1_seq: INSERT: c1[integer]:3
table public.t1_seq: INSERT: c1[integer]:4
table public.t1_seq: INSERT: c1[integer]:5
table public.t1_seq: INSERT: c1[integer]:6This will create an inconsistent replica as sequence changes won't be
replicated.Hmm, that's interesting. I wonder if it can actually happen, though.
Have you been able to reproduce that, somehow?I thought about changing snapshot dealing of
non-transactional sequence changes similar to transactional ones but
that also won't work because it is only at commit we decide whether we
can send the changes.I wonder if there's some earlier LSN (similar to the consistent point)
which might be useful for this.Or maybe we should queue even the non-transactional changes, not
per-transaction but in a global list, and then at each commit either
discard inspect them (at that point we know the lowest LSN for all
transactions and the consistent point). Seems complex, though.For the transactional case, as we are considering the create sequence
operation as transactional, we would unnecessarily queue them even
though that is not required. Basically, they don't need to be
considered transactional and we can simply ignore such messages like
other DDLs. But for that probably we need to distinguish Alter/Create
case which may or may not be straightforward. Now, queuing them is
probably harmless unless it causes the transaction to spill/stream.I'm not sure I follow. Why would we queue them unnecessarily?
Also, there's the bug with decoding changes in transactions that create
the sequence and add it to a publication. I think the agreement was that
this behavior was incorrect, we should not decode changes until the
subscription is refreshed. Doesn't that mean can't be any CREATE case,
just ALTER?
So, I investigated this a bit more, and I wrote a couple of test_decoding
isolation tests (patch attached) demonstrating the issue. Actually, I
should say "issues" because it's a bit worse than you described ...
The whole problem is in this chunk of code in sequence_decode():
/* Skip the change if already processed (per the snapshot). */
if (transactional &&
!SnapBuildProcessChange(builder, xid, buf->origptr))
return;
else if (!transactional &&
(SnapBuildCurrentState(builder) != SNAPBUILD_CONSISTENT ||
SnapBuildXactNeedsSkip(builder, buf->origptr)))
return;
/* Queue the increment (or send immediately if not transactional). */
snapshot = SnapBuildGetOrBuildSnapshot(builder, xid);
ReorderBufferQueueSequence(ctx->reorder, xid, snapshot, buf->endptr,
origin_id, target_node, transactional,
xlrec->created, tuplebuf);
With the script you described, the increment is non-transactional, so we
end up in the second branch, return and thus discard the increment.
But it's also possible the change is transactional, in which case only the
first branch could skip it. But it does not, so we continue and start
building the snapshot. The first thing SnapBuildGetOrBuildSnapshot does is
Assert(builder->state == SNAPBUILD_CONSISTENT);
and we're still not in a consistent snapshot, so it just crashes and
burns :-(
The sequences.spec file has two definitions of s2restart step, one empty
(resulting in non-transactional change), one with ALTER SEQUENCE (which
means the change will be transactional).
The really "funny" thing is this is not new code - this is an exact copy
from logicalmsg_decode(), and logical messages have all those issues
too. We may discard some messages, trigger the same Assert, etc. There's
a messages2.spec demonstrating this (s2message step defines whether the
message is transactional or not).
So I guess we need to fix both places, perhaps in a similar way. And one
of those will have to be backpatched (which may make it more complex).
The only option I see is reworking the decoding so that it does not need
the snapshot at all. We'll need to stash the changes just like any other
change, apply them at end of transaction, and the main difference
between transactional and non-transactional case would be what happens
at abort. Transactional changes would be discarded, non-transactional
would be applied anyway.
The challenge is this reorders the sequence changes, so we'll need to
reconcile them somehow. One option would be to simply apply the
change with the highest LSN in the transaction, and then walk all other
in-progress transactions and discard changes for that sequence with a lower LSN.
Not sure how complex/expensive that would be, though. Another problem is
not all increments are WAL-logged, of course, not sure about that.
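A very rough sketch of that reconciliation, just to show the shape of it
(the iterator and the state struct are made up, nothing like this exists):

XLogRecPtr      best_lsn = InvalidXLogRecPtr;
SequenceState   best;       /* hypothetical: last_value / log_cnt / is_called */

/* hypothetical walk over the queued changes for this sequence, in the
 * committing transaction and all other in-progress ones */
foreach_sequence_change(seqoid, change)
{
    if (change->lsn > best_lsn)
    {
        best_lsn = change->lsn;
        best = change->state;
    }
}

/* apply 'best' on the subscriber and drop the older queued increments */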
Another option might be revisiting the approach proposed by Hannu in
September [1]/messages/by-id/CAMT0RQQeDR51xs8zTa25YpfKB1B34nS-Q4hhsRPznVsjMB_P1w@mail.gmail.com, i.e. tracking sequences touched in a transaction, and
then replicating the current state at that particular moment.
regards
[1]: /messages/by-id/CAMT0RQQeDR51xs8zTa25YpfKB1B34nS-Q4hhsRPznVsjMB_P1w@mail.gmail.com
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Attachments:
isolation-tests.patch (text/x-patch)
diff --git a/contrib/test_decoding/Makefile b/contrib/test_decoding/Makefile
index 36929dd97d3..0912a1237ed 100644
--- a/contrib/test_decoding/Makefile
+++ b/contrib/test_decoding/Makefile
@@ -9,7 +9,7 @@ REGRESS = ddl xact rewrite toast permissions decoding_in_xact \
sequence
ISOLATION = mxact delayed_startup ondisk_startup concurrent_ddl_dml \
oldest_xmin snapshot_transfer subxact_without_top concurrent_stream \
- twophase_snapshot slot_creation_error
+ twophase_snapshot slot_creation_error messages2 sequences
REGRESS_OPTS = --temp-config $(top_srcdir)/contrib/test_decoding/logical.conf
ISOLATION_OPTS = --temp-config $(top_srcdir)/contrib/test_decoding/logical.conf
diff --git a/contrib/test_decoding/expected/messages2.out b/contrib/test_decoding/expected/messages2.out
new file mode 100644
index 00000000000..c705693ed55
--- /dev/null
+++ b/contrib/test_decoding/expected/messages2.out
@@ -0,0 +1,78 @@
+Parsed test spec with 3 sessions
+
+starting permutation: s1init s2message s2insert s1start
+step s1init: SELECT 'init' FROM pg_create_logical_replication_slot('isolation_slot', 'test_decoding', false, true);
+?column?
+--------
+init
+(1 row)
+
+step s2message: SELECT pg_logical_emit_message(false, 'test', 'msg non-transactional');
+pg_logical_emit_message
+-----------------------
+0/1882B00
+(1 row)
+
+step s2insert: INSERT INTO t1_message VALUES ('msg');
+step s1start: SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false', 'skip-empty-xacts', '1');
+data
+----------------------------------------------------------------------------
+message: transactional: 0 prefix: test, sz: 21 content:msg non-transactional
+BEGIN
+table public.t1_message: INSERT: msg[text]:'msg'
+COMMIT
+(4 rows)
+
+?column?
+--------
+stop
+(1 row)
+
+
+starting permutation: s2begin s2txid s1init s3begin s3txid s2commit s2begin s2message s2insert s3commit s2commit s1start
+step s2begin: BEGIN;
+step s2txid: SELECT pg_current_xact_id();
+pg_current_xact_id
+------------------
+ 728
+(1 row)
+
+step s1init: SELECT 'init' FROM pg_create_logical_replication_slot('isolation_slot', 'test_decoding', false, true); <waiting ...>
+step s3begin: BEGIN;
+step s3txid: SELECT pg_current_xact_id();
+pg_current_xact_id
+------------------
+ 729
+(1 row)
+
+step s2commit: COMMIT;
+step s2begin: BEGIN;
+step s2message: SELECT pg_logical_emit_message(false, 'test', 'msg non-transactional');
+pg_logical_emit_message
+-----------------------
+0/1887998
+(1 row)
+
+step s2insert: INSERT INTO t1_message VALUES ('msg');
+step s3commit: COMMIT;
+step s1init: <... completed>
+?column?
+--------
+init
+(1 row)
+
+step s2commit: COMMIT;
+step s1start: SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false', 'skip-empty-xacts', '1');
+data
+------------------------------------------------
+message: transactional: 0 prefix: test, sz: 21 content:msg non-transactional
+BEGIN
+table public.t1_message: INSERT: msg[text]:'msg'
+COMMIT
+(3 rows)
+
+?column?
+--------
+stop
+(1 row)
+
diff --git a/contrib/test_decoding/expected/sequences.out b/contrib/test_decoding/expected/sequences.out
new file mode 100644
index 00000000000..36f27d9848b
--- /dev/null
+++ b/contrib/test_decoding/expected/sequences.out
@@ -0,0 +1,272 @@
+Parsed test spec with 3 sessions
+
+starting permutation: s1init s2restart s2insert s1start
+step s1init: SELECT 'init' FROM pg_create_logical_replication_slot('isolation_slot', 'test_decoding', false, true);
+?column?
+--------
+init
+(1 row)
+
+step s2restart:
+step s2insert: INSERT INTO t1_seq SELECT nextval('seq1') FROM generate_series(1,100);
+step s1start: SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false', 'skip-empty-xacts', '1');
+data
+----------------------------------------------------------------------------
+sequence public.seq1: transactional:0 last_value: 33 log_cnt: 0 is_called:1
+sequence public.seq1: transactional:0 last_value: 66 log_cnt: 0 is_called:1
+sequence public.seq1: transactional:0 last_value: 99 log_cnt: 0 is_called:1
+sequence public.seq1: transactional:0 last_value: 132 log_cnt: 0 is_called:1
+BEGIN
+table public.t1_seq: INSERT: c1[integer]:1
+table public.t1_seq: INSERT: c1[integer]:2
+table public.t1_seq: INSERT: c1[integer]:3
+table public.t1_seq: INSERT: c1[integer]:4
+table public.t1_seq: INSERT: c1[integer]:5
+table public.t1_seq: INSERT: c1[integer]:6
+table public.t1_seq: INSERT: c1[integer]:7
+table public.t1_seq: INSERT: c1[integer]:8
+table public.t1_seq: INSERT: c1[integer]:9
+table public.t1_seq: INSERT: c1[integer]:10
+table public.t1_seq: INSERT: c1[integer]:11
+table public.t1_seq: INSERT: c1[integer]:12
+table public.t1_seq: INSERT: c1[integer]:13
+table public.t1_seq: INSERT: c1[integer]:14
+table public.t1_seq: INSERT: c1[integer]:15
+table public.t1_seq: INSERT: c1[integer]:16
+table public.t1_seq: INSERT: c1[integer]:17
+table public.t1_seq: INSERT: c1[integer]:18
+table public.t1_seq: INSERT: c1[integer]:19
+table public.t1_seq: INSERT: c1[integer]:20
+table public.t1_seq: INSERT: c1[integer]:21
+table public.t1_seq: INSERT: c1[integer]:22
+table public.t1_seq: INSERT: c1[integer]:23
+table public.t1_seq: INSERT: c1[integer]:24
+table public.t1_seq: INSERT: c1[integer]:25
+table public.t1_seq: INSERT: c1[integer]:26
+table public.t1_seq: INSERT: c1[integer]:27
+table public.t1_seq: INSERT: c1[integer]:28
+table public.t1_seq: INSERT: c1[integer]:29
+table public.t1_seq: INSERT: c1[integer]:30
+table public.t1_seq: INSERT: c1[integer]:31
+table public.t1_seq: INSERT: c1[integer]:32
+table public.t1_seq: INSERT: c1[integer]:33
+table public.t1_seq: INSERT: c1[integer]:34
+table public.t1_seq: INSERT: c1[integer]:35
+table public.t1_seq: INSERT: c1[integer]:36
+table public.t1_seq: INSERT: c1[integer]:37
+table public.t1_seq: INSERT: c1[integer]:38
+table public.t1_seq: INSERT: c1[integer]:39
+table public.t1_seq: INSERT: c1[integer]:40
+table public.t1_seq: INSERT: c1[integer]:41
+table public.t1_seq: INSERT: c1[integer]:42
+table public.t1_seq: INSERT: c1[integer]:43
+table public.t1_seq: INSERT: c1[integer]:44
+table public.t1_seq: INSERT: c1[integer]:45
+table public.t1_seq: INSERT: c1[integer]:46
+table public.t1_seq: INSERT: c1[integer]:47
+table public.t1_seq: INSERT: c1[integer]:48
+table public.t1_seq: INSERT: c1[integer]:49
+table public.t1_seq: INSERT: c1[integer]:50
+table public.t1_seq: INSERT: c1[integer]:51
+table public.t1_seq: INSERT: c1[integer]:52
+table public.t1_seq: INSERT: c1[integer]:53
+table public.t1_seq: INSERT: c1[integer]:54
+table public.t1_seq: INSERT: c1[integer]:55
+table public.t1_seq: INSERT: c1[integer]:56
+table public.t1_seq: INSERT: c1[integer]:57
+table public.t1_seq: INSERT: c1[integer]:58
+table public.t1_seq: INSERT: c1[integer]:59
+table public.t1_seq: INSERT: c1[integer]:60
+table public.t1_seq: INSERT: c1[integer]:61
+table public.t1_seq: INSERT: c1[integer]:62
+table public.t1_seq: INSERT: c1[integer]:63
+table public.t1_seq: INSERT: c1[integer]:64
+table public.t1_seq: INSERT: c1[integer]:65
+table public.t1_seq: INSERT: c1[integer]:66
+table public.t1_seq: INSERT: c1[integer]:67
+table public.t1_seq: INSERT: c1[integer]:68
+table public.t1_seq: INSERT: c1[integer]:69
+table public.t1_seq: INSERT: c1[integer]:70
+table public.t1_seq: INSERT: c1[integer]:71
+table public.t1_seq: INSERT: c1[integer]:72
+table public.t1_seq: INSERT: c1[integer]:73
+table public.t1_seq: INSERT: c1[integer]:74
+table public.t1_seq: INSERT: c1[integer]:75
+table public.t1_seq: INSERT: c1[integer]:76
+table public.t1_seq: INSERT: c1[integer]:77
+table public.t1_seq: INSERT: c1[integer]:78
+table public.t1_seq: INSERT: c1[integer]:79
+table public.t1_seq: INSERT: c1[integer]:80
+table public.t1_seq: INSERT: c1[integer]:81
+table public.t1_seq: INSERT: c1[integer]:82
+table public.t1_seq: INSERT: c1[integer]:83
+table public.t1_seq: INSERT: c1[integer]:84
+table public.t1_seq: INSERT: c1[integer]:85
+table public.t1_seq: INSERT: c1[integer]:86
+table public.t1_seq: INSERT: c1[integer]:87
+table public.t1_seq: INSERT: c1[integer]:88
+table public.t1_seq: INSERT: c1[integer]:89
+table public.t1_seq: INSERT: c1[integer]:90
+table public.t1_seq: INSERT: c1[integer]:91
+table public.t1_seq: INSERT: c1[integer]:92
+table public.t1_seq: INSERT: c1[integer]:93
+table public.t1_seq: INSERT: c1[integer]:94
+table public.t1_seq: INSERT: c1[integer]:95
+table public.t1_seq: INSERT: c1[integer]:96
+table public.t1_seq: INSERT: c1[integer]:97
+table public.t1_seq: INSERT: c1[integer]:98
+table public.t1_seq: INSERT: c1[integer]:99
+table public.t1_seq: INSERT: c1[integer]:100
+COMMIT
+(106 rows)
+
+?column?
+--------
+stop
+(1 row)
+
+
+starting permutation: s2begin s2txid s1init s3begin s3txid s2commit s2begin s2restart s2insert s3commit s2commit s1start
+step s2begin: BEGIN;
+step s2txid: SELECT pg_current_xact_id();
+pg_current_xact_id
+------------------
+ 728
+(1 row)
+
+step s1init: SELECT 'init' FROM pg_create_logical_replication_slot('isolation_slot', 'test_decoding', false, true); <waiting ...>
+step s3begin: BEGIN;
+step s3txid: SELECT pg_current_xact_id();
+pg_current_xact_id
+------------------
+ 729
+(1 row)
+
+step s2commit: COMMIT;
+step s2begin: BEGIN;
+step s2restart:
+step s2insert: INSERT INTO t1_seq SELECT nextval('seq1') FROM generate_series(1,100);
+step s3commit: COMMIT;
+step s1init: <... completed>
+?column?
+--------
+init
+(1 row)
+
+step s2commit: COMMIT;
+step s1start: SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false', 'skip-empty-xacts', '1');
+data
+--------------------------------------------
+sequence public.seq1: transactional:0 last_value: 33 log_cnt: 0 is_called:1
+sequence public.seq1: transactional:0 last_value: 66 log_cnt: 0 is_called:1
+sequence public.seq1: transactional:0 last_value: 99 log_cnt: 0 is_called:1
+sequence public.seq1: transactional:0 last_value: 132 log_cnt: 0 is_called:1
+BEGIN
+table public.t1_seq: INSERT: c1[integer]:1
+table public.t1_seq: INSERT: c1[integer]:2
+table public.t1_seq: INSERT: c1[integer]:3
+table public.t1_seq: INSERT: c1[integer]:4
+table public.t1_seq: INSERT: c1[integer]:5
+table public.t1_seq: INSERT: c1[integer]:6
+table public.t1_seq: INSERT: c1[integer]:7
+table public.t1_seq: INSERT: c1[integer]:8
+table public.t1_seq: INSERT: c1[integer]:9
+table public.t1_seq: INSERT: c1[integer]:10
+table public.t1_seq: INSERT: c1[integer]:11
+table public.t1_seq: INSERT: c1[integer]:12
+table public.t1_seq: INSERT: c1[integer]:13
+table public.t1_seq: INSERT: c1[integer]:14
+table public.t1_seq: INSERT: c1[integer]:15
+table public.t1_seq: INSERT: c1[integer]:16
+table public.t1_seq: INSERT: c1[integer]:17
+table public.t1_seq: INSERT: c1[integer]:18
+table public.t1_seq: INSERT: c1[integer]:19
+table public.t1_seq: INSERT: c1[integer]:20
+table public.t1_seq: INSERT: c1[integer]:21
+table public.t1_seq: INSERT: c1[integer]:22
+table public.t1_seq: INSERT: c1[integer]:23
+table public.t1_seq: INSERT: c1[integer]:24
+table public.t1_seq: INSERT: c1[integer]:25
+table public.t1_seq: INSERT: c1[integer]:26
+table public.t1_seq: INSERT: c1[integer]:27
+table public.t1_seq: INSERT: c1[integer]:28
+table public.t1_seq: INSERT: c1[integer]:29
+table public.t1_seq: INSERT: c1[integer]:30
+table public.t1_seq: INSERT: c1[integer]:31
+table public.t1_seq: INSERT: c1[integer]:32
+table public.t1_seq: INSERT: c1[integer]:33
+table public.t1_seq: INSERT: c1[integer]:34
+table public.t1_seq: INSERT: c1[integer]:35
+table public.t1_seq: INSERT: c1[integer]:36
+table public.t1_seq: INSERT: c1[integer]:37
+table public.t1_seq: INSERT: c1[integer]:38
+table public.t1_seq: INSERT: c1[integer]:39
+table public.t1_seq: INSERT: c1[integer]:40
+table public.t1_seq: INSERT: c1[integer]:41
+table public.t1_seq: INSERT: c1[integer]:42
+table public.t1_seq: INSERT: c1[integer]:43
+table public.t1_seq: INSERT: c1[integer]:44
+table public.t1_seq: INSERT: c1[integer]:45
+table public.t1_seq: INSERT: c1[integer]:46
+table public.t1_seq: INSERT: c1[integer]:47
+table public.t1_seq: INSERT: c1[integer]:48
+table public.t1_seq: INSERT: c1[integer]:49
+table public.t1_seq: INSERT: c1[integer]:50
+table public.t1_seq: INSERT: c1[integer]:51
+table public.t1_seq: INSERT: c1[integer]:52
+table public.t1_seq: INSERT: c1[integer]:53
+table public.t1_seq: INSERT: c1[integer]:54
+table public.t1_seq: INSERT: c1[integer]:55
+table public.t1_seq: INSERT: c1[integer]:56
+table public.t1_seq: INSERT: c1[integer]:57
+table public.t1_seq: INSERT: c1[integer]:58
+table public.t1_seq: INSERT: c1[integer]:59
+table public.t1_seq: INSERT: c1[integer]:60
+table public.t1_seq: INSERT: c1[integer]:61
+table public.t1_seq: INSERT: c1[integer]:62
+table public.t1_seq: INSERT: c1[integer]:63
+table public.t1_seq: INSERT: c1[integer]:64
+table public.t1_seq: INSERT: c1[integer]:65
+table public.t1_seq: INSERT: c1[integer]:66
+table public.t1_seq: INSERT: c1[integer]:67
+table public.t1_seq: INSERT: c1[integer]:68
+table public.t1_seq: INSERT: c1[integer]:69
+table public.t1_seq: INSERT: c1[integer]:70
+table public.t1_seq: INSERT: c1[integer]:71
+table public.t1_seq: INSERT: c1[integer]:72
+table public.t1_seq: INSERT: c1[integer]:73
+table public.t1_seq: INSERT: c1[integer]:74
+table public.t1_seq: INSERT: c1[integer]:75
+table public.t1_seq: INSERT: c1[integer]:76
+table public.t1_seq: INSERT: c1[integer]:77
+table public.t1_seq: INSERT: c1[integer]:78
+table public.t1_seq: INSERT: c1[integer]:79
+table public.t1_seq: INSERT: c1[integer]:80
+table public.t1_seq: INSERT: c1[integer]:81
+table public.t1_seq: INSERT: c1[integer]:82
+table public.t1_seq: INSERT: c1[integer]:83
+table public.t1_seq: INSERT: c1[integer]:84
+table public.t1_seq: INSERT: c1[integer]:85
+table public.t1_seq: INSERT: c1[integer]:86
+table public.t1_seq: INSERT: c1[integer]:87
+table public.t1_seq: INSERT: c1[integer]:88
+table public.t1_seq: INSERT: c1[integer]:89
+table public.t1_seq: INSERT: c1[integer]:90
+table public.t1_seq: INSERT: c1[integer]:91
+table public.t1_seq: INSERT: c1[integer]:92
+table public.t1_seq: INSERT: c1[integer]:93
+table public.t1_seq: INSERT: c1[integer]:94
+table public.t1_seq: INSERT: c1[integer]:95
+table public.t1_seq: INSERT: c1[integer]:96
+table public.t1_seq: INSERT: c1[integer]:97
+table public.t1_seq: INSERT: c1[integer]:98
+table public.t1_seq: INSERT: c1[integer]:99
+table public.t1_seq: INSERT: c1[integer]:100
+COMMIT
+(102 rows)
+
+?column?
+--------
+stop
+(1 row)
+
diff --git a/contrib/test_decoding/specs/messages2.spec b/contrib/test_decoding/specs/messages2.spec
new file mode 100644
index 00000000000..1300b781b6c
--- /dev/null
+++ b/contrib/test_decoding/specs/messages2.spec
@@ -0,0 +1,45 @@
+# Test decoding of two-phase transactions during the build of a consistent snapshot.
+setup
+{
+ DROP TABLE IF EXISTS t1_message;
+ CREATE TABLE t1_message(msg text);
+}
+
+teardown
+{
+ DROP TABLE t1_message;
+ SELECT 'stop' FROM pg_drop_replication_slot('isolation_slot');
+}
+
+
+session "s1"
+setup { SET synchronous_commit=on; }
+
+step "s1init" {SELECT 'init' FROM pg_create_logical_replication_slot('isolation_slot', 'test_decoding', false, true);}
+step "s1start" {SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false', 'skip-empty-xacts', '1');}
+
+
+session "s2"
+setup { SET synchronous_commit=on; }
+
+step "s2txid" {SELECT pg_current_xact_id();}
+step "s2begin" {BEGIN;}
+step "s2commit" {COMMIT;}
+step "s2message" {SELECT pg_logical_emit_message(false, 'test', 'msg non-transactional');}
+#step "s2message" {SELECT pg_logical_emit_message(true, 'test', 'msg transactional');}
+step "s2insert" {INSERT INTO t1_message VALUES ('msg');}
+
+session "s3"
+setup { SET synchronous_commit=on; }
+
+step "s3txid" {SELECT pg_current_xact_id();}
+step "s3begin" {BEGIN;}
+step "s3commit" {COMMIT;}
+
+# Simple case, with consistent state before the whole insert, non-transactional
+#
+permutation "s1init" "s2message" "s2insert" "s1start"
+
+# Force building of a consistent snapshot after the INSERT but before COMMIT, non-transactional
+#
+permutation "s2begin" "s2txid" "s1init" "s3begin" "s3txid" "s2commit" "s2begin" "s2message" "s2insert" "s3commit" "s2commit" "s1start"
diff --git a/contrib/test_decoding/specs/sequences.spec b/contrib/test_decoding/specs/sequences.spec
new file mode 100644
index 00000000000..0f2cd06a7d3
--- /dev/null
+++ b/contrib/test_decoding/specs/sequences.spec
@@ -0,0 +1,49 @@
+# Test decoding of sequences during the build of a consistent snapshot.
+setup
+{
+ DROP TABLE IF EXISTS t1_seq;
+ DROP SEQUENCE IF EXISTS seq1;
+
+ CREATE TABLE t1_seq(c1 INT);
+ CREATE SEQUENCE seq1;
+}
+
+teardown
+{
+ DROP TABLE t1_seq;
+ DROP SEQUENCE seq1;
+ SELECT 'stop' FROM pg_drop_replication_slot('isolation_slot');
+}
+
+
+session "s1"
+setup { SET synchronous_commit=on; }
+
+step "s1init" {SELECT 'init' FROM pg_create_logical_replication_slot('isolation_slot', 'test_decoding', false, true);}
+step "s1start" {SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false', 'skip-empty-xacts', '1');}
+
+
+session "s2"
+setup { SET synchronous_commit=on; }
+
+step "s2txid" {SELECT pg_current_xact_id();}
+step "s2begin" {BEGIN;}
+step "s2commit" {COMMIT;}
+step "s2insert" {INSERT INTO t1_seq SELECT nextval('seq1') FROM generate_series(1,100);}
+#step "s2restart" {ALTER SEQUENCE seq1 RESTART;}
+step "s2restart" {}
+
+session "s3"
+setup { SET synchronous_commit=on; }
+
+step "s3txid" {SELECT pg_current_xact_id();}
+step "s3begin" {BEGIN;}
+step "s3commit" {COMMIT;}
+
+# Simple case, with consistent state before the whole insert
+#
+permutation "s1init" "s2restart" "s2insert" "s1start"
+
+# Force building of a consistent snapshot after the INSERT but before COMMIT
+#
+permutation "s2begin" "s2txid" "s1init" "s3begin" "s3txid" "s2commit" "s2begin" "s2restart" "s2insert" "s3commit" "s2commit" "s1start"
On Fri, Apr 1, 2022 at 8:32 PM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:
On 3/28/22 07:29, Amit Kapila wrote:
I thought about changing snapshot dealing of
non-transactional sequence changes similar to transactional ones but
that also won't work because it is only at commit we decide whether we
can send the changes.
I wonder if there's some earlier LSN (similar to the consistent point)
which might be useful for this.
Or maybe we should queue even the non-transactional changes, not
per-transaction but in a global list, and then at each commit either
discard inspect them (at that point we know the lowest LSN for all
transactions and the consistent point). Seems complex, though.
I couldn't follow '..discard inspect them ..'. Do you mean we inspect
them and discard whichever are not required? It seems here we are
talking about a new global ReorderBufferGlobal instead of
ReorderBufferTXN to collect these changes but we don't need only
consistent point LSN because we do send if the commit of containing
transaction is after consistent point LSN, so we need some transaction
information as well. I think it could bring new challenges.
For the transactional case, as we are considering the create sequence
operation as transactional, we would unnecessarily queue them even
though that is not required. Basically, they don't need to be
considered transactional and we can simply ignore such messages like
other DDLs. But for that probably we need to distinguish Alter/Create
case which may or may not be straightforward. Now, queuing them is
probably harmless unless it causes the transaction to spill/stream.
I'm not sure I follow. Why would we queue them unnecessarily?
Also, there's the bug with decoding changes in transactions that create
the sequence and add it to a publication. I think the agreement was that
this behavior was incorrect, we should not decode changes until the
subscription is refreshed. Doesn't that mean can't be any CREATE case,
just ALTER?
Yeah, but how will we distinguish them. Aren't they using the same
kind of WAL record?
--
With Regards,
Amit Kapila.
On Sat, Apr 2, 2022 at 5:47 AM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:
On 4/1/22 17:02, Tomas Vondra wrote:
The only option I see is reworking the decoding so that it does not need
the snapshot at all. We'll need to stash the changes just like any other
change, apply them at end of transaction, and the main difference
between transactional and non-transactional case would be what happens
at abort. Transactional changes would be discarded, non-transactional
would be applied anyway.
I think in the above I am not following how we can make it work
without considering *snapshot at all* because based on that we would
have done the initial sync (copy_sequence) and if we don't follow that
later it can lead to inconsistency. I might be missing something here.
The challenge is this reorders the sequence changes, so we'll need to
reconcile them somehow. One option would be to simply (1) apply the
change with the highest LSN in the transaction, and then walk all other
in-progress transactions and changes for that sequence with a lower LSN.
Not sure how complex/expensive that would be, though. Another problem is
not all increments are WAL-logged, of course, not sure about that.
Another option might be revisiting the approach proposed by Hannu in
September [1], i.e. tracking sequences touched in a transaction, and
then replicating the current state at that particular moment.
I'll think about that approach as well.
--
With Regards,
Amit Kapila.
On 4/2/22 12:35, Amit Kapila wrote:
On Fri, Apr 1, 2022 at 8:32 PM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:
On 3/28/22 07:29, Amit Kapila wrote:
I thought about changing snapshot dealing of
non-transactional sequence changes similar to transactional ones but
that also won't work because it is only at commit we decide whether we
can send the changes.
I wonder if there's some earlier LSN (similar to the consistent point)
which might be useful for this.
Or maybe we should queue even the non-transactional changes, not
per-transaction but in a global list, and then at each commit either
discard inspect them (at that point we know the lowest LSN for all
transactions and the consistent point). Seems complex, though.
I couldn't follow '..discard inspect them ..'. Do you mean we inspect
them and discard whichever are not required? It seems here we are
talking about a new global ReorderBufferGlobal instead of
ReorderBufferTXN to collect these changes but we don't need only
consistent point LSN because we do send if the commit of containing
transaction is after consistent point LSN, so we need some transaction
information as well. I think it could bring new challenges.
Sorry for the gibberish. Yes, I meant to discard sequence changes that
are no longer needed, due to being "obsoleted" by the applied change. We
must not apply "older" changes (using LSN) because that would make the
sequence go backwards.
I'm not entirely sure whether the list of changes should be kept in TXN
or in the global reorderbuffer object - we need to track which TXN the
change belongs to (because of transactional changes) but we also need to
discard the unnecessary changes efficiently (and walking TXN might be
expensive).
But yes, I'm sure there will be challenges. One being that tracking just
the decoded WAL stuff is not enough, because nextval() may not generate
WAL. But we still need to make sure the increment is replicated.
What I think we might do is this:
- add a global list of decoded sequence increments to ReorderBuffer
- at each commit/abort walk the list and consider all
increments up to the commit LSN that "match" (non-transactional match
all TXNs, transactional match only the current TXN)
- replicate the last "matching" status for each sequence, discard the
processed ones
We could probably optimize this by not tracking every single increment,
but merge them "per transaction", I think.
I'm sure this description is pretty rough and will need refining to handle
various corner-cases etc.
For the transactional case, as we are considering the create sequence
operation as transactional, we would unnecessarily queue them even
though that is not required. Basically, they don't need to be
considered transactional and we can simply ignore such messages like
other DDLs. But for that probably we need to distinguish Alter/Create
case which may or may not be straightforward. Now, queuing them is
probably harmless unless it causes the transaction to spill/stream.
I'm not sure I follow. Why would we queue them unnecessarily?
Also, there's the bug with decoding changes in transactions that create
the sequence and add it to a publication. I think the agreement was that
this behavior was incorrect, we should not decode changes until the
subscription is refreshed. Doesn't that mean can't be any CREATE case,
just ALTER?
Yeah, but how will we distinguish them. Aren't they using the same
kind of WAL record?
Same WAL record, but the "created" flag should distinguish these
two cases, IIRC.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 4/2/22 12:43, Amit Kapila wrote:
On Sat, Apr 2, 2022 at 5:47 AM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:
On 4/1/22 17:02, Tomas Vondra wrote:
The only option I see is reworking the decoding so that it does not need
the snapshot at all. We'll need to stash the changes just like any other
change, apply them at end of transaction, and the main difference
between transactional and non-transactional case would be what happens
at abort. Transactional changes would be discarded, non-transactional
would be applied anyway.
I think in the above I am not following how we can make it work
without considering *snapshot at all* because based on that we would
have done the initial sync (copy_sequence) and if we don't follow that
later it can lead to inconsistency. I might be missing something here.
Well, what I meant to say is that we can't consider the snapshot at this
phase of decoding. We'd still consider it later, at commit/abort time,
of course. I.e. it'd be fairly similar to what heap_decode/DecodeInsert
does, for example. AFAIK this does not build the snapshot anywhere.
The challenge is this reorders the sequence changes, so we'll need to
reconcile them somehow. One option would be to simply (1) apply the
change with the highest LSN in the transaction, and then walk all other
in-progress transactions and changes for that sequence with a lower LSN.
Not sure how complex/expensive that would be, though. Another problem is
not all increments are WAL-logged, of course, not sure about that.
Another option might be revisiting the approach proposed by Hannu in
September [1], i.e. tracking sequences touched in a transaction, and
then replicating the current state at that particular moment.
I'll think about that approach as well.
Thanks!
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 4/2/22 13:51, Tomas Vondra wrote:
On 4/2/22 12:35, Amit Kapila wrote:
On Fri, Apr 1, 2022 at 8:32 PM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:
On 3/28/22 07:29, Amit Kapila wrote:
I thought about changing snapshot dealing of
non-transactional sequence changes similar to transactional ones but
that also won't work because it is only at commit we decide whether we
can send the changes.
I wonder if there's some earlier LSN (similar to the consistent point)
which might be useful for this.
Or maybe we should queue even the non-transactional changes, not
per-transaction but in a global list, and then at each commit either
discard inspect them (at that point we know the lowest LSN for all
transactions and the consistent point). Seems complex, though.
I couldn't follow '..discard inspect them ..'. Do you mean we inspect
them and discard whichever are not required? It seems here we are
talking about a new global ReorderBufferGlobal instead of
ReorderBufferTXN to collect these changes but we don't need only
consistent point LSN because we do send if the commit of containing
transaction is after consistent point LSN, so we need some transaction
information as well. I think it could bring new challenges.
Sorry for the gibberish. Yes, I meant to discard sequence changes that
are no longer needed, due to being "obsoleted" by the applied change. We
must not apply "older" changes (using LSN) because that would make the
sequence go backwards.
I'm not entirely sure whether the list of changes should be kept in TXN
or in the global reorderbuffer object - we need to track which TXN the
change belongs to (because of transactional changes) but we also need to
discard the unnecessary changes efficiently (and walking TXN might be
expensive).
But yes, I'm sure there will be challenges. One being that tracking just
the decoded WAL stuff is not enough, because nextval() may not generate
WAL. But we still need to make sure the increment is replicated.
What I think we might do is this:
- add a global list of decoded sequence increments to ReorderBuffer
- at each commit/abort walk the list and consider all
increments up to the commit LSN that "match" (non-transactional match
all TXNs, transactional match only the current TXN)
- replicate the last "matching" status for each sequence, discard the
processed ones
We could probably optimize this by not tracking every single increment,
but merge them "per transaction", I think.
I'm sure this description is pretty rough and will need refining to handle
various corner-cases etc.
I did some experiments over the weekend, exploring how to rework the
sequence decoding in various ways. Let me share some WIP patches,
hopefully that can be useful for trying more stuff and moving this
discussion forward.
I tried two things - (1) accumulating sequence increments in global
array and then doing something with it, and (2) treating all sequence
increments as regular changes (in a TXN) and then doing something
special during the replay. Attached are two patchsets, one for each
approach.
Note: It's important to remember decoding of sequences is not the only
code affected by this. The logical messages have the same issue,
certainly when it comes to transactional vs. non-transactional stuff and
handling of snapshots. Even if the sequence decoding ends up being
reverted, we still need to fix that, somehow. And my feeling is the
solutions ought to be pretty similar in both cases.
Now, regarding the two approaches:
(1) accumulating sequences in global hash table
The main problem with regular sequence increments is that those need to
be non-transactional - a transaction may use a sequence without any
WAL-logging, if the WAL was written by an earlier transaction. The
problem is the earlier transaction might have been rolled back, and thus
simply discarded by the logical decoding. But we still need to apply
that, in order not to lose the sequence increment.
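To make this concrete, here's a rough SQL illustration (it assumes the
default behavior where nextval() WAL-logs a batch of values ahead of time,
per SEQ_LOG_VALS, so exactly where WAL gets written may vary):
CREATE SEQUENCE s;
BEGIN;
SELECT nextval('s');   -- writes a WAL record covering a batch of future values
ROLLBACK;              -- decoding discards this transaction
BEGIN;
SELECT nextval('s');   -- typically no new WAL, value comes from the pre-logged batch
COMMIT;                -- yet this increment still has to reach the replica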
The current code just applies those non-transactional increments right
after decoding the increment, but that does not work because we may not
have a snapshot at that point. And we only have the snapshot when within
a transaction (AFAICS) so this queues all changes and then applies the
changes later.
The changes need to be shared by all transactions, so queueing them in a
global structure works fairly well - otherwise we'd have to walk all
transactions, in order to see if there are relevant sequence increments.
But some increments may be transactional, e.g. when the sequence is
created or altered in a transaction. To allow tracking this, this approach
uses a hash table, with relfilenode as a key.
There's a couple issues with this, though. Firstly, because the changes are
stashed outside transactions, they're not included in memory accounting, not
spilled to disk or streamed, etc. I guess fixing this is possible, but
it's certainly not straightforward, because we mix increments from many
different transactions.
A bigger issue is that I'm not sure this actually handles the snapshots
correctly either.
The non-transactional increments affect all transactions, so when
ReorderBufferProcessSequences gets executed, it processes all of them,
no matter the source transaction. Can we be sure the snapshot in the
applying transaction is the same (or "compatible") as the snapshot in
the source transaction?
Transactional increments can be simply processed as regular changes, of
course, but one difference is that we always create the transaction
(while before we just triggered the apply callback). This is necessary
as now we drive all of this from ReorderBufferCommit(), and without the
transaction the increment would not be applied / there would be no snapshot.
It does seem to work, though, although I haven't tested it much so far.
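For anyone who wants to poke at this, the SQL interface of test_decoding
should be enough to exercise both paths (just a sketch - the slot name is
arbitrary, and the decoded output format for sequences is whatever the
patches in this thread print):
SELECT 'init' FROM pg_create_logical_replication_slot('seq_slot', 'test_decoding');
CREATE SEQUENCE s2;                                -- decoded transactionally (new relfilenode)
SELECT nextval('s2') FROM generate_series(1,100);  -- later increments are non-transactional
SELECT data FROM pg_logical_slot_get_changes('seq_slot', NULL, NULL,
       'include-xids', '0', 'skip-empty-xacts', '1');
SELECT pg_drop_replication_slot('seq_slot');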
One annoying bit is that we have to always walk all sequences and
increments, for each change in the transaction. Which seems quite
expensive, although the number of in-progress increments should be
pretty low (roughly equal to the number of sequences). Or at least the
part we need to consider for a single change (i.e. between two LSNs).
So maybe this should work. The one part this does not handle at all is
aborted transactions. At the moment we just discard those, which means
(a) we fail to discard the transactional changes from the hash table,
and (b) we can't apply the non-transactional changes, because with the
changes we also discard the snapshots we need.
I wonder if we could use a different snapshot, though. Non-transactional
changes can't change the relfilenode, after all. Not sure. If not, the
only solution I can think of is processing even aborted transactions,
but skipping changes except those that update snapshots.
There's a serious problem with streaming, though - we don't know which
transaction will commit first, hence we can't decide whether to send the
sequence changes. This seems pretty fatal to me. So we'd have to stream
the sequence changes only at commit, and then do some of this work on
the worker (i.e. merge the sequence changes to the right place). That
seems pretty annoying.
(2) treating sequence change as regular changes
This adopts a different approach - instead of accumulating the sequence
increments in a global hash table, it treats them as regular changes.
Which solves the snapshot issue, and issues with spilling to disk,
streaming and so on.
But it has various other issues with handling concurrent transactions,
unfortunately, which probably make this approach infeasible:
* The non-transactional stuff has to be applied in the first transaction
that commits, not in the transaction that generated the WAL. That does
not work too well with this approach, because we have to walk changes in
all other transactions.
* Another serious issue seems to be streaming - if we already streamed
some of the changes, we can't iterate through them anymore.
Also, having to walk the transactions over and over for each change, to
apply relevant sequence increments, that's mighty expensive. The other
approach needs to do that too, but walking the global hash table seems
much cheaper.
The other issue is the handling of aborted transactions - we need to apply
sequence increments even from those transactions, of course. The other
approach has this issue too, though.
(3) tracking sequences touched by transaction
This is the approach proposed by Hannu Krosing. I haven't explored this
again yet, but I recall I wrote a PoC patch a couple months back.
It seems to me most of the problems stem from trying to derive sequence
state from decoded WAL changes, which is problematic because of the
non-transactional nature of sequences (i.e. WAL for one transaction
affects other transactions in non-obvious ways). And this approach
simply works around that entirely - instead of trying to deduce the
sequence state from WAL, we'd make sure to write the current sequence
state (or maybe just ID of the sequence) at commit time. Which should
eliminate most of the complexity / problems, I think.
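Just to illustrate, the "current state" of a sequence is essentially the
three columns stored in the sequence relation itself, so the commit-time
record would presumably carry something like this (sketch only):
SELECT last_value, log_cnt, is_called FROM s;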
I'm not really sure what to do about this. All of those reworks seem
like an extensive redesign of the patch, and considering the last CF is
already over ... not great.
However, even if we end up reverting this, we'll still have the same
problem with snapshots for logical messages.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Attachments:
vglobal-0001-WIP-sequence-reworks.patch (text/x-patch)
From 1c81414bc3b118b8b26489ec8731e8590e3b4806 Mon Sep 17 00:00:00 2001
From: Tomas Vondra <tomas.vondra@postgresql.org>
Date: Sat, 2 Apr 2022 20:44:15 +0200
Subject: [PATCH vglobal 1/2] WIP: sequence reworks
---
src/backend/replication/logical/decode.c | 34 +-
.../replication/logical/reorderbuffer.c | 475 +++++++-----------
src/include/replication/reorderbuffer.h | 36 +-
3 files changed, 223 insertions(+), 322 deletions(-)
diff --git a/src/backend/replication/logical/decode.c b/src/backend/replication/logical/decode.c
index 77bc7aea7a0..77db984f594 100644
--- a/src/backend/replication/logical/decode.c
+++ b/src/backend/replication/logical/decode.c
@@ -1313,9 +1313,7 @@ sequence_decode(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
TransactionId xid = XLogRecGetXid(r);
uint8 info = XLogRecGetInfo(buf->record) & ~XLR_INFO_MASK;
xl_seq_rec *xlrec;
- Snapshot snapshot;
RepOriginId origin_id = XLogRecGetOrigin(r);
- bool transactional;
/* only decode changes flagged with XLOG_SEQ_LOG */
if (info != XLOG_SEQ_LOG)
@@ -1351,33 +1349,15 @@ sequence_decode(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
if(!datalen || !tupledata)
return;
+ /* sets the snapshot etc. */
+ if (!SnapBuildProcessChange(builder, xid, buf->origptr))
+ return;
+
tuplebuf = ReorderBufferGetTupleBuf(ctx->reorder, tuplelen);
DecodeSeqTuple(tupledata, datalen, tuplebuf);
- /*
- * Should we handle the sequence increment as transactional or not?
- *
- * If the sequence was created in a still-running transaction, treat
- * it as transactional and queue the increments. Otherwise it needs
- * to be treated as non-transactional, in which case we send it to
- * the plugin right away.
- */
- transactional = ReorderBufferSequenceIsTransactional(ctx->reorder,
- target_node,
- xlrec->created);
-
- /* Skip the change if already processed (per the snapshot). */
- if (transactional &&
- !SnapBuildProcessChange(builder, xid, buf->origptr))
- return;
- else if (!transactional &&
- (SnapBuildCurrentState(builder) != SNAPBUILD_CONSISTENT ||
- SnapBuildXactNeedsSkip(builder, buf->origptr)))
- return;
-
- /* Queue the increment (or send immediately if not transactional). */
- snapshot = SnapBuildGetOrBuildSnapshot(builder, xid);
- ReorderBufferQueueSequence(ctx->reorder, xid, snapshot, buf->endptr,
- origin_id, target_node, transactional,
+ /* queue the sequence increment */
+ ReorderBufferQueueSequence(ctx->reorder, xid, buf->endptr,
+ origin_id, target_node,
xlrec->created, tuplebuf);
}
diff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c
index c2d9be81fae..3113076a690 100644
--- a/src/backend/replication/logical/reorderbuffer.c
+++ b/src/backend/replication/logical/reorderbuffer.c
@@ -156,6 +156,8 @@ typedef struct ReorderBufferSequenceEnt
{
RelFileNode rnode;
TransactionId xid;
+ bool transactional;
+ dlist_head changes;
} ReorderBufferSequenceEnt;
/* data structures for (relfilenode, ctid) => (cmin, cmax) mapping */
@@ -365,6 +367,11 @@ ReorderBufferAllocate(void)
SLAB_DEFAULT_BLOCK_SIZE,
sizeof(ReorderBufferChange));
+ buffer->sequence_context = SlabContextCreate(new_ctx,
+ "Sequence",
+ SLAB_DEFAULT_BLOCK_SIZE,
+ sizeof(ReorderBufferSequenceChange));
+
buffer->txn_context = SlabContextCreate(new_ctx,
"TXN",
SLAB_DEFAULT_BLOCK_SIZE,
@@ -516,6 +523,21 @@ ReorderBufferGetChange(ReorderBuffer *rb)
return change;
}
+/*
+ * Get a fresh ReorderBufferSequenceChange.
+ */
+static ReorderBufferSequenceChange *
+ReorderBufferGetSequenceChange(ReorderBuffer *rb)
+{
+ ReorderBufferSequenceChange *change;
+
+ change = (ReorderBufferSequenceChange *)
+ MemoryContextAlloc(rb->sequence_context, sizeof(ReorderBufferSequenceChange));
+
+ memset(change, 0, sizeof(ReorderBufferSequenceChange));
+ return change;
+}
+
/*
* Free a ReorderBufferChange and update memory accounting, if requested.
*/
@@ -575,13 +597,6 @@ ReorderBufferReturnChange(ReorderBuffer *rb, ReorderBufferChange *change,
change->data.truncate.relids = NULL;
}
break;
- case REORDER_BUFFER_CHANGE_SEQUENCE:
- if (change->data.sequence.tuple)
- {
- ReorderBufferReturnTupleBuf(rb, change->data.sequence.tuple);
- change->data.sequence.tuple = NULL;
- }
- break;
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT:
case REORDER_BUFFER_CHANGE_INTERNAL_COMMAND_ID:
@@ -923,48 +938,26 @@ ReorderBufferQueueMessage(ReorderBuffer *rb, TransactionId xid,
* so we simply do a lookup (the sequence is identified by relfilende). If
* we find a match, the increment should be handled as transactional.
*/
-bool
+static bool
ReorderBufferSequenceIsTransactional(ReorderBuffer *rb,
RelFileNode rnode, bool created)
{
bool found = false;
+ ReorderBufferSequenceEnt *ent;
if (created)
return true;
- hash_search(rb->sequences,
- (void *) &rnode,
- HASH_FIND,
- &found);
-
- return found;
-}
+ ent = hash_search(rb->sequences,
+ (void *) &rnode,
+ HASH_FIND,
+ &found);
-/*
- * Cleanup sequences created in in-progress transactions.
- *
- * There's no way to search by XID, so we simply do a seqscan of all
- * the entries in the hash table. Hopefully there are only a couple
- * entries in most cases - people generally don't create many new
- * sequences over and over.
- */
-static void
-ReorderBufferSequenceCleanup(ReorderBuffer *rb, TransactionId xid)
-{
- HASH_SEQ_STATUS scan_status;
- ReorderBufferSequenceEnt *ent;
-
- hash_seq_init(&scan_status, rb->sequences);
- while ((ent = (ReorderBufferSequenceEnt *) hash_seq_search(&scan_status)) != NULL)
- {
- /* skip sequences not from this transaction */
- if (ent->xid != xid)
- continue;
+ /* if not found, it's definitely non-transactional */
+ if (!found)
+ return false;
- (void) hash_search(rb->sequences,
- (void *) &(ent->rnode),
- HASH_REMOVE, NULL);
- }
+ return ent->transactional;
}
/*
@@ -978,166 +971,97 @@ ReorderBufferSequenceCleanup(ReorderBuffer *rb, TransactionId xid)
*/
void
ReorderBufferQueueSequence(ReorderBuffer *rb, TransactionId xid,
- Snapshot snapshot, XLogRecPtr lsn, RepOriginId origin_id,
- RelFileNode rnode, bool transactional, bool created,
+ XLogRecPtr lsn, RepOriginId origin_id,
+ RelFileNode rnode, bool created,
ReorderBufferTupleBuf *tuplebuf)
{
- /*
- * Change needs to be handled as transactional, because the sequence was
- * created in a transaction that is still running. In that case all the
- * changes need to be queued in that transaction, we must not send them
- * to the downstream until the transaction commits.
- *
- * There's a bit of a trouble with subtransactions - we can't queue it
- * into the subxact, because it might be rolled back and we'd lose the
- * increment. We need to queue it into the same (sub)xact that created
- * the sequence, which is why we track the XID in the hash table.
- */
- if (transactional)
- {
- MemoryContext oldcontext;
- ReorderBufferChange *change;
-
- /* lookup sequence by relfilenode */
- ReorderBufferSequenceEnt *ent;
- bool found;
-
- /* transactional changes require a transaction */
- Assert(xid != InvalidTransactionId);
-
- /* search the lookup table (we ignore the return value, found is enough) */
- ent = hash_search(rb->sequences,
- (void *) &rnode,
- created ? HASH_ENTER : HASH_FIND,
- &found);
-
- /*
- * If this is the "create" increment, we must not have found any
- * pre-existing entry in the hash table (i.e. there must not be
- * any conflicting sequence).
- */
- Assert(!(created && found));
-
- /* But we must have either created or found an existing entry. */
- Assert(created || found);
-
- /*
- * When creating the sequence, remember the XID of the transaction
- * that created id.
- */
- if (created)
- ent->xid = xid;
+ bool transactional;
+ MemoryContext oldcontext;
+ ReorderBufferSequenceChange *change;
- /* XXX Maybe check that we're still in the same top-level xact? */
+ /* lookup sequence by relfilenode */
+ ReorderBufferSequenceEnt *ent;
+ bool found;
- /* OK, allocate and queue the change */
- oldcontext = MemoryContextSwitchTo(rb->context);
+ transactional = ReorderBufferSequenceIsTransactional(rb, rnode, created);
- change = ReorderBufferGetChange(rb);
+ /*
+ * Maintenance of the sequences hash, so that we can distinguish which
+ * additional increments are transactional and non-transactional.
+ */
- change->action = REORDER_BUFFER_CHANGE_SEQUENCE;
- change->origin_id = origin_id;
+ /* transactional changes require a transaction */
+ Assert(!(transactional && (xid == InvalidTransactionId)));
- memcpy(&change->data.sequence.relnode, &rnode, sizeof(RelFileNode));
+ /* search the lookup table */
+ ent = hash_search(rb->sequences,
+ (void *) &rnode,
+ HASH_ENTER,
+ &found);
- change->data.sequence.tuple = tuplebuf;
+ /*
+ * If this is the "create" increment, we must not have found any entry in
+ * the hash table (i.e. there must not be any conflicting sequence with the
+ * same relfilenode).
+ */
+ Assert(!(created && found));
- /* add it to the same subxact that created the sequence */
- ReorderBufferQueueChange(rb, ent->xid, lsn, change, false);
+ /*
+ * It's either non-transactional change, creation of a relfilenode, or we
+ * should have found an existing entry.
+ */
+ Assert(!transactional || created || found);
- MemoryContextSwitchTo(oldcontext);
- }
- else
+ if (!found)
{
- /*
- * This increment is for a sequence that was not created in any
- * running transaction, so we treat it as non-transactional and
- * just send it to the output plugin directly.
- */
- ReorderBufferTXN *txn = NULL;
- volatile Snapshot snapshot_now = snapshot;
- bool using_subtxn;
-
-#ifdef USE_ASSERT_CHECKING
- /* All "creates" have to be handled as transactional. */
- Assert(!created);
-
- /* Make sure the sequence is not in the hash table. */
- {
- bool found;
- hash_search(rb->sequences,
- (void *) &rnode,
- HASH_FIND, &found);
- Assert(!found);
- }
-#endif
-
- if (xid != InvalidTransactionId)
- txn = ReorderBufferTXNByXid(rb, xid, true, NULL, lsn, true);
-
- /* setup snapshot to allow catalog access */
- SetupHistoricSnapshot(snapshot_now, NULL);
-
- /*
- * Decoding needs access to syscaches et al., which in turn use
- * heavyweight locks and such. Thus we need to have enough state around to
- * keep track of those. The easiest way is to simply use a transaction
- * internally. That also allows us to easily enforce that nothing writes
- * to the database by checking for xid assignments.
- *
- * When we're called via the SQL SRF there's already a transaction
- * started, so start an explicit subtransaction there.
- */
- using_subtxn = IsTransactionOrTransactionBlock();
-
- PG_TRY();
- {
- Relation relation;
- HeapTuple tuple;
- Form_pg_sequence_data seq;
- Oid reloid;
-
- if (using_subtxn)
- BeginInternalSubTransaction("sequence");
- else
- StartTransactionCommand();
-
- reloid = RelidByRelfilenode(rnode.spcNode, rnode.relNode);
+ ent->transactional = transactional;
+ dlist_init(&ent->changes);
+ }
- if (reloid == InvalidOid)
- elog(ERROR, "could not map filenode \"%s\" to relation OID",
- relpathperm(rnode,
- MAIN_FORKNUM));
+ /*
+ * When creating the sequence, remember the XID of the transaction
+ * that created it.
+ */
+ if (created)
+ ent->xid = xid;
- relation = RelationIdGetRelation(reloid);
- tuple = &tuplebuf->tuple;
- seq = (Form_pg_sequence_data) GETSTRUCT(tuple);
+ /*
+ * Queue the sequence change, into a global list.
+ *
+ * There's a bit of a trouble with subtransactions - we can't queue it
+ * into the subxact, because it might be rolled back and we'd lose the
+ * increment. We need to queue it into the same (sub)xact that created
+ * the sequence, which is why we track the XID in the hash table.
+ */
+ oldcontext = MemoryContextSwitchTo(rb->context);
- rb->sequence(rb, txn, lsn, relation, transactional,
- seq->last_value, seq->log_cnt, seq->is_called);
+ change = ReorderBufferGetSequenceChange(rb);
- RelationClose(relation);
+ change->origin_id = origin_id;
- TeardownHistoricSnapshot(false);
+ memcpy(&change->relnode, &rnode, sizeof(RelFileNode));
- AbortCurrentTransaction();
+ change->tuple = tuplebuf;
+ change->transactional = transactional;
- if (using_subtxn)
- RollbackAndReleaseCurrentSubTransaction();
- }
- PG_CATCH();
- {
- TeardownHistoricSnapshot(true);
+ Assert(InvalidXLogRecPtr != lsn);
+ change->lsn = lsn;
- AbortCurrentTransaction();
+ if (xid != InvalidTransactionId)
+ change->txn = ReorderBufferTXNByXid(rb, xid, true, NULL, lsn, true);
- if (using_subtxn)
- RollbackAndReleaseCurrentSubTransaction();
+ /*
+ * FIXME What to do about concurrent_abort for the transaction? Maybe
+ * should depend on transactional vs. non-transactional change.
+ *
+ * FIXME Also what to do about memory accounting? If not treated as
+ * regular changes, it's not part of memory accounting.
+ */
+ /* add it to the list in the global hash table (to the tail, to keep
+ * the right ordering of changes) */
+ dlist_push_tail(&ent->changes, &change->node);
- PG_RE_THROW();
- }
- PG_END_TRY();
- }
+ MemoryContextSwitchTo(oldcontext);
}
/*
@@ -1816,9 +1740,6 @@ ReorderBufferCleanupTXN(ReorderBuffer *rb, ReorderBufferTXN *txn)
&found);
Assert(found);
- /* Remove sequences created in this transaction (if any). */
- ReorderBufferSequenceCleanup(rb, txn->xid);
-
/* remove entries spilled to disk */
if (rbtxn_is_serialized(txn))
ReorderBufferRestoreCleanup(rb, txn);
@@ -2239,21 +2160,21 @@ ReorderBufferApplyMessage(ReorderBuffer *rb, ReorderBufferTXN *txn,
*/
static inline void
ReorderBufferApplySequence(ReorderBuffer *rb, ReorderBufferTXN *txn,
- Relation relation, ReorderBufferChange *change,
+ Relation relation, ReorderBufferSequenceChange *change,
bool streaming)
{
HeapTuple tuple;
Form_pg_sequence_data seq;
- tuple = &change->data.sequence.tuple->tuple;
+ tuple = &change->tuple->tuple;
seq = (Form_pg_sequence_data) GETSTRUCT(tuple);
/* Only ever called from ReorderBufferApplySequence, so transational. */
if (streaming)
- rb->stream_sequence(rb, txn, change->lsn, relation, true,
+ rb->stream_sequence(rb, txn, change->lsn, relation, change->transactional,
seq->last_value, seq->log_cnt, seq->is_called);
else
- rb->sequence(rb, txn, change->lsn, relation, true,
+ rb->sequence(rb, txn, change->lsn, relation, change->transactional,
seq->last_value, seq->log_cnt, seq->is_called);
}
@@ -2313,6 +2234,83 @@ ReorderBufferResetTXN(ReorderBuffer *rb, ReorderBufferTXN *txn,
}
}
+/*
+ * Walk all the accumulated sequence increments, and apply all changes between
+ * the start/end LSNs (e.g. between two decoded changes).
+ */
+static void
+ReorderBufferProcessSequences(ReorderBuffer *rb, ReorderBufferTXN *txn,
+ Snapshot snapshot,
+ XLogRecPtr start_lsn, XLogRecPtr end_lsn,
+ bool streaming)
+{
+ HASH_SEQ_STATUS scan_status;
+ ReorderBufferSequenceEnt *ent;
+
+ /* walk changes for all sequences, apply relevant ones */
+ hash_seq_init(&scan_status, rb->sequences);
+ while ((ent = (ReorderBufferSequenceEnt *) hash_seq_search(&scan_status)) != NULL)
+ {
+ dlist_mutable_iter iter;
+
+ /*
+ * Walk changes for the sequence, apply those that are non-transactional
+ * ones or transactional for this XID. Stop on the first transactional
+ * one for a different XID (XXX that should get a different relnode, so
+ * not sure it can even happen mid-list).
+ */
+ dlist_foreach_modify(iter, &ent->changes)
+ {
+ Oid reloid;
+ Relation relation;
+ ReorderBufferSequenceChange *change;
+
+ change = dlist_container(ReorderBufferSequenceChange, node, iter.cur);
+
+ /*
+ * ignore past/future changes
+ *
+ * FIXME not sure this is really correct
+ */
+ if (change->lsn < start_lsn || change->lsn > end_lsn)
+ break;
+
+ /* stop if transactional for a different XID */
+ if (ent->transactional && change->txn != txn)
+ break;
+
+ /* gonna apply the change, so remove from list */
+ dlist_delete(&change->node);
+
+ Assert(snapshot);
+
+ reloid = RelidByRelfilenode(change->relnode.spcNode,
+ change->relnode.relNode);
+
+ if (reloid == InvalidOid)
+ elog(ERROR, "could not map filenode \"%s\" to relation OID",
+ relpathperm(change->relnode, MAIN_FORKNUM));
+
+ relation = RelationIdGetRelation(reloid);
+
+ if (!RelationIsValid(relation))
+ elog(ERROR, "could not open relation with OID %u (for filenode \"%s\")",
+ reloid,
+ relpathperm(change->relnode, MAIN_FORKNUM));
+
+ if (RelationIsLogicallyLogged(relation))
+ ReorderBufferApplySequence(rb, change->txn, relation, change, streaming);
+
+ RelationClose(relation);
+ }
+
+ if (dlist_is_empty(&ent->changes))
+ (void) hash_search(rb->sequences,
+ (void *) &(ent->rnode),
+ HASH_REMOVE, NULL);
+ }
+}
+
/*
* Helper function for ReorderBufferReplay and ReorderBufferStreamTXN.
*
@@ -2419,6 +2417,13 @@ ReorderBufferProcessTXN(ReorderBuffer *rb, ReorderBufferTXN *txn,
SetupCheckXidLive(curtxn->xid);
}
+ /*
+ * Emit sequences up to this change (might change snapshot, and
+ * we need to do lookups by relfilenode)
+ */
+ ReorderBufferProcessSequences(rb, txn, snapshot_now,
+ txn->first_lsn, change->lsn, streaming);
+
switch (change->action)
{
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
@@ -2700,30 +2705,6 @@ ReorderBufferProcessTXN(ReorderBuffer *rb, ReorderBufferTXN *txn,
elog(ERROR, "tuplecid value in changequeue");
break;
- case REORDER_BUFFER_CHANGE_SEQUENCE:
- Assert(snapshot_now);
-
- reloid = RelidByRelfilenode(change->data.sequence.relnode.spcNode,
- change->data.sequence.relnode.relNode);
-
- if (reloid == InvalidOid)
- elog(ERROR, "could not map filenode \"%s\" to relation OID",
- relpathperm(change->data.sequence.relnode,
- MAIN_FORKNUM));
-
- relation = RelationIdGetRelation(reloid);
-
- if (!RelationIsValid(relation))
- elog(ERROR, "could not open relation with OID %u (for filenode \"%s\")",
- reloid,
- relpathperm(change->data.sequence.relnode,
- MAIN_FORKNUM));
-
- if (RelationIsLogicallyLogged(relation))
- ReorderBufferApplySequence(rb, txn, relation, change, streaming);
-
- RelationClose(relation);
- break;
}
}
@@ -2734,6 +2715,10 @@ ReorderBufferProcessTXN(ReorderBuffer *rb, ReorderBufferTXN *txn,
ReorderBufferIterTXNFinish(rb, iterstate);
iterstate = NULL;
+ /* Replicate relevant changes for sequences. */
+ ReorderBufferProcessSequences(rb, txn, snapshot_now,
+ txn->first_lsn, commit_lsn, streaming);
+
/*
* Update total transaction count and total bytes processed by the
* transaction and its subtransactions. Ensure to not count the
@@ -4108,39 +4093,6 @@ ReorderBufferSerializeChange(ReorderBuffer *rb, ReorderBufferTXN *txn,
memcpy(data, change->data.truncate.relids, size);
data += size;
- break;
- }
- case REORDER_BUFFER_CHANGE_SEQUENCE:
- {
- char *data;
- ReorderBufferTupleBuf *tup;
- Size len = 0;
-
- tup = change->data.sequence.tuple;
-
- if (tup)
- {
- sz += sizeof(HeapTupleData);
- len = tup->tuple.t_len;
- sz += len;
- }
-
- /* make sure we have enough space */
- ReorderBufferSerializeReserve(rb, sz);
-
- data = ((char *) rb->outbuf) + sizeof(ReorderBufferDiskChange);
- /* might have been reallocated above */
- ondisk = (ReorderBufferDiskChange *) rb->outbuf;
-
- if (len)
- {
- memcpy(data, &tup->tuple, sizeof(HeapTupleData));
- data += sizeof(HeapTupleData);
-
- memcpy(data, tup->tuple.t_data, len);
- data += len;
- }
-
break;
}
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
@@ -4405,22 +4357,6 @@ ReorderBufferChangeSize(ReorderBufferChange *change)
{
sz += sizeof(Oid) * change->data.truncate.nrelids;
- break;
- }
- case REORDER_BUFFER_CHANGE_SEQUENCE:
- {
- ReorderBufferTupleBuf *tup;
- Size len = 0;
-
- tup = change->data.sequence.tuple;
-
- if (tup)
- {
- sz += sizeof(HeapTupleData);
- len = tup->tuple.t_len;
- sz += len;
- }
-
break;
}
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
@@ -4723,29 +4659,6 @@ ReorderBufferRestoreChange(ReorderBuffer *rb, ReorderBufferTXN *txn,
break;
}
- case REORDER_BUFFER_CHANGE_SEQUENCE:
- if (change->data.sequence.tuple)
- {
- uint32 tuplelen = ((HeapTuple) data)->t_len;
-
- change->data.sequence.tuple =
- ReorderBufferGetTupleBuf(rb, tuplelen - SizeofHeapTupleHeader);
-
- /* restore ->tuple */
- memcpy(&change->data.sequence.tuple->tuple, data,
- sizeof(HeapTupleData));
- data += sizeof(HeapTupleData);
-
- /* reset t_data pointer into the new tuplebuf */
- change->data.sequence.tuple->tuple.t_data =
- ReorderBufferTupleBufData(change->data.sequence.tuple);
-
- /* restore tuple data itself */
- memcpy(change->data.sequence.tuple->tuple.t_data, data, tuplelen);
- data += tuplelen;
- }
- break;
-
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT:
case REORDER_BUFFER_CHANGE_INTERNAL_COMMAND_ID:
diff --git a/src/include/replication/reorderbuffer.h b/src/include/replication/reorderbuffer.h
index 0bcc150b331..d94bce8d39f 100644
--- a/src/include/replication/reorderbuffer.h
+++ b/src/include/replication/reorderbuffer.h
@@ -64,8 +64,7 @@ typedef enum ReorderBufferChangeType
REORDER_BUFFER_CHANGE_INTERNAL_SPEC_INSERT,
REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM,
REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT,
- REORDER_BUFFER_CHANGE_TRUNCATE,
- REORDER_BUFFER_CHANGE_SEQUENCE
+ REORDER_BUFFER_CHANGE_TRUNCATE
} ReorderBufferChangeType;
/* forward declaration */
@@ -159,13 +158,6 @@ typedef struct ReorderBufferChange
uint32 ninvalidations; /* Number of messages */
SharedInvalidationMessage *invalidations; /* invalidation message */
} inval;
-
- /* Context data for Sequence changes */
- struct
- {
- RelFileNode relnode;
- ReorderBufferTupleBuf *tuple;
- } sequence;
} data;
/*
@@ -175,6 +167,24 @@ typedef struct ReorderBufferChange
dlist_node node;
} ReorderBufferChange;
+typedef struct ReorderBufferSequenceChange
+{
+ XLogRecPtr lsn;
+
+ /* Transaction this change belongs to. */
+ struct ReorderBufferTXN *txn;
+
+ RepOriginId origin_id;
+
+ /* sequence data */
+ bool transactional;
+ RelFileNode relnode;
+ ReorderBufferTupleBuf *tuple;
+
+ /* list of sequence changes */
+ dlist_node node;
+} ReorderBufferSequenceChange;
+
/* ReorderBufferTXN txn_flags */
#define RBTXN_HAS_CATALOG_CHANGES 0x0001
#define RBTXN_IS_SUBXACT 0x0002
@@ -615,6 +625,7 @@ struct ReorderBuffer
* Memory contexts for specific types objects
*/
MemoryContext change_context;
+ MemoryContext sequence_context;
MemoryContext txn_context;
MemoryContext tup_context;
@@ -670,8 +681,8 @@ void ReorderBufferQueueMessage(ReorderBuffer *, TransactionId, Snapshot snapsho
bool transactional, const char *prefix,
Size message_size, const char *message);
void ReorderBufferQueueSequence(ReorderBuffer *rb, TransactionId xid,
- Snapshot snapshot, XLogRecPtr lsn, RepOriginId origin_id,
- RelFileNode rnode, bool transactional, bool created,
+ XLogRecPtr lsn, RepOriginId origin_id,
+ RelFileNode rnode, bool created,
ReorderBufferTupleBuf *tuplebuf);
void ReorderBufferCommit(ReorderBuffer *, TransactionId,
XLogRecPtr commit_lsn, XLogRecPtr end_lsn,
@@ -720,7 +731,4 @@ void ReorderBufferSetRestartPoint(ReorderBuffer *, XLogRecPtr ptr);
void StartupReorderBuffer(void);
-bool ReorderBufferSequenceIsTransactional(ReorderBuffer *rb,
- RelFileNode rnode, bool created);
-
#endif
--
2.34.1
vchanges-0001-WIP-sequence-reworks.patch (text/x-patch)
From 294faa547cf8347881831d2137b22b6c25ffa445 Mon Sep 17 00:00:00 2001
From: Tomas Vondra <tomas.vondra@postgresql.org>
Date: Sat, 2 Apr 2022 20:44:15 +0200
Subject: [PATCH vchanges 1/2] WIP: sequence reworks
---
src/backend/replication/logical/decode.c | 32 +--
.../replication/logical/reorderbuffer.c | 236 ++++++------------
src/include/replication/reorderbuffer.h | 8 +-
3 files changed, 92 insertions(+), 184 deletions(-)
diff --git a/src/backend/replication/logical/decode.c b/src/backend/replication/logical/decode.c
index 77bc7aea7a0..bd78a122b03 100644
--- a/src/backend/replication/logical/decode.c
+++ b/src/backend/replication/logical/decode.c
@@ -1313,9 +1313,7 @@ sequence_decode(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
TransactionId xid = XLogRecGetXid(r);
uint8 info = XLogRecGetInfo(buf->record) & ~XLR_INFO_MASK;
xl_seq_rec *xlrec;
- Snapshot snapshot;
RepOriginId origin_id = XLogRecGetOrigin(r);
- bool transactional;
/* only decode changes flagged with XLOG_SEQ_LOG */
if (info != XLOG_SEQ_LOG)
@@ -1352,32 +1350,14 @@ sequence_decode(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
return;
tuplebuf = ReorderBufferGetTupleBuf(ctx->reorder, tuplelen);
- DecodeSeqTuple(tupledata, datalen, tuplebuf);
- /*
- * Should we handle the sequence increment as transactional or not?
- *
- * If the sequence was created in a still-running transaction, treat
- * it as transactional and queue the increments. Otherwise it needs
- * to be treated as non-transactional, in which case we send it to
- * the plugin right away.
- */
- transactional = ReorderBufferSequenceIsTransactional(ctx->reorder,
- target_node,
- xlrec->created);
-
- /* Skip the change if already processed (per the snapshot). */
- if (transactional &&
- !SnapBuildProcessChange(builder, xid, buf->origptr))
- return;
- else if (!transactional &&
- (SnapBuildCurrentState(builder) != SNAPBUILD_CONSISTENT ||
- SnapBuildXactNeedsSkip(builder, buf->origptr)))
+ if (!SnapBuildProcessChange(builder, xid, buf->origptr))
return;
- /* Queue the increment (or send immediately if not transactional). */
- snapshot = SnapBuildGetOrBuildSnapshot(builder, xid);
- ReorderBufferQueueSequence(ctx->reorder, xid, snapshot, buf->endptr,
- origin_id, target_node, transactional,
+ DecodeSeqTuple(tupledata, datalen, tuplebuf);
+
+ /* queue the sequence increment */
+ ReorderBufferQueueSequence(ctx->reorder, xid, buf->endptr,
+ origin_id, target_node,
xlrec->created, tuplebuf);
}
diff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c
index c2d9be81fae..0c4fb9cb561 100644
--- a/src/backend/replication/logical/reorderbuffer.c
+++ b/src/backend/replication/logical/reorderbuffer.c
@@ -155,6 +155,7 @@ typedef struct ReorderBufferTXNByIdEnt
typedef struct ReorderBufferSequenceEnt
{
RelFileNode rnode;
+ XLogRecPtr lsn;
TransactionId xid;
} ReorderBufferSequenceEnt;
@@ -923,21 +924,26 @@ ReorderBufferQueueMessage(ReorderBuffer *rb, TransactionId xid,
* so we simply do a lookup (the sequence is identified by relfilende). If
* we find a match, the increment should be handled as transactional.
*/
-bool
+static bool
ReorderBufferSequenceIsTransactional(ReorderBuffer *rb,
RelFileNode rnode, bool created)
{
bool found = false;
+ ReorderBufferSequenceEnt *ent;
if (created)
return true;
- hash_search(rb->sequences,
- (void *) &rnode,
- HASH_FIND,
- &found);
+ ent = hash_search(rb->sequences,
+ (void *) &rnode,
+ HASH_FIND,
+ &found);
+
+ /* if not found, it's definitely non-transactional */
+ if (!found)
+ return false;
- return found;
+ return (ent->xid != InvalidTransactionId);
}
/*
@@ -958,12 +964,10 @@ ReorderBufferSequenceCleanup(ReorderBuffer *rb, TransactionId xid)
while ((ent = (ReorderBufferSequenceEnt *) hash_seq_search(&scan_status)) != NULL)
{
/* skip sequences not from this transaction */
- if (ent->xid != xid)
- continue;
+ if (ent->xid == xid)
+ ent->xid = InvalidTransactionId;
- (void) hash_search(rb->sequences,
- (void *) &(ent->rnode),
- HASH_REMOVE, NULL);
+ /* FIXME maybe we could remove the entries at some point */
}
}
@@ -978,166 +982,90 @@ ReorderBufferSequenceCleanup(ReorderBuffer *rb, TransactionId xid)
*/
void
ReorderBufferQueueSequence(ReorderBuffer *rb, TransactionId xid,
- Snapshot snapshot, XLogRecPtr lsn, RepOriginId origin_id,
- RelFileNode rnode, bool transactional, bool created,
+ XLogRecPtr lsn, RepOriginId origin_id,
+ RelFileNode rnode, bool created,
ReorderBufferTupleBuf *tuplebuf)
{
- /*
- * Change needs to be handled as transactional, because the sequence was
- * created in a transaction that is still running. In that case all the
- * changes need to be queued in that transaction, we must not send them
- * to the downstream until the transaction commits.
- *
- * There's a bit of a trouble with subtransactions - we can't queue it
- * into the subxact, because it might be rolled back and we'd lose the
- * increment. We need to queue it into the same (sub)xact that created
- * the sequence, which is why we track the XID in the hash table.
- */
- if (transactional)
- {
- MemoryContext oldcontext;
- ReorderBufferChange *change;
-
- /* lookup sequence by relfilenode */
- ReorderBufferSequenceEnt *ent;
- bool found;
-
- /* transactional changes require a transaction */
- Assert(xid != InvalidTransactionId);
-
- /* search the lookup table (we ignore the return value, found is enough) */
- ent = hash_search(rb->sequences,
- (void *) &rnode,
- created ? HASH_ENTER : HASH_FIND,
- &found);
-
- /*
- * If this is the "create" increment, we must not have found any
- * pre-existing entry in the hash table (i.e. there must not be
- * any conflicting sequence).
- */
- Assert(!(created && found));
+ bool transactional;
+ MemoryContext oldcontext;
+ ReorderBufferChange *change;
- /* But we must have either created or found an existing entry. */
- Assert(created || found);
+ /* lookup sequence by relfilenode */
+ ReorderBufferSequenceEnt *ent;
+ bool found;
- /*
- * When creating the sequence, remember the XID of the transaction
- * that created id.
- */
- if (created)
- ent->xid = xid;
-
- /* XXX Maybe check that we're still in the same top-level xact? */
+ transactional = ReorderBufferSequenceIsTransactional(rb, rnode, created);
- /* OK, allocate and queue the change */
- oldcontext = MemoryContextSwitchTo(rb->context);
-
- change = ReorderBufferGetChange(rb);
+ /*
+ * Maintenance of the sequences hash, so that we can distinguish which
+ * additional increments are transactional and non-transactional.
+ */
- change->action = REORDER_BUFFER_CHANGE_SEQUENCE;
- change->origin_id = origin_id;
+ /* transactional changes require a transaction */
+ Assert(!(transactional && (xid == InvalidTransactionId)));
- memcpy(&change->data.sequence.relnode, &rnode, sizeof(RelFileNode));
+ /* search the lookup table */
+ ent = hash_search(rb->sequences,
+ (void *) &rnode,
+ HASH_ENTER,
+ &found);
- change->data.sequence.tuple = tuplebuf;
+ /*
+ * If this is the "create" increment, we must not have found any entry in
+ * the hash table (i.e. there must not be any conflicting sequence with the
+ * same relfilenode).
+ */
+ Assert(!(created && found));
- /* add it to the same subxact that created the sequence */
- ReorderBufferQueueChange(rb, ent->xid, lsn, change, false);
+ /*
+ * It's either non-transactional change, creation of a relfilenode, or we
+ * should have found an existing entry.
+ */
+ Assert(!transactional || created || found);
- MemoryContextSwitchTo(oldcontext);
- }
+ /*
+ * When creating the sequence, remember the XID of the transaction
+ * that created it.
+ */
+ if (transactional)
+ ent->xid = xid;
else
- {
- /*
- * This increment is for a sequence that was not created in any
- * running transaction, so we treat it as non-transactional and
- * just send it to the output plugin directly.
- */
- ReorderBufferTXN *txn = NULL;
- volatile Snapshot snapshot_now = snapshot;
- bool using_subtxn;
-
-#ifdef USE_ASSERT_CHECKING
- /* All "creates" have to be handled as transactional. */
- Assert(!created);
-
- /* Make sure the sequence is not in the hash table. */
- {
- bool found;
- hash_search(rb->sequences,
- (void *) &rnode,
- HASH_FIND, &found);
- Assert(!found);
- }
-#endif
-
- if (xid != InvalidTransactionId)
- txn = ReorderBufferTXNByXid(rb, xid, true, NULL, lsn, true);
-
- /* setup snapshot to allow catalog access */
- SetupHistoricSnapshot(snapshot_now, NULL);
-
- /*
- * Decoding needs access to syscaches et al., which in turn use
- * heavyweight locks and such. Thus we need to have enough state around to
- * keep track of those. The easiest way is to simply use a transaction
- * internally. That also allows us to easily enforce that nothing writes
- * to the database by checking for xid assignments.
- *
- * When we're called via the SQL SRF there's already a transaction
- * started, so start an explicit subtransaction there.
- */
- using_subtxn = IsTransactionOrTransactionBlock();
-
- PG_TRY();
- {
- Relation relation;
- HeapTuple tuple;
- Form_pg_sequence_data seq;
- Oid reloid;
-
- if (using_subtxn)
- BeginInternalSubTransaction("sequence");
- else
- StartTransactionCommand();
-
- reloid = RelidByRelfilenode(rnode.spcNode, rnode.relNode);
-
- if (reloid == InvalidOid)
- elog(ERROR, "could not map filenode \"%s\" to relation OID",
- relpathperm(rnode,
- MAIN_FORKNUM));
+ ent->xid = InvalidTransactionId;
- relation = RelationIdGetRelation(reloid);
- tuple = &tuplebuf->tuple;
- seq = (Form_pg_sequence_data) GETSTRUCT(tuple);
+ /*
+ * Queue the sequence change, into a global list.
+ *
+ * There's a bit of a trouble with subtransactions - we can't queue it
+ * into the subxact, because it might be rolled back and we'd lose the
+ * increment. We need to queue it into the same (sub)xact that created
+ * the sequence, which is why we track the XID in the hash table.
+ */
+ oldcontext = MemoryContextSwitchTo(rb->context);
- rb->sequence(rb, txn, lsn, relation, transactional,
- seq->last_value, seq->log_cnt, seq->is_called);
+ change = ReorderBufferGetChange(rb);
- RelationClose(relation);
+ change->action = REORDER_BUFFER_CHANGE_SEQUENCE;
+ change->origin_id = origin_id;
- TeardownHistoricSnapshot(false);
+ memcpy(&change->data.sequence.relnode, &rnode, sizeof(RelFileNode));
- AbortCurrentTransaction();
+ change->data.sequence.tuple = tuplebuf;
+ change->data.sequence.transactional = transactional;
- if (using_subtxn)
- RollbackAndReleaseCurrentSubTransaction();
- }
- PG_CATCH();
- {
- TeardownHistoricSnapshot(true);
+ Assert(InvalidXLogRecPtr != lsn);
+ change->lsn = lsn;
- AbortCurrentTransaction();
+ ReorderBufferQueueChange(rb, xid, lsn, change, false);
- if (using_subtxn)
- RollbackAndReleaseCurrentSubTransaction();
+ /*
+ * FIXME What to do about concurrent_abort for the transaction? Maybe
+ * should depend on transactional vs. non-transactional change.
+ *
+ * FIXME Also what to do about memory accounting? If not treated as
+ * regular changes, it's not part of memory accounting.
+ */
- PG_RE_THROW();
- }
- PG_END_TRY();
- }
+ MemoryContextSwitchTo(oldcontext);
}
/*
@@ -2250,10 +2178,12 @@ ReorderBufferApplySequence(ReorderBuffer *rb, ReorderBufferTXN *txn,
/* Only ever called from ReorderBufferApplySequence, so transational. */
if (streaming)
- rb->stream_sequence(rb, txn, change->lsn, relation, true,
+ rb->stream_sequence(rb, txn, change->lsn, relation,
+ change->data.sequence.transactional,
seq->last_value, seq->log_cnt, seq->is_called);
else
- rb->sequence(rb, txn, change->lsn, relation, true,
+ rb->sequence(rb, txn, change->lsn, relation,
+ change->data.sequence.transactional,
seq->last_value, seq->log_cnt, seq->is_called);
}
diff --git a/src/include/replication/reorderbuffer.h b/src/include/replication/reorderbuffer.h
index 0bcc150b331..6e939e27a15 100644
--- a/src/include/replication/reorderbuffer.h
+++ b/src/include/replication/reorderbuffer.h
@@ -165,6 +165,7 @@ typedef struct ReorderBufferChange
{
RelFileNode relnode;
ReorderBufferTupleBuf *tuple;
+ bool transactional;
} sequence;
} data;
@@ -670,8 +671,8 @@ void ReorderBufferQueueMessage(ReorderBuffer *, TransactionId, Snapshot snapsho
bool transactional, const char *prefix,
Size message_size, const char *message);
void ReorderBufferQueueSequence(ReorderBuffer *rb, TransactionId xid,
- Snapshot snapshot, XLogRecPtr lsn, RepOriginId origin_id,
- RelFileNode rnode, bool transactional, bool created,
+ XLogRecPtr lsn, RepOriginId origin_id,
+ RelFileNode rnode, bool created,
ReorderBufferTupleBuf *tuplebuf);
void ReorderBufferCommit(ReorderBuffer *, TransactionId,
XLogRecPtr commit_lsn, XLogRecPtr end_lsn,
@@ -720,7 +721,4 @@ void ReorderBufferSetRestartPoint(ReorderBuffer *, XLogRecPtr ptr);
void StartupReorderBuffer(void);
-bool ReorderBufferSequenceIsTransactional(ReorderBuffer *rb,
- RelFileNode rnode, bool created);
-
#endif
--
2.34.1
On Sat, Apr 2, 2022 at 8:52 PM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:
On 4/2/22 12:35, Amit Kapila wrote:
On Fri, Apr 1, 2022 at 8:32 PM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:
On 3/28/22 07:29, Amit Kapila wrote:
I thought about changing snapshot dealing of
non-transactional sequence changes similar to transactional ones but
that also won't work because it is only at commit we decide whether we
can send the changes.
I wonder if there's some earlier LSN (similar to the consistent point)
which might be useful for this.
Or maybe we should queue even the non-transactional changes, not
per-transaction but in a global list, and then at each commit either
discard inspect them (at that point we know the lowest LSN for all
transactions and the consistent point). Seems complex, though.
I couldn't follow '..discard inspect them ..'. Do you mean we inspect
them and discard whichever are not required? It seems here we are
talking about a new global ReorderBufferGlobal instead of
ReorderBufferTXN to collect these changes but we don't need only
consistent point LSN because we do send if the commit of containing
transaction is after consistent point LSN, so we need some transaction
information as well. I think it could bring new challenges.
Sorry for the gibberish. Yes, I meant to discard sequence changes that
are no longer needed, due to being "obsoleted" by the applied change. We
must not apply "older" changes (using LSN) because that would make the
sequence go backwards.
It's not related to this issue but I think that non-transactional
sequence changes could be resent in case of the subscriber crashes
because it doesn’t update replication origin LSN, is that right? If
so, while resending the sequence changes, the sequence value on the
subscriber can temporarily go backward.
I'm not entirely sure whether the list of changes should be kept in TXN
or in the global reorderbuffer object - we need to track which TXN the
change belongs to (because of transactional changes) but we also need to
discard the unnecessary changes efficiently (and walking TXN might be
expensive).
But yes, I'm sure there will be challenges. One being that tracking just
the decoded WAL stuff is not enough, because nextval() may not generate
WAL. But we still need to make sure the increment is replicated.
What I think we might do is this:
- add a global list of decoded sequence increments to ReorderBuffer
- at each commit/abort walk the list and consider all
increments up to the commit LSN that "match" (non-transactional match
all TXNs, transactional match only the current TXN)
- replicate the last "matching" status for each sequence, discard the
processed ones
We could probably optimize this by not tracking every single increment,
but merge them "per transaction", I think.
I'm sure this description is pretty rough and will need refining, handle
various corner-cases etc.
For the transactional case, as we are considering the create sequence
operation as transactional, we would unnecessarily queue them even
though that is not required. Basically, they don't need to be
considered transactional and we can simply ignore such messages like
other DDLs. But for that probably we need to distinguish Alter/Create
case which may or may not be straightforward. Now, queuing them is
probably harmless unless it causes the transaction to spill/stream.
I'm not sure I follow. Why would we queue them unnecessarily?
Also, there's the bug with decoding changes in transactions that create
the sequence and add it to a publication. I think the agreement was that
this behavior was incorrect, we should not decode changes until the
subscription is refreshed. Doesn't that mean there can't be any CREATE case,
just ALTER?
Yeah, but how will we distinguish them. Aren't they using the same
kind of WAL record?
Same WAL record, but the "created" flag which should distinguish these
two cases, IIRC.
Since the "created" flag indicates that we created a new relfilenode
so it's true when both CREATE and ALTER.
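For illustration, the relfilenode churn behind the "created" flag is easy to
observe from pg_class (a quick illustrative check, not taken from the patch):
CREATE SEQUENCE s;
SELECT relfilenode FROM pg_class WHERE relname = 's';
ALTER SEQUENCE s RESTART WITH 100;
SELECT relfilenode FROM pg_class WHERE relname = 's';  -- a different relfilenode
Both paths end up assigning a fresh relfilenode, so the flag alone cannot tell
CREATE apart from ALTER.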
Regards,
--
Masahiko Sawada
EDB: https://www.enterprisedb.com/
On Mon, Apr 4, 2022 at 11:42 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:
On Sat, Apr 2, 2022 at 8:52 PM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:
It's not related to this issue but I think that non-transactional
sequence changes could be resent in case of the subscriber crashes
because it doesn’t update replication origin LSN, is that right? If
so, while resending the sequence changes, the sequence value on the
subscriber can temporarily go backward.
Yes, this can happen for the non-transactional sequence changes though
this is a different problem than what is happening on the decoding
side.
Also, there's the bug with decoding changes in transactions that create
the sequence and add it to a publication. I think the agreement was that
this behavior was incorrect, we should not decode changes until the
subscription is refreshed. Doesn't that mean there can't be any CREATE case,
just ALTER?
Yeah, but how will we distinguish them. Aren't they using the same
kind of WAL record?
Same WAL record, but the "created" flag which should distinguish these
two cases, IIRC.
Since the "created" flag only indicates that we created a new relfilenode,
it's true for both CREATE and ALTER.
Yes, this is my understanding as well. So, we need something else.
--
With Regards,
Amit Kapila.
On Sat, Apr 2, 2022 at 5:47 AM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:
On 4/1/22 17:02, Tomas Vondra wrote:
So, I investigated this a bit more, and I wrote a couple test_decoding
isolation tests (patch attached) demonstrating the issue. Actually, I
should say "issues" because it's a bit worse than you described ...The whole problem is in this chunk of code in sequence_decode():
/* Skip the change if already processed (per the snapshot). */
if (transactional &&
!SnapBuildProcessChange(builder, xid, buf->origptr))
return;
else if (!transactional &&
(SnapBuildCurrentState(builder) != SNAPBUILD_CONSISTENT ||
SnapBuildXactNeedsSkip(builder, buf->origptr)))
return;
/* Queue the increment (or send immediately if not transactional). */
snapshot = SnapBuildGetOrBuildSnapshot(builder, xid);
ReorderBufferQueueSequence(ctx->reorder, xid, snapshot, buf->endptr,
origin_id, target_node, transactional,
xlrec->created, tuplebuf);
With the script you described, the increment is non-transactional, so we
end up in the second branch, return and thus discard the increment.
But it's also possible the change is transactional, which can only
trigger the first branch. But it does not, so we start building the
snapshot. But the first thing SnapBuildGetOrBuildSnapshot does is
Assert(builder->state == SNAPBUILD_CONSISTENT);
and we're still not in a consistent snapshot, so it just crashes and
burns :-(
The sequences.spec file has two definitions of s2restart step, one empty
(resulting in non-transactional change), one with ALTER SEQUENCE (which
means the change will be transactional).
The really "funny" thing is this is not new code - this is an exact copy
from logicalmsg_decode(), and logical messages have all those issues
too. We may discard some messages, trigger the same Assert, etc. There's
a messages2.spec demonstrating this (s2message step defines whether the
message is transactional or not).
It seems to me that the Assert in SnapBuildGetOrBuildSnapshot() is
wrong. It is required only for non-transactional logical messages. For
transactional message(s), we decide at the commit time whether the
snapshot has reached a consistent state and then decide whether to
skip the entire transaction or not. So, the possible fix for Assert
could be that we pass an additional parameter 'transactional' to
SnapBuildGetOrBuildSnapshot() and then assert only when it is false. I
have also checked the development thread for this work and it appears
to be introduced for non-transactional cases only. See email [1], this
new function and Assert was for the non-transactional case and later
while rearranging the code, this problem got introduced.
Now, for the non-transactional cases, I am not sure if there is a
one-to-one mapping with the sequence case. The way sequences are dealt
with on the subscriber-side (first we copy initial data and then
replicate the incremental changes) appears closer to how we deal with a
table and its incremental changes. There is some commonality with
non-transactional messages w.r.t. the case where we want sequence
changes to be sent even on rollbacks unless some DDL has happened for
them, but looking at the overall solution it doesn't appear that we can
use it the same way as messages. I think this is the reason we are facing
the other problems w.r.t. syncing sequences to subscribers, including
the problem reported by Sawada-San yesterday.
Now, the particular case where we won't send a non-transactional
logical message unless the snapshot is consistent could be considered
its intended behavior and documented a bit better. I am not very sure
about this, as there is no in-core example of how the sync for these
messages happens, but if someone outside core wants a different behavior
and presents the case then we can probably try to enhance it. I feel
the same is not true for sequences because it can cause the replica
(subscriber) to go out of sync with the master (publisher).
So I guess we need to fix both places, perhaps in a similar way.
It depends but I think for logical messages we should do the minimal
fix required for the Asserts and probably document the current behavior
a bit better, unless we think there is a case to make it behave similarly
to sequences.
[1]: /messages/by-id/56D4B3AD.5000207@2ndquadrant.com
--
With Regards,
Amit Kapila.
On Mon, Apr 4, 2022 at 3:10 AM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:
I did some experiments over the weekend, exploring how to rework the
sequence decoding in various ways. Let me share some WIP patches,
hopefully that can be useful for trying more stuff and moving this
discussion forward.
I tried two things - (1) accumulating sequence increments in global
array and then doing something with it, and (2) treating all sequence
increments as regular changes (in a TXN) and then doing something
special during the replay. Attached are two patchsets, one for each
approach.
Note: It's important to remember decoding of sequences is not the only
code affected by this. The logical messages have the same issue,
certainly when it comes to transactional vs. non-transactional stuff and
handling of snapshots. Even if the sequence decoding ends up being
reverted, we still need to fix that, somehow. And my feeling is the
solutions ought to be pretty similar in both cases.
Now, regarding the two approaches:
(1) accumulating sequences in global hash table
The main problem with regular sequence increments is that those need to
be non-transactional - a transaction may use a sequence without any
WAL-logging, if the WAL was written by an earlier transaction. The
problem is the earlier transaction might have been rolled back, and thus
simply discarded by the logical decoding. But we still need to apply
that, in order not to lose the sequence increment.
The current code just applies those non-transactional increments right
after decoding the increment, but that does not work because we may not
have a snapshot at that point. And we only have the snapshot when within
a transaction (AFAICS) so this queues all changes and then applies the
changes later.
The changes need to be shared by all transactions, so queueing them in a
global works fairly well - otherwise we'd have to walk all transactions,
in order to see if there are relevant sequence increments.
But some increments may be transactional, e.g. when the sequence is
created or altered in a transaction. To allow tracking this, this uses a
hash table, with relfilenode as a key.
There's a couple issues with this, though. Firstly, stashing the changes
outside transactions, it's not included in memory accounting, it's not
spilled to disk or streamed, etc. I guess fixing this is possible, but
it's certainly not straightforward, because we mix increments from many
different transactions.
A bigger issue is that I'm not sure this actually handles the snapshots
correctly either.
The non-transactional increments affect all transactions, so when
ReorderBufferProcessSequences gets executed, it processes all of them,
no matter the source transaction. Can we be sure the snapshot in the
applying transaction is the same (or "compatible") as the snapshot in
the source transaction?
I don't think we can assume that. I think it is possible that some
other transaction's WAL can be in-between start/end lsn of txn (which
we decide to send) which may not finally reach a consistent state.
Consider a case similar to shown in one of my previous emails:
Session-2:
Begin;
SELECT pg_current_xact_id();
Session-1:
SELECT 'init' FROM pg_create_logical_replication_slot('test_slot',
'test_decoding', false, true);
Session-3:
Begin;
SELECT pg_current_xact_id();
Session-2:
Commit;
Begin;
INSERT INTO t1_seq SELECT nextval('seq1') FROM generate_series(1,100);
Session-3:
Commit;
Session-2:
Commit;
Here, we send changes (say insert from txn 700) from session-2 because
session-3's commit happens before it. Now, consider another
transaction parallel to txn 700 which generates some WAL related to
sequences but committed before session-3's commit. So, though its
changes will be between the start/end LSN of txn 700, those
shouldn't be sent.
I have not tried this and also this may be solvable in some way but I
think processing changes from other TXNs sounds risky to me in terms
of snapshot handling.
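For completeness, the interleaving above seems to assume a setup roughly like
this (the object names come from the INSERT; the exact DDL is a guess):
CREATE SEQUENCE seq1;
CREATE TABLE t1_seq (c1 bigint);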
(2) treating sequence change as regular changes
This adopts a different approach - instead of accumulating the sequence
increments in a global hash table, it treats them as regular changes.
Which solves the snapshot issue, and issues with spilling to disk,
streaming and so on.
But it has various other issues with handling concurrent transactions,
unfortunately, which probably make this approach infeasible:
* The non-transactional stuff has to be applied in the first transaction
that commits, not in the transaction that generated the WAL. That does
not work too well with this approach, because we have to walk changes in
all other transactions.
Why do you want to traverse other TXNs in this approach? Is it because
the current TXN might be using some value of sequence which has been
actually WAL logged in the other transaction but that other
transaction has not been sent yet? I think if we don't send that then
probably replica sequences columns (in some tables) have some values
but actually the sequence itself won't have still that value which
sounds problematic. Is that correct?
* Another serious issue seems to be streaming - if we already streamed
some of the changes, we can't iterate through them anymore.
Also, having to walk the transactions over and over for each change, to
apply relevant sequence increments, that's mighty expensive. The other
approach needs to do that too, but walking the global hash table seems
much cheaper.
The other issue is the handling of aborted transactions - we need to apply
sequence increments even from those transactions, of course. The other
approach has this issue too, though.
(3) tracking sequences touched by transaction
This is the approach proposed by Hannu Krosing. I haven't explored this
again yet, but I recall I wrote a PoC patch a couple months back.
It seems to me most of the problems stem from trying to derive sequence
state from decoded WAL changes, which is problematic because of the
non-transactional nature of sequences (i.e. WAL for one transaction
affects other transactions in non-obvious ways). And this approach
simply works around that entirely - instead of trying to deduce the
sequence state from WAL, we'd make sure to write the current sequence
state (or maybe just ID of the sequence) at commit time. Which should
eliminate most of the complexity / problems, I think.
That sounds promising but I haven't thought in detail about that approach.
I'm not really sure what to do about this. All of those reworks seem
like an extensive redesign of the patch, and considering the last CF is
already over ... not great.
Yeah, I share the same feeling that even if we devise solutions to all
the known problems it requires quite some time to ensure everything is
correct.
--
With Regards,
Amit Kapila.
On 4/5/22 12:06, Amit Kapila wrote:
On Mon, Apr 4, 2022 at 3:10 AM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:
I did some experiments over the weekend, exploring how to rework the
sequence decoding in various ways. Let me share some WIP patches,
hopefully that can be useful for trying more stuff and moving this
discussion forward.
I tried two things - (1) accumulating sequence increments in global
array and then doing something with it, and (2) treating all sequence
increments as regular changes (in a TXN) and then doing something
special during the replay. Attached are two patchsets, one for each
approach.
Note: It's important to remember decoding of sequences is not the only
code affected by this. The logical messages have the same issue,
certainly when it comes to transactional vs. non-transactional stuff and
handling of snapshots. Even if the sequence decoding ends up being
reverted, we still need to fix that, somehow. And my feeling is the
solutions ought to be pretty similar in both cases.
Now, regarding the two approaches:
(1) accumulating sequences in global hash table
The main problem with regular sequence increments is that those need to
be non-transactional - a transaction may use a sequence without any
WAL-logging, if the WAL was written by an earlier transaction. The
problem is the earlier transaction might have been rolled back, and thus
simply discarded by the logical decoding. But we still need to apply
that, in order not to lose the sequence increment.
The current code just applies those non-transactional increments right
after decoding the increment, but that does not work because we may not
have a snapshot at that point. And we only have the snapshot when within
a transaction (AFAICS) so this queues all changes and then applies the
changes later.
The changes need to be shared by all transactions, so queueing them in a
global works fairly well - otherwise we'd have to walk all transactions,
in order to see if there are relevant sequence increments.
But some increments may be transactional, e.g. when the sequence is
created or altered in a transaction. To allow tracking this, this uses a
hash table, with relfilenode as a key.
There's a couple issues with this, though. Firstly, stashing the changes
outside transactions, it's not included in memory accounting, it's not
spilled to disk or streamed, etc. I guess fixing this is possible, but
it's certainly not straightforward, because we mix increments from many
different transactions.
A bigger issue is that I'm not sure this actually handles the snapshots
correctly either.
The non-transactional increments affect all transactions, so when
ReorderBufferProcessSequences gets executed, it processes all of them,
no matter the source transaction. Can we be sure the snapshot in the
applying transaction is the same (or "compatible") as the snapshot in
the source transaction?
I don't think we can assume that. I think it is possible that some
other transaction's WAL can be in-between start/end lsn of txn (which
we decide to send) which may not finally reach a consistent state.
Consider a case similar to shown in one of my previous emails:
Session-2:
Begin;
SELECT pg_current_xact_id();
Session-1:
SELECT 'init' FROM pg_create_logical_replication_slot('test_slot',
'test_decoding', false, true);
Session-3:
Begin;
SELECT pg_current_xact_id();
Session-2:
Commit;
Begin;
INSERT INTO t1_seq SELECT nextval('seq1') FROM generate_series(1,100);
Session-3:
Commit;
Session-2:
Commit;
Here, we send changes (say insert from txn 700) from session-2 because
session-3's commit happens before it. Now, consider another
transaction parallel to txn 700 which generates some WAL related to
sequences but committed before session-3's commit. So, though its
changes will be between the start/end LSN of txn 700, those
shouldn't be sent.
I have not tried this and also this may be solvable in some way but I
think processing changes from other TXNs sounds risky to me in terms
of snapshot handling.
Yes, I know this can happen. I was only really thinking about what might
happen to the relfilenode of the sequence itself - and I don't think any
concurrent transaction could swoop in and change the relfilenode in any
meaningful way, due to locking.
But of course, if we expect/require to have a perfect snapshot for that
exact position in the transaction, this won't work. IMO the whole idea
that we can have non-transactional bits in naturally transactional
decoding seems a bit suspicious (at least in hindsight).
No matter what we do for sequences, though, this still affects logical
messages too. Not sure what to do there :-(
(2) treating sequence change as regular changes
This adopts a different approach - instead of accumulating the sequence
increments in a global hash table, it treats them as regular changes.
Which solves the snapshot issue, and issues with spilling to disk,
streaming and so on.
But it has various other issues with handling concurrent transactions,
unfortunately, which probably make this approach infeasible:
* The non-transactional stuff has to be applied in the first transaction
that commits, not in the transaction that generated the WAL. That does
not work too well with this approach, because we have to walk changes in
all other transactions.
Why do you want to traverse other TXNs in this approach? Is it because
the current TXN might be using some value of sequence which has been
actually WAL logged in the other transaction but that other
transaction has not been sent yet? I think if we don't send that then
probably replica sequences columns (in some tables) have some values
but actually the sequence itself won't have still that value which
sounds problematic. Is that correct?
Well, how else would you get to sequence changes in the other TXNs?
Consider this:
T1: begin
T2: begin
T2: nextval('s') -> writes WAL for 32 values
T1: nextval('s') -> gets value without WAL
T1: commit
T2: commit
Now, if we commit T1 without "applying" the sequence change from T2, we
lose the sequence state. But we still write/replicate the value
generated from the sequence.
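Spelled out as SQL, the same interleaving looks roughly like this (a sketch,
assuming a fresh sequence and no checkpoint in between, so the usual 32-value
pre-logging from SEQ_LOG_VALS applies):
CREATE SEQUENCE s;
Session-1: BEGIN;
Session-2: BEGIN;
Session-2: SELECT nextval('s');  -- writes WAL, pre-logging values ahead
Session-1: SELECT nextval('s');  -- served from the pre-logged range, no WAL
Session-1: COMMIT;
Session-2: COMMIT;
If we decode/apply only Session-1's commit without looking at the sequence WAL
generated in Session-2, the downstream never learns the sequence advanced, even
though Session-1 may already have stored that value in a replicated table.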
* Another serious issue seems to be streaming - if we already streamed
some of the changes, we can't iterate through them anymore.
Also, having to walk the transactions over and over for each change, to
apply relevant sequence increments, that's mighty expensive. The other
approach needs to do that too, but walking the global hash table seems
much cheaper.
The other issue is the handling of aborted transactions - we need to apply
sequence increments even from those transactions, of course. The other
approach has this issue too, though.
(3) tracking sequences touched by transaction
This is the approach proposed by Hannu Krosing. I haven't explored this
again yet, but I recall I wrote a PoC patch a couple months back.
It seems to me most of the problems stem from trying to derive sequence
state from decoded WAL changes, which is problematic because of the
non-transactional nature of sequences (i.e. WAL for one transaction
affects other transactions in non-obvious ways). And this approach
simply works around that entirely - instead of trying to deduce the
sequence state from WAL, we'd make sure to write the current sequence
state (or maybe just ID of the sequence) at commit time. Which should
eliminate most of the complexity / problems, I think.
That sounds promising but I haven't thought in detail about that approach.
So, here's a patch doing that. It's a reworked/improved version of the
patch [1] shared in November.
It seems to be working pretty nicely. The behavior is a little bit
different, of course, because we only replicate "committed" changes, so
if you do nextval() in an aborted transaction, that is not replicated. Which
I think is fine, because we make no durability guarantees for
aborted transactions in general.
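For instance, with the patch applied, a test_decoding session along these lines
should show the difference (a sketch; the exact output format is whatever
test_decoding prints for sequences):
CREATE SEQUENCE s;
SELECT 'init' FROM pg_create_logical_replication_slot('seq_slot', 'test_decoding');
BEGIN;
SELECT nextval('s') FROM generate_series(1,100);
ROLLBACK;
SELECT data FROM pg_logical_slot_get_changes('seq_slot', NULL, NULL);
-- expect no sequence change for 's'; the commit-time record is not written on abort
BEGIN;
SELECT nextval('s') FROM generate_series(1,100);
COMMIT;
SELECT data FROM pg_logical_slot_get_changes('seq_slot', NULL, NULL);
-- expect a single sequence change for 's', reflecting the state at commit
SELECT pg_drop_replication_slot('seq_slot');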
But there are a couple issues too:
1) locking
We have to read the sequence change before the commit, but we must not allow
reordering (because then the state might go backwards again). I'm not
sure how serious an impact this could have on performance.
2) dropped sequences
I'm not sure what to do about sequences dropped in the transaction. The
patch simply attempts to read the current sequence state before the
commit, but if the sequence was dropped (in that transaction), that
can't happen. I'm not sure if that's OK or not.
3) WAL record
To replicate the stuff the patch uses a LogicalMessage, but I guess a
separate WAL record would be better. But that's a technical detail.
regards
[1]: /messages/by-id/2cd38bab-c874-8e0b-98e7-d9abaaf9806a@enterprisedb.com
I'm not really sure what to do about this. All of those reworks seem
like an extensive redesign of the patch, and considering the last CF is
already over ... not great.
Yeah, I share the same feeling that even if we devise solutions to all
the known problems it requires quite some time to ensure everything is
correct.
True. Let's keep working on this for a bit more time and then we can
decide what to do.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Attachments:
0001-rework-sequence-decoding.patch (text/x-patch)
From 0ba154eef933a9b39bb7eaa0e339620a2f98a37b Mon Sep 17 00:00:00 2001
From: Tomas Vondra <tomas.vondra@postgresql.org>
Date: Tue, 5 Apr 2022 00:25:02 +0200
Subject: [PATCH] rework sequence decoding
---
contrib/test_decoding/test_decoding.c | 24 +-
src/backend/access/rmgrdesc/logicalmsgdesc.c | 13 +
src/backend/access/transam/xact.c | 25 ++
src/backend/commands/sequence.c | 263 +++++++-----
src/backend/replication/logical/decode.c | 112 ++---
src/backend/replication/logical/logical.c | 9 +-
src/backend/replication/logical/message.c | 3 +-
src/backend/replication/logical/proto.c | 4 +-
.../replication/logical/reorderbuffer.c | 387 ++----------------
src/backend/replication/logical/tablesync.c | 3 +-
src/backend/replication/logical/worker.c | 16 +-
src/backend/replication/pgoutput/pgoutput.c | 21 +-
src/include/access/rmgrlist.h | 2 +-
src/include/commands/sequence.h | 5 +-
src/include/replication/logicalproto.h | 2 -
src/include/replication/message.h | 15 +
src/include/replication/output_plugin.h | 2 -
src/include/replication/reorderbuffer.h | 16 +-
18 files changed, 330 insertions(+), 592 deletions(-)
diff --git a/contrib/test_decoding/test_decoding.c b/contrib/test_decoding/test_decoding.c
index c7a87f5fe5b..851117b4155 100644
--- a/contrib/test_decoding/test_decoding.c
+++ b/contrib/test_decoding/test_decoding.c
@@ -79,7 +79,7 @@ static void pg_decode_message(LogicalDecodingContext *ctx,
Size sz, const char *message);
static void pg_decode_sequence(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
- Relation rel, bool transactional,
+ Relation rel,
int64 last_value, int64 log_cnt, bool is_called);
static bool pg_decode_filter_prepare(LogicalDecodingContext *ctx,
TransactionId xid,
@@ -123,7 +123,7 @@ static void pg_decode_stream_message(LogicalDecodingContext *ctx,
Size sz, const char *message);
static void pg_decode_stream_sequence(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
- Relation rel, bool transactional,
+ Relation rel,
int64 last_value, int64 log_cnt, bool is_called);
static void pg_decode_stream_truncate(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn,
@@ -782,7 +782,6 @@ pg_decode_message(LogicalDecodingContext *ctx,
static void
pg_decode_sequence(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
XLogRecPtr sequence_lsn, Relation rel,
- bool transactional,
int64 last_value, int64 log_cnt, bool is_called)
{
TestDecodingData *data = ctx->output_plugin_private;
@@ -792,22 +791,19 @@ pg_decode_sequence(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
return;
/* output BEGIN if we haven't yet, but only for the transactional case */
- if (transactional)
+ if (data->skip_empty_xacts && !txndata->xact_wrote_changes)
{
- if (data->skip_empty_xacts && !txndata->xact_wrote_changes)
- {
- pg_output_begin(ctx, data, txn, false);
- }
- txndata->xact_wrote_changes = true;
+ pg_output_begin(ctx, data, txn, false);
}
+ txndata->xact_wrote_changes = true;
OutputPluginPrepareWrite(ctx, true);
appendStringInfoString(ctx->out, "sequence ");
appendStringInfoString(ctx->out,
quote_qualified_identifier(get_namespace_name(get_rel_namespace(RelationGetRelid(rel))),
RelationGetRelationName(rel)));
- appendStringInfo(ctx->out, ": transactional:%d last_value: " INT64_FORMAT " log_cnt: " INT64_FORMAT " is_called:%d",
- transactional, last_value, log_cnt, is_called);
+ appendStringInfo(ctx->out, ": last_value: %zu log_cnt: %zu is_called:%d",
+ last_value, log_cnt, is_called);
OutputPluginWrite(ctx, true);
}
@@ -1013,7 +1009,6 @@ pg_decode_stream_message(LogicalDecodingContext *ctx,
static void
pg_decode_stream_sequence(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
XLogRecPtr sequence_lsn, Relation rel,
- bool transactional,
int64 last_value, int64 log_cnt, bool is_called)
{
TestDecodingData *data = ctx->output_plugin_private;
@@ -1023,7 +1018,6 @@ pg_decode_stream_sequence(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
return;
/* output BEGIN if we haven't yet, but only for the transactional case */
- if (transactional)
{
if (data->skip_empty_xacts && !txndata->xact_wrote_changes)
{
@@ -1037,8 +1031,8 @@ pg_decode_stream_sequence(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
appendStringInfoString(ctx->out,
quote_qualified_identifier(get_namespace_name(get_rel_namespace(RelationGetRelid(rel))),
RelationGetRelationName(rel)));
- appendStringInfo(ctx->out, ": transactional:%d last_value: " INT64_FORMAT " log_cnt: " INT64_FORMAT " is_called:%d",
- transactional, last_value, log_cnt, is_called);
+ appendStringInfo(ctx->out, ": last_value: " INT64_FORMAT " log_cnt: " INT64_FORMAT " is_called:%d",
+ last_value, log_cnt, is_called);
OutputPluginWrite(ctx, true);
}
diff --git a/src/backend/access/rmgrdesc/logicalmsgdesc.c b/src/backend/access/rmgrdesc/logicalmsgdesc.c
index 099e11a84e7..de77dcb0dc2 100644
--- a/src/backend/access/rmgrdesc/logicalmsgdesc.c
+++ b/src/backend/access/rmgrdesc/logicalmsgdesc.c
@@ -40,6 +40,16 @@ logicalmsg_desc(StringInfo buf, XLogReaderState *record)
sep = " ";
}
}
+ else if (info == XLOG_LOGICAL_SEQUENCE)
+ {
+ xl_logical_sequence *xlrec = (xl_logical_sequence *) rec;
+
+ appendStringInfo(buf, "rel %u/%u/%u last: %lu log_cnt: %lu is_called: %d",
+ xlrec->node.spcNode,
+ xlrec->node.dbNode,
+ xlrec->node.relNode,
+ xlrec->last, xlrec->log_cnt, xlrec->is_called);
+ }
}
const char *
@@ -48,5 +58,8 @@ logicalmsg_identify(uint8 info)
if ((info & ~XLR_INFO_MASK) == XLOG_LOGICAL_MESSAGE)
return "MESSAGE";
+ if ((info & ~XLR_INFO_MASK) == XLOG_LOGICAL_SEQUENCE)
+ return "SEQUENCE";
+
return NULL;
}
diff --git a/src/backend/access/transam/xact.c b/src/backend/access/transam/xact.c
index 3596a7d7345..5b6a07ca892 100644
--- a/src/backend/access/transam/xact.c
+++ b/src/backend/access/transam/xact.c
@@ -37,6 +37,7 @@
#include "catalog/storage.h"
#include "commands/async.h"
#include "commands/tablecmds.h"
+#include "commands/sequence.h"
#include "commands/trigger.h"
#include "common/pg_prng.h"
#include "executor/spi.h"
@@ -2223,6 +2224,15 @@ CommitTransaction(void)
/* Prevent cancel/die interrupt while cleaning up */
HOLD_INTERRUPTS();
+ /*
+ * Write state of sequences to WAL. We need to do this early enough so
+ * that we can still write stuff to WAL, but probably before changing
+ * the transaction state.
+ *
+ * XXX Should this happen before/after holding the interrupts?
+ */
+ AtEOXact_Sequences(true);
+
/* Commit updates to the relation map --- do this as late as possible */
AtEOXact_RelationMap(true, is_parallel_worker);
@@ -2499,6 +2509,15 @@ PrepareTransaction(void)
/* Prevent cancel/die interrupt while cleaning up */
HOLD_INTERRUPTS();
+ /*
+ * Write state of sequences to WAL. We need to do this early enough so
+ * that we can still write stuff to WAL, but probably before changing
+ * the transaction state.
+ *
+ * XXX Should this happen before/after holding the interrupts?
+ */
+ AtEOXact_Sequences(true);
+
/*
* set the current transaction state information appropriately during
* prepare processing
@@ -2734,6 +2753,12 @@ AbortTransaction(void)
TransStateAsString(s->state));
Assert(s->parent == NULL);
+ /*
+ * For transaction abort we don't need to write anything to WAL, we just
+ * do cleanup of the hash table etc.
+ */
+ AtEOXact_Sequences(false);
+
/*
* set the current transaction state information appropriately during the
* abort processing
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 717bb0b2aa9..50c03b71dac 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -37,6 +37,7 @@
#include "miscadmin.h"
#include "nodes/makefuncs.h"
#include "parser/parse_type.h"
+#include "replication/message.h"
#include "storage/lmgr.h"
#include "storage/proc.h"
#include "storage/smgr.h"
@@ -75,6 +76,7 @@ typedef struct SeqTableData
{
Oid relid; /* pg_class OID of this sequence (hash key) */
Oid filenode; /* last seen relfilenode of this sequence */
+ Oid tablespace; /* last seen tablespace of this sequence */
LocalTransactionId lxid; /* xact in which we last did a seq op */
bool last_valid; /* do we have a valid "last" value? */
int64 last; /* value last returned by nextval */
@@ -82,6 +84,7 @@ typedef struct SeqTableData
/* if last != cached, we have not used up all the cached values */
int64 increment; /* copy of sequence's increment field */
/* note that increment is zero until we first do nextval_internal() */
+ bool need_log; /* should be written to WAL at commit? */
} SeqTableData;
typedef SeqTableData *SeqTable;
@@ -336,83 +339,13 @@ ResetSequence(Oid seq_relid)
relation_close(seq_rel, NoLock);
}
-/*
- * Update the sequence state by modifying the existing sequence data row.
- *
- * This keeps the same relfilenode, so the behavior is non-transactional.
- */
-static void
-SetSequence_non_transactional(Oid seqrelid, int64 last_value, int64 log_cnt, bool is_called)
-{
- SeqTable elm;
- Relation seqrel;
- Buffer buf;
- HeapTupleData seqdatatuple;
- Form_pg_sequence_data seq;
-
- /* open and lock sequence */
- init_sequence(seqrelid, &elm, &seqrel);
-
- /* lock page' buffer and read tuple */
- seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
-
- /* check the comment above nextval_internal()'s equivalent call. */
- if (RelationNeedsWAL(seqrel))
- {
- GetTopTransactionId();
-
- if (XLogLogicalInfoActive())
- GetCurrentTransactionId();
- }
-
- /* ready to change the on-disk (or really, in-buffer) tuple */
- START_CRIT_SECTION();
-
- seq->last_value = last_value;
- seq->is_called = is_called;
- seq->log_cnt = log_cnt;
-
- MarkBufferDirty(buf);
-
- /* XLOG stuff */
- if (RelationNeedsWAL(seqrel))
- {
- xl_seq_rec xlrec;
- XLogRecPtr recptr;
- Page page = BufferGetPage(buf);
-
- XLogBeginInsert();
- XLogRegisterBuffer(0, buf, REGBUF_WILL_INIT);
-
- xlrec.node = seqrel->rd_node;
- xlrec.created = false;
-
- XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
- XLogRegisterData((char *) seqdatatuple.t_data, seqdatatuple.t_len);
-
- recptr = XLogInsert(RM_SEQ_ID, XLOG_SEQ_LOG);
-
- PageSetLSN(page, recptr);
- }
-
- END_CRIT_SECTION();
-
- UnlockReleaseBuffer(buf);
-
- /* Clear local cache so that we don't think we have cached numbers */
- /* Note that we do not change the currval() state */
- elm->cached = elm->last;
-
- relation_close(seqrel, NoLock);
-}
-
/*
* Update the sequence state by creating a new relfilenode.
*
* This creates a new relfilenode, to allow transactional behavior.
*/
-static void
-SetSequence_transactional(Oid seq_relid, int64 last_value, int64 log_cnt, bool is_called)
+void
+SetSequence(Oid seq_relid, int64 last_value, int64 log_cnt, bool is_called)
{
SeqTable elm;
Relation seqrel;
@@ -469,27 +402,6 @@ SetSequence_transactional(Oid seq_relid, int64 last_value, int64 log_cnt, bool i
relation_close(seqrel, NoLock);
}
-/*
- * Set a sequence to a specified internal state.
- *
- * The change is made transactionally, so that on failure of the current
- * transaction, the sequence will be restored to its previous state.
- * We do that by creating a whole new relfilenode for the sequence; so this
- * works much like the rewriting forms of ALTER TABLE.
- *
- * Caller is assumed to have acquired AccessExclusiveLock on the sequence,
- * which must not be released until end of transaction. Caller is also
- * responsible for permissions checking.
- */
-void
-SetSequence(Oid seq_relid, bool transactional, int64 last_value, int64 log_cnt, bool is_called)
-{
- if (transactional)
- SetSequence_transactional(seq_relid, last_value, log_cnt, is_called);
- else
- SetSequence_non_transactional(seq_relid, last_value, log_cnt, is_called);
-}
-
/*
* Initialize a sequence's relation with the specified tuple as content
*/
@@ -530,7 +442,13 @@ fill_seq_with_data(Relation rel, HeapTuple tuple)
tuple->t_data->t_infomask |= HEAP_XMAX_INVALID;
ItemPointerSet(&tuple->t_data->t_ctid, 0, FirstOffsetNumber);
- /* check the comment above nextval_internal()'s equivalent call. */
+ /*
+ * Remember we need to write this sequence to WAL at commit, and make sure
+ * we have XID if we may need to decode this sequence.
+ *
+ * XXX Maybe we should not be doing this except when actually reading a
+ * value from the sequence?
+ */
if (RelationNeedsWAL(rel))
{
GetTopTransactionId();
@@ -558,7 +476,6 @@ fill_seq_with_data(Relation rel, HeapTuple tuple)
XLogRegisterBuffer(0, buf, REGBUF_WILL_INIT);
xlrec.node = rel->rd_node;
- xlrec.created = true;
XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
XLogRegisterData((char *) tuple->t_data, tuple->t_len);
@@ -762,6 +679,23 @@ nextval_internal(Oid relid, bool check_permissions)
/* open and lock sequence */
init_sequence(relid, &elm, &seqrel);
+ /*
+ * Remember we need to write this sequence to WAL at commit, and make sure
+ * we have XID if we may need to decode this sequence.
+ *
+ * XXX Maybe we should not be doing this except when actually reading a
+ * value from the sequence?
+ */
+ if (RelationNeedsWAL(seqrel))
+ {
+ elm->need_log = true;
+
+ GetTopTransactionId();
+
+ if (XLogLogicalInfoActive())
+ GetCurrentTransactionId();
+ }
+
if (check_permissions &&
pg_class_aclcheck(elm->relid, GetUserId(),
ACL_USAGE | ACL_UPDATE) != ACLCHECK_OK)
@@ -973,7 +907,6 @@ nextval_internal(Oid relid, bool check_permissions)
seq->log_cnt = 0;
xlrec.node = seqrel->rd_node;
- xlrec.created = false;
XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
XLogRegisterData((char *) seqdatatuple.t_data, seqdatatuple.t_len);
@@ -1092,6 +1025,23 @@ do_setval(Oid relid, int64 next, bool iscalled)
/* open and lock sequence */
init_sequence(relid, &elm, &seqrel);
+ /*
+ * Remember we need to write this sequence to WAL at commit, and make sure
+ * we have XID if we may need to decode this sequence.
+ *
+ * XXX Maybe we should not be doing this except when actually reading a
+ * value from the sequence?
+ */
+ if (RelationNeedsWAL(seqrel))
+ {
+ elm->need_log = true;
+
+ GetTopTransactionId();
+
+ if (XLogLogicalInfoActive())
+ GetCurrentTransactionId();
+ }
+
if (pg_class_aclcheck(elm->relid, GetUserId(), ACL_UPDATE) != ACLCHECK_OK)
ereport(ERROR,
(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
@@ -1166,8 +1116,6 @@ do_setval(Oid relid, int64 next, bool iscalled)
XLogRegisterBuffer(0, buf, REGBUF_WILL_INIT);
xlrec.node = seqrel->rd_node;
- xlrec.created = false;
-
XLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));
XLogRegisterData((char *) seqdatatuple.t_data, seqdatatuple.t_len);
@@ -1316,6 +1264,7 @@ init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel)
if (seqrel->rd_rel->relfilenode != elm->filenode)
{
elm->filenode = seqrel->rd_rel->relfilenode;
+ elm->tablespace = seqrel->rd_rel->reltablespace;
elm->cached = elm->last;
}
@@ -2051,3 +2000,121 @@ seq_mask(char *page, BlockNumber blkno)
mask_unused_space(page);
}
+
+
+static void
+read_sequence_info(Relation seqrel, int64 *last_value, int64 *log_cnt, bool *is_called)
+{
+ Buffer buf;
+ Form_pg_sequence_data seq;
+ HeapTupleData seqdatatuple;
+
+ seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+
+ *last_value = seq->last_value;
+ *is_called = seq->is_called;
+ *log_cnt = seq->log_cnt;
+
+ UnlockReleaseBuffer(buf);
+}
+
+/* XXX Do this only for wal_level = logical, probably? */
+void
+AtEOXact_Sequences(bool isCommit)
+{
+ SeqTable entry;
+ HASH_SEQ_STATUS scan;
+
+ if (!seqhashtab)
+ return;
+
+ /* only do this with wal_level=logical */
+ if (!XLogLogicalInfoActive())
+ return;
+
+ /*
+ * XXX Maybe we could enforce having XID here? Or is it too late?
+ */
+ // Assert(GetTopTransactionId() != InvalidTransactionId);
+ // Assert(GetCurrentTransactionId() != InvalidTransactionId);
+
+ hash_seq_init(&scan, seqhashtab);
+
+ while ((entry = (SeqTable) hash_seq_search(&scan)))
+ {
+ Relation rel;
+ RelFileNode rnode;
+ xl_logical_sequence xlrec;
+
+ /* not commit, we don't guarantee any data to be durable */
+ if (!isCommit)
+ entry->need_log = false;
+
+ /*
+ * If not touched in the current transaction, don't log anything.
+ * We leave needs_log set, so that if future transactions touch
+ * the sequence we'll log it properly.
+ */
+ if (!entry->need_log)
+ continue;
+
+ /* if this is a commit, we'll log the sequence state below */
+ entry->need_log = false;
+
+ /*
+ * does the relation still exist?
+ *
+ * XXX We need to make sure another transaction can't jump ahead of
+ * this one, otherwise the ordering of sequence changes could change.
+ * Imagine T1 and T2, where T1 writes sequence state first, but then
+ * T2 does it too and commits first:
+ *
+ * T1: log sequence state
+ * T2: increment sequence
+ * T2: log sequence state
+ * T2: commit
+ * T1: commit
+ *
+ * If we apply the sequences in this order, it'd be broken as the
+ * value might go backwards. ShareUpdateExclusive protects against
+ * that, but it's also restricting commit throughput, probably.
+ *
+ * XXX This might be an issue for deadlocks, too, if two xacts try
+ * to write sequences in different ordering. We may need to sort
+ * the OIDs first, to enforce the same lock ordering.
+ */
+ rel = try_relation_open(entry->relid, ShareUpdateExclusiveLock);
+
+ /* XXX relation might have been dropped, maybe we should have logged
+ * the last change? */
+ if (!rel)
+ continue;
+
+ /* tablespace */
+ if (OidIsValid(entry->tablespace))
+ rnode.spcNode = entry->tablespace;
+ else
+ rnode.spcNode = MyDatabaseTableSpace;
+
+ rnode.dbNode = MyDatabaseId; /* database */
+ rnode.relNode = entry->filenode; /* relation */
+
+ xlrec.node = rnode;
+ xlrec.reloid = entry->relid;
+
+ /* XXX is it good enough to log values we have in cache? seems
+ * wrong and we may need to re-read that. */
+ read_sequence_info(rel, &xlrec.last, &xlrec.log_cnt, &xlrec.is_called);
+
+ XLogBeginInsert();
+ XLogRegisterData((char *) &xlrec, SizeOfLogicalSequence);
+
+ /* allow origin filtering */
+ XLogSetRecordFlags(XLOG_INCLUDE_ORIGIN);
+
+ (void) XLogInsert(RM_LOGICALMSG_ID, XLOG_LOGICAL_SEQUENCE);
+
+ /* hold the sequence lock until the end of the transaction */
+ relation_close(rel, NoLock);
+ }
+}
diff --git a/src/backend/replication/logical/decode.c b/src/backend/replication/logical/decode.c
index 77bc7aea7a0..a8548f9ac64 100644
--- a/src/backend/replication/logical/decode.c
+++ b/src/backend/replication/logical/decode.c
@@ -44,6 +44,9 @@
#include "storage/standby.h"
#include "commands/sequence.h"
+static void DecodeLogicalMessage(LogicalDecodingContext *ctx, XLogRecordBuffer *buf);
+static void DecodeLogicalSequence(LogicalDecodingContext *ctx, XLogRecordBuffer *buf);
+
/* individual record(group)'s handlers */
static void DecodeInsert(LogicalDecodingContext *ctx, XLogRecordBuffer *buf);
static void DecodeUpdate(LogicalDecodingContext *ctx, XLogRecordBuffer *buf);
@@ -64,7 +67,6 @@ static void DecodePrepare(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
/* common function to decode tuples */
static void DecodeXLogTuple(char *data, Size len, ReorderBufferTupleBuf *tup);
-static void DecodeSeqTuple(char *data, Size len, ReorderBufferTupleBuf *tuple);
/* helper functions for decoding transactions */
static inline bool FilterPrepare(LogicalDecodingContext *ctx,
@@ -560,6 +562,27 @@ FilterByOrigin(LogicalDecodingContext *ctx, RepOriginId origin_id)
*/
void
logicalmsg_decode(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
+{
+ XLogReaderState *r = buf->record;
+ uint8 info = XLogRecGetInfo(r) & ~XLR_INFO_MASK;
+
+ switch (info)
+ {
+ case XLOG_LOGICAL_MESSAGE:
+ DecodeLogicalMessage(ctx, buf);
+ break;
+
+ case XLOG_LOGICAL_SEQUENCE:
+ DecodeLogicalSequence(ctx, buf);
+ break;
+
+ default:
+ elog(ERROR, "unexpected RM_LOGICALMSG_ID record type: %u", info);
+ }
+}
+
+static void
+DecodeLogicalMessage(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
{
SnapBuild *builder = ctx->snapshot_builder;
XLogReaderState *r = buf->record;
@@ -1253,36 +1276,6 @@ DecodeTXNNeedSkip(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
ctx->fast_forward || FilterByOrigin(ctx, origin_id));
}
-/*
- * DecodeSeqTuple
- * decode tuple describing the sequence increment
- *
- * Sequences are represented as a table with a single row, which gets updated
- * by nextval(). The tuple is stored in WAL right after the xl_seq_rec, so we
- * simply copy it into the tuplebuf (similar to seq_redo).
- */
-static void
-DecodeSeqTuple(char *data, Size len, ReorderBufferTupleBuf *tuple)
-{
- int datalen = len - sizeof(xl_seq_rec) - SizeofHeapTupleHeader;
-
- Assert(datalen >= 0);
-
- tuple->tuple.t_len = datalen + SizeofHeapTupleHeader;
-
- ItemPointerSetInvalid(&tuple->tuple.t_self);
-
- tuple->tuple.t_tableOid = InvalidOid;
-
- memcpy(((char *) tuple->tuple.t_data),
- data + sizeof(xl_seq_rec),
- SizeofHeapTupleHeader);
-
- memcpy(((char *) tuple->tuple.t_data) + SizeofHeapTupleHeader,
- data + sizeof(xl_seq_rec) + SizeofHeapTupleHeader,
- datalen);
-}
-
/*
* Handle sequence decode
*
@@ -1301,24 +1294,18 @@ DecodeSeqTuple(char *data, Size len, ReorderBufferTupleBuf *tuple)
* plugin - it might get confused about which sequence it's related to etc.
*/
void
-sequence_decode(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
+DecodeLogicalSequence(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
{
SnapBuild *builder = ctx->snapshot_builder;
- ReorderBufferTupleBuf *tuplebuf;
RelFileNode target_node;
XLogReaderState *r = buf->record;
- char *tupledata = NULL;
- Size tuplelen;
- Size datalen = 0;
TransactionId xid = XLogRecGetXid(r);
uint8 info = XLogRecGetInfo(buf->record) & ~XLR_INFO_MASK;
- xl_seq_rec *xlrec;
- Snapshot snapshot;
+ xl_logical_sequence *xlrec;
RepOriginId origin_id = XLogRecGetOrigin(r);
- bool transactional;
/* only decode changes flagged with XLOG_SEQ_LOG */
- if (info != XLOG_SEQ_LOG)
+ if (info != XLOG_LOGICAL_SEQUENCE)
elog(ERROR, "unexpected RM_SEQ_ID record type: %u", info);
ReorderBufferProcessXid(ctx->reorder, XLogRecGetXid(r), buf->origptr);
@@ -1331,8 +1318,11 @@ sequence_decode(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
ctx->fast_forward)
return;
+ /* extract the WAL record, with "created" flag */
+ xlrec = (xl_logical_sequence *) XLogRecGetData(r);
+
/* only interested in our database */
- XLogRecGetBlockTag(r, 0, &target_node, NULL, NULL);
+ target_node = xlrec->node;
if (target_node.dbNode != ctx->slot->data.database)
return;
@@ -1340,44 +1330,12 @@ sequence_decode(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
if (FilterByOrigin(ctx, XLogRecGetOrigin(r)))
return;
- tupledata = XLogRecGetData(r);
- datalen = XLogRecGetDataLen(r);
- tuplelen = datalen - SizeOfHeapHeader - sizeof(xl_seq_rec);
-
- /* extract the WAL record, with "created" flag */
- xlrec = (xl_seq_rec *) XLogRecGetData(r);
-
- /* XXX how could we have sequence change without data? */
- if(!datalen || !tupledata)
- return;
-
- tuplebuf = ReorderBufferGetTupleBuf(ctx->reorder, tuplelen);
- DecodeSeqTuple(tupledata, datalen, tuplebuf);
-
- /*
- * Should we handle the sequence increment as transactional or not?
- *
- * If the sequence was created in a still-running transaction, treat
- * it as transactional and queue the increments. Otherwise it needs
- * to be treated as non-transactional, in which case we send it to
- * the plugin right away.
- */
- transactional = ReorderBufferSequenceIsTransactional(ctx->reorder,
- target_node,
- xlrec->created);
-
/* Skip the change if already processed (per the snapshot). */
- if (transactional &&
- !SnapBuildProcessChange(builder, xid, buf->origptr))
- return;
- else if (!transactional &&
- (SnapBuildCurrentState(builder) != SNAPBUILD_CONSISTENT ||
- SnapBuildXactNeedsSkip(builder, buf->origptr)))
+ if (!SnapBuildProcessChange(builder, xid, buf->origptr))
return;
/* Queue the increment (or send immediately if not transactional). */
- snapshot = SnapBuildGetOrBuildSnapshot(builder, xid);
- ReorderBufferQueueSequence(ctx->reorder, xid, snapshot, buf->endptr,
- origin_id, target_node, transactional,
- xlrec->created, tuplebuf);
+ ReorderBufferQueueSequence(ctx->reorder, xid, buf->endptr,
+ origin_id, xlrec->reloid, target_node,
+ xlrec->last, xlrec->log_cnt, xlrec->is_called);
}
diff --git a/src/backend/replication/logical/logical.c b/src/backend/replication/logical/logical.c
index e1f14aeecb5..d00cc08fb3a 100644
--- a/src/backend/replication/logical/logical.c
+++ b/src/backend/replication/logical/logical.c
@@ -75,7 +75,6 @@ static void message_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
const char *prefix, Size message_size, const char *message);
static void sequence_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
XLogRecPtr sequence_lsn, Relation rel,
- bool transactional,
int64 last_value, int64 log_cnt, bool is_called);
/* streaming callbacks */
@@ -96,7 +95,6 @@ static void stream_message_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *tx
const char *prefix, Size message_size, const char *message);
static void stream_sequence_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
XLogRecPtr sequence_lsn, Relation rel,
- bool transactional,
int64 last_value, int64 log_cnt, bool is_called);
static void stream_truncate_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
int nrelations, Relation relations[], ReorderBufferChange *change);
@@ -1218,7 +1216,7 @@ message_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
static void
sequence_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
- XLogRecPtr sequence_lsn, Relation rel, bool transactional,
+ XLogRecPtr sequence_lsn, Relation rel,
int64 last_value, int64 log_cnt, bool is_called)
{
LogicalDecodingContext *ctx = cache->private_data;
@@ -1245,7 +1243,7 @@ sequence_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
ctx->write_location = sequence_lsn;
/* do the actual work: call callback */
- ctx->callbacks.sequence_cb(ctx, txn, sequence_lsn, rel, transactional,
+ ctx->callbacks.sequence_cb(ctx, txn, sequence_lsn, rel,
last_value, log_cnt, is_called);
/* Pop the error context stack */
@@ -1560,7 +1558,6 @@ stream_message_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
static void
stream_sequence_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
XLogRecPtr sequence_lsn, Relation rel,
- bool transactional,
int64 last_value, int64 log_cnt, bool is_called)
{
LogicalDecodingContext *ctx = cache->private_data;
@@ -1591,7 +1588,7 @@ stream_sequence_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
ctx->write_location = sequence_lsn;
/* do the actual work: call callback */
- ctx->callbacks.sequence_cb(ctx, txn, sequence_lsn, rel, transactional,
+ ctx->callbacks.sequence_cb(ctx, txn, sequence_lsn, rel,
last_value, log_cnt, is_called);
/* Pop the error context stack */
diff --git a/src/backend/replication/logical/message.c b/src/backend/replication/logical/message.c
index 1c34912610e..69178dc1cfb 100644
--- a/src/backend/replication/logical/message.c
+++ b/src/backend/replication/logical/message.c
@@ -82,7 +82,8 @@ logicalmsg_redo(XLogReaderState *record)
{
uint8 info = XLogRecGetInfo(record) & ~XLR_INFO_MASK;
- if (info != XLOG_LOGICAL_MESSAGE)
+ if (info != XLOG_LOGICAL_MESSAGE &&
+ info != XLOG_LOGICAL_SEQUENCE)
elog(PANIC, "logicalmsg_redo: unknown op code %u", info);
/* This is only interesting for logical decoding, see decode.c. */
diff --git a/src/backend/replication/logical/proto.c b/src/backend/replication/logical/proto.c
index 18d3cbb9248..ae078eed048 100644
--- a/src/backend/replication/logical/proto.c
+++ b/src/backend/replication/logical/proto.c
@@ -667,7 +667,7 @@ logicalrep_write_message(StringInfo out, TransactionId xid, XLogRecPtr lsn,
*/
void
logicalrep_write_sequence(StringInfo out, Relation rel, TransactionId xid,
- XLogRecPtr lsn, bool transactional,
+ XLogRecPtr lsn,
int64 last_value, int64 log_cnt, bool is_called)
{
uint8 flags = 0;
@@ -686,7 +686,6 @@ logicalrep_write_sequence(StringInfo out, Relation rel, TransactionId xid,
relname = RelationGetRelationName(rel);
pq_sendstring(out, relname);
- pq_sendint8(out, transactional);
pq_sendint64(out, last_value);
pq_sendint64(out, log_cnt);
pq_sendint8(out, is_called);
@@ -706,7 +705,6 @@ logicalrep_read_sequence(StringInfo in, LogicalRepSequence *seqdata)
seqdata->nspname = pstrdup(logicalrep_read_namespace(in));
seqdata->seqname = pstrdup(pq_getmsgstring(in));
- seqdata->transactional = pq_getmsgint(in, 1);
seqdata->last_value = pq_getmsgint64(in);
seqdata->log_cnt = pq_getmsgint64(in);
seqdata->is_called = pq_getmsgint(in, 1);
diff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c
index 4702750a2e7..0cc7c8d3bc4 100644
--- a/src/backend/replication/logical/reorderbuffer.c
+++ b/src/backend/replication/logical/reorderbuffer.c
@@ -77,39 +77,7 @@
* a bit more memory to the oldest subtransactions, because it's likely
* they are the source for the next sequence of changes.
*
- * When decoding sequences, we differentiate between a sequences created
- * in a (running) transaction, and sequences created in other (already
- * committed) transactions. Changes for sequences created in the same
- * top-level transaction are treated as "transactional" i.e. just like
- * any other change from that transaction (and discarded in case of a
- * rollback). Changes for sequences created earlier are treated as not
- * transactional - are processed immediately, as if performed outside
- * any transaction (and thus not rolled back).
- *
- * This mixed behavior is necessary - sequences are non-transactional
- * (e.g. ROLLBACK does not undo the sequence increments). But for new
- * sequences, we need to handle them in a transactional way, because if
- * we ever get some DDL support, the sequence won't exist until the
- * transaction gets applied. So we need to ensure the increments don't
- * happen until the sequence gets created.
- *
- * To differentiate which sequences are "old" and which were created
- * in a still-running transaction, we track sequences created in running
- * transactions in a hash table. Sequences are identified by relfilenode,
- * and we track XID of the (sub)transaction that created it. This means
- * that if a transaction does something that changes the relfilenode
- * (like an alter / reset of a sequence), the new relfilenode will be
- * treated as if created in the transaction. The list of sequences gets
- * discarded when the transaction completes (commit/rollback).
- *
- * We don't use the XID to check if it's the same top-level transaction.
- * It's enough to know it was created in an in-progress transaction,
- * and we know it must be the current one because otherwise it wouldn't
- * see the sequence object.
- *
- * The XID may be valid even for non-transactional sequences - we simply
- * keep the XID logged to WAL, it's up to the reorderbuffer to decide if
- * the increment is transactional.
+ * FIXME
*
* -------------------------------------------------------------------------
*/
@@ -151,13 +119,6 @@ typedef struct ReorderBufferTXNByIdEnt
ReorderBufferTXN *txn;
} ReorderBufferTXNByIdEnt;
-/* entry for hash table we use to track sequences created in running xacts */
-typedef struct ReorderBufferSequenceEnt
-{
- RelFileNode rnode;
- TransactionId xid;
-} ReorderBufferSequenceEnt;
-
/* data structures for (relfilenode, ctid) => (cmin, cmax) mapping */
typedef struct ReorderBufferTupleCidKey
{
@@ -388,14 +349,6 @@ ReorderBufferAllocate(void)
buffer->by_txn = hash_create("ReorderBufferByXid", 1000, &hash_ctl,
HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
- /* hash table of sequences, mapping relfilenode to XID of transaction */
- hash_ctl.keysize = sizeof(RelFileNode);
- hash_ctl.entrysize = sizeof(ReorderBufferSequenceEnt);
- hash_ctl.hcxt = buffer->context;
-
- buffer->sequences = hash_create("ReorderBufferSequenceHash", 1000, &hash_ctl,
- HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
-
buffer->by_txn_last_xid = InvalidTransactionId;
buffer->by_txn_last_txn = NULL;
@@ -582,17 +535,11 @@ ReorderBufferReturnChange(ReorderBuffer *rb, ReorderBufferChange *change,
change->data.truncate.relids = NULL;
}
break;
- case REORDER_BUFFER_CHANGE_SEQUENCE:
- if (change->data.sequence.tuple)
- {
- ReorderBufferReturnTupleBuf(rb, change->data.sequence.tuple);
- change->data.sequence.tuple = NULL;
- }
- break;
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT:
case REORDER_BUFFER_CHANGE_INTERNAL_COMMAND_ID:
case REORDER_BUFFER_CHANGE_INTERNAL_TUPLECID:
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
break;
}
@@ -923,57 +870,6 @@ ReorderBufferQueueMessage(ReorderBuffer *rb, TransactionId xid,
}
}
-/*
- * Treat the sequence increment as transactional?
- *
- * The hash table tracks all sequences created in in-progress transactions,
- * so we simply do a lookup (the sequence is identified by relfilende). If
- * we find a match, the increment should be handled as transactional.
- */
-bool
-ReorderBufferSequenceIsTransactional(ReorderBuffer *rb,
- RelFileNode rnode, bool created)
-{
- bool found = false;
-
- if (created)
- return true;
-
- hash_search(rb->sequences,
- (void *) &rnode,
- HASH_FIND,
- &found);
-
- return found;
-}
-
-/*
- * Cleanup sequences created in in-progress transactions.
- *
- * There's no way to search by XID, so we simply do a seqscan of all
- * the entries in the hash table. Hopefully there are only a couple
- * entries in most cases - people generally don't create many new
- * sequences over and over.
- */
-static void
-ReorderBufferSequenceCleanup(ReorderBuffer *rb, TransactionId xid)
-{
- HASH_SEQ_STATUS scan_status;
- ReorderBufferSequenceEnt *ent;
-
- hash_seq_init(&scan_status, rb->sequences);
- while ((ent = (ReorderBufferSequenceEnt *) hash_seq_search(&scan_status)) != NULL)
- {
- /* skip sequences not from this transaction */
- if (ent->xid != xid)
- continue;
-
- (void) hash_search(rb->sequences,
- (void *) &(ent->rnode),
- HASH_REMOVE, NULL);
- }
-}
-
/*
* A transactional sequence increment is queued to be processed upon commit
* and a non-transactional increment gets processed immediately.
@@ -985,166 +881,32 @@ ReorderBufferSequenceCleanup(ReorderBuffer *rb, TransactionId xid)
*/
void
ReorderBufferQueueSequence(ReorderBuffer *rb, TransactionId xid,
- Snapshot snapshot, XLogRecPtr lsn, RepOriginId origin_id,
- RelFileNode rnode, bool transactional, bool created,
- ReorderBufferTupleBuf *tuplebuf)
+ XLogRecPtr lsn, RepOriginId origin_id,
+ Oid reloid, RelFileNode rnode,
+ int64 last, int64 log_cnt, bool is_called)
{
- /*
- * Change needs to be handled as transactional, because the sequence was
- * created in a transaction that is still running. In that case all the
- * changes need to be queued in that transaction, we must not send them
- * to the downstream until the transaction commits.
- *
- * There's a bit of a trouble with subtransactions - we can't queue it
- * into the subxact, because it might be rolled back and we'd lose the
- * increment. We need to queue it into the same (sub)xact that created
- * the sequence, which is why we track the XID in the hash table.
- */
- if (transactional)
- {
- MemoryContext oldcontext;
- ReorderBufferChange *change;
-
- /* lookup sequence by relfilenode */
- ReorderBufferSequenceEnt *ent;
- bool found;
-
- /* transactional changes require a transaction */
- Assert(xid != InvalidTransactionId);
-
- /* search the lookup table (we ignore the return value, found is enough) */
- ent = hash_search(rb->sequences,
- (void *) &rnode,
- created ? HASH_ENTER : HASH_FIND,
- &found);
-
- /*
- * If this is the "create" increment, we must not have found any
- * pre-existing entry in the hash table (i.e. there must not be
- * any conflicting sequence).
- */
- Assert(!(created && found));
-
- /* But we must have either created or found an existing entry. */
- Assert(created || found);
-
- /*
- * When creating the sequence, remember the XID of the transaction
- * that created id.
- */
- if (created)
- ent->xid = xid;
-
- /* XXX Maybe check that we're still in the same top-level xact? */
-
- /* OK, allocate and queue the change */
- oldcontext = MemoryContextSwitchTo(rb->context);
-
- change = ReorderBufferGetChange(rb);
-
- change->action = REORDER_BUFFER_CHANGE_SEQUENCE;
- change->origin_id = origin_id;
-
- memcpy(&change->data.sequence.relnode, &rnode, sizeof(RelFileNode));
-
- change->data.sequence.tuple = tuplebuf;
-
- /* add it to the same subxact that created the sequence */
- ReorderBufferQueueChange(rb, ent->xid, lsn, change, false);
-
- MemoryContextSwitchTo(oldcontext);
- }
- else
- {
- /*
- * This increment is for a sequence that was not created in any
- * running transaction, so we treat it as non-transactional and
- * just send it to the output plugin directly.
- */
- ReorderBufferTXN *txn = NULL;
- volatile Snapshot snapshot_now = snapshot;
- bool using_subtxn;
-
-#ifdef USE_ASSERT_CHECKING
- /* All "creates" have to be handled as transactional. */
- Assert(!created);
-
- /* Make sure the sequence is not in the hash table. */
- {
- bool found;
- hash_search(rb->sequences,
- (void *) &rnode,
- HASH_FIND, &found);
- Assert(!found);
- }
-#endif
-
- if (xid != InvalidTransactionId)
- txn = ReorderBufferTXNByXid(rb, xid, true, NULL, lsn, true);
-
- /* setup snapshot to allow catalog access */
- SetupHistoricSnapshot(snapshot_now, NULL);
-
- /*
- * Decoding needs access to syscaches et al., which in turn use
- * heavyweight locks and such. Thus we need to have enough state around to
- * keep track of those. The easiest way is to simply use a transaction
- * internally. That also allows us to easily enforce that nothing writes
- * to the database by checking for xid assignments.
- *
- * When we're called via the SQL SRF there's already a transaction
- * started, so start an explicit subtransaction there.
- */
- using_subtxn = IsTransactionOrTransactionBlock();
-
- PG_TRY();
- {
- Relation relation;
- HeapTuple tuple;
- Form_pg_sequence_data seq;
- Oid reloid;
-
- if (using_subtxn)
- BeginInternalSubTransaction("sequence");
- else
- StartTransactionCommand();
-
- reloid = RelidByRelfilenode(rnode.spcNode, rnode.relNode);
-
- if (reloid == InvalidOid)
- elog(ERROR, "could not map filenode \"%s\" to relation OID",
- relpathperm(rnode,
- MAIN_FORKNUM));
-
- relation = RelationIdGetRelation(reloid);
- tuple = &tuplebuf->tuple;
- seq = (Form_pg_sequence_data) GETSTRUCT(tuple);
+ MemoryContext oldcontext;
+ ReorderBufferChange *change;
- rb->sequence(rb, txn, lsn, relation, transactional,
- seq->last_value, seq->log_cnt, seq->is_called);
+ /* OK, allocate and queue the change */
+ oldcontext = MemoryContextSwitchTo(rb->context);
- RelationClose(relation);
+ change = ReorderBufferGetChange(rb);
- TeardownHistoricSnapshot(false);
+ change->action = REORDER_BUFFER_CHANGE_SEQUENCE;
+ change->origin_id = origin_id;
- AbortCurrentTransaction();
+ memcpy(&change->data.sequence.relnode, &rnode, sizeof(RelFileNode));
- if (using_subtxn)
- RollbackAndReleaseCurrentSubTransaction();
- }
- PG_CATCH();
- {
- TeardownHistoricSnapshot(true);
+ change->data.sequence.reloid = reloid;
+ change->data.sequence.last = last;
+ change->data.sequence.log_cnt = log_cnt;
+ change->data.sequence.is_called = is_called;
- AbortCurrentTransaction();
-
- if (using_subtxn)
- RollbackAndReleaseCurrentSubTransaction();
+ /* add it to the same subxact that created the sequence */
+ ReorderBufferQueueChange(rb, xid, lsn, change, false);
- PG_RE_THROW();
- }
- PG_END_TRY();
- }
+ MemoryContextSwitchTo(oldcontext);
}
/*
@@ -1823,9 +1585,6 @@ ReorderBufferCleanupTXN(ReorderBuffer *rb, ReorderBufferTXN *txn)
&found);
Assert(found);
- /* Remove sequences created in this transaction (if any). */
- ReorderBufferSequenceCleanup(rb, txn->xid);
-
/* remove entries spilled to disk */
if (rbtxn_is_serialized(txn))
ReorderBufferRestoreCleanup(rb, txn);
@@ -2249,19 +2008,23 @@ ReorderBufferApplySequence(ReorderBuffer *rb, ReorderBufferTXN *txn,
Relation relation, ReorderBufferChange *change,
bool streaming)
{
- HeapTuple tuple;
- Form_pg_sequence_data seq;
-
- tuple = &change->data.sequence.tuple->tuple;
- seq = (Form_pg_sequence_data) GETSTRUCT(tuple);
+ int64 last_value, log_cnt;
+ bool is_called;
+/*
+ * FIXME is it possible that we write sequence state for T1 before T2, but
+ * then end up committing T2 first? If the sequence could be incremented
+ * in between, that might cause data corruption result.
+ */
+ last_value = change->data.sequence.last;
+ log_cnt = change->data.sequence.log_cnt;
+ is_called = change->data.sequence.is_called;
- /* Only ever called from ReorderBufferApplySequence, so transational. */
if (streaming)
- rb->stream_sequence(rb, txn, change->lsn, relation, true,
- seq->last_value, seq->log_cnt, seq->is_called);
+ rb->stream_sequence(rb, txn, change->lsn, relation,
+ last_value, log_cnt, is_called);
else
- rb->sequence(rb, txn, change->lsn, relation, true,
- seq->last_value, seq->log_cnt, seq->is_called);
+ rb->sequence(rb, txn, change->lsn, relation,
+ last_value, log_cnt, is_called);
}
/*
@@ -2710,15 +2473,17 @@ ReorderBufferProcessTXN(ReorderBuffer *rb, ReorderBufferTXN *txn,
case REORDER_BUFFER_CHANGE_SEQUENCE:
Assert(snapshot_now);
+ /*
reloid = RelidByRelfilenode(change->data.sequence.relnode.spcNode,
change->data.sequence.relnode.relNode);
if (reloid == InvalidOid)
- elog(ERROR, "could not map filenode \"%s\" to relation OID",
+ elog(ERROR, "zz could not map filenode \"%s\" to relation OID",
relpathperm(change->data.sequence.relnode,
MAIN_FORKNUM));
- relation = RelationIdGetRelation(reloid);
+ */
+ relation = RelationIdGetRelation(change->data.sequence.reloid);
if (!RelationIsValid(relation))
elog(ERROR, "could not open relation with OID %u (for filenode \"%s\")",
@@ -4115,45 +3880,13 @@ ReorderBufferSerializeChange(ReorderBuffer *rb, ReorderBufferTXN *txn,
memcpy(data, change->data.truncate.relids, size);
data += size;
- break;
- }
- case REORDER_BUFFER_CHANGE_SEQUENCE:
- {
- char *data;
- ReorderBufferTupleBuf *tup;
- Size len = 0;
-
- tup = change->data.sequence.tuple;
-
- if (tup)
- {
- sz += sizeof(HeapTupleData);
- len = tup->tuple.t_len;
- sz += len;
- }
-
- /* make sure we have enough space */
- ReorderBufferSerializeReserve(rb, sz);
-
- data = ((char *) rb->outbuf) + sizeof(ReorderBufferDiskChange);
- /* might have been reallocated above */
- ondisk = (ReorderBufferDiskChange *) rb->outbuf;
-
- if (len)
- {
- memcpy(data, &tup->tuple, sizeof(HeapTupleData));
- data += sizeof(HeapTupleData);
-
- memcpy(data, tup->tuple.t_data, len);
- data += len;
- }
-
break;
}
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT:
case REORDER_BUFFER_CHANGE_INTERNAL_COMMAND_ID:
case REORDER_BUFFER_CHANGE_INTERNAL_TUPLECID:
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
/* ReorderBufferChange contains everything important */
break;
}
@@ -4412,28 +4145,13 @@ ReorderBufferChangeSize(ReorderBufferChange *change)
{
sz += sizeof(Oid) * change->data.truncate.nrelids;
- break;
- }
- case REORDER_BUFFER_CHANGE_SEQUENCE:
- {
- ReorderBufferTupleBuf *tup;
- Size len = 0;
-
- tup = change->data.sequence.tuple;
-
- if (tup)
- {
- sz += sizeof(HeapTupleData);
- len = tup->tuple.t_len;
- sz += len;
- }
-
break;
}
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT:
case REORDER_BUFFER_CHANGE_INTERNAL_COMMAND_ID:
case REORDER_BUFFER_CHANGE_INTERNAL_TUPLECID:
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
/* ReorderBufferChange contains everything important */
break;
}
@@ -4729,34 +4447,11 @@ ReorderBufferRestoreChange(ReorderBuffer *rb, ReorderBufferTXN *txn,
break;
}
-
- case REORDER_BUFFER_CHANGE_SEQUENCE:
- if (change->data.sequence.tuple)
- {
- uint32 tuplelen = ((HeapTuple) data)->t_len;
-
- change->data.sequence.tuple =
- ReorderBufferGetTupleBuf(rb, tuplelen - SizeofHeapTupleHeader);
-
- /* restore ->tuple */
- memcpy(&change->data.sequence.tuple->tuple, data,
- sizeof(HeapTupleData));
- data += sizeof(HeapTupleData);
-
- /* reset t_data pointer into the new tuplebuf */
- change->data.sequence.tuple->tuple.t_data =
- ReorderBufferTupleBufData(change->data.sequence.tuple);
-
- /* restore tuple data itself */
- memcpy(change->data.sequence.tuple->tuple.t_data, data, tuplelen);
- data += tuplelen;
- }
- break;
-
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM:
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT:
case REORDER_BUFFER_CHANGE_INTERNAL_COMMAND_ID:
case REORDER_BUFFER_CHANGE_INTERNAL_TUPLECID:
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
break;
}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 697fb23634c..d6adbd6e46d 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -1220,8 +1220,7 @@ copy_sequence(Relation rel)
fetch_sequence_data(lrel.nspname, lrel.relname, &last_value, &log_cnt, &is_called);
- /* tablesync sets the sequences in non-transactional way */
- SetSequence(RelationGetRelid(rel), false, last_value, log_cnt, is_called);
+ SetSequence(RelationGetRelid(rel), last_value, log_cnt, is_called);
logicalrep_rel_close(relmapentry, NoLock);
}
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index f3868b3e1f8..fbc65f277ea 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -1159,12 +1159,9 @@ apply_handle_sequence(StringInfo s)
logicalrep_read_sequence(s, &seq);
/*
- * Non-transactional sequence updates should not be part of a remote
- * transaction. There should not be any running transaction.
+ * We should be in remote transaction.
*/
- Assert((!seq.transactional) || in_remote_transaction);
- Assert(!(!seq.transactional && in_remote_transaction));
- Assert(!(!seq.transactional && IsTransactionState()));
+ Assert(in_remote_transaction);
/*
* Make sure we're in a transaction (needed by SetSequence). For
@@ -1185,14 +1182,7 @@ apply_handle_sequence(StringInfo s)
LockRelationOid(relid, AccessExclusiveLock);
/* apply the sequence change */
- SetSequence(relid, seq.transactional, seq.last_value, seq.log_cnt, seq.is_called);
-
- /*
- * Commit the per-stream transaction (we only do this when not in
- * remote transaction, i.e. for non-transactional sequence updates.
- */
- if (!in_remote_transaction)
- CommitTransactionCommand();
+ SetSequence(relid, seq.last_value, seq.log_cnt, seq.is_called);
}
/*
diff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c
index 9d33630464c..31be8064ebe 100644
--- a/src/backend/replication/pgoutput/pgoutput.c
+++ b/src/backend/replication/pgoutput/pgoutput.c
@@ -57,7 +57,7 @@ static void pgoutput_message(LogicalDecodingContext *ctx,
Size sz, const char *message);
static void pgoutput_sequence(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
- Relation relation, bool transactional,
+ Relation relation,
int64 last_value, int64 log_cnt, bool is_called);
static bool pgoutput_origin_filter(LogicalDecodingContext *ctx,
RepOriginId origin_id);
@@ -1712,12 +1712,13 @@ pgoutput_message(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
static void
pgoutput_sequence(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
- Relation relation, bool transactional,
+ Relation relation,
int64 last_value, int64 log_cnt, bool is_called)
{
PGOutputData *data = (PGOutputData *) ctx->output_plugin_private;
TransactionId xid = InvalidTransactionId;
RelationSyncEntry *relentry;
+ PGOutputTxnData *txndata = (PGOutputTxnData *) txn->output_plugin_private;
if (!data->sequences)
return;
@@ -1742,25 +1743,15 @@ pgoutput_sequence(LogicalDecodingContext *ctx,
if (!relentry->pubactions.pubsequence)
return;
- /*
- * Output BEGIN if we haven't yet. Avoid for non-transactional
- * sequence changes.
- */
- if (transactional)
- {
- PGOutputTxnData *txndata = (PGOutputTxnData *) txn->output_plugin_private;
-
- /* Send BEGIN if we haven't yet */
- if (txndata && !txndata->sent_begin_txn)
- pgoutput_send_begin(ctx, txn);
- }
+ /* Send BEGIN if we haven't yet */
+ if (txndata && !txndata->sent_begin_txn)
+ pgoutput_send_begin(ctx, txn);
OutputPluginPrepareWrite(ctx, true);
logicalrep_write_sequence(ctx->out,
relation,
xid,
sequence_lsn,
- transactional,
last_value,
log_cnt,
is_called);
diff --git a/src/include/access/rmgrlist.h b/src/include/access/rmgrlist.h
index cf8b6d48193..9a74721c97c 100644
--- a/src/include/access/rmgrlist.h
+++ b/src/include/access/rmgrlist.h
@@ -40,7 +40,7 @@ PG_RMGR(RM_BTREE_ID, "Btree", btree_redo, btree_desc, btree_identify, btree_xlog
PG_RMGR(RM_HASH_ID, "Hash", hash_redo, hash_desc, hash_identify, NULL, NULL, hash_mask, NULL)
PG_RMGR(RM_GIN_ID, "Gin", gin_redo, gin_desc, gin_identify, gin_xlog_startup, gin_xlog_cleanup, gin_mask, NULL)
PG_RMGR(RM_GIST_ID, "Gist", gist_redo, gist_desc, gist_identify, gist_xlog_startup, gist_xlog_cleanup, gist_mask, NULL)
-PG_RMGR(RM_SEQ_ID, "Sequence", seq_redo, seq_desc, seq_identify, NULL, NULL, seq_mask, sequence_decode)
+PG_RMGR(RM_SEQ_ID, "Sequence", seq_redo, seq_desc, seq_identify, NULL, NULL, seq_mask, NULL)
PG_RMGR(RM_SPGIST_ID, "SPGist", spg_redo, spg_desc, spg_identify, spg_xlog_startup, spg_xlog_cleanup, spg_mask, NULL)
PG_RMGR(RM_BRIN_ID, "BRIN", brin_redo, brin_desc, brin_identify, NULL, NULL, brin_mask, NULL)
PG_RMGR(RM_COMMIT_TS_ID, "CommitTs", commit_ts_redo, commit_ts_desc, commit_ts_identify, NULL, NULL, NULL, NULL)
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 5bab90db8e0..0bc90c922ba 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -48,7 +48,6 @@ typedef FormData_pg_sequence_data *Form_pg_sequence_data;
typedef struct xl_seq_rec
{
RelFileNode node;
- bool created; /* creates a new relfilenode (CREATE/ALTER) */
/* SEQUENCE TUPLE DATA FOLLOWS AT THE END */
} xl_seq_rec;
@@ -60,9 +59,11 @@ extern ObjectAddress DefineSequence(ParseState *pstate, CreateSeqStmt *stmt);
extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
extern void DeleteSequenceTuple(Oid relid);
extern void ResetSequence(Oid seq_relid);
-extern void SetSequence(Oid seq_relid, bool transactional, int64 last_value, int64 log_cnt, bool is_called);
+extern void SetSequence(Oid seq_relid, int64 last_value, int64 log_cnt, bool is_called);
extern void ResetSequenceCaches(void);
+extern void AtEOXact_Sequences(bool isCommit);
+
extern void seq_redo(XLogReaderState *rptr);
extern void seq_desc(StringInfo buf, XLogReaderState *rptr);
extern const char *seq_identify(uint8 info);
diff --git a/src/include/replication/logicalproto.h b/src/include/replication/logicalproto.h
index 13ee10fdd4e..a466db394b8 100644
--- a/src/include/replication/logicalproto.h
+++ b/src/include/replication/logicalproto.h
@@ -125,7 +125,6 @@ typedef struct LogicalRepSequence
Oid remoteid; /* unique id of the remote sequence */
char *nspname; /* schema name of remote sequence */
char *seqname; /* name of the remote sequence */
- bool transactional;
int64 last_value;
int64 log_cnt;
bool is_called;
@@ -245,7 +244,6 @@ extern void logicalrep_write_message(StringInfo out, TransactionId xid, XLogRecP
bool transactional, const char *prefix, Size sz, const char *message);
extern void logicalrep_write_sequence(StringInfo out, Relation rel,
TransactionId xid, XLogRecPtr lsn,
- bool transactional,
int64 last_value, int64 log_cnt,
bool is_called);
extern void logicalrep_read_sequence(StringInfo in, LogicalRepSequence *seqdata);
diff --git a/src/include/replication/message.h b/src/include/replication/message.h
index 7d7785292f1..a084acc06ab 100644
--- a/src/include/replication/message.h
+++ b/src/include/replication/message.h
@@ -27,13 +27,28 @@ typedef struct xl_logical_message
char message[FLEXIBLE_ARRAY_MEMBER];
} xl_logical_message;
+/*
+ * Generic logical decoding sequence wal record.
+ */
+typedef struct xl_logical_sequence
+{
+ RelFileNode node;
+ Oid reloid;
+ int64 last; /* last value emitted for sequence */
+ int64 log_cnt; /* last value cached for sequence */
+ bool is_called;
+} xl_logical_sequence;
+
#define SizeOfLogicalMessage (offsetof(xl_logical_message, message))
+#define SizeOfLogicalSequence (sizeof(xl_logical_sequence))
extern XLogRecPtr LogLogicalMessage(const char *prefix, const char *message,
size_t size, bool transactional);
/* RMGR API*/
#define XLOG_LOGICAL_MESSAGE 0x00
+#define XLOG_LOGICAL_SEQUENCE 0x10
+
void logicalmsg_redo(XLogReaderState *record);
void logicalmsg_desc(StringInfo buf, XLogReaderState *record);
const char *logicalmsg_identify(uint8 info);
diff --git a/src/include/replication/output_plugin.h b/src/include/replication/output_plugin.h
index fe85d49a030..abc05644516 100644
--- a/src/include/replication/output_plugin.h
+++ b/src/include/replication/output_plugin.h
@@ -95,7 +95,6 @@ typedef void (*LogicalDecodeSequenceCB) (struct LogicalDecodingContext *ctx,
ReorderBufferTXN *txn,
XLogRecPtr sequence_lsn,
Relation rel,
- bool transactional,
int64 last_value,
int64 log_cnt,
bool is_called);
@@ -219,7 +218,6 @@ typedef void (*LogicalDecodeStreamSequenceCB) (struct LogicalDecodingContext *ct
ReorderBufferTXN *txn,
XLogRecPtr sequence_lsn,
Relation rel,
- bool transactional,
int64 last_value,
int64 log_cnt,
bool is_called);
diff --git a/src/include/replication/reorderbuffer.h b/src/include/replication/reorderbuffer.h
index 0bcc150b331..260630c2ba5 100644
--- a/src/include/replication/reorderbuffer.h
+++ b/src/include/replication/reorderbuffer.h
@@ -164,7 +164,10 @@ typedef struct ReorderBufferChange
struct
{
RelFileNode relnode;
- ReorderBufferTupleBuf *tuple;
+ Oid reloid;
+ int64 last;
+ int64 log_cnt;
+ bool is_called;
} sequence;
} data;
@@ -443,7 +446,6 @@ typedef void (*ReorderBufferSequenceCB) (ReorderBuffer *rb,
ReorderBufferTXN *txn,
XLogRecPtr sequence_lsn,
Relation rel,
- bool transactional,
int64 last_value, int64 log_cnt,
bool is_called);
@@ -518,7 +520,6 @@ typedef void (*ReorderBufferStreamSequenceCB) (ReorderBuffer *rb,
ReorderBufferTXN *txn,
XLogRecPtr sequence_lsn,
Relation rel,
- bool transactional,
int64 last_value, int64 log_cnt,
bool is_called);
@@ -670,9 +671,9 @@ void ReorderBufferQueueMessage(ReorderBuffer *, TransactionId, Snapshot snapsho
bool transactional, const char *prefix,
Size message_size, const char *message);
void ReorderBufferQueueSequence(ReorderBuffer *rb, TransactionId xid,
- Snapshot snapshot, XLogRecPtr lsn, RepOriginId origin_id,
- RelFileNode rnode, bool transactional, bool created,
- ReorderBufferTupleBuf *tuplebuf);
+ XLogRecPtr lsn, RepOriginId origin_id,
+ Oid reloid, RelFileNode rnode,
+ int64 last, int64 log_cnt, bool is_called);
void ReorderBufferCommit(ReorderBuffer *, TransactionId,
XLogRecPtr commit_lsn, XLogRecPtr end_lsn,
TimestampTz commit_time, RepOriginId origin_id, XLogRecPtr origin_lsn);
@@ -720,7 +721,4 @@ void ReorderBufferSetRestartPoint(ReorderBuffer *, XLogRecPtr ptr);
void StartupReorderBuffer(void);
-bool ReorderBufferSequenceIsTransactional(ReorderBuffer *rb,
- RelFileNode rnode, bool created);
-
#endif
--
2.34.1
On 4/6/22 16:13, Tomas Vondra wrote:
On 4/5/22 12:06, Amit Kapila wrote:
On Mon, Apr 4, 2022 at 3:10 AM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:
I did some experiments over the weekend, exploring how to rework the
sequence decoding in various ways. Let me share some WIP patches,
hopefully that can be useful for trying more stuff and moving this
discussion forward.
I tried two things - (1) accumulating sequence increments in a global
array and then doing something with it, and (2) treating all sequence
increments as regular changes (in a TXN) and then doing something
special during the replay. Attached are two patchsets, one for each
approach.
Note: It's important to remember decoding of sequences is not the only
code affected by this. The logical messages have the same issue,
certainly when it comes to transactional vs. non-transactional stuff and
handling of snapshots. Even if the sequence decoding ends up being
reverted, we still need to fix that, somehow. And my feeling is the
solutions ought to be pretty similar in both cases.
Now, regarding the two approaches:
(1) accumulating sequences in a global hash table
The main problem with regular sequence increments is that those need to
be non-transactional - a transaction may use a sequence without any
WAL-logging, if the WAL was written by an earlier transaction. The
problem is the earlier transaction might have been rolled back, and thus
simply discarded by the logical decoding. But we still need to apply
that, in order not to lose the sequence increment.
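For illustration, a minimal sketch of that scenario (the sequence name is
arbitrary; the batch-ahead behavior is the same one the 32-value example
further down relies on):
CREATE SEQUENCE s2;
BEGIN;
SELECT nextval('s2');   -- WAL-logs a batch of future values
ROLLBACK;
BEGIN;
SELECT nextval('s2');   -- may write no sequence WAL of its own, the value
COMMIT;                 -- is still covered by the aborted xact's record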
The current code just applies those non-transactional increments right
after decoding the increment, but that does not work because we may not
have a snapshot at that point. And we only have the snapshot when within
a transaction (AFAICS) so this queues all changes and then applies the
changes later.
The changes need to be shared by all transactions, so queueing them in a
global hash table works fairly well - otherwise we'd have to walk all transactions,
in order to see if there are relevant sequence increments.
But some increments may be transactional, e.g. when the sequence is
created or altered in a transaction. To allow tracking this, this uses a
hash table, with relfilenode as a key.
There are a couple of issues with this, though. Firstly, by stashing the
changes outside transactions, they are not included in memory accounting,
not spilled to disk or streamed, etc. I guess fixing this is possible, but
it's certainly not straightforward, because we mix increments from many
different transactions.
A bigger issue is that I'm not sure this actually handles the snapshots
correctly either.
The non-transactional increments affect all transactions, so when
ReorderBufferProcessSequences gets executed, it processes all of them,
no matter the source transaction. Can we be sure the snapshot in the
applying transaction is the same (or "compatible") as the snapshot in
the source transaction?
I don't think we can assume that. I think it is possible that some
other transaction's WAL can be in between the start/end LSN of the txn (which
we decide to send) which may not finally reach a consistent state.
Consider a case similar to the one shown in one of my previous emails:
Session-2:
Begin;
SELECT pg_current_xact_id();
Session-1:
SELECT 'init' FROM pg_create_logical_replication_slot('test_slot',
'test_decoding', false, true);
Session-3:
Begin;
SELECT pg_current_xact_id();
Session-2:
Commit;
Begin;
INSERT INTO t1_seq SELECT nextval('seq1') FROM generate_series(1,100);
Session-3:
Commit;
Session-2:
Commit;
Here, we send changes (say the insert from txn 700) from session-2 because
session-3's commit happens before it. Now, consider another
transaction parallel to txn 700 which generates some WAL related to
sequences but committed before session-3's commit. So although its
changes will be in between the start/end LSN of txn 700, those
shouldn't be sent.
I have not tried this, and this may be solvable in some way, but I
think processing changes from other TXNs sounds risky to me in terms
of snapshot handling.
Yes, I know this can happen. I was only really thinking about what might
happen to the relfilenode of the sequence itself - and I don't think any
concurrent transaction could swoop in and change the relfilenode in any
meaningful way, due to locking.
But of course, if we expect/require a perfect snapshot for that
exact position in the transaction, this won't work. IMO the whole idea
that we can have non-transactional bits in naturally transactional
decoding seems a bit suspicious (at least in hindsight).
No matter what we do for sequences, though, this still affects logical
messages too. Not sure what to do there :-(
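For reference, the non-transactional message case is trivial to reproduce;
pg_logical_emit_message() with transactional = false is decoded even if the
emitting transaction aborts, so it runs into the same snapshot question:
BEGIN;
SELECT pg_logical_emit_message(false, 'test-prefix', 'some payload');
ROLLBACK;
-- the non-transactional message is still decoded and sent downstream,
-- even though the transaction that emitted it rolled back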
(2) treating sequence changes as regular changes
This adopts a different approach - instead of accumulating the sequence
increments in a global hash table, it treats them as regular changes.
Which solves the snapshot issue, and issues with spilling to disk,
streaming and so on.
But it has various other issues with handling concurrent transactions,
unfortunately, which probably make this approach infeasible:
* The non-transactional stuff has to be applied in the first transaction
that commits, not in the transaction that generated the WAL. That does
not work too well with this approach, because we have to walk changes in
all other transactions.
Why do you want to traverse other TXNs in this approach? Is it because
the current TXN might be using some value of the sequence which has
actually been WAL-logged in the other transaction, but that other
transaction has not been sent yet? I think if we don't send that, then
the replica's sequence columns (in some tables) may have some values,
but the sequence itself still won't have that value, which
sounds problematic. Is that correct?
Well, how else would you get to sequence changes in the other TXNs?
Consider this:
T1: begin
T2: begin
T2: nextval('s') -> writes WAL for 32 values
T1: nextval('s') -> gets value without WAL
T1: commit
T2: commit
Now, if we commit T1 without "applying" the sequence change from T2, we
lose the sequence state. But we still write/replicate the value
generated from the sequence.
* Another serious issue seems to be streaming - if we already streamed
some of the changes, we can't iterate through them anymore.
Also, having to walk the transactions over and over for each change, to
apply relevant sequence increments, that's mighty expensive. The other
approach needs to do that too, but walking the global hash table seems
much cheaper.
The other issue is handling of aborted transactions - we need to apply
sequence increments even from those transactions, of course. The other
approach has this issue too, though.
(3) tracking sequences touched by a transaction
This is the approach proposed by Hannu Krosing. I haven't explored this
again yet, but I recall I wrote a PoC patch a couple months back.
It seems to me most of the problems stem from trying to derive sequence
state from decoded WAL changes, which is problematic because of the
non-transactional nature of sequences (i.e. WAL for one transaction
affects other transactions in non-obvious ways). And this approach
simply works around that entirely - instead of trying to deduce the
sequence state from WAL, we'd make sure to write the current sequence
state (or maybe just ID of the sequence) at commit time. Which should
eliminate most of the complexity / problems, I think.
That sounds promising but I haven't thought in detail about that approach.
So, here's a patch doing that. It's a reworked/improved version of the
patch [1] shared in November.
It seems to be working pretty nicely. The behavior is a little bit
different, of course, because we only replicate "committed" changes, so
if you do nextval() in an aborted transaction, that is not replicated. Which
I think is fine, because we make no durability guarantees for
aborted transactions in general.
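For example, a sketch of the behavior expected from this approach (sequence
name is arbitrary):
CREATE SEQUENCE seq_aborted;
BEGIN;
SELECT nextval('seq_aborted');
ROLLBACK;
-- the transaction never commits, so no sequence state gets logged for it
-- at commit time and the increment is not replicated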
But there are a couple issues too:
1) locking
We have to read the sequence change before the commit, but we must not allow
reordering (because then the state might go backwards again). I'm not
sure how serious an impact this could have on performance.
2) dropped sequences
I'm not sure what to do about sequences dropped in the transaction. The
patch simply attempts to read the current sequence state before the
commit, but if the sequence was dropped (in that transaction), that
can't happen. I'm not sure if that's OK or not.
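A minimal example of that problematic case (again, the sequence name is
just an illustration):
CREATE SEQUENCE seq_dropped;
BEGIN;
SELECT nextval('seq_dropped');
DROP SEQUENCE seq_dropped;
COMMIT;
-- at commit time the sequence is already gone, so its current state
-- cannot be read and logged anymore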
3) WAL record
To replicate the stuff the patch uses a LogicalMessage, but I guess a
separate WAL record would be better. But that's a technical detail.
regards
[1]
/messages/by-id/2cd38bab-c874-8e0b-98e7-d9abaaf9806a@enterprisedb.com
I'm not really sure what to do about this. All of those reworks seem
like an extensive redesign of the patch, and considering the last CF is
already over ... not great.
Yeah, I share the same feeling that even if we devise solutions to all
the known problems it requires quite some time to ensure everything is
correct.
True. Let's keep working on this for a bit more time and then we can
decide what to do.
I've pushed a revert of all the commits related to this - decoding of
sequences and test_decoding / built-in replication changes. The approach
combining transactional and non-transactional behavior implemented by
the patch clearly has issues, and it seems foolish to hope we'll find a
simple fix. So the changes would have to be much more extensive, and
doing that after the last CF seems like an obviously bad idea.
Attached is a rebased patch, implementing the approach based on
WAL-logging sequences at commit time.
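For anyone who wants to poke at it with test_decoding, something like this
should show the decoded sequence state (assuming the include-sequences
option added by the patch defaults to on, which is what the regression test
changes suggest; the exact output format is whatever the plugin prints):
SELECT 'init' FROM pg_create_logical_replication_slot('test_slot', 'test_decoding');
CREATE SEQUENCE s;
SELECT nextval('s');
SELECT data FROM pg_logical_slot_get_changes('test_slot', NULL, NULL,
    'include-xids', '0', 'skip-empty-xacts', '1');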
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Attachments:
decoding-sequences-tracking-20220407.patch (text/x-patch; charset=UTF-8)
diff --git a/contrib/test_decoding/Makefile b/contrib/test_decoding/Makefile
index b2209064790..36929dd97d3 100644
--- a/contrib/test_decoding/Makefile
+++ b/contrib/test_decoding/Makefile
@@ -5,7 +5,8 @@ PGFILEDESC = "test_decoding - example of a logical decoding output plugin"
REGRESS = ddl xact rewrite toast permissions decoding_in_xact \
decoding_into_rel binary prepared replorigin time messages \
- spill slot truncate stream stats twophase twophase_stream
+ spill slot truncate stream stats twophase twophase_stream \
+ sequence
ISOLATION = mxact delayed_startup ondisk_startup concurrent_ddl_dml \
oldest_xmin snapshot_transfer subxact_without_top concurrent_stream \
twophase_snapshot slot_creation_error
diff --git a/contrib/test_decoding/expected/ddl.out b/contrib/test_decoding/expected/ddl.out
index 9a28b5ddc5a..1e37c8c8979 100644
--- a/contrib/test_decoding/expected/ddl.out
+++ b/contrib/test_decoding/expected/ddl.out
@@ -40,7 +40,7 @@ SELECT 'init' FROM pg_create_physical_replication_slot('repl');
init
(1 row)
-SELECT data FROM pg_logical_slot_get_changes('repl', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('repl', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
ERROR: cannot use physical replication slot for logical decoding
SELECT pg_drop_replication_slot('repl');
pg_drop_replication_slot
@@ -89,7 +89,7 @@ COMMIT;
ALTER TABLE replication_example RENAME COLUMN text TO somenum;
INSERT INTO replication_example(somedata, somenum) VALUES (4, 1);
-- collect all changes
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
data
---------------------------------------------------------------------------------------------------------------------------
BEGIN
@@ -118,7 +118,7 @@ SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'inc
ALTER TABLE replication_example ALTER COLUMN somenum TYPE int4 USING (somenum::int4);
-- check that this doesn't produce any changes from the heap rewrite
-SELECT count(data) FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT count(data) FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
count
-------
0
@@ -134,7 +134,7 @@ INSERT INTO replication_example(somedata, somenum, zaphod2) VALUES (6, 3, 1);
INSERT INTO replication_example(somedata, somenum, zaphod1) VALUES (6, 4, 2);
COMMIT;
-- show changes
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
data
------------------------------------------------------------------------------------------------------------------------------------------
BEGIN
@@ -154,7 +154,7 @@ INSERT INTO replication_example(id, somedata, somenum) SELECT i, i, i FROM gener
ON CONFLICT (id) DO UPDATE SET somenum = excluded.somenum + 1;
COMMIT;
/* display results */
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
data
--------------------------------------------------------------------------------------------------------------------------------------------------
BEGIN
@@ -242,7 +242,7 @@ CREATE TABLE tr_unique(id2 serial unique NOT NULL, data int);
INSERT INTO tr_unique(data) VALUES(10);
ALTER TABLE tr_unique RENAME TO tr_pkey;
ALTER TABLE tr_pkey ADD COLUMN id serial primary key;
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-rewrites', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-rewrites', '1', 'include-sequences', '0');
data
-----------------------------------------------------------------------------
BEGIN
@@ -257,7 +257,7 @@ INSERT INTO tr_pkey(data) VALUES(1);
--show deletion with primary key
DELETE FROM tr_pkey;
/* display results */
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
data
----------------------------------------------------------------------------
BEGIN
@@ -288,7 +288,7 @@ INSERT INTO tr_oddlength VALUES('ab', 'foo');
COMMIT;
/* display results, but hide most of the output */
SELECT count(*), min(data), max(data)
-FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1')
+FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0')
GROUP BY substring(data, 1, 24)
ORDER BY 1,2;
count | min | max
@@ -310,7 +310,7 @@ DELETE FROM spoolme;
DROP TABLE spoolme;
COMMIT;
SELECT data
-FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1')
+FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0')
WHERE data ~ 'UPDATE';
data
-------------------------------------------------------------------------------------------------------------
@@ -324,7 +324,7 @@ INSERT INTO tr_etoomuch (id, data)
SELECT g.i, -g.i FROM generate_series(8000, 12000) g(i)
ON CONFLICT(id) DO UPDATE SET data = EXCLUDED.data;
SELECT substring(data, 1, 29), count(*)
-FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1') WITH ORDINALITY
+FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0') WITH ORDINALITY
GROUP BY 1
ORDER BY min(ordinality);
substring | count
@@ -355,7 +355,7 @@ RELEASE SAVEPOINT c;
INSERT INTO tr_sub(path) VALUES ('1-top-2-#1');
RELEASE SAVEPOINT b;
COMMIT;
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
data
----------------------------------------------------------------------
BEGIN
@@ -394,7 +394,7 @@ INSERT INTO tr_sub(path) VALUES ('2-top-1...--#3');
RELEASE SAVEPOINT subtop;
INSERT INTO tr_sub(path) VALUES ('2-top-#1');
COMMIT;
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
data
------------------------------------------------------------------------
BEGIN
@@ -415,7 +415,7 @@ INSERT INTO tr_sub(path) VALUES ('3-top-2-2-#1');
ROLLBACK TO SAVEPOINT b;
INSERT INTO tr_sub(path) VALUES ('3-top-2-#2');
COMMIT;
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
data
-----------------------------------------------------------------------
BEGIN
@@ -444,7 +444,7 @@ BEGIN;
SAVEPOINT a;
INSERT INTO tr_sub(path) VALUES ('5-top-1-#1');
COMMIT;
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
data
---------------------------------------------------------------------
BEGIN
@@ -465,7 +465,7 @@ ROLLBACK TO SAVEPOINT a;
ALTER TABLE tr_sub_ddl ALTER COLUMN data TYPE bigint;
INSERT INTO tr_sub_ddl VALUES(43);
COMMIT;
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
data
--------------------------------------------------
BEGIN
@@ -542,7 +542,7 @@ Options: user_catalog_table=false
INSERT INTO replication_metadata(relation, options)
VALUES ('zaphod', NULL);
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
data
------------------------------------------------------------------------------------------------------------------------------------
BEGIN
@@ -659,7 +659,7 @@ INSERT INTO toasttable(toasted_col2) SELECT repeat(string_agg(to_char(g.i, 'FM00
UPDATE toasttable
SET toasted_col1 = (SELECT string_agg(g.i::text, '') FROM generate_series(1, 2000) g(i))
WHERE id = 1;
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
data
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
BEGIN
@@ -814,7 +814,7 @@ UPDATE toasttable
WHERE id = 1;
-- make sure we decode correctly even if the toast table is gone
DROP TABLE toasttable;
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
data
------------------------------------------------------------------------------------------------
BEGIN
@@ -826,7 +826,7 @@ SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'inc
(6 rows)
-- done, free logical replication slot
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
data
------
(0 rows)
diff --git a/contrib/test_decoding/expected/decoding_in_xact.out b/contrib/test_decoding/expected/decoding_in_xact.out
index b65253f4630..0816c780fe1 100644
--- a/contrib/test_decoding/expected/decoding_in_xact.out
+++ b/contrib/test_decoding/expected/decoding_in_xact.out
@@ -58,7 +58,7 @@ SELECT pg_current_xact_id() = '0';
-- don't show yet, haven't committed
INSERT INTO nobarf(data) VALUES('2');
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
data
-----------------------------------------------------------
BEGIN
@@ -68,7 +68,7 @@ SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'inc
COMMIT;
INSERT INTO nobarf(data) VALUES('3');
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
data
-----------------------------------------------------------
BEGIN
diff --git a/contrib/test_decoding/expected/decoding_into_rel.out b/contrib/test_decoding/expected/decoding_into_rel.out
index 8fd3390066d..03966b8b1ca 100644
--- a/contrib/test_decoding/expected/decoding_into_rel.out
+++ b/contrib/test_decoding/expected/decoding_into_rel.out
@@ -19,7 +19,7 @@ SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'inc
CREATE TABLE somechange(id serial primary key);
INSERT INTO somechange DEFAULT VALUES;
CREATE TABLE changeresult AS
- SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+ SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
SELECT * FROM changeresult;
data
------------------------------------------------
@@ -29,9 +29,9 @@ SELECT * FROM changeresult;
(3 rows)
INSERT INTO changeresult
- SELECT data FROM pg_logical_slot_peek_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+ SELECT data FROM pg_logical_slot_peek_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
INSERT INTO changeresult
- SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+ SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
SELECT * FROM changeresult;
data
--------------------------------------------------------------------------------------------------------------------------------------------------
@@ -63,7 +63,7 @@ DROP TABLE somechange;
CREATE FUNCTION slot_changes_wrapper(slot_name name) RETURNS SETOF TEXT AS $$
BEGIN
RETURN QUERY
- SELECT data FROM pg_logical_slot_peek_changes(slot_name, NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+ SELECT data FROM pg_logical_slot_peek_changes(slot_name, NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
END$$ LANGUAGE plpgsql;
SELECT * FROM slot_changes_wrapper('regression_slot');
slot_changes_wrapper
@@ -84,7 +84,7 @@ SELECT * FROM slot_changes_wrapper('regression_slot');
COMMIT
(14 rows)
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
data
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
BEGIN
diff --git a/contrib/test_decoding/expected/mxact.out b/contrib/test_decoding/expected/mxact.out
index 03ad3df0999..c5bc26c8044 100644
--- a/contrib/test_decoding/expected/mxact.out
+++ b/contrib/test_decoding/expected/mxact.out
@@ -7,7 +7,7 @@ step s0init: SELECT 'init' FROM pg_create_logical_replication_slot('isolation_sl
init
(1 row)
-step s0start: SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false');
+step s0start: SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false', 'include-sequences', 'false');
data
----
(0 rows)
@@ -27,7 +27,7 @@ t
(1 row)
step s0w: INSERT INTO do_write DEFAULT VALUES;
-step s0start: SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false');
+step s0start: SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false', 'include-sequences', 'false');
data
--------------------------------------------
BEGIN
@@ -50,7 +50,7 @@ step s0init: SELECT 'init' FROM pg_create_logical_replication_slot('isolation_sl
init
(1 row)
-step s0start: SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false');
+step s0start: SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false', 'include-sequences', 'false');
data
----
(0 rows)
@@ -71,7 +71,7 @@ t
step s0alter: ALTER TABLE do_write ADD column ts timestamptz;
step s0w: INSERT INTO do_write DEFAULT VALUES;
-step s0start: SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false');
+step s0start: SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false', 'include-sequences', 'false');
data
------------------------------------------------------------------------------
BEGIN
diff --git a/contrib/test_decoding/expected/ondisk_startup.out b/contrib/test_decoding/expected/ondisk_startup.out
index bc7ff071648..3d2fa4a92e0 100644
--- a/contrib/test_decoding/expected/ondisk_startup.out
+++ b/contrib/test_decoding/expected/ondisk_startup.out
@@ -35,7 +35,7 @@ init
step s2c: COMMIT;
step s1insert: INSERT INTO do_write DEFAULT VALUES;
step s1checkpoint: CHECKPOINT;
-step s1start: SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false');
+step s1start: SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false', 'include-sequences', 'false');
data
--------------------------------------------------------------------
BEGIN
@@ -46,7 +46,7 @@ COMMIT
step s1insert: INSERT INTO do_write DEFAULT VALUES;
step s1alter: ALTER TABLE do_write ADD COLUMN addedbys1 int;
step s1insert: INSERT INTO do_write DEFAULT VALUES;
-step s1start: SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false');
+step s1start: SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false', 'include-sequences', 'false');
data
--------------------------------------------------------------------------------------------
BEGIN
diff --git a/contrib/test_decoding/expected/replorigin.out b/contrib/test_decoding/expected/replorigin.out
index 2e9ef7c823b..7468c24f2b4 100644
--- a/contrib/test_decoding/expected/replorigin.out
+++ b/contrib/test_decoding/expected/replorigin.out
@@ -72,9 +72,9 @@ SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'test_d
-- origin tx
INSERT INTO origin_tbl(data) VALUES ('will be replicated and decoded and decoded again');
INSERT INTO target_tbl(data)
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
-- as is normal, the insert into target_tbl shows up
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
data
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
BEGIN
@@ -110,7 +110,7 @@ SELECT pg_replication_origin_xact_setup('0/aabbccdd', '2013-01-01 00:00');
(1 row)
INSERT INTO target_tbl(data)
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'only-local', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'only-local', '1', 'include-sequences', '0');
COMMIT;
-- check replication progress for the session is correct
SELECT pg_replication_origin_session_progress(false);
@@ -154,14 +154,14 @@ SELECT pg_replication_origin_progress('regress_test_decoding: regression_slot',
SELECT pg_replication_origin_session_reset();
ERROR: no replication origin is configured
-- and magically the replayed xact will be filtered!
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'only-local', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'only-local', '1', 'include-sequences', '0');
data
------
(0 rows)
--but new original changes still show up
INSERT INTO origin_tbl(data) VALUES ('will be replicated');
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'only-local', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'only-local', '1', 'include-sequences', '0');
data
--------------------------------------------------------------------------------
BEGIN
@@ -227,7 +227,7 @@ SELECT local_id, external_id,
1 | regress_test_decoding: regression_slot_no_lsn | f | t
(1 row)
-SELECT data FROM pg_logical_slot_get_changes('regression_slot_no_lsn', NULL, NULL, 'skip-empty-xacts', '1', 'include-xids', '0');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot_no_lsn', NULL, NULL, 'skip-empty-xacts', '1', 'include-xids', '0', 'include-sequences', '0');
data
-------------------------------------------------------------------------------------
BEGIN
diff --git a/contrib/test_decoding/expected/rewrite.out b/contrib/test_decoding/expected/rewrite.out
index b30999c436b..5d15b192edc 100644
--- a/contrib/test_decoding/expected/rewrite.out
+++ b/contrib/test_decoding/expected/rewrite.out
@@ -64,7 +64,7 @@ SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'test_d
CREATE TABLE replication_example(id SERIAL PRIMARY KEY, somedata int, text varchar(120));
INSERT INTO replication_example(somedata) VALUES (1);
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
data
----------------------------------------------------------------------------------------------------------
BEGIN
@@ -115,7 +115,7 @@ INSERT INTO replication_example(somedata, testcolumn1, testcolumn3) VALUES (7, 5
COMMIT;
-- make old files go away
CHECKPOINT;
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
data
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
BEGIN
@@ -141,7 +141,7 @@ VACUUM FULL pg_proc; VACUUM FULL pg_description; VACUUM FULL pg_shdescription; V
INSERT INTO replication_example(somedata, testcolumn1, testcolumn3) VALUES (8, 6, 1);
VACUUM FULL pg_proc; VACUUM FULL pg_description; VACUUM FULL pg_shdescription; VACUUM FULL iamalargetable;
INSERT INTO replication_example(somedata, testcolumn1, testcolumn3) VALUES (9, 7, 1);
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
data
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
BEGIN
diff --git a/contrib/test_decoding/expected/sequence.out b/contrib/test_decoding/expected/sequence.out
new file mode 100644
index 00000000000..dc1bf670122
--- /dev/null
+++ b/contrib/test_decoding/expected/sequence.out
@@ -0,0 +1,325 @@
+-- predictability
+SET synchronous_commit = on;
+SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'test_decoding');
+ ?column?
+----------
+ init
+(1 row)
+
+CREATE SEQUENCE test_sequence;
+-- test the sequence changes by several nextval() calls
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 1
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 2
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 4
+(1 row)
+
+-- test the sequence changes by several ALTER commands
+ALTER SEQUENCE test_sequence INCREMENT BY 10;
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 14
+(1 row)
+
+ALTER SEQUENCE test_sequence START WITH 3000;
+ALTER SEQUENCE test_sequence MAXVALUE 10000;
+ALTER SEQUENCE test_sequence RESTART WITH 4000;
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 4000
+(1 row)
+
+-- test the sequence changes by several setval() calls
+SELECT setval('test_sequence', 3500);
+ setval
+--------
+ 3500
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3510
+(1 row)
+
+SELECT setval('test_sequence', 3500, true);
+ setval
+--------
+ 3500
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3510
+(1 row)
+
+SELECT setval('test_sequence', 3500, false);
+ setval
+--------
+ 3500
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3500
+(1 row)
+
+-- show results and drop sequence
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+ data
+----------------------------------------------------------------------------------------
+ BEGIN
+ sequence public.test_sequence: transactional:1 last_value: 1 log_cnt: 0 is_called:0
+ COMMIT
+ sequence public.test_sequence: transactional:0 last_value: 33 log_cnt: 0 is_called:1
+ BEGIN
+ sequence public.test_sequence: transactional:1 last_value: 4 log_cnt: 0 is_called:1
+ COMMIT
+ sequence public.test_sequence: transactional:0 last_value: 334 log_cnt: 0 is_called:1
+ BEGIN
+ sequence public.test_sequence: transactional:1 last_value: 14 log_cnt: 32 is_called:1
+ COMMIT
+ BEGIN
+ sequence public.test_sequence: transactional:1 last_value: 14 log_cnt: 0 is_called:1
+ COMMIT
+ BEGIN
+ sequence public.test_sequence: transactional:1 last_value: 4000 log_cnt: 0 is_called:0
+ COMMIT
+ sequence public.test_sequence: transactional:0 last_value: 4320 log_cnt: 0 is_called:1
+ sequence public.test_sequence: transactional:0 last_value: 3500 log_cnt: 0 is_called:1
+ sequence public.test_sequence: transactional:0 last_value: 3830 log_cnt: 0 is_called:1
+ sequence public.test_sequence: transactional:0 last_value: 3500 log_cnt: 0 is_called:1
+ sequence public.test_sequence: transactional:0 last_value: 3830 log_cnt: 0 is_called:1
+ sequence public.test_sequence: transactional:0 last_value: 3500 log_cnt: 0 is_called:0
+ sequence public.test_sequence: transactional:0 last_value: 3820 log_cnt: 0 is_called:1
+(24 rows)
+
+DROP SEQUENCE test_sequence;
+-- rollback on sequence creation and update
+BEGIN;
+CREATE SEQUENCE test_sequence;
+CREATE TABLE test_table (a INT);
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 1
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 2
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3
+(1 row)
+
+SELECT setval('test_sequence', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3001
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3002
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3003
+(1 row)
+
+ALTER SEQUENCE test_sequence RESTART WITH 6000;
+INSERT INTO test_table VALUES( (SELECT nextval('test_sequence')) );
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 6001
+(1 row)
+
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+ data
+------
+(0 rows)
+
+-- rollback on table creation with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+ data
+------
+(0 rows)
+
+-- rollback on table with serial column
+CREATE TABLE test_table (a SERIAL, b INT);
+BEGIN;
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+-- check table and sequence values after rollback
+SELECT * from test_table_a_seq;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+ 3003 | 30 | t
+(1 row)
+
+SELECT nextval('test_table_a_seq');
+ nextval
+---------
+ 3004
+(1 row)
+
+DROP TABLE test_table;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+ data
+-------------------------------------------------------------------------------------------
+ BEGIN
+ sequence public.test_table_a_seq: transactional:1 last_value: 1 log_cnt: 0 is_called:0
+ COMMIT
+ sequence public.test_table_a_seq: transactional:0 last_value: 33 log_cnt: 0 is_called:1
+ sequence public.test_table_a_seq: transactional:0 last_value: 3000 log_cnt: 0 is_called:1
+ sequence public.test_table_a_seq: transactional:0 last_value: 3033 log_cnt: 0 is_called:1
+(6 rows)
+
+-- savepoint test on table with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+SAVEPOINT a;
+INSERT INTO test_table (b) VALUES (300);
+ROLLBACK TO SAVEPOINT a;
+DROP TABLE test_table;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+ data
+-----------------------------------------------------------------------------------------
+ BEGIN
+ sequence public.test_table_a_seq: transactional:1 last_value: 1 log_cnt: 0 is_called:0
+ sequence public.test_table_a_seq: transactional:1 last_value: 33 log_cnt: 0 is_called:1
+ table public.test_table: INSERT: a[integer]:1 b[integer]:100
+ table public.test_table: INSERT: a[integer]:2 b[integer]:200
+ COMMIT
+(6 rows)
+
+-- savepoint test on sequence
+BEGIN;
+CREATE SEQUENCE test_sequence;
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 1
+(1 row)
+
+SELECT setval('test_sequence', 3000);
+ setval
+--------
+ 3000
+(1 row)
+
+SELECT nextval('test_sequence');
+ nextval
+---------
+ 3001
+(1 row)
+
+SAVEPOINT a;
+ALTER SEQUENCE test_sequence START WITH 7000;
+SELECT setval('test_sequence', 5000);
+ setval
+--------
+ 5000
+(1 row)
+
+ROLLBACK TO SAVEPOINT a;
+SELECT * FROM test_sequence;
+ last_value | log_cnt | is_called
+------------+---------+-----------
+ 3001 | 32 | t
+(1 row)
+
+DROP SEQUENCE test_sequence;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+ data
+----------------------------------------------------------------------------------------
+ BEGIN
+ sequence public.test_sequence: transactional:1 last_value: 1 log_cnt: 0 is_called:0
+ sequence public.test_sequence: transactional:1 last_value: 33 log_cnt: 0 is_called:1
+ sequence public.test_sequence: transactional:1 last_value: 3000 log_cnt: 0 is_called:1
+ sequence public.test_sequence: transactional:1 last_value: 3033 log_cnt: 0 is_called:1
+ COMMIT
+(6 rows)
+
+SELECT pg_drop_replication_slot('regression_slot');
+ pg_drop_replication_slot
+--------------------------
+
+(1 row)
+
diff --git a/contrib/test_decoding/expected/slot.out b/contrib/test_decoding/expected/slot.out
index 63a9940f73a..de59d544ae8 100644
--- a/contrib/test_decoding/expected/slot.out
+++ b/contrib/test_decoding/expected/slot.out
@@ -95,7 +95,7 @@ SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot2', 'test_
(1 row)
INSERT INTO replication_example(somedata, text) VALUES (1, 3);
-SELECT data FROM pg_logical_slot_get_changes('regression_slot1', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot1', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
data
---------------------------------------------------------------------------------------------------------
BEGIN
@@ -107,7 +107,7 @@ SELECT data FROM pg_logical_slot_get_changes('regression_slot1', NULL, NULL, 'in
COMMIT
(7 rows)
-SELECT data FROM pg_logical_slot_get_changes('regression_slot2', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot2', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
data
---------------------------------------------------------------------------------------------------------
BEGIN
@@ -132,7 +132,7 @@ SELECT :'wal_lsn' = :'end_lsn';
t
(1 row)
-SELECT data FROM pg_logical_slot_get_changes('regression_slot1', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot1', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
data
---------------------------------------------------------------------------------------------------------
BEGIN
@@ -140,7 +140,7 @@ SELECT data FROM pg_logical_slot_get_changes('regression_slot1', NULL, NULL, 'in
COMMIT
(3 rows)
-SELECT data FROM pg_logical_slot_get_changes('regression_slot2', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot2', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
data
------
(0 rows)
diff --git a/contrib/test_decoding/expected/toast.out b/contrib/test_decoding/expected/toast.out
index a757e7dc8d5..d0ab0e06394 100644
--- a/contrib/test_decoding/expected/toast.out
+++ b/contrib/test_decoding/expected/toast.out
@@ -52,7 +52,7 @@ CREATE TABLE toasted_copy (
);
ALTER TABLE toasted_copy ALTER COLUMN data SET STORAGE EXTERNAL;
\copy toasted_copy FROM STDIN
-SELECT substr(data, 1, 200) FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT substr(data, 1, 200) FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
substr
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
BEGIN
@@ -316,7 +316,7 @@ SELECT pg_column_size(toasted_key) > 2^16 FROM toasted_several;
t
(1 row)
-SELECT regexp_replace(data, '^(.{100}).*(.{100})$', '\1..\2') FROM pg_logical_slot_peek_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT regexp_replace(data, '^(.{100}).*(.{100})$', '\1..\2') FROM pg_logical_slot_peek_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
regexp_replace
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
BEGIN
@@ -327,7 +327,7 @@ SELECT regexp_replace(data, '^(.{100}).*(.{100})$', '\1..\2') FROM pg_logical_sl
-- test update of a toasted key without changing it
UPDATE toasted_several SET toasted_col1 = toasted_key;
UPDATE toasted_several SET toasted_col2 = toasted_col1;
-SELECT regexp_replace(data, '^(.{100}).*(.{100})$', '\1..\2') FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT regexp_replace(data, '^(.{100}).*(.{100})$', '\1..\2') FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
regexp_replace
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
BEGIN
@@ -350,7 +350,7 @@ UPDATE toasted_several SET toasted_col1 = toasted_col2 WHERE id = 1;
DELETE FROM toasted_several WHERE id = 1;
COMMIT;
DROP TABLE toasted_several;
-SELECT regexp_replace(data, '^(.{100}).*(.{100})$', '\1..\2') FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1')
+SELECT regexp_replace(data, '^(.{100}).*(.{100})$', '\1..\2') FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0')
WHERE data NOT LIKE '%INSERT: %';
regexp_replace
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
@@ -373,7 +373,7 @@ INSERT INTO tbl1 VALUES(1, repeat('a', 4000)) ;
ALTER TABLE tbl1 ADD COLUMN id serial primary key;
INSERT INTO tbl2 VALUES(1);
commit;
-SELECT substr(data, 1, 200) FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT substr(data, 1, 200) FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
substr
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
BEGIN
diff --git a/contrib/test_decoding/expected/truncate.out b/contrib/test_decoding/expected/truncate.out
index e64d377214a..961b0ebdc89 100644
--- a/contrib/test_decoding/expected/truncate.out
+++ b/contrib/test_decoding/expected/truncate.out
@@ -11,7 +11,7 @@ CREATE TABLE tab2 (a int primary key, b int);
TRUNCATE tab1;
TRUNCATE tab1, tab1 RESTART IDENTITY CASCADE;
TRUNCATE tab1, tab2;
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
data
------------------------------------------------------
BEGIN
diff --git a/contrib/test_decoding/specs/mxact.spec b/contrib/test_decoding/specs/mxact.spec
index ea5b1aa2d67..19f3af14b30 100644
--- a/contrib/test_decoding/specs/mxact.spec
+++ b/contrib/test_decoding/specs/mxact.spec
@@ -13,7 +13,7 @@ teardown
session "s0"
setup { SET synchronous_commit=on; }
step "s0init" {SELECT 'init' FROM pg_create_logical_replication_slot('isolation_slot', 'test_decoding');}
-step "s0start" {SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false');}
+step "s0start" {SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false', 'include-sequences', 'false');}
step "s0alter" {ALTER TABLE do_write ADD column ts timestamptz; }
step "s0w" { INSERT INTO do_write DEFAULT VALUES; }
diff --git a/contrib/test_decoding/specs/ondisk_startup.spec b/contrib/test_decoding/specs/ondisk_startup.spec
index 96ce87f1a45..fb43cbc5e15 100644
--- a/contrib/test_decoding/specs/ondisk_startup.spec
+++ b/contrib/test_decoding/specs/ondisk_startup.spec
@@ -16,7 +16,7 @@ session "s1"
setup { SET synchronous_commit=on; }
step "s1init" {SELECT 'init' FROM pg_create_logical_replication_slot('isolation_slot', 'test_decoding');}
-step "s1start" {SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false');}
+step "s1start" {SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'include-xids', 'false', 'include-sequences', 'false');}
step "s1insert" { INSERT INTO do_write DEFAULT VALUES; }
step "s1checkpoint" { CHECKPOINT; }
step "s1alter" { ALTER TABLE do_write ADD COLUMN addedbys1 int; }
diff --git a/contrib/test_decoding/sql/ddl.sql b/contrib/test_decoding/sql/ddl.sql
index 4f76bed72c1..807bc56a451 100644
--- a/contrib/test_decoding/sql/ddl.sql
+++ b/contrib/test_decoding/sql/ddl.sql
@@ -19,7 +19,7 @@ SELECT pg_drop_replication_slot('regression_slot');
-- check that we're detecting a streaming rep slot used for logical decoding
SELECT 'init' FROM pg_create_physical_replication_slot('repl');
-SELECT data FROM pg_logical_slot_get_changes('repl', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('repl', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
SELECT pg_drop_replication_slot('repl');
@@ -64,11 +64,11 @@ ALTER TABLE replication_example RENAME COLUMN text TO somenum;
INSERT INTO replication_example(somedata, somenum) VALUES (4, 1);
-- collect all changes
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
ALTER TABLE replication_example ALTER COLUMN somenum TYPE int4 USING (somenum::int4);
-- check that this doesn't produce any changes from the heap rewrite
-SELECT count(data) FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT count(data) FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
INSERT INTO replication_example(somedata, somenum) VALUES (5, 1);
@@ -82,7 +82,7 @@ INSERT INTO replication_example(somedata, somenum, zaphod1) VALUES (6, 4, 2);
COMMIT;
-- show changes
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
-- ON CONFLICT DO UPDATE support
BEGIN;
@@ -91,7 +91,7 @@ INSERT INTO replication_example(id, somedata, somenum) SELECT i, i, i FROM gener
COMMIT;
/* display results */
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
-- MERGE support
BEGIN;
@@ -113,14 +113,14 @@ CREATE TABLE tr_unique(id2 serial unique NOT NULL, data int);
INSERT INTO tr_unique(data) VALUES(10);
ALTER TABLE tr_unique RENAME TO tr_pkey;
ALTER TABLE tr_pkey ADD COLUMN id serial primary key;
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-rewrites', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-rewrites', '1', 'include-sequences', '0');
INSERT INTO tr_pkey(data) VALUES(1);
--show deletion with primary key
DELETE FROM tr_pkey;
/* display results */
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
/*
* check that disk spooling works (also for logical messages)
@@ -137,7 +137,7 @@ COMMIT;
/* display results, but hide most of the output */
SELECT count(*), min(data), max(data)
-FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1')
+FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0')
GROUP BY substring(data, 1, 24)
ORDER BY 1,2;
@@ -152,7 +152,7 @@ DROP TABLE spoolme;
COMMIT;
SELECT data
-FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1')
+FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0')
WHERE data ~ 'UPDATE';
-- check that a large, spooled, upsert works
@@ -161,7 +161,7 @@ SELECT g.i, -g.i FROM generate_series(8000, 12000) g(i)
ON CONFLICT(id) DO UPDATE SET data = EXCLUDED.data;
SELECT substring(data, 1, 29), count(*)
-FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1') WITH ORDINALITY
+FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0') WITH ORDINALITY
GROUP BY 1
ORDER BY min(ordinality);
@@ -189,7 +189,7 @@ INSERT INTO tr_sub(path) VALUES ('1-top-2-#1');
RELEASE SAVEPOINT b;
COMMIT;
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
-- check that we handle xlog assignments correctly
BEGIN;
@@ -218,7 +218,7 @@ RELEASE SAVEPOINT subtop;
INSERT INTO tr_sub(path) VALUES ('2-top-#1');
COMMIT;
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
-- make sure rollbacked subtransactions aren't decoded
BEGIN;
@@ -231,7 +231,7 @@ ROLLBACK TO SAVEPOINT b;
INSERT INTO tr_sub(path) VALUES ('3-top-2-#2');
COMMIT;
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
-- test whether a known, but not yet logged toplevel xact, followed by a
-- subxact commit is handled correctly
@@ -250,7 +250,7 @@ INSERT INTO tr_sub(path) VALUES ('5-top-1-#1');
COMMIT;
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
-- check that DDL in aborted subtransactions handled correctly
CREATE TABLE tr_sub_ddl(data int);
@@ -263,7 +263,7 @@ ALTER TABLE tr_sub_ddl ALTER COLUMN data TYPE bigint;
INSERT INTO tr_sub_ddl VALUES(43);
COMMIT;
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
/*
@@ -303,7 +303,7 @@ ALTER TABLE replication_metadata SET (user_catalog_table = false);
INSERT INTO replication_metadata(relation, options)
VALUES ('zaphod', NULL);
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
/*
* check whether we handle updates/deletes correct with & without a pkey
@@ -414,7 +414,7 @@ UPDATE toasttable
SET toasted_col1 = (SELECT string_agg(g.i::text, '') FROM generate_series(1, 2000) g(i))
WHERE id = 1;
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
INSERT INTO toasttable(toasted_col1) SELECT string_agg(g.i::text, '') FROM generate_series(1, 2000) g(i);
@@ -426,10 +426,10 @@ WHERE id = 1;
-- make sure we decode correctly even if the toast table is gone
DROP TABLE toasttable;
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
-- done, free logical replication slot
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
SELECT pg_drop_replication_slot('regression_slot');
diff --git a/contrib/test_decoding/sql/decoding_in_xact.sql b/contrib/test_decoding/sql/decoding_in_xact.sql
index 108782dc2e9..b343b745661 100644
--- a/contrib/test_decoding/sql/decoding_in_xact.sql
+++ b/contrib/test_decoding/sql/decoding_in_xact.sql
@@ -32,10 +32,10 @@ BEGIN;
SELECT pg_current_xact_id() = '0';
-- don't show yet, haven't committed
INSERT INTO nobarf(data) VALUES('2');
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
COMMIT;
INSERT INTO nobarf(data) VALUES('3');
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
SELECT 'stop' FROM pg_drop_replication_slot('regression_slot');
diff --git a/contrib/test_decoding/sql/decoding_into_rel.sql b/contrib/test_decoding/sql/decoding_into_rel.sql
index 1068cec5888..27d5d08879e 100644
--- a/contrib/test_decoding/sql/decoding_into_rel.sql
+++ b/contrib/test_decoding/sql/decoding_into_rel.sql
@@ -15,14 +15,14 @@ CREATE TABLE somechange(id serial primary key);
INSERT INTO somechange DEFAULT VALUES;
CREATE TABLE changeresult AS
- SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+ SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
SELECT * FROM changeresult;
INSERT INTO changeresult
- SELECT data FROM pg_logical_slot_peek_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+ SELECT data FROM pg_logical_slot_peek_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
INSERT INTO changeresult
- SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+ SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
SELECT * FROM changeresult;
DROP TABLE changeresult;
@@ -32,11 +32,11 @@ DROP TABLE somechange;
CREATE FUNCTION slot_changes_wrapper(slot_name name) RETURNS SETOF TEXT AS $$
BEGIN
RETURN QUERY
- SELECT data FROM pg_logical_slot_peek_changes(slot_name, NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+ SELECT data FROM pg_logical_slot_peek_changes(slot_name, NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
END$$ LANGUAGE plpgsql;
SELECT * FROM slot_changes_wrapper('regression_slot');
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
SELECT 'stop' FROM pg_drop_replication_slot('regression_slot');
diff --git a/contrib/test_decoding/sql/replorigin.sql b/contrib/test_decoding/sql/replorigin.sql
index 2e28a487773..cd0a370208e 100644
--- a/contrib/test_decoding/sql/replorigin.sql
+++ b/contrib/test_decoding/sql/replorigin.sql
@@ -41,10 +41,10 @@ SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'test_d
-- origin tx
INSERT INTO origin_tbl(data) VALUES ('will be replicated and decoded and decoded again');
INSERT INTO target_tbl(data)
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
-- as is normal, the insert into target_tbl shows up
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
INSERT INTO origin_tbl(data) VALUES ('will be replicated, but not decoded again');
@@ -60,7 +60,7 @@ BEGIN;
-- setup transaction origin
SELECT pg_replication_origin_xact_setup('0/aabbccdd', '2013-01-01 00:00');
INSERT INTO target_tbl(data)
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'only-local', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'only-local', '1', 'include-sequences', '0');
COMMIT;
-- check replication progress for the session is correct
@@ -79,11 +79,11 @@ SELECT pg_replication_origin_progress('regress_test_decoding: regression_slot',
SELECT pg_replication_origin_session_reset();
-- and magically the replayed xact will be filtered!
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'only-local', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'only-local', '1', 'include-sequences', '0');
--but new original changes still show up
INSERT INTO origin_tbl(data) VALUES ('will be replicated');
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'only-local', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'only-local', '1', 'include-sequences', '0');
SELECT pg_drop_replication_slot('regression_slot');
SELECT pg_replication_origin_drop('regress_test_decoding: regression_slot');
@@ -114,7 +114,7 @@ SELECT local_id, external_id,
remote_lsn <> '0/0' AS valid_remote_lsn,
local_lsn <> '0/0' AS valid_local_lsn
FROM pg_replication_origin_status;
-SELECT data FROM pg_logical_slot_get_changes('regression_slot_no_lsn', NULL, NULL, 'skip-empty-xacts', '1', 'include-xids', '0');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot_no_lsn', NULL, NULL, 'skip-empty-xacts', '1', 'include-xids', '0', 'include-sequences', '0');
-- Clean up
SELECT pg_replication_origin_session_reset();
SELECT pg_drop_replication_slot('regression_slot_no_lsn');
diff --git a/contrib/test_decoding/sql/rewrite.sql b/contrib/test_decoding/sql/rewrite.sql
index 62dead3a9b1..1715bd289dd 100644
--- a/contrib/test_decoding/sql/rewrite.sql
+++ b/contrib/test_decoding/sql/rewrite.sql
@@ -35,7 +35,7 @@ SELECT pg_relation_size((SELECT reltoastrelid FROM pg_class WHERE oid = 'pg_shde
SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'test_decoding');
CREATE TABLE replication_example(id SERIAL PRIMARY KEY, somedata int, text varchar(120));
INSERT INTO replication_example(somedata) VALUES (1);
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
BEGIN;
INSERT INTO replication_example(somedata) VALUES (2);
@@ -90,7 +90,7 @@ COMMIT;
-- make old files go away
CHECKPOINT;
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
-- trigger repeated rewrites of a system catalog with a toast table,
-- that previously was buggy: 20180914021046.oi7dm4ra3ot2g2kt@alap3.anarazel.de
@@ -98,7 +98,7 @@ VACUUM FULL pg_proc; VACUUM FULL pg_description; VACUUM FULL pg_shdescription; V
INSERT INTO replication_example(somedata, testcolumn1, testcolumn3) VALUES (8, 6, 1);
VACUUM FULL pg_proc; VACUUM FULL pg_description; VACUUM FULL pg_shdescription; VACUUM FULL iamalargetable;
INSERT INTO replication_example(somedata, testcolumn1, testcolumn3) VALUES (9, 7, 1);
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
SELECT pg_drop_replication_slot('regression_slot');
DROP TABLE IF EXISTS replication_example;
diff --git a/contrib/test_decoding/sql/sequence.sql b/contrib/test_decoding/sql/sequence.sql
new file mode 100644
index 00000000000..978ee8088e1
--- /dev/null
+++ b/contrib/test_decoding/sql/sequence.sql
@@ -0,0 +1,119 @@
+-- predictability
+SET synchronous_commit = on;
+SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'test_decoding');
+
+CREATE SEQUENCE test_sequence;
+
+-- test the sequence changes by several nextval() calls
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+
+-- test the sequence changes by several ALTER commands
+ALTER SEQUENCE test_sequence INCREMENT BY 10;
+SELECT nextval('test_sequence');
+
+ALTER SEQUENCE test_sequence START WITH 3000;
+ALTER SEQUENCE test_sequence MAXVALUE 10000;
+ALTER SEQUENCE test_sequence RESTART WITH 4000;
+SELECT nextval('test_sequence');
+
+-- test the sequence changes by several setval() calls
+SELECT setval('test_sequence', 3500);
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3500, true);
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3500, false);
+SELECT nextval('test_sequence');
+
+-- show results and drop sequence
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+DROP SEQUENCE test_sequence;
+
+-- rollback on sequence creation and update
+BEGIN;
+CREATE SEQUENCE test_sequence;
+CREATE TABLE test_table (a INT);
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3000);
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+SELECT nextval('test_sequence');
+ALTER SEQUENCE test_sequence RESTART WITH 6000;
+INSERT INTO test_table VALUES( (SELECT nextval('test_sequence')) );
+SELECT nextval('test_sequence');
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+
+-- rollback on table creation with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+
+-- rollback on table with serial column
+CREATE TABLE test_table (a SERIAL, b INT);
+
+BEGIN;
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+INSERT INTO test_table (b) VALUES (300);
+SELECT setval('test_table_a_seq', 3000);
+INSERT INTO test_table (b) VALUES (400);
+INSERT INTO test_table (b) VALUES (500);
+INSERT INTO test_table (b) VALUES (600);
+ALTER SEQUENCE test_table_a_seq RESTART WITH 6000;
+INSERT INTO test_table (b) VALUES (700);
+INSERT INTO test_table (b) VALUES (800);
+INSERT INTO test_table (b) VALUES (900);
+ROLLBACK;
+
+-- check table and sequence values after rollback
+SELECT * from test_table_a_seq;
+SELECT nextval('test_table_a_seq');
+
+DROP TABLE test_table;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+
+-- savepoint test on table with serial column
+BEGIN;
+CREATE TABLE test_table (a SERIAL, b INT);
+INSERT INTO test_table (b) VALUES (100);
+INSERT INTO test_table (b) VALUES (200);
+SAVEPOINT a;
+INSERT INTO test_table (b) VALUES (300);
+ROLLBACK TO SAVEPOINT a;
+DROP TABLE test_table;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+
+-- savepoint test on sequence
+BEGIN;
+CREATE SEQUENCE test_sequence;
+SELECT nextval('test_sequence');
+SELECT setval('test_sequence', 3000);
+SELECT nextval('test_sequence');
+SAVEPOINT a;
+ALTER SEQUENCE test_sequence START WITH 7000;
+SELECT setval('test_sequence', 5000);
+ROLLBACK TO SAVEPOINT a;
+SELECT * FROM test_sequence;
+DROP SEQUENCE test_sequence;
+COMMIT;
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+
+SELECT pg_drop_replication_slot('regression_slot');
diff --git a/contrib/test_decoding/sql/slot.sql b/contrib/test_decoding/sql/slot.sql
index 1aa27c56674..52a740c43de 100644
--- a/contrib/test_decoding/sql/slot.sql
+++ b/contrib/test_decoding/sql/slot.sql
@@ -50,8 +50,8 @@ SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot2', 'test_
INSERT INTO replication_example(somedata, text) VALUES (1, 3);
-SELECT data FROM pg_logical_slot_get_changes('regression_slot1', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
-SELECT data FROM pg_logical_slot_get_changes('regression_slot2', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot1', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot2', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
INSERT INTO replication_example(somedata, text) VALUES (1, 4);
INSERT INTO replication_example(somedata, text) VALUES (1, 5);
@@ -65,8 +65,8 @@ SELECT slot_name FROM pg_replication_slot_advance('regression_slot2', pg_current
SELECT :'wal_lsn' = :'end_lsn';
-SELECT data FROM pg_logical_slot_get_changes('regression_slot1', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
-SELECT data FROM pg_logical_slot_get_changes('regression_slot2', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot1', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot2', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
DROP TABLE replication_example;
diff --git a/contrib/test_decoding/sql/toast.sql b/contrib/test_decoding/sql/toast.sql
index d1c560a174d..f5d4fc082ed 100644
--- a/contrib/test_decoding/sql/toast.sql
+++ b/contrib/test_decoding/sql/toast.sql
@@ -264,7 +264,7 @@ ALTER TABLE toasted_copy ALTER COLUMN data SET STORAGE EXTERNAL;
202 untoasted199
203 untoasted200
\.
-SELECT substr(data, 1, 200) FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT substr(data, 1, 200) FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
-- test we can decode "old" tuples bigger than the max heap tuple size correctly
DROP TABLE IF EXISTS toasted_several;
@@ -287,13 +287,13 @@ UPDATE pg_attribute SET attstorage = 'x' WHERE attrelid = 'toasted_several_pkey'
INSERT INTO toasted_several(toasted_key) VALUES(repeat('9876543210', 10000));
SELECT pg_column_size(toasted_key) > 2^16 FROM toasted_several;
-SELECT regexp_replace(data, '^(.{100}).*(.{100})$', '\1..\2') FROM pg_logical_slot_peek_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT regexp_replace(data, '^(.{100}).*(.{100})$', '\1..\2') FROM pg_logical_slot_peek_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
-- test update of a toasted key without changing it
UPDATE toasted_several SET toasted_col1 = toasted_key;
UPDATE toasted_several SET toasted_col2 = toasted_col1;
-SELECT regexp_replace(data, '^(.{100}).*(.{100})$', '\1..\2') FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT regexp_replace(data, '^(.{100}).*(.{100})$', '\1..\2') FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
/*
* update with large tuplebuf, in a transaction large enough to force to spool to disk
@@ -306,7 +306,7 @@ COMMIT;
DROP TABLE toasted_several;
-SELECT regexp_replace(data, '^(.{100}).*(.{100})$', '\1..\2') FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1')
+SELECT regexp_replace(data, '^(.{100}).*(.{100})$', '\1..\2') FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0')
WHERE data NOT LIKE '%INSERT: %';
/*
@@ -322,6 +322,6 @@ INSERT INTO tbl1 VALUES(1, repeat('a', 4000)) ;
ALTER TABLE tbl1 ADD COLUMN id serial primary key;
INSERT INTO tbl2 VALUES(1);
commit;
-SELECT substr(data, 1, 200) FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT substr(data, 1, 200) FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
SELECT pg_drop_replication_slot('regression_slot');
diff --git a/contrib/test_decoding/sql/truncate.sql b/contrib/test_decoding/sql/truncate.sql
index 5633854e0df..b0f36fd681b 100644
--- a/contrib/test_decoding/sql/truncate.sql
+++ b/contrib/test_decoding/sql/truncate.sql
@@ -10,5 +10,5 @@ TRUNCATE tab1;
TRUNCATE tab1, tab1 RESTART IDENTITY CASCADE;
TRUNCATE tab1, tab2;
-SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');
+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences', '0');
SELECT pg_drop_replication_slot('regression_slot');
diff --git a/contrib/test_decoding/test_decoding.c b/contrib/test_decoding/test_decoding.c
index 08d366a594e..851117b4155 100644
--- a/contrib/test_decoding/test_decoding.c
+++ b/contrib/test_decoding/test_decoding.c
@@ -35,6 +35,7 @@ typedef struct
bool include_timestamp;
bool skip_empty_xacts;
bool only_local;
+ bool include_sequences;
} TestDecodingData;
/*
@@ -76,6 +77,10 @@ static void pg_decode_message(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn, XLogRecPtr message_lsn,
bool transactional, const char *prefix,
Size sz, const char *message);
+static void pg_decode_sequence(LogicalDecodingContext *ctx,
+							   ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+							   Relation rel, bool transactional,
+							   int64 last_value, int64 log_cnt, bool is_called);
static bool pg_decode_filter_prepare(LogicalDecodingContext *ctx,
TransactionId xid,
const char *gid);
@@ -116,6 +121,10 @@ static void pg_decode_stream_message(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn, XLogRecPtr message_lsn,
bool transactional, const char *prefix,
Size sz, const char *message);
+static void pg_decode_stream_sequence(LogicalDecodingContext *ctx,
+									  ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+									  Relation rel, bool transactional,
+									  int64 last_value, int64 log_cnt, bool is_called);
static void pg_decode_stream_truncate(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn,
int nrelations, Relation relations[],
@@ -141,6 +150,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->filter_by_origin_cb = pg_decode_filter;
cb->shutdown_cb = pg_decode_shutdown;
cb->message_cb = pg_decode_message;
+ cb->sequence_cb = pg_decode_sequence;
cb->filter_prepare_cb = pg_decode_filter_prepare;
cb->begin_prepare_cb = pg_decode_begin_prepare_txn;
cb->prepare_cb = pg_decode_prepare_txn;
@@ -153,6 +163,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->stream_commit_cb = pg_decode_stream_commit;
cb->stream_change_cb = pg_decode_stream_change;
cb->stream_message_cb = pg_decode_stream_message;
+ cb->stream_sequence_cb = pg_decode_stream_sequence;
cb->stream_truncate_cb = pg_decode_stream_truncate;
}
@@ -173,6 +184,7 @@ pg_decode_startup(LogicalDecodingContext *ctx, OutputPluginOptions *opt,
data->include_xids = true;
data->include_timestamp = false;
data->skip_empty_xacts = false;
+ data->include_sequences = true;
data->only_local = false;
ctx->output_plugin_private = data;
@@ -265,6 +277,17 @@ pg_decode_startup(LogicalDecodingContext *ctx, OutputPluginOptions *opt,
errmsg("could not parse value \"%s\" for parameter \"%s\"",
strVal(elem->arg), elem->defname)));
}
+		else if (strcmp(elem->defname, "include-sequences") == 0)
+		{
+			/* if the option carries no value, treat it as enabled */
+			if (elem->arg == NULL)
+				data->include_sequences = true;
+			else if (!parse_bool(strVal(elem->arg), &data->include_sequences))
+				ereport(ERROR,
+						(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+						 errmsg("could not parse value \"%s\" for parameter \"%s\"",
+								strVal(elem->arg), elem->defname)));
+		}
else
{
ereport(ERROR,
@@ -756,6 +779,34 @@ pg_decode_message(LogicalDecodingContext *ctx,
OutputPluginWrite(ctx, true);
}
+static void
+pg_decode_sequence(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
+				   XLogRecPtr sequence_lsn, Relation rel,
+				   bool transactional,
+				   int64 last_value, int64 log_cnt, bool is_called)
+{
+	TestDecodingData *data = ctx->output_plugin_private;
+
+	if (!data->include_sequences)
+		return;
+
+	/* output BEGIN if we haven't yet, but only for the transactional case */
+	if (transactional)
+	{
+		TestDecodingTxnData *txndata = txn->output_plugin_private;
+
+		if (data->skip_empty_xacts && !txndata->xact_wrote_changes)
+			pg_output_begin(ctx, data, txn, false);
+		txndata->xact_wrote_changes = true;
+	}
+
+	OutputPluginPrepareWrite(ctx, true);
+	appendStringInfoString(ctx->out, "sequence ");
+	appendStringInfoString(ctx->out,
+						   quote_qualified_identifier(get_namespace_name(get_rel_namespace(RelationGetRelid(rel))),
+													  RelationGetRelationName(rel)));
+	appendStringInfo(ctx->out, ": transactional:%d last_value: " INT64_FORMAT " log_cnt: " INT64_FORMAT " is_called:%d",
+					 transactional, last_value, log_cnt, is_called);
+	OutputPluginWrite(ctx, true);
+}
+
static void
pg_decode_stream_start(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn)
@@ -955,6 +1006,36 @@ pg_decode_stream_message(LogicalDecodingContext *ctx,
OutputPluginWrite(ctx, true);
}
+static void
+pg_decode_stream_sequence(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
+						  XLogRecPtr sequence_lsn, Relation rel,
+						  bool transactional,
+						  int64 last_value, int64 log_cnt, bool is_called)
+{
+	TestDecodingData *data = ctx->output_plugin_private;
+	TestDecodingTxnData *txndata = txn->output_plugin_private;
+
+	if (!data->include_sequences)
+		return;
+
+	/* output BEGIN if we haven't yet */
+	if (data->skip_empty_xacts && !txndata->xact_wrote_changes)
+		pg_output_begin(ctx, data, txn, false);
+	txndata->xact_wrote_changes = true;
+
+	OutputPluginPrepareWrite(ctx, true);
+	appendStringInfoString(ctx->out, "streaming sequence ");
+	appendStringInfoString(ctx->out,
+						   quote_qualified_identifier(get_namespace_name(get_rel_namespace(RelationGetRelid(rel))),
+													  RelationGetRelationName(rel)));
+	appendStringInfo(ctx->out, ": last_value: " INT64_FORMAT " log_cnt: " INT64_FORMAT " is_called:%d",
+					 last_value, log_cnt, is_called);
+	OutputPluginWrite(ctx, true);
+}
+
/*
* In streaming mode, we don't display the detailed information of Truncate.
* See pg_decode_stream_change.
diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index e8f850a9c07..646ab74d049 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -6354,6 +6354,16 @@ SCRAM-SHA-256$<replaceable><iteration count></replaceable>:<replaceable>&l
Reference to schema
</para></entry>
</row>
+
+     <row>
+      <entry role="catalog_table_entry"><para role="column_definition">
+       <structfield>pntype</structfield> <type>char</type>
+      </para>
+      <para>
+       Determines which object type (table or sequence) is included from
+       this schema
+      </para></entry>
+     </row>
</tbody>
</tgroup>
</table>
@@ -9689,6 +9699,11 @@ SCRAM-SHA-256$<replaceable><iteration count></replaceable>:<replaceable>&l
<entry>prepared transactions</entry>
</row>
+ <row>
+ <entry><link linkend="view-pg-publication-sequences"><structname>pg_publication_sequences</structname></link></entry>
+ <entry>publications and their associated sequences</entry>
+ </row>
+
<row>
<entry><link linkend="view-pg-publication-tables"><structname>pg_publication_tables</structname></link></entry>
<entry>publications and their associated tables</entry>
@@ -11626,6 +11641,72 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
</sect1>
+ <sect1 id="view-pg-publication-sequences">
+ <title><structname>pg_publication_sequences</structname></title>
+
+ <indexterm zone="view-pg-publication-sequences">
+ <primary>pg_publication_sequences</primary>
+ </indexterm>
+
+ <para>
+ The view <structname>pg_publication_sequences</structname> provides
+ information about the mapping between publications and the sequences they
+ contain. Unlike the underlying catalog
+ <link linkend="catalog-pg-publication-rel"><structname>pg_publication_rel</structname></link>,
+ this view expands
+   publications defined as <literal>FOR ALL SEQUENCES</literal> and
+   <literal>FOR ALL SEQUENCES IN SCHEMA</literal>, so for such publications
+   there will be a row for each eligible sequence.
+ </para>
+
+ <table>
+ <title><structname>pg_publication_sequences</structname> Columns</title>
+ <tgroup cols="1">
+ <thead>
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ Column Type
+ </para>
+ <para>
+ Description
+ </para></entry>
+ </row>
+ </thead>
+
+ <tbody>
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>pubname</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-publication"><structname>pg_publication</structname></link>.<structfield>pubname</structfield>)
+ </para>
+ <para>
+ Name of publication
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>schemaname</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-namespace"><structname>pg_namespace</structname></link>.<structfield>nspname</structfield>)
+ </para>
+ <para>
+ Name of schema containing sequence
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>sequencename</structfield> <type>name</type>
+ (references <link linkend="catalog-pg-class"><structname>pg_class</structname></link>.<structfield>relname</structfield>)
+ </para>
+ <para>
+ Name of sequence
+ </para></entry>
+ </row>
+ </tbody>
+ </tgroup>
+ </table>
+ </sect1>
+
<sect1 id="view-pg-publication-tables">
<title><structname>pg_publication_tables</structname></title>
diff --git a/doc/src/sgml/logicaldecoding.sgml b/doc/src/sgml/logicaldecoding.sgml
index 8b2c31a87fa..a6ea6ff3fcf 100644
--- a/doc/src/sgml/logicaldecoding.sgml
+++ b/doc/src/sgml/logicaldecoding.sgml
@@ -458,6 +458,7 @@ typedef struct OutputPluginCallbacks
LogicalDecodeTruncateCB truncate_cb;
LogicalDecodeCommitCB commit_cb;
LogicalDecodeMessageCB message_cb;
+ LogicalDecodeSequenceCB sequence_cb;
LogicalDecodeFilterByOriginCB filter_by_origin_cb;
LogicalDecodeShutdownCB shutdown_cb;
LogicalDecodeFilterPrepareCB filter_prepare_cb;
@@ -472,6 +473,7 @@ typedef struct OutputPluginCallbacks
LogicalDecodeStreamCommitCB stream_commit_cb;
LogicalDecodeStreamChangeCB stream_change_cb;
LogicalDecodeStreamMessageCB stream_message_cb;
+ LogicalDecodeStreamSequenceCB stream_sequence_cb;
LogicalDecodeStreamTruncateCB stream_truncate_cb;
} OutputPluginCallbacks;
@@ -481,9 +483,11 @@ typedef void (*LogicalOutputPluginInit) (struct OutputPluginCallbacks *cb);
and <function>commit_cb</function> callbacks are required,
while <function>startup_cb</function>,
<function>filter_by_origin_cb</function>, <function>truncate_cb</function>,
- and <function>shutdown_cb</function> are optional.
- If <function>truncate_cb</function> is not set but a
+ <function>sequence_cb</function>, and <function>shutdown_cb</function> are
+ optional. If <function>truncate_cb</function> is not set but a
<command>TRUNCATE</command> is to be decoded, the action will be ignored.
+ Similarly, if <function>sequence_cb</function> is not set and a sequence
+ change is to be decoded, the action will be ignored.
</para>
<para>
@@ -492,7 +496,8 @@ typedef void (*LogicalOutputPluginInit) (struct OutputPluginCallbacks *cb);
<function>stream_stop_cb</function>, <function>stream_abort_cb</function>,
<function>stream_commit_cb</function>, <function>stream_change_cb</function>,
and <function>stream_prepare_cb</function>
- are required, while <function>stream_message_cb</function> and
+ are required, while <function>stream_message_cb</function>,
+ <function>stream_sequence_cb</function>, and
<function>stream_truncate_cb</function> are optional.
</para>
@@ -808,6 +813,35 @@ typedef void (*LogicalDecodeMessageCB) (struct LogicalDecodingContext *ctx,
</para>
</sect3>
+ <sect3 id="logicaldecoding-output-plugin-sequence">
+ <title>Sequence Callback</title>
+
+ <para>
+ The optional <function>sequence_cb</function> callback is called for
+ actions that update a sequence value.
+<programlisting>
+typedef void (*LogicalDecodeSequenceCB) (struct LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional,
+ int64 last_value,
+ int64 log_cnt,
+ bool is_called);
+</programlisting>
+     The <parameter>txn</parameter> parameter contains meta information about
+     the transaction the sequence change is part of. Note however that for
+     non-transactional increments it may be NULL or non-NULL, depending on
+     whether the transaction already has an XID assigned.
+     The <parameter>sequence_lsn</parameter> parameter contains the WAL
+     location of the sequence change, and <parameter>transactional</parameter>
+     says whether the change has to be replayed as part of the transaction
+     or applied directly.
+
+     The <parameter>last_value</parameter>, <parameter>log_cnt</parameter> and
+     <parameter>is_called</parameter> parameters describe the sequence change.
+ </para>
+ </sect3>
+
<sect3 id="logicaldecoding-output-plugin-filter-prepare">
<title>Prepare Filter Callback</title>
@@ -1017,6 +1051,26 @@ typedef void (*LogicalDecodeStreamMessageCB) (struct LogicalDecodingContext *ctx
</para>
</sect3>
+ <sect3 id="logicaldecoding-output-plugin-stream-sequence">
+ <title>Stream Sequence Callback</title>
+ <para>
+ The optional <function>stream_sequence_cb</function> callback is called
+ for actions that change a sequence in a block of streamed changes
+ (demarcated by <function>stream_start_cb</function> and
+ <function>stream_stop_cb</function> calls).
+<programlisting>
+typedef void (*LogicalDecodeStreamSequenceCB) (struct LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ bool transactional,
+ int64 last_value,
+ int64 log_cnt,
+ bool is_called);
+</programlisting>
+ </para>
+ </sect3>
+
<sect3 id="logicaldecoding-output-plugin-stream-truncate">
<title>Stream Truncate Callback</title>
<para>
@@ -1197,8 +1251,9 @@ OutputPluginWrite(ctx, true);
in-progress transactions. There are multiple required streaming callbacks
(<function>stream_start_cb</function>, <function>stream_stop_cb</function>,
<function>stream_abort_cb</function>, <function>stream_commit_cb</function>
- and <function>stream_change_cb</function>) and two optional callbacks
- (<function>stream_message_cb</function> and <function>stream_truncate_cb</function>).
+ and <function>stream_change_cb</function>) and multiple optional callbacks
+ (<function>stream_message_cb</function>, <function>stream_sequence_cb</function>,
+ and <function>stream_truncate_cb</function>).
Also, if streaming of two-phase commands is to be supported, then additional
callbacks must be provided. (See <xref linkend="logicaldecoding-two-phase-commits"/>
for details).
diff --git a/doc/src/sgml/protocol.sgml b/doc/src/sgml/protocol.sgml
index 75a7a76a558..98f0bc3cc34 100644
--- a/doc/src/sgml/protocol.sgml
+++ b/doc/src/sgml/protocol.sgml
@@ -7072,6 +7072,125 @@ Relation
</listitem>
</varlistentry>
+<varlistentry id="protocol-logicalrep-message-formats-Sequence">
+<term>
+Sequence
+</term>
+<listitem>
+<para>
+
+<variablelist>
+<varlistentry>
+<term>
+ Byte1('X')
+</term>
+<listitem>
+<para>
+ Identifies the message as a sequence message.
+</para>
+</listitem>
+</varlistentry>
+<varlistentry>
+<term>
+ Int32 (TransactionId)
+</term>
+<listitem>
+<para>
+ Xid of the transaction (only present for streamed transactions).
+ This field is available since protocol version 2.
+</para>
+</listitem>
+</varlistentry>
+<varlistentry>
+<term>
+ Int8(0)
+</term>
+<listitem>
+<para>
+ Flags; currently unused.
+</para>
+</listitem>
+</varlistentry>
+<varlistentry>
+<term>
+ Int64 (XLogRecPtr)
+</term>
+<listitem>
+<para>
+ The LSN of the sequence increment.
+</para>
+</listitem>
+</varlistentry>
+<varlistentry>
+<term>
+ String
+</term>
+<listitem>
+<para>
+ Namespace (empty string for <literal>pg_catalog</literal>).
+</para>
+</listitem>
+</varlistentry>
+<varlistentry>
+<term>
+ String
+</term>
+<listitem>
+<para>
+ Relation name.
+</para>
+</listitem>
+</varlistentry>
+
+<varlistentry>
+<term>
+ Int8
+</term>
+<listitem>
+<para>
+ 1 if the sequence update is transactional, 0 otherwise.
+</para>
+</listitem>
+</varlistentry>
+
+<varlistentry>
+<term>
+ Int64
+</term>
+<listitem>
+<para>
+ <structfield>last_value</structfield> value of the sequence.
+</para>
+</listitem>
+</varlistentry>
+
+<varlistentry>
+<term>
+ Int64
+</term>
+<listitem>
+<para>
+ <structfield>log_cnt</structfield> value of the sequence.
+</para>
+</listitem>
+</varlistentry>
+
+<varlistentry>
+<term>
+ Int8
+</term>
+<listitem>
+<para>
+ <structfield>is_called</structfield> value of the sequence.
+</para>
+</listitem>
+</varlistentry>
+
+</variablelist>
+</para>
+</listitem>
+</varlistentry>
+
<varlistentry id="protocol-logicalrep-message-formats-Type">
<term>
Type
diff --git a/doc/src/sgml/ref/alter_publication.sgml b/doc/src/sgml/ref/alter_publication.sgml
index e2cce49471b..db14d7a772c 100644
--- a/doc/src/sgml/ref/alter_publication.sgml
+++ b/doc/src/sgml/ref/alter_publication.sgml
@@ -31,7 +31,9 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
<phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
+ SEQUENCE <replaceable class="parameter">sequence_name</replaceable> [, ... ]
ALL TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+ ALL SEQUENCES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
</synopsis>
</refsynopsisdiv>
@@ -44,13 +46,13 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
</para>
<para>
- The first three variants change which tables/schemas are part of the
- publication. The <literal>SET</literal> clause will replace the list of
- tables/schemas in the publication with the specified list; the existing
- tables/schemas that were present in the publication will be removed. The
- <literal>ADD</literal> and <literal>DROP</literal> clauses will add and
- remove one or more tables/schemas from the publication. Note that adding
- tables/schemas to a publication that is already subscribed to will require an
+ The first three variants change which objects (tables, sequences or schemas)
+ are part of the publication. The <literal>SET</literal> clause will replace
+ the list of objects in the publication with the specified list; the existing
+ objects that were present in the publication will be removed.
+ The <literal>ADD</literal> and <literal>DROP</literal> clauses will add and
+ remove one or more objects from the publication. Note that adding objects
+ to a publication that is already subscribed to will require an
<literal>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</literal> action on the
subscribing side in order to become effective. Note also that the combination
of <literal>DROP</literal> with a <literal>WHERE</literal> clause is not
@@ -130,6 +132,15 @@ ALTER PUBLICATION <replaceable class="parameter">name</replaceable> RENAME TO <r
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><replaceable class="parameter">sequence_name</replaceable></term>
+ <listitem>
+ <para>
+ Name of an existing sequence.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><replaceable class="parameter">schema_name</replaceable></term>
<listitem>
diff --git a/doc/src/sgml/ref/alter_subscription.sgml b/doc/src/sgml/ref/alter_subscription.sgml
index 7c5203b6d3b..c1994e3a94a 100644
--- a/doc/src/sgml/ref/alter_subscription.sgml
+++ b/doc/src/sgml/ref/alter_subscription.sgml
@@ -150,8 +150,8 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
<listitem>
<para>
Fetch missing table information from publisher. This will start
- replication of tables that were added to the subscribed-to publications
- since <command>CREATE SUBSCRIPTION</command> or
+ replication of tables and sequences that were added to the subscribed-to
+ publications since <command>CREATE SUBSCRIPTION</command> or
the last invocation of <command>REFRESH PUBLICATION</command>.
</para>
@@ -169,8 +169,8 @@ ALTER SUBSCRIPTION <replaceable class="parameter">name</replaceable> RENAME TO <
The default is <literal>true</literal>.
</para>
<para>
- Previously subscribed tables are not copied, even if a table's row
- filter <literal>WHERE</literal> clause has since been modified.
+ Previously subscribed tables and sequences are not copied, even if a
+ table's row filter <literal>WHERE</literal> clause has since been modified.
</para>
</listitem>
</varlistentry>
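
On the subscriber side nothing new is needed to pick up sequences added to a
publication - it's the usual refresh, e.g. (subscription name is made up):

ALTER SUBSCRIPTION mysub REFRESH PUBLICATION;
-- or, to skip the initial sync of the newly added tables/sequences
ALTER SUBSCRIPTION mysub REFRESH PUBLICATION WITH (copy_data = false);
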
diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml
index fb2d013393b..d2739968d9c 100644
--- a/doc/src/sgml/ref/create_publication.sgml
+++ b/doc/src/sgml/ref/create_publication.sgml
@@ -22,14 +22,21 @@ PostgreSQL documentation
<refsynopsisdiv>
<synopsis>
CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
- [ FOR ALL TABLES
+ [ FOR ALL <replaceable class="parameter">object_type</replaceable> [, ...]
| FOR <replaceable class="parameter">publication_object</replaceable> [, ... ] ]
[ WITH ( <replaceable class="parameter">publication_parameter</replaceable> [= <replaceable class="parameter">value</replaceable>] [, ... ] ) ]
+<phrase>where <replaceable class="parameter">object_type</replaceable> is one of:</phrase>
+
+ TABLES
+ SEQUENCES
+
<phrase>where <replaceable class="parameter">publication_object</replaceable> is one of:</phrase>
TABLE [ ONLY ] <replaceable class="parameter">table_name</replaceable> [ * ] [ ( <replaceable class="parameter">column_name</replaceable> [, ... ] ) ] [ WHERE ( <replaceable class="parameter">expression</replaceable> ) ] [, ... ]
+ SEQUENCE <replaceable class="parameter">sequence_name</replaceable> [ * ] [, ... ]
ALL TABLES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
+ ALL SEQUENCES IN SCHEMA { <replaceable class="parameter">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]
</synopsis>
</refsynopsisdiv>
@@ -114,27 +121,43 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><literal>FOR SEQUENCE</literal></term>
+ <listitem>
+ <para>
+ Specifies a list of sequences to add to the publication.
+ </para>
+
+ <para>
+ Specifying a sequence that is part of a schema specified by <literal>FOR
+ ALL SEQUENCES IN SCHEMA</literal> is not supported.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry>
<term><literal>FOR ALL TABLES</literal></term>
+ <term><literal>FOR ALL SEQUENCES</literal></term>
<listitem>
<para>
- Marks the publication as one that replicates changes for all tables in
- the database, including tables created in the future.
+      Marks the publication as one that replicates changes for all tables or
+      all sequences in the database, respectively, including ones created in
+      the future.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term><literal>FOR ALL TABLES IN SCHEMA</literal></term>
+ <term><literal>FOR ALL SEQUENCES IN SCHEMA</literal></term>
<listitem>
<para>
- Marks the publication as one that replicates changes for all tables in
- the specified list of schemas, including tables created in the future.
+      Marks the publication as one that replicates changes for all tables or
+      all sequences in the specified list of schemas, respectively, including
+      ones created in the future.
</para>
<para>
- Specifying a schema along with a table which belongs to the specified
- schema using <literal>FOR TABLE</literal> is not supported.
+      Specifying a schema along with a sequence or table that belongs to the
+      specified schema using <literal>FOR SEQUENCE</literal> or
+      <literal>FOR TABLE</literal> is not supported.
</para>
<para>
@@ -209,10 +232,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
<title>Notes</title>
<para>
- If <literal>FOR TABLE</literal>, <literal>FOR ALL TABLES</literal> or
- <literal>FOR ALL TABLES IN SCHEMA</literal> are not specified, then the
- publication starts out with an empty set of tables. That is useful if
- tables or schemas are to be added later.
+   If no <literal>FOR</literal> clause (<literal>FOR TABLE</literal>,
+   <literal>FOR SEQUENCE</literal>, and so on) is specified, the publication
+   starts out with an empty set of tables and sequences. That is useful if
+   objects are to be added later.
</para>
<para>
@@ -227,10 +249,9 @@ CREATE PUBLICATION <replaceable class="parameter">name</replaceable>
</para>
<para>
- To add a table to a publication, the invoking user must have ownership
- rights on the table. The <command>FOR ALL TABLES</command> and
- <command>FOR ALL TABLES IN SCHEMA</command> clauses require the invoking
- user to be a superuser.
+ To add a table or a sequence to a publication, the invoking user must
+ have ownership rights on the object. The <command>FOR ALL ...</command>
+ clauses require the invoking user to be a superuser.
</para>
<para>
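
A few examples of the CREATE PUBLICATION variants from the synopsis above,
just to illustrate (all object names are invented):

CREATE PUBLICATION seq_pub FOR ALL SEQUENCES;
CREATE PUBLICATION mixed_pub FOR TABLE t1, SEQUENCE s1;
CREATE PUBLICATION schema_pub FOR ALL SEQUENCES IN SCHEMA accounting;
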
diff --git a/src/backend/access/rmgrdesc/logicalmsgdesc.c b/src/backend/access/rmgrdesc/logicalmsgdesc.c
index 099e11a84e7..de77dcb0dc2 100644
--- a/src/backend/access/rmgrdesc/logicalmsgdesc.c
+++ b/src/backend/access/rmgrdesc/logicalmsgdesc.c
@@ -40,6 +40,16 @@ logicalmsg_desc(StringInfo buf, XLogReaderState *record)
sep = " ";
}
}
+ else if (info == XLOG_LOGICAL_SEQUENCE)
+ {
+ xl_logical_sequence *xlrec = (xl_logical_sequence *) rec;
+
+ appendStringInfo(buf, "rel %u/%u/%u last: %lu log_cnt: %lu is_called: %d",
+ xlrec->node.spcNode,
+ xlrec->node.dbNode,
+ xlrec->node.relNode,
+ xlrec->last, xlrec->log_cnt, xlrec->is_called);
+ }
}
const char *
@@ -48,5 +58,8 @@ logicalmsg_identify(uint8 info)
if ((info & ~XLR_INFO_MASK) == XLOG_LOGICAL_MESSAGE)
return "MESSAGE";
+ if ((info & ~XLR_INFO_MASK) == XLOG_LOGICAL_SEQUENCE)
+ return "SEQUENCE";
+
return NULL;
}
diff --git a/src/backend/access/transam/xact.c b/src/backend/access/transam/xact.c
index bf2fc08d940..eb1727a5d4c 100644
--- a/src/backend/access/transam/xact.c
+++ b/src/backend/access/transam/xact.c
@@ -37,6 +37,7 @@
#include "catalog/storage.h"
#include "commands/async.h"
#include "commands/tablecmds.h"
+#include "commands/sequence.h"
#include "commands/trigger.h"
#include "common/pg_prng.h"
#include "executor/spi.h"
@@ -2237,6 +2238,15 @@ CommitTransaction(void)
/* Prevent cancel/die interrupt while cleaning up */
HOLD_INTERRUPTS();
+ /*
+ * Write state of sequences to WAL. We need to do this early enough so
+ * that we can still write stuff to WAL, but probably before changing
+ * the transaction state.
+ *
+ * XXX Should this happen before/after holding the interrupts?
+ */
+ AtEOXact_Sequences(true);
+
/* Commit updates to the relation map --- do this as late as possible */
AtEOXact_RelationMap(true, is_parallel_worker);
@@ -2513,6 +2523,15 @@ PrepareTransaction(void)
/* Prevent cancel/die interrupt while cleaning up */
HOLD_INTERRUPTS();
+ /*
+ * Write state of sequences to WAL. We need to do this early enough so
+ * that we can still write stuff to WAL, but probably before changing
+ * the transaction state.
+ *
+ * XXX Should this happen before/after holding the interrupts?
+ */
+ AtEOXact_Sequences(true);
+
/*
* set the current transaction state information appropriately during
* prepare processing
@@ -2748,6 +2767,12 @@ AbortTransaction(void)
TransStateAsString(s->state));
Assert(s->parent == NULL);
+ /*
+ * For transaction abort we don't need to write anything to WAL, we just
+ * do cleanup of the hash table etc.
+ */
+ AtEOXact_Sequences(false);
+
/*
* set the current transaction state information appropriately during the
* abort processing
diff --git a/src/backend/catalog/objectaddress.c b/src/backend/catalog/objectaddress.c
index ac6043514f1..31c80f7209f 100644
--- a/src/backend/catalog/objectaddress.c
+++ b/src/backend/catalog/objectaddress.c
@@ -1941,12 +1941,14 @@ get_object_address_publication_schema(List *object, bool missing_ok)
char *pubname;
char *schemaname;
Oid schemaid;
+ char *objtype;
ObjectAddressSet(address, PublicationNamespaceRelationId, InvalidOid);
/* Fetch schema name and publication name from input list */
schemaname = strVal(linitial(object));
pubname = strVal(lsecond(object));
+ objtype = strVal(lthird(object));
schemaid = get_namespace_oid(schemaname, missing_ok);
if (!OidIsValid(schemaid))
@@ -1959,10 +1961,12 @@ get_object_address_publication_schema(List *object, bool missing_ok)
/* Find the publication schema mapping in syscache */
address.objectId =
- GetSysCacheOid2(PUBLICATIONNAMESPACEMAP,
+ GetSysCacheOid3(PUBLICATIONNAMESPACEMAP,
Anum_pg_publication_namespace_oid,
ObjectIdGetDatum(schemaid),
- ObjectIdGetDatum(pub->oid));
+ ObjectIdGetDatum(pub->oid),
+ CharGetDatum(objtype[0]));
+
if (!OidIsValid(address.objectId) && !missing_ok)
ereport(ERROR,
(errcode(ERRCODE_UNDEFINED_OBJECT),
@@ -2243,7 +2247,6 @@ pg_get_object_address(PG_FUNCTION_ARGS)
case OBJECT_DOMCONSTRAINT:
case OBJECT_CAST:
case OBJECT_USER_MAPPING:
- case OBJECT_PUBLICATION_NAMESPACE:
case OBJECT_PUBLICATION_REL:
case OBJECT_DEFACL:
case OBJECT_TRANSFORM:
@@ -2268,6 +2271,7 @@ pg_get_object_address(PG_FUNCTION_ARGS)
/* fall through to check args length */
/* FALLTHROUGH */
case OBJECT_OPERATOR:
+ case OBJECT_PUBLICATION_NAMESPACE:
if (list_length(args) != 2)
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -2339,6 +2343,8 @@ pg_get_object_address(PG_FUNCTION_ARGS)
objnode = (Node *) list_make2(name, linitial(args));
break;
case OBJECT_PUBLICATION_NAMESPACE:
+ objnode = (Node *) list_make3(linitial(name), linitial(args), lsecond(args));
+ break;
case OBJECT_USER_MAPPING:
objnode = (Node *) list_make2(linitial(name), linitial(args));
break;
@@ -2894,11 +2900,12 @@ get_catalog_object_by_oid(Relation catalog, AttrNumber oidcol, Oid objectId)
*
* Get publication name and schema name from the object address into pubname and
* nspname. Both pubname and nspname are palloc'd strings which will be freed by
- * the caller.
+ * the caller. The objtype output parameter (also a palloc'd string) reports
+ * which object type the schema entry covers.
*/
static bool
getPublicationSchemaInfo(const ObjectAddress *object, bool missing_ok,
- char **pubname, char **nspname)
+ char **pubname, char **nspname, char **objtype)
{
HeapTuple tup;
Form_pg_publication_namespace pnform;
@@ -2934,6 +2941,13 @@ getPublicationSchemaInfo(const ObjectAddress *object, bool missing_ok,
return false;
}
+ /*
+ * The type is always a single character, but we need to return it as a
+ * string, so allocate two characters and set the first one; the second
+ * one stays '\0' thanks to palloc0.
+ */
+ *objtype = palloc0(2);
+ (*objtype)[0] = pnform->pntype;
+
ReleaseSysCache(tup);
return true;
}
@@ -3965,15 +3979,17 @@ getObjectDescription(const ObjectAddress *object, bool missing_ok)
{
char *pubname;
char *nspname;
+ char *objtype;
if (!getPublicationSchemaInfo(object, missing_ok,
- &pubname, &nspname))
+ &pubname, &nspname, &objtype))
break;
- appendStringInfo(&buffer, _("publication of schema %s in publication %s"),
- nspname, pubname);
+ appendStringInfo(&buffer, _("publication of schema %s in publication %s type %s"),
+ nspname, pubname, objtype);
pfree(pubname);
pfree(nspname);
+ pfree(objtype);
break;
}
@@ -5800,18 +5816,24 @@ getObjectIdentityParts(const ObjectAddress *object,
{
char *pubname;
char *nspname;
+ char *objtype;
if (!getPublicationSchemaInfo(object, missing_ok, &pubname,
- &nspname))
+ &nspname, &objtype))
break;
- appendStringInfo(&buffer, "%s in publication %s",
- nspname, pubname);
+ appendStringInfo(&buffer, "%s in publication %s type %s",
+ nspname, pubname, objtype);
if (objargs)
*objargs = list_make1(pubname);
else
pfree(pubname);
+ if (objargs)
+ *objargs = lappend(*objargs, objtype);
+ else
+ pfree(objtype);
+
if (objname)
*objname = list_make1(nspname);
else
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 2631558ff11..9fe3b189269 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -55,9 +55,10 @@ static void publication_translate_columns(Relation targetrel, List *columns,
static void
check_publication_add_relation(Relation targetrel)
{
- /* Must be a regular or partitioned table */
+ /* Must be a regular or partitioned table, or a sequence */
if (RelationGetForm(targetrel)->relkind != RELKIND_RELATION &&
- RelationGetForm(targetrel)->relkind != RELKIND_PARTITIONED_TABLE)
+ RelationGetForm(targetrel)->relkind != RELKIND_PARTITIONED_TABLE &&
+ RelationGetForm(targetrel)->relkind != RELKIND_SEQUENCE)
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
errmsg("cannot add relation \"%s\" to publication",
@@ -134,7 +135,8 @@ static bool
is_publishable_class(Oid relid, Form_pg_class reltuple)
{
return (reltuple->relkind == RELKIND_RELATION ||
- reltuple->relkind == RELKIND_PARTITIONED_TABLE) &&
+ reltuple->relkind == RELKIND_PARTITIONED_TABLE ||
+ reltuple->relkind == RELKIND_SEQUENCE) &&
!IsCatalogRelationOid(relid) &&
reltuple->relpersistence == RELPERSISTENCE_PERMANENT &&
relid >= FirstNormalObjectId;
@@ -179,6 +181,52 @@ filter_partitions(List *relids)
return result;
}
+/*
+ * Check that the character is a valid object type for schema publication.
+ *
+ * This recognizes either 't' for tables or 's' for sequences. Places that
+ * need to handle 'u' for unsupported relkinds have to do that explicitly.
+ */
+static void
+AssertObjectTypeValid(char objectType)
+{
+#ifdef USE_ASSERT_CHECKING
+ Assert(objectType == PUB_OBJTYPE_SEQUENCE || objectType == PUB_OBJTYPE_TABLE);
+#endif
+}
+
+/*
+ * Determine the object type matching a given relkind value.
+ */
+char
+pub_get_object_type_for_relkind(char relkind)
+{
+ /* sequence maps directly to sequence relkind */
+ if (relkind == RELKIND_SEQUENCE)
+ return PUB_OBJTYPE_SEQUENCE;
+
+ /* for table, we match either regular or partitioned table */
+ if (relkind == RELKIND_RELATION ||
+ relkind == RELKIND_PARTITIONED_TABLE)
+ return PUB_OBJTYPE_TABLE;
+
+ return PUB_OBJTYPE_UNSUPPORTED;
+}
+
+/*
+ * Determine if publication object type matches the relkind.
+ *
+ * Returns true if the relation matches object type replicated by this schema,
+ * false otherwise.
+ */
+static bool
+pub_object_type_matches_relkind(char objectType, char relkind)
+{
+ AssertObjectTypeValid(objectType);
+
+ return (pub_get_object_type_for_relkind(relkind) == objectType);
+}
+
/*
* Another variant of this, taking a Relation.
*/
@@ -208,7 +256,7 @@ is_schema_publication(Oid pubid)
ObjectIdGetDatum(pubid));
scan = systable_beginscan(pubschsrel,
- PublicationNamespacePnnspidPnpubidIndexId,
+ PublicationNamespacePnnspidPnpubidPntypeIndexId,
true, NULL, 1, &scankey);
tup = systable_getnext(scan);
result = HeapTupleIsValid(tup);
@@ -316,7 +364,9 @@ GetTopMostAncestorInPublication(Oid puboid, List *ancestors, int *ancestor_level
}
else
{
- aschemaPubids = GetSchemaPublications(get_rel_namespace(ancestor));
+ /* we only search for ancestors of tables, so PUB_OBJTYPE_TABLE */
+ aschemaPubids = GetSchemaPublications(get_rel_namespace(ancestor),
+ PUB_OBJTYPE_TABLE);
if (list_member_oid(aschemaPubids, puboid))
{
topmost_relid = ancestor;
@@ -585,7 +635,7 @@ pub_collist_to_bitmapset(Bitmapset *columns, Datum pubcols, MemoryContext mcxt)
* Insert new publication / schema mapping.
*/
ObjectAddress
-publication_add_schema(Oid pubid, Oid schemaid, bool if_not_exists)
+publication_add_schema(Oid pubid, Oid schemaid, char objectType, bool if_not_exists)
{
Relation rel;
HeapTuple tup;
@@ -597,6 +647,8 @@ publication_add_schema(Oid pubid, Oid schemaid, bool if_not_exists)
ObjectAddress myself,
referenced;
+ AssertObjectTypeValid(objectType);
+
rel = table_open(PublicationNamespaceRelationId, RowExclusiveLock);
/*
@@ -604,9 +656,10 @@ publication_add_schema(Oid pubid, Oid schemaid, bool if_not_exists)
* duplicates, it's here just to provide nicer error message in common
* case. The real protection is the unique key on the catalog.
*/
- if (SearchSysCacheExists2(PUBLICATIONNAMESPACEMAP,
+ if (SearchSysCacheExists3(PUBLICATIONNAMESPACEMAP,
ObjectIdGetDatum(schemaid),
- ObjectIdGetDatum(pubid)))
+ ObjectIdGetDatum(pubid),
+ CharGetDatum(objectType)))
{
table_close(rel, RowExclusiveLock);
@@ -632,6 +685,8 @@ publication_add_schema(Oid pubid, Oid schemaid, bool if_not_exists)
ObjectIdGetDatum(pubid);
values[Anum_pg_publication_namespace_pnnspid - 1] =
ObjectIdGetDatum(schemaid);
+ values[Anum_pg_publication_namespace_pntype - 1] =
+ CharGetDatum(objectType);
tup = heap_form_tuple(RelationGetDescr(rel), values, nulls);
@@ -657,7 +712,7 @@ publication_add_schema(Oid pubid, Oid schemaid, bool if_not_exists)
* publication_add_relation for why we need to consider all the
* partitions.
*/
- schemaRels = GetSchemaPublicationRelations(schemaid,
+ schemaRels = GetSchemaPublicationRelations(schemaid, objectType,
PUBLICATION_PART_ALL);
InvalidatePublicationRels(schemaRels);
@@ -691,11 +746,14 @@ GetRelationPublications(Oid relid)
/*
* Gets list of relation oids for a publication.
*
- * This should only be used FOR TABLE publications, the FOR ALL TABLES
- * should use GetAllTablesPublicationRelations().
+ * This should only be used for FOR TABLE / FOR SEQUENCE publications; the
+ * FOR ALL TABLES / FOR ALL SEQUENCES cases should use
+ * GetAllTablesPublicationRelations() and GetAllSequencesPublicationRelations().
+ *
+ * XXX pub_partopt only matters for tables, not sequences.
*/
List *
-GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
+GetPublicationRelations(Oid pubid, char objectType, PublicationPartOpt pub_partopt)
{
List *result;
Relation pubrelsrel;
@@ -703,6 +761,8 @@ GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
SysScanDesc scan;
HeapTuple tup;
+ AssertObjectTypeValid(objectType);
+
/* Find all publications associated with the relation. */
pubrelsrel = table_open(PublicationRelRelationId, AccessShareLock);
@@ -717,11 +777,29 @@ GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
result = NIL;
while (HeapTupleIsValid(tup = systable_getnext(scan)))
{
+ char relkind;
Form_pg_publication_rel pubrel;
pubrel = (Form_pg_publication_rel) GETSTRUCT(tup);
- result = GetPubPartitionOptionRelations(result, pub_partopt,
- pubrel->prrelid);
+ relkind = get_rel_relkind(pubrel->prrelid);
+
+ /*
+ * If the relkind does not match the requested object type, ignore the
+ * relation. For example we might be interested only in sequences, so
+ * we ignore tables.
+ */
+ if (!pub_object_type_matches_relkind(objectType, relkind))
+ continue;
+
+ /*
+		 * Sequences cannot be partitioned, so add them to the list directly.
+		 * For tables, consider also adding the child partitions, if requested.
+ */
+ if (relkind == RELKIND_SEQUENCE)
+ result = lappend_oid(result, pubrel->prrelid);
+ else
+ result = GetPubPartitionOptionRelations(result, pub_partopt,
+ pubrel->prrelid);
}
systable_endscan(scan);
@@ -771,6 +849,43 @@ GetAllTablesPublications(void)
return result;
}
+/*
+ * Gets list of publication oids for publications marked as FOR ALL SEQUENCES.
+ */
+List *
+GetAllSequencesPublications(void)
+{
+ List *result;
+ Relation rel;
+ ScanKeyData scankey;
+ SysScanDesc scan;
+ HeapTuple tup;
+
+ /* Find all publications that are marked as for all sequences. */
+ rel = table_open(PublicationRelationId, AccessShareLock);
+
+ ScanKeyInit(&scankey,
+ Anum_pg_publication_puballsequences,
+ BTEqualStrategyNumber, F_BOOLEQ,
+ BoolGetDatum(true));
+
+ scan = systable_beginscan(rel, InvalidOid, false,
+ NULL, 1, &scankey);
+
+ result = NIL;
+ while (HeapTupleIsValid(tup = systable_getnext(scan)))
+ {
+ Oid oid = ((Form_pg_publication) GETSTRUCT(tup))->oid;
+
+ result = lappend_oid(result, oid);
+ }
+
+ systable_endscan(scan);
+ table_close(rel, AccessShareLock);
+
+ return result;
+}
+
/*
* Gets list of all relation published by FOR ALL TABLES publication(s).
*
@@ -837,28 +952,38 @@ GetAllTablesPublicationRelations(bool pubviaroot)
/*
* Gets the list of schema oids for a publication.
*
- * This should only be used FOR ALL TABLES IN SCHEMA publications.
+ * This should only be used for FOR ALL TABLES IN SCHEMA and FOR ALL
+ * SEQUENCES IN SCHEMA publications.
+ *
+ * 'objectType' determines whether to return the table schemas or the
+ * sequence schemas.
*/
List *
-GetPublicationSchemas(Oid pubid)
+GetPublicationSchemas(Oid pubid, char objectType)
{
List *result = NIL;
Relation pubschsrel;
- ScanKeyData scankey;
+ ScanKeyData scankey[2];
SysScanDesc scan;
HeapTuple tup;
+ AssertObjectTypeValid(objectType);
+
/* Find all schemas associated with the publication */
pubschsrel = table_open(PublicationNamespaceRelationId, AccessShareLock);
- ScanKeyInit(&scankey,
+ ScanKeyInit(&scankey[0],
Anum_pg_publication_namespace_pnpubid,
BTEqualStrategyNumber, F_OIDEQ,
ObjectIdGetDatum(pubid));
+ ScanKeyInit(&scankey[1],
+ Anum_pg_publication_namespace_pntype,
+ BTEqualStrategyNumber, F_CHAREQ,
+ CharGetDatum(objectType));
+
scan = systable_beginscan(pubschsrel,
- PublicationNamespacePnnspidPnpubidIndexId,
- true, NULL, 1, &scankey);
+ PublicationNamespacePnnspidPnpubidPntypeIndexId,
+ true, NULL, 2, scankey);
while (HeapTupleIsValid(tup = systable_getnext(scan)))
{
Form_pg_publication_namespace pubsch;
@@ -876,14 +1001,26 @@ GetPublicationSchemas(Oid pubid)
/*
* Gets the list of publication oids associated with a specified schema.
+ *
+ * objectType specifies whether we're looking for schemas including tables or
+ * sequences.
+ *
+ * Note: relcache calls this for all object types, not just tables and
+ * sequences, which is why we also handle PUB_OBJTYPE_UNSUPPORTED here.
*/
List *
-GetSchemaPublications(Oid schemaid)
+GetSchemaPublications(Oid schemaid, char objectType)
{
List *result = NIL;
CatCList *pubschlist;
int i;
+ /* unsupported object type */
+ if (objectType == PUB_OBJTYPE_UNSUPPORTED)
+ return result;
+
+ AssertObjectTypeValid(objectType);
+
/* Find all publications associated with the schema */
pubschlist = SearchSysCacheList1(PUBLICATIONNAMESPACEMAP,
ObjectIdGetDatum(schemaid));
@@ -891,6 +1028,11 @@ GetSchemaPublications(Oid schemaid)
{
HeapTuple tup = &pubschlist->members[i]->tuple;
Oid pubid = ((Form_pg_publication_namespace) GETSTRUCT(tup))->pnpubid;
+ char pntype = ((Form_pg_publication_namespace) GETSTRUCT(tup))->pntype;
+
+ /* Skip schemas publishing a different object type. */
+ if (pntype != objectType)
+ continue;
result = lappend_oid(result, pubid);
}
@@ -902,9 +1044,13 @@ GetSchemaPublications(Oid schemaid)
/*
* Get the list of publishable relation oids for a specified schema.
+ *
+ * objectType specifies whether this is FOR ALL TABLES IN SCHEMA or FOR ALL
+ * SEQUENCES IN SCHEMA.
*/
List *
-GetSchemaPublicationRelations(Oid schemaid, PublicationPartOpt pub_partopt)
+GetSchemaPublicationRelations(Oid schemaid, char objectType,
+ PublicationPartOpt pub_partopt)
{
Relation classRel;
ScanKeyData key[1];
@@ -913,6 +1059,7 @@ GetSchemaPublicationRelations(Oid schemaid, PublicationPartOpt pub_partopt)
List *result = NIL;
Assert(OidIsValid(schemaid));
+ AssertObjectTypeValid(objectType);
classRel = table_open(RelationRelationId, AccessShareLock);
@@ -933,9 +1080,16 @@ GetSchemaPublicationRelations(Oid schemaid, PublicationPartOpt pub_partopt)
continue;
relkind = get_rel_relkind(relid);
- if (relkind == RELKIND_RELATION)
- result = lappend_oid(result, relid);
- else if (relkind == RELKIND_PARTITIONED_TABLE)
+
+ /* Skip if the relkind does not match FOR ALL TABLES / SEQUENCES. */
+ if (!pub_object_type_matches_relkind(objectType, relkind))
+ continue;
+
+ /*
+	 * If the object is a partitioned table, look up all the child relations
+ * (if requested). Otherwise just add the object to the list.
+ */
+ if (relkind == RELKIND_PARTITIONED_TABLE)
{
List *partitionrels = NIL;
@@ -948,7 +1102,11 @@ GetSchemaPublicationRelations(Oid schemaid, PublicationPartOpt pub_partopt)
pub_partopt,
relForm->oid);
result = list_concat_unique_oid(result, partitionrels);
+ continue;
}
+
+ /* non-partitioned tables and sequences */
+ result = lappend_oid(result, relid);
}
table_endscan(scan);
@@ -958,27 +1116,67 @@ GetSchemaPublicationRelations(Oid schemaid, PublicationPartOpt pub_partopt)
/*
* Gets the list of all relations published by FOR ALL TABLES IN SCHEMA
- * publication.
+ * or FOR ALL SEQUENCES IN SCHEMA publication.
*/
List *
-GetAllSchemaPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt)
+GetAllSchemaPublicationRelations(Oid pubid, char objectType,
+ PublicationPartOpt pub_partopt)
{
List *result = NIL;
- List *pubschemalist = GetPublicationSchemas(pubid);
+ List *pubschemalist = GetPublicationSchemas(pubid, objectType);
ListCell *cell;
+ AssertObjectTypeValid(objectType);
+
foreach(cell, pubschemalist)
{
Oid schemaid = lfirst_oid(cell);
List *schemaRels = NIL;
- schemaRels = GetSchemaPublicationRelations(schemaid, pub_partopt);
+ schemaRels = GetSchemaPublicationRelations(schemaid, objectType,
+ pub_partopt);
result = list_concat(result, schemaRels);
}
return result;
}
+/*
+ * Gets the list of all relations published by FOR ALL SEQUENCES publication(s).
+ */
+List *
+GetAllSequencesPublicationRelations(void)
+{
+ Relation classRel;
+ ScanKeyData key[1];
+ TableScanDesc scan;
+ HeapTuple tuple;
+ List *result = NIL;
+
+ classRel = table_open(RelationRelationId, AccessShareLock);
+
+ ScanKeyInit(&key[0],
+ Anum_pg_class_relkind,
+ BTEqualStrategyNumber, F_CHAREQ,
+ CharGetDatum(RELKIND_SEQUENCE));
+
+ scan = table_beginscan_catalog(classRel, 1, key);
+
+ while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+ {
+ Form_pg_class relForm = (Form_pg_class) GETSTRUCT(tuple);
+ Oid relid = relForm->oid;
+
+ if (is_publishable_class(relid, relForm))
+ result = lappend_oid(result, relid);
+ }
+
+ table_endscan(scan);
+
+ table_close(classRel, AccessShareLock);
+ return result;
+}
+
/*
* Get publication using oid
*
@@ -1001,10 +1199,12 @@ GetPublication(Oid pubid)
pub->oid = pubid;
pub->name = pstrdup(NameStr(pubform->pubname));
pub->alltables = pubform->puballtables;
+ pub->allsequences = pubform->puballsequences;
pub->pubactions.pubinsert = pubform->pubinsert;
pub->pubactions.pubupdate = pubform->pubupdate;
pub->pubactions.pubdelete = pubform->pubdelete;
pub->pubactions.pubtruncate = pubform->pubtruncate;
+ pub->pubactions.pubsequence = pubform->pubsequence;
pub->pubviaroot = pubform->pubviaroot;
ReleaseSysCache(tup);
@@ -1115,10 +1315,12 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
*schemarelids;
relids = GetPublicationRelations(publication->oid,
+ PUB_OBJTYPE_TABLE,
publication->pubviaroot ?
PUBLICATION_PART_ROOT :
PUBLICATION_PART_LEAF);
schemarelids = GetAllSchemaPublicationRelations(publication->oid,
+ PUB_OBJTYPE_TABLE,
publication->pubviaroot ?
PUBLICATION_PART_ROOT :
PUBLICATION_PART_LEAF);
@@ -1154,3 +1356,71 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
SRF_RETURN_DONE(funcctx);
}
+
+/*
+ * Returns Oids of sequences in a publication.
+ */
+Datum
+pg_get_publication_sequences(PG_FUNCTION_ARGS)
+{
+ FuncCallContext *funcctx;
+ char *pubname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+ Publication *publication;
+ List *sequences;
+
+ /* stuff done only on the first call of the function */
+ if (SRF_IS_FIRSTCALL())
+ {
+ MemoryContext oldcontext;
+
+ /* create a function context for cross-call persistence */
+ funcctx = SRF_FIRSTCALL_INIT();
+
+ /* switch to memory context appropriate for multiple function calls */
+ oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+ publication = GetPublicationByName(pubname, false);
+
+ /*
+		 * For a FOR ALL SEQUENCES publication we simply return all sequences
+		 * in the database. Otherwise collect the sequences added directly and
+		 * the ones published via FOR ALL SEQUENCES IN SCHEMA.
+ */
+ if (publication->allsequences)
+ sequences = GetAllSequencesPublicationRelations();
+ else
+ {
+ List *relids,
+ *schemarelids;
+
+ relids = GetPublicationRelations(publication->oid,
+ PUB_OBJTYPE_SEQUENCE,
+ publication->pubviaroot ?
+ PUBLICATION_PART_ROOT :
+ PUBLICATION_PART_LEAF);
+ schemarelids = GetAllSchemaPublicationRelations(publication->oid,
+ PUB_OBJTYPE_SEQUENCE,
+ publication->pubviaroot ?
+ PUBLICATION_PART_ROOT :
+ PUBLICATION_PART_LEAF);
+ sequences = list_concat_unique_oid(relids, schemarelids);
+ }
+
+ funcctx->user_fctx = (void *) sequences;
+
+ MemoryContextSwitchTo(oldcontext);
+ }
+
+ /* stuff done on every call of the function */
+ funcctx = SRF_PERCALL_SETUP();
+ sequences = (List *) funcctx->user_fctx;
+
+ if (funcctx->call_cntr < list_length(sequences))
+ {
+ Oid relid = list_nth_oid(sequences, funcctx->call_cntr);
+
+ SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+ }
+
+ SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 0fc614e32c0..b1a6df16ad3 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -374,6 +374,16 @@ CREATE VIEW pg_publication_tables AS
pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
WHERE C.oid = GPT.relid;
+CREATE VIEW pg_publication_sequences AS
+ SELECT
+ P.pubname AS pubname,
+ N.nspname AS schemaname,
+ C.relname AS sequencename
+ FROM pg_publication P,
+ LATERAL pg_get_publication_sequences(P.pubname) GPS,
+ pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
+ WHERE C.oid = GPS.relid;
+
CREATE VIEW pg_locks AS
SELECT * FROM pg_lock_status() AS L;
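
The new view is meant to be queried just like pg_publication_tables, e.g.
(publication name is made up):

SELECT * FROM pg_publication_sequences WHERE pubname = 'seq_pub';
-- returns (pubname, schemaname, sequencename) rows, per the view definition
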
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 4fd1e6e7abb..84e37df783b 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -16,6 +16,7 @@
#include "access/genam.h"
#include "access/htup_details.h"
+#include "access/relation.h"
#include "access/table.h"
#include "access/xact.h"
#include "catalog/catalog.h"
@@ -67,15 +68,17 @@ typedef struct rf_context
} rf_context;
static List *OpenRelIdList(List *relids);
-static List *OpenTableList(List *tables);
-static void CloseTableList(List *rels);
+static List *OpenRelationList(List *rels, char objectType);
+static void CloseRelationList(List *rels);
static void LockSchemaList(List *schemalist);
-static void PublicationAddTables(Oid pubid, List *rels, bool if_not_exists,
+static void PublicationAddRelations(Oid pubid, List *rels, bool if_not_exists,
AlterPublicationStmt *stmt);
-static void PublicationDropTables(Oid pubid, List *rels, bool missing_ok);
-static void PublicationAddSchemas(Oid pubid, List *schemas, bool if_not_exists,
- AlterPublicationStmt *stmt);
-static void PublicationDropSchemas(Oid pubid, List *schemas, bool missing_ok);
+static void PublicationDropRelations(Oid pubid, List *rels, bool missing_ok);
+static void PublicationAddSchemas(Oid pubid, List *schemas, char objectType,
+ bool if_not_exists, AlterPublicationStmt *stmt);
+static void PublicationDropSchemas(Oid pubid, List *schemas, char objectType,
+ bool missing_ok);
+
static void
parse_publication_options(ParseState *pstate,
@@ -95,6 +98,7 @@ parse_publication_options(ParseState *pstate,
pubactions->pubupdate = true;
pubactions->pubdelete = true;
pubactions->pubtruncate = true;
+ pubactions->pubsequence = true;
*publish_via_partition_root = false;
/* Parse options */
@@ -119,6 +123,7 @@ parse_publication_options(ParseState *pstate,
pubactions->pubupdate = false;
pubactions->pubdelete = false;
pubactions->pubtruncate = false;
+ pubactions->pubsequence = false;
*publish_given = true;
publish = defGetString(defel);
@@ -141,6 +146,8 @@ parse_publication_options(ParseState *pstate,
pubactions->pubdelete = true;
else if (strcmp(publish_opt, "truncate") == 0)
pubactions->pubtruncate = true;
+ else if (strcmp(publish_opt, "sequence") == 0)
+ pubactions->pubsequence = true;
else
ereport(ERROR,
(errcode(ERRCODE_SYNTAX_ERROR),
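
With "sequence" added to the publish option above, it's possible to limit a
publication to e.g. inserts and sequence increments only (a sketch, the
publication name is invented):

CREATE PUBLICATION ins_seq_pub FOR ALL SEQUENCES
  WITH (publish = 'insert, sequence');
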
@@ -167,7 +174,8 @@ parse_publication_options(ParseState *pstate,
*/
static void
ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
- List **rels, List **schemas)
+ List **tables, List **sequences,
+ List **tables_schemas, List **sequences_schemas)
{
ListCell *cell;
PublicationObjSpec *pubobj;
@@ -185,13 +193,22 @@ ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
switch (pubobj->pubobjtype)
{
case PUBLICATIONOBJ_TABLE:
- *rels = lappend(*rels, pubobj->pubtable);
+ *tables = lappend(*tables, pubobj->pubtable);
+ break;
+ case PUBLICATIONOBJ_SEQUENCE:
+ *sequences = lappend(*sequences, pubobj->pubtable);
break;
case PUBLICATIONOBJ_TABLES_IN_SCHEMA:
schemaid = get_namespace_oid(pubobj->name, false);
/* Filter out duplicates if user specifies "sch1, sch1" */
- *schemas = list_append_unique_oid(*schemas, schemaid);
+ *tables_schemas = list_append_unique_oid(*tables_schemas, schemaid);
+ break;
+ case PUBLICATIONOBJ_SEQUENCES_IN_SCHEMA:
+ schemaid = get_namespace_oid(pubobj->name, false);
+
+ /* Filter out duplicates if user specifies "sch1, sch1" */
+ *sequences_schemas = list_append_unique_oid(*sequences_schemas, schemaid);
break;
case PUBLICATIONOBJ_TABLES_IN_CUR_SCHEMA:
search_path = fetch_search_path(false);
@@ -204,7 +221,20 @@ ObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,
list_free(search_path);
/* Filter out duplicates if user specifies "sch1, sch1" */
- *schemas = list_append_unique_oid(*schemas, schemaid);
+ *tables_schemas = list_append_unique_oid(*tables_schemas, schemaid);
+ break;
+ case PUBLICATIONOBJ_SEQUENCES_IN_CUR_SCHEMA:
+ search_path = fetch_search_path(false);
+ if (search_path == NIL) /* nothing valid in search_path? */
+ ereport(ERROR,
+ errcode(ERRCODE_UNDEFINED_SCHEMA),
+ errmsg("no schema has been selected for CURRENT_SCHEMA"));
+
+ schemaid = linitial_oid(search_path);
+ list_free(search_path);
+
+ /* Filter out duplicates if user specifies "sch1, sch1" */
+ *sequences_schemas = list_append_unique_oid(*sequences_schemas, schemaid);
break;
default:
/* shouldn't happen */
@@ -240,6 +270,14 @@ CheckObjSchemaNotAlreadyInPublication(List *rels, List *schemaidlist,
errdetail("Table \"%s\" in schema \"%s\" is already part of the publication, adding the same schema is not supported.",
RelationGetRelationName(rel),
get_namespace_name(relSchemaId)));
+ else if (checkobjtype == PUBLICATIONOBJ_SEQUENCES_IN_SCHEMA)
+ ereport(ERROR,
+ errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ errmsg("cannot add schema \"%s\" to publication",
+ get_namespace_name(relSchemaId)),
+ errdetail("Sequence \"%s\" in schema \"%s\" is already part of the publication, adding the same schema is not supported.",
+ RelationGetRelationName(rel),
+ get_namespace_name(relSchemaId)));
else if (checkobjtype == PUBLICATIONOBJ_TABLE)
ereport(ERROR,
errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -248,6 +286,14 @@ CheckObjSchemaNotAlreadyInPublication(List *rels, List *schemaidlist,
RelationGetRelationName(rel)),
errdetail("Table's schema \"%s\" is already part of the publication or part of the specified schema list.",
get_namespace_name(relSchemaId)));
+ else if (checkobjtype == PUBLICATIONOBJ_SEQUENCE)
+ ereport(ERROR,
+ errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ errmsg("cannot add relation \"%s.%s\" to publication",
+ get_namespace_name(relSchemaId),
+ RelationGetRelationName(rel)),
+ errdetail("Sequence's schema \"%s\" is already part of the publication or part of the specified schema list.",
+ get_namespace_name(relSchemaId)));
}
}
}
@@ -753,6 +799,7 @@ CheckPubRelationColumnList(List *tables, const char *queryString,
ObjectAddress
CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
{
+ ListCell *lc;
Relation rel;
ObjectAddress myself;
Oid puboid;
@@ -764,8 +811,23 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
bool publish_via_partition_root_given;
bool publish_via_partition_root;
AclResult aclresult;
- List *relations = NIL;
- List *schemaidlist = NIL;
+ List *tables = NIL;
+ List *sequences = NIL;
+ List *tables_schemaidlist = NIL;
+ List *sequences_schemaidlist = NIL;
+
+ bool for_all_tables = false;
+ bool for_all_sequences = false;
+
+ /* Translate the list of object types (represented by strings) to bool flags. */
+ foreach (lc, stmt->for_all_objects)
+ {
+ char *val = strVal(lfirst(lc));
+ if (strcmp(val, "tables") == 0)
+ for_all_tables = true;
+ else if (strcmp(val, "sequences") == 0)
+ for_all_sequences = true;
+ }
/* must have CREATE privilege on database */
aclresult = pg_database_aclcheck(MyDatabaseId, GetUserId(), ACL_CREATE);
@@ -774,11 +836,17 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
get_database_name(MyDatabaseId));
/* FOR ALL TABLES requires superuser */
- if (stmt->for_all_tables && !superuser())
+ if (for_all_tables && !superuser())
ereport(ERROR,
(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
errmsg("must be superuser to create FOR ALL TABLES publication")));
+ /* FOR ALL SEQUENCES requires superuser */
+ if (for_all_sequences && !superuser())
+ ereport(ERROR,
+ (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+ errmsg("must be superuser to create FOR ALL SEQUENCES publication")));
+
rel = table_open(PublicationRelationId, RowExclusiveLock);
/* Check if name is used */
@@ -810,7 +878,9 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
Anum_pg_publication_oid);
values[Anum_pg_publication_oid - 1] = ObjectIdGetDatum(puboid);
values[Anum_pg_publication_puballtables - 1] =
- BoolGetDatum(stmt->for_all_tables);
+ BoolGetDatum(for_all_tables);
+ values[Anum_pg_publication_puballsequences - 1] =
+ BoolGetDatum(for_all_sequences);
values[Anum_pg_publication_pubinsert - 1] =
BoolGetDatum(pubactions.pubinsert);
values[Anum_pg_publication_pubupdate - 1] =
@@ -819,6 +889,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
BoolGetDatum(pubactions.pubdelete);
values[Anum_pg_publication_pubtruncate - 1] =
BoolGetDatum(pubactions.pubtruncate);
+ values[Anum_pg_publication_pubsequence - 1] =
+ BoolGetDatum(pubactions.pubsequence);
values[Anum_pg_publication_pubviaroot - 1] =
BoolGetDatum(publish_via_partition_root);
@@ -836,28 +908,42 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
CommandCounterIncrement();
/* Associate objects with the publication. */
- if (stmt->for_all_tables)
+ if (for_all_tables || for_all_sequences)
{
/* Invalidate relcache so that publication info is rebuilt. */
CacheInvalidateRelcacheAll();
}
- else
+
+ /*
+	 * Process the individual objects and schemas, unless the publication was
+	 * created as both FOR ALL TABLES and FOR ALL SEQUENCES (in which case
+	 * there is nothing to add explicitly).
+ */
+ if (!for_all_tables || !for_all_sequences)
{
- ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
- &schemaidlist);
+ ObjectsInPublicationToOids(stmt->pubobjects, pstate,
+ &tables, &sequences,
+ &tables_schemaidlist,
+ &sequences_schemaidlist);
/* FOR ALL TABLES IN SCHEMA requires superuser */
- if (list_length(schemaidlist) > 0 && !superuser())
+ if (list_length(tables_schemaidlist) > 0 && !superuser())
ereport(ERROR,
errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
errmsg("must be superuser to create FOR ALL TABLES IN SCHEMA publication"));
- if (list_length(relations) > 0)
+ /* FOR ALL SEQUENCES IN SCHEMA requires superuser */
+ if (list_length(sequences_schemaidlist) > 0 && !superuser())
+ ereport(ERROR,
+ errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+ errmsg("must be superuser to create FOR ALL SEQUENCES IN SCHEMA publication"));
+
+ /* tables added directly */
+ if (list_length(tables) > 0)
{
List *rels;
- rels = OpenTableList(relations);
- CheckObjSchemaNotAlreadyInPublication(rels, schemaidlist,
+ rels = OpenRelationList(tables, PUB_OBJTYPE_TABLE);
+ CheckObjSchemaNotAlreadyInPublication(rels, tables_schemaidlist,
PUBLICATIONOBJ_TABLE);
TransformPubWhereClauses(rels, pstate->p_sourcetext,
@@ -866,18 +952,46 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)
CheckPubRelationColumnList(rels, pstate->p_sourcetext,
publish_via_partition_root);
- PublicationAddTables(puboid, rels, true, NULL);
- CloseTableList(rels);
+ PublicationAddRelations(puboid, rels, true, NULL);
+ CloseRelationList(rels);
}
- if (list_length(schemaidlist) > 0)
+ /* sequences added directly */
+ if (list_length(sequences) > 0)
+ {
+ List *rels;
+
+ rels = OpenRelationList(sequences, PUB_OBJTYPE_SEQUENCE);
+ CheckObjSchemaNotAlreadyInPublication(rels, sequences_schemaidlist,
+ PUBLICATIONOBJ_SEQUENCE);
+ PublicationAddRelations(puboid, rels, true, NULL);
+ CloseRelationList(rels);
+ }
+
+ /* tables added through a schema */
+ if (list_length(tables_schemaidlist) > 0)
{
/*
* Schema lock is held until the publication is created to prevent
* concurrent schema deletion.
*/
- LockSchemaList(schemaidlist);
- PublicationAddSchemas(puboid, schemaidlist, true, NULL);
+ LockSchemaList(tables_schemaidlist);
+ PublicationAddSchemas(puboid,
+ tables_schemaidlist, PUB_OBJTYPE_TABLE,
+ true, NULL);
+ }
+
+ /* sequences added through a schema */
+ if (list_length(sequences_schemaidlist) > 0)
+ {
+ /*
+ * Schema lock is held until the publication is created to prevent
+ * concurrent schema deletion.
+ */
+ LockSchemaList(sequences_schemaidlist);
+ PublicationAddSchemas(puboid,
+ sequences_schemaidlist, PUB_OBJTYPE_SEQUENCE,
+ true, NULL);
}
}
@@ -941,6 +1055,7 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
AccessShareLock);
root_relids = GetPublicationRelations(pubform->oid,
+ PUB_OBJTYPE_TABLE,
PUBLICATION_PART_ROOT);
foreach(lc, root_relids)
@@ -1020,6 +1135,9 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
values[Anum_pg_publication_pubtruncate - 1] = BoolGetDatum(pubactions.pubtruncate);
replaces[Anum_pg_publication_pubtruncate - 1] = true;
+
+ values[Anum_pg_publication_pubsequence - 1] = BoolGetDatum(pubactions.pubsequence);
+ replaces[Anum_pg_publication_pubsequence - 1] = true;
}
if (publish_via_partition_root_given)
@@ -1039,7 +1157,7 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
pubform = (Form_pg_publication) GETSTRUCT(tup);
/* Invalidate the relcache. */
- if (pubform->puballtables)
+ if (pubform->puballtables || pubform->puballsequences)
{
CacheInvalidateRelcacheAll();
}
@@ -1055,6 +1173,7 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
*/
if (root_relids == NIL)
relids = GetPublicationRelations(pubform->oid,
+ PUB_OBJTYPE_TABLE,
PUBLICATION_PART_ALL);
else
{
@@ -1068,7 +1187,20 @@ AlterPublicationOptions(ParseState *pstate, AlterPublicationStmt *stmt,
lfirst_oid(lc));
}
+ /* tables */
+ schemarelids = GetAllSchemaPublicationRelations(pubform->oid,
+ PUB_OBJTYPE_TABLE,
+ PUBLICATION_PART_ALL);
+ relids = list_concat_unique_oid(relids, schemarelids);
+
+ /* sequences */
+ relids = list_concat_unique_oid(relids,
+ GetPublicationRelations(pubform->oid,
+ PUB_OBJTYPE_SEQUENCE,
+ PUBLICATION_PART_ALL));
+
schemarelids = GetAllSchemaPublicationRelations(pubform->oid,
+ PUB_OBJTYPE_SEQUENCE,
PUBLICATION_PART_ALL);
relids = list_concat_unique_oid(relids, schemarelids);
@@ -1123,7 +1255,7 @@ AlterPublicationTables(AlterPublicationStmt *stmt, HeapTuple tup,
if (!tables && stmt->action != AP_SetObjects)
return;
- rels = OpenTableList(tables);
+ rels = OpenRelationList(tables, PUB_OBJTYPE_TABLE);
if (stmt->action == AP_AddObjects)
{
@@ -1133,7 +1265,9 @@ AlterPublicationTables(AlterPublicationStmt *stmt, HeapTuple tup,
* Check if the relation is member of the existing schema in the
* publication or member of the schema list specified.
*/
- schemas = list_concat_copy(schemaidlist, GetPublicationSchemas(pubid));
+ schemas = list_concat_copy(schemaidlist,
+ GetPublicationSchemas(pubid,
+ PUB_OBJTYPE_TABLE));
CheckObjSchemaNotAlreadyInPublication(rels, schemas,
PUBLICATIONOBJ_TABLE);
@@ -1141,13 +1275,14 @@ AlterPublicationTables(AlterPublicationStmt *stmt, HeapTuple tup,
CheckPubRelationColumnList(rels, queryString, pubform->pubviaroot);
- PublicationAddTables(pubid, rels, false, stmt);
+ PublicationAddRelations(pubid, rels, false, stmt);
}
else if (stmt->action == AP_DropObjects)
- PublicationDropTables(pubid, rels, false);
+ PublicationDropRelations(pubid, rels, false);
else /* AP_SetObjects */
{
List *oldrelids = GetPublicationRelations(pubid,
+ PUB_OBJTYPE_TABLE,
PUBLICATION_PART_ROOT);
List *delrels = NIL;
ListCell *oldlc;
@@ -1266,18 +1401,18 @@ AlterPublicationTables(AlterPublicationStmt *stmt, HeapTuple tup,
}
/* And drop them. */
- PublicationDropTables(pubid, delrels, true);
+ PublicationDropRelations(pubid, delrels, true);
/*
* Don't bother calculating the difference for adding, we'll catch and
* skip existing ones when doing catalog update.
*/
- PublicationAddTables(pubid, rels, true, stmt);
+ PublicationAddRelations(pubid, rels, true, stmt);
- CloseTableList(delrels);
+ CloseRelationList(delrels);
}
- CloseTableList(rels);
+ CloseRelationList(rels);
}
/*
@@ -1287,7 +1422,8 @@ AlterPublicationTables(AlterPublicationStmt *stmt, HeapTuple tup,
*/
static void
AlterPublicationSchemas(AlterPublicationStmt *stmt,
- HeapTuple tup, List *schemaidlist)
+ HeapTuple tup, List *schemaidlist,
+ char objectType)
{
Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
@@ -1309,20 +1445,20 @@ AlterPublicationSchemas(AlterPublicationStmt *stmt,
List *rels;
List *reloids;
- reloids = GetPublicationRelations(pubform->oid, PUBLICATION_PART_ROOT);
+ reloids = GetPublicationRelations(pubform->oid, objectType, PUBLICATION_PART_ROOT);
rels = OpenRelIdList(reloids);
CheckObjSchemaNotAlreadyInPublication(rels, schemaidlist,
PUBLICATIONOBJ_TABLES_IN_SCHEMA);
- CloseTableList(rels);
- PublicationAddSchemas(pubform->oid, schemaidlist, false, stmt);
+ CloseRelationList(rels);
+ PublicationAddSchemas(pubform->oid, schemaidlist, objectType, false, stmt);
}
else if (stmt->action == AP_DropObjects)
- PublicationDropSchemas(pubform->oid, schemaidlist, false);
+ PublicationDropSchemas(pubform->oid, schemaidlist, objectType, false);
else /* AP_SetObjects */
{
- List *oldschemaids = GetPublicationSchemas(pubform->oid);
+ List *oldschemaids = GetPublicationSchemas(pubform->oid, objectType);
List *delschemas = NIL;
/* Identify which schemas should be dropped */
@@ -1335,13 +1471,13 @@ AlterPublicationSchemas(AlterPublicationStmt *stmt,
LockSchemaList(delschemas);
/* And drop them */
- PublicationDropSchemas(pubform->oid, delschemas, true);
+ PublicationDropSchemas(pubform->oid, delschemas, objectType, true);
/*
* Don't bother calculating the difference for adding, we'll catch and
* skip existing ones when doing catalog update.
*/
- PublicationAddSchemas(pubform->oid, schemaidlist, true, stmt);
+ PublicationAddSchemas(pubform->oid, schemaidlist, objectType, true, stmt);
}
}
@@ -1351,12 +1487,13 @@ AlterPublicationSchemas(AlterPublicationStmt *stmt,
*/
static void
CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
- List *tables, List *schemaidlist)
+ List *tables, List *tables_schemaidlist,
+ List *sequences, List *sequences_schemaidlist)
{
Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
if ((stmt->action == AP_AddObjects || stmt->action == AP_SetObjects) &&
- schemaidlist && !superuser())
+ (tables_schemaidlist || sequences_schemaidlist) && !superuser())
ereport(ERROR,
(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
errmsg("must be superuser to add or set schemas")));
@@ -1365,13 +1502,24 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
* Check that user is allowed to manipulate the publication tables in
* schema
*/
- if (schemaidlist && pubform->puballtables)
+ if (tables_schemaidlist && pubform->puballtables)
ereport(ERROR,
(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
errmsg("publication \"%s\" is defined as FOR ALL TABLES",
NameStr(pubform->pubname)),
errdetail("Tables from schema cannot be added to, dropped from, or set on FOR ALL TABLES publications.")));
+ /*
+ * Check that user is allowed to manipulate the publication sequences in
+ * schema
+ */
+ if (sequences_schemaidlist && pubform->puballsequences)
+ ereport(ERROR,
+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("publication \"%s\" is defined as FOR ALL SEQUENCES",
+ NameStr(pubform->pubname)),
+ errdetail("Sequences from schema cannot be added to, dropped from, or set on FOR ALL SEQUENCES publications.")));
+
/* Check that user is allowed to manipulate the publication tables. */
if (tables && pubform->puballtables)
ereport(ERROR,
@@ -1379,6 +1527,108 @@ CheckAlterPublication(AlterPublicationStmt *stmt, HeapTuple tup,
errmsg("publication \"%s\" is defined as FOR ALL TABLES",
NameStr(pubform->pubname)),
errdetail("Tables cannot be added to or dropped from FOR ALL TABLES publications.")));
+
+ /* Check that user is allowed to manipulate the publication sequences. */
+ if (sequences && pubform->puballsequences)
+ ereport(ERROR,
+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("publication \"%s\" is defined as FOR ALL SEQUENCES",
+ NameStr(pubform->pubname)),
+ errdetail("Sequences cannot be added to or dropped from FOR ALL SEQUENCES publications.")));
+}
+
+/*
+ * Add or remove sequence to/from publication.
+ */
+static void
+AlterPublicationSequences(AlterPublicationStmt *stmt, HeapTuple tup,
+ List *sequences, List *schemaidlist)
+{
+ List *rels = NIL;
+ Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);
+ Oid pubid = pubform->oid;
+
+ /*
+	 * It is quite possible that for the SET case the user has not specified
+	 * any sequences, in which case we need to remove all the existing ones.
+ */
+ if (!sequences && stmt->action != AP_SetObjects)
+ return;
+
+ rels = OpenRelationList(sequences, PUB_OBJTYPE_SEQUENCE);
+
+ if (stmt->action == AP_AddObjects)
+ {
+ List *schemas = NIL;
+
+ /*
+ * Check if the relation is member of the existing schema in the
+ * publication or member of the schema list specified.
+ */
+ schemas = list_concat_copy(schemaidlist,
+ GetPublicationSchemas(pubid,
+ PUB_OBJTYPE_SEQUENCE));
+ CheckObjSchemaNotAlreadyInPublication(rels, schemas,
+ PUBLICATIONOBJ_SEQUENCE);
+ PublicationAddRelations(pubid, rels, false, stmt);
+ }
+ else if (stmt->action == AP_DropObjects)
+ PublicationDropRelations(pubid, rels, false);
+	else	/* AP_SetObjects */
+ {
+ List *oldrelids = GetPublicationRelations(pubid,
+ PUB_OBJTYPE_SEQUENCE,
+ PUBLICATION_PART_ROOT);
+ List *delrels = NIL;
+ ListCell *oldlc;
+
+ CheckObjSchemaNotAlreadyInPublication(rels, schemaidlist,
+ PUBLICATIONOBJ_SEQUENCE);
+
+ /* Calculate which relations to drop. */
+ foreach(oldlc, oldrelids)
+ {
+ Oid oldrelid = lfirst_oid(oldlc);
+ ListCell *newlc;
+ PublicationRelInfo *oldrel;
+ bool found = false;
+
+ foreach(newlc, rels)
+ {
+ PublicationRelInfo *newpubrel;
+
+ newpubrel = (PublicationRelInfo *) lfirst(newlc);
+ if (RelationGetRelid(newpubrel->relation) == oldrelid)
+ {
+ found = true;
+ break;
+ }
+ }
+ /* Not yet in the list, open it and add to the list */
+ if (!found)
+ {
+ oldrel = palloc(sizeof(PublicationRelInfo));
+ oldrel->whereClause = NULL;
+ oldrel->columns = NULL;
+ oldrel->relation = table_open(oldrelid,
+ ShareUpdateExclusiveLock);
+ delrels = lappend(delrels, oldrel);
+ }
+ }
+
+ /* And drop them. */
+ PublicationDropRelations(pubid, delrels, true);
+
+ /*
+ * Don't bother calculating the difference for adding, we'll catch and
+ * skip existing ones when doing catalog update.
+ */
+ PublicationAddRelations(pubid, rels, true, stmt);
+
+ CloseRelationList(delrels);
+ }
+
+ CloseRelationList(rels);
}
/*
@@ -1416,14 +1666,20 @@ AlterPublication(ParseState *pstate, AlterPublicationStmt *stmt)
AlterPublicationOptions(pstate, stmt, rel, tup);
else
{
- List *relations = NIL;
- List *schemaidlist = NIL;
+ List *tables = NIL;
+ List *sequences = NIL;
+ List *tables_schemaidlist = NIL;
+ List *sequences_schemaidlist = NIL;
Oid pubid = pubform->oid;
- ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,
- &schemaidlist);
+ ObjectsInPublicationToOids(stmt->pubobjects, pstate,
+ &tables, &sequences,
+ &tables_schemaidlist,
+ &sequences_schemaidlist);
- CheckAlterPublication(stmt, tup, relations, schemaidlist);
+ CheckAlterPublication(stmt, tup,
+ tables, tables_schemaidlist,
+ sequences, sequences_schemaidlist);
heap_freetuple(tup);
@@ -1451,9 +1707,16 @@ AlterPublication(ParseState *pstate, AlterPublicationStmt *stmt)
errmsg("publication \"%s\" does not exist",
stmt->pubname));
- AlterPublicationTables(stmt, tup, relations, schemaidlist,
+ AlterPublicationTables(stmt, tup, tables, tables_schemaidlist,
pstate->p_sourcetext);
- AlterPublicationSchemas(stmt, tup, schemaidlist);
+
+ AlterPublicationSequences(stmt, tup, sequences, sequences_schemaidlist);
+
+ AlterPublicationSchemas(stmt, tup, tables_schemaidlist,
+ PUB_OBJTYPE_TABLE);
+
+ AlterPublicationSchemas(stmt, tup, sequences_schemaidlist,
+ PUB_OBJTYPE_SEQUENCE);
}
/* Cleanup. */
@@ -1521,7 +1784,7 @@ RemovePublicationById(Oid pubid)
pubform = (Form_pg_publication) GETSTRUCT(tup);
/* Invalidate relcache so that publication info is rebuilt. */
- if (pubform->puballtables)
+ if (pubform->puballtables || pubform->puballsequences)
CacheInvalidateRelcacheAll();
CatalogTupleDelete(rel, &tup->t_self);
@@ -1557,6 +1820,7 @@ RemovePublicationSchemaById(Oid psoid)
* partitions.
*/
schemaRels = GetSchemaPublicationRelations(pubsch->pnnspid,
+ pubsch->pntype,
PUBLICATION_PART_ALL);
InvalidatePublicationRels(schemaRels);
@@ -1599,10 +1863,10 @@ OpenRelIdList(List *relids)
* add them to a publication.
*/
static List *
-OpenTableList(List *tables)
+OpenRelationList(List *rels, char objectType)
{
List *relids = NIL;
- List *rels = NIL;
+ List *result = NIL;
ListCell *lc;
List *relids_with_rf = NIL;
List *relids_with_collist = NIL;
@@ -1610,19 +1874,35 @@ OpenTableList(List *tables)
/*
* Open, share-lock, and check all the explicitly-specified relations
*/
- foreach(lc, tables)
+ foreach(lc, rels)
{
PublicationTable *t = lfirst_node(PublicationTable, lc);
bool recurse = t->relation->inh;
Relation rel;
Oid myrelid;
PublicationRelInfo *pub_rel;
+ char myrelkind;
/* Allow query cancel in case this takes a long time */
CHECK_FOR_INTERRUPTS();
rel = table_openrv(t->relation, ShareUpdateExclusiveLock);
myrelid = RelationGetRelid(rel);
+ myrelkind = get_rel_relkind(myrelid);
+
+ /*
+		 * Make sure the relkind matches the expected object type. A mismatch
+		 * may happen e.g. when adding a sequence using ADD TABLE, or a table
+		 * using ADD SEQUENCE.
+ *
+ * XXX We let through unsupported object types (views etc.). Those
+ * will be caught later in check_publication_add_relation.
+ */
+ if (pub_get_object_type_for_relkind(myrelkind) != PUB_OBJTYPE_UNSUPPORTED &&
+ pub_get_object_type_for_relkind(myrelkind) != objectType)
+ ereport(ERROR,
+ errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ errmsg("object type does not match type expected by command"));
/*
* Filter out duplicates if user specifies "foo, foo".
@@ -1655,7 +1935,7 @@ OpenTableList(List *tables)
pub_rel->relation = rel;
pub_rel->whereClause = t->whereClause;
pub_rel->columns = t->columns;
- rels = lappend(rels, pub_rel);
+ result = lappend(result, pub_rel);
relids = lappend_oid(relids, myrelid);
if (t->whereClause)
@@ -1724,10 +2004,9 @@ OpenTableList(List *tables)
pub_rel->relation = rel;
/* child inherits WHERE clause from parent */
pub_rel->whereClause = t->whereClause;
-
/* child inherits column list from parent */
pub_rel->columns = t->columns;
- rels = lappend(rels, pub_rel);
+ result = lappend(result, pub_rel);
relids = lappend_oid(relids, childrelid);
if (t->whereClause)
@@ -1742,14 +2021,14 @@ OpenTableList(List *tables)
list_free(relids);
list_free(relids_with_rf);
- return rels;
+ return result;
}
/*
* Close all relations in the list.
*/
static void
-CloseTableList(List *rels)
+CloseRelationList(List *rels)
{
ListCell *lc;
@@ -1797,12 +2076,12 @@ LockSchemaList(List *schemalist)
* Add listed tables to the publication.
*/
static void
-PublicationAddTables(Oid pubid, List *rels, bool if_not_exists,
+PublicationAddRelations(Oid pubid, List *rels, bool if_not_exists,
AlterPublicationStmt *stmt)
{
ListCell *lc;
- Assert(!stmt || !stmt->for_all_tables);
+ Assert(!stmt || !stmt->for_all_objects);
foreach(lc, rels)
{
@@ -1831,7 +2110,7 @@ PublicationAddTables(Oid pubid, List *rels, bool if_not_exists,
* Remove listed tables from the publication.
*/
static void
-PublicationDropTables(Oid pubid, List *rels, bool missing_ok)
+PublicationDropRelations(Oid pubid, List *rels, bool missing_ok)
{
ObjectAddress obj;
ListCell *lc;
@@ -1876,19 +2155,19 @@ PublicationDropTables(Oid pubid, List *rels, bool missing_ok)
* Add listed schemas to the publication.
*/
static void
-PublicationAddSchemas(Oid pubid, List *schemas, bool if_not_exists,
- AlterPublicationStmt *stmt)
+PublicationAddSchemas(Oid pubid, List *schemas, char objectType,
+ bool if_not_exists, AlterPublicationStmt *stmt)
{
ListCell *lc;
- Assert(!stmt || !stmt->for_all_tables);
+ Assert(!stmt || !stmt->for_all_objects);
foreach(lc, schemas)
{
Oid schemaid = lfirst_oid(lc);
ObjectAddress obj;
- obj = publication_add_schema(pubid, schemaid, if_not_exists);
+ obj = publication_add_schema(pubid, schemaid, objectType, if_not_exists);
if (stmt)
{
EventTriggerCollectSimpleCommand(obj, InvalidObjectAddress,
@@ -1904,7 +2183,7 @@ PublicationAddSchemas(Oid pubid, List *schemas, bool if_not_exists,
* Remove listed schemas from the publication.
*/
static void
-PublicationDropSchemas(Oid pubid, List *schemas, bool missing_ok)
+PublicationDropSchemas(Oid pubid, List *schemas, char objectType, bool missing_ok)
{
ObjectAddress obj;
ListCell *lc;
@@ -1914,10 +2193,11 @@ PublicationDropSchemas(Oid pubid, List *schemas, bool missing_ok)
{
Oid schemaid = lfirst_oid(lc);
- psid = GetSysCacheOid2(PUBLICATIONNAMESPACEMAP,
+ psid = GetSysCacheOid3(PUBLICATIONNAMESPACEMAP,
Anum_pg_publication_namespace_oid,
ObjectIdGetDatum(schemaid),
- ObjectIdGetDatum(pubid));
+ ObjectIdGetDatum(pubid),
+ CharGetDatum(objectType));
if (!OidIsValid(psid))
{
if (missing_ok)
@@ -1972,6 +2252,13 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
NameStr(form->pubname)),
errhint("The owner of a FOR ALL TABLES publication must be a superuser.")));
+ if (form->puballsequences && !superuser_arg(newOwnerId))
+ ereport(ERROR,
+ (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+ errmsg("permission denied to change owner of publication \"%s\"",
+ NameStr(form->pubname)),
+ errhint("The owner of a FOR ALL SEQUENCES publication must be a superuser.")));
+
if (!superuser_arg(newOwnerId) && is_schema_publication(form->oid))
ereport(ERROR,
(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index ddf219b21f5..297245bd78c 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -38,6 +38,7 @@
#include "miscadmin.h"
#include "nodes/makefuncs.h"
#include "parser/parse_type.h"
+#include "replication/message.h"
#include "storage/lmgr.h"
#include "storage/proc.h"
#include "storage/smgr.h"
@@ -76,6 +77,7 @@ typedef struct SeqTableData
{
Oid relid; /* pg_class OID of this sequence (hash key) */
Oid filenode; /* last seen relfilenode of this sequence */
+ Oid tablespace; /* last seen tablespace of this sequence */
LocalTransactionId lxid; /* xact in which we last did a seq op */
bool last_valid; /* do we have a valid "last" value? */
int64 last; /* value last returned by nextval */
@@ -83,6 +85,7 @@ typedef struct SeqTableData
/* if last != cached, we have not used up all the cached values */
int64 increment; /* copy of sequence's increment field */
/* note that increment is zero until we first do nextval_internal() */
+ bool need_log; /* should be written to WAL at commit? */
} SeqTableData;
typedef SeqTableData *SeqTable;
@@ -332,6 +335,69 @@ ResetSequence(Oid seq_relid)
relation_close(seq_rel, NoLock);
}
+/*
+ * Set the sequence state (last_value, log_cnt, is_called).
+ *
+ * This assigns a new relfilenode, so the change behaves transactionally and
+ * is discarded if the current transaction aborts.
+ */
+void
+SetSequence(Oid seq_relid, int64 last_value, int64 log_cnt, bool is_called)
+{
+ SeqTable elm;
+ Relation seqrel;
+ Buffer buf;
+ HeapTupleData seqdatatuple;
+ Form_pg_sequence_data seq;
+ HeapTuple tuple;
+
+ /* open and lock sequence */
+ init_sequence(seq_relid, &elm, &seqrel);
+
+ /* lock the page's buffer and read the tuple */
+ seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+
+ /* Copy the existing sequence tuple. */
+ tuple = heap_copytuple(&seqdatatuple);
+
+ /* Now we're done with the old page */
+ UnlockReleaseBuffer(buf);
+
+ /*
+ * Modify the copied tuple to update the sequence state (similar to what
+ * ResetSequence does).
+ */
+ seq = (Form_pg_sequence_data) GETSTRUCT(tuple);
+ seq->last_value = last_value;
+ seq->is_called = is_called;
+ seq->log_cnt = log_cnt;
+
+ /*
+ * Create a new storage file for the sequence - this is needed for the
+ * transactional behavior.
+ */
+ RelationSetNewRelfilenode(seqrel, seqrel->rd_rel->relpersistence);
+
+ /*
+ * Ensure sequence's relfrozenxid is at 0, since it won't contain any
+ * unfrozen XIDs. Same with relminmxid, since a sequence will never
+ * contain multixacts.
+ */
+ Assert(seqrel->rd_rel->relfrozenxid == InvalidTransactionId);
+ Assert(seqrel->rd_rel->relminmxid == InvalidMultiXactId);
+
+ /*
+ * Insert the modified tuple into the new storage file. This does all the
+ * necessary WAL-logging etc.
+ */
+ fill_seq_with_data(seqrel, tuple);
+
+ /* Clear local cache so that we don't think we have cached numbers */
+ /* Note that we do not change the currval() state */
+ elm->cached = elm->last;
+
+ relation_close(seqrel, NoLock);
+}
+
/*
* Initialize a sequence's relation with the specified tuple as content
*
@@ -396,10 +462,21 @@ fill_seq_fork_with_data(Relation rel, HeapTuple tuple, ForkNumber forkNum)
tuple->t_data->t_infomask |= HEAP_XMAX_INVALID;
ItemPointerSet(&tuple->t_data->t_ctid, 0, FirstOffsetNumber);
- /* check the comment above nextval_internal()'s equivalent call. */
+ /*
+ * Remember we need to write this sequence to WAL at commit, and make sure
+ * we have an XID in case we need to decode this sequence.
+ *
+ * XXX Maybe we should not be doing this except when actually reading a
+ * value from the sequence?
+ */
if (RelationNeedsWAL(rel))
+ {
GetTopTransactionId();
+ if (XLogLogicalInfoActive())
+ GetCurrentTransactionId();
+ }
+
START_CRIT_SECTION();
MarkBufferDirty(buf);
@@ -644,6 +721,23 @@ nextval_internal(Oid relid, bool check_permissions)
/* open and lock sequence */
init_sequence(relid, &elm, &seqrel);
+ /*
+ * Remember we need to write this sequence to WAL at commit, and make sure
+ * we have an XID in case we need to decode this sequence.
+ *
+ * XXX Maybe we should not be doing this except when actually reading a
+ * value from the sequence?
+ */
+ if (RelationNeedsWAL(seqrel))
+ {
+ elm->need_log = true;
+
+ GetTopTransactionId();
+
+ if (XLogLogicalInfoActive())
+ GetCurrentTransactionId();
+ }
+
if (check_permissions &&
pg_class_aclcheck(elm->relid, GetUserId(),
ACL_USAGE | ACL_UPDATE) != ACLCHECK_OK)
@@ -798,10 +892,28 @@ nextval_internal(Oid relid, bool check_permissions)
* It's sufficient to ensure the toplevel transaction has an xid, no need
* to assign xids subxacts, that'll already trigger an appropriate wait.
* (Have to do that here, so we're outside the critical section)
+ *
+ * We have to ensure we have a proper XID, which will be included in
+ * the XLOG record by XLogRecordAssemble. Otherwise the first nextval()
+ * in a subxact (without any preceding changes) would get XID 0, and it
+ * would then be impossible to decide which top xact it belongs to.
+ * It'd also trigger an assert in DecodeLogicalSequence. We only do this
+ * with wal_level=logical, though.
+ *
+ * XXX This might seem unnecessary, because if there's no XID the xact
+ * couldn't have done anything important yet, e.g. it could not have
+ * created a sequence. But that's incorrect, because of subxacts. The
+ * current subtransaction might not have done anything yet (thus no XID),
+ * but an earlier one might have created the sequence.
*/
if (logit && RelationNeedsWAL(seqrel))
+ {
GetTopTransactionId();
+ if (XLogLogicalInfoActive())
+ GetCurrentTransactionId();
+ }
+
/* ready to change the on-disk (or really, in-buffer) tuple */
START_CRIT_SECTION();
@@ -955,6 +1067,23 @@ do_setval(Oid relid, int64 next, bool iscalled)
/* open and lock sequence */
init_sequence(relid, &elm, &seqrel);
+ /*
+ * Remember we need to write this sequence to WAL at commit, and make sure
+ * we have an XID in case we need to decode this sequence.
+ *
+ * XXX Maybe we should not be doing this except when actually reading a
+ * value from the sequence?
+ */
+ if (RelationNeedsWAL(seqrel))
+ {
+ elm->need_log = true;
+
+ GetTopTransactionId();
+
+ if (XLogLogicalInfoActive())
+ GetCurrentTransactionId();
+ }
+
if (pg_class_aclcheck(elm->relid, GetUserId(), ACL_UPDATE) != ACLCHECK_OK)
ereport(ERROR,
(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
@@ -1002,8 +1131,13 @@ do_setval(Oid relid, int64 next, bool iscalled)
/* check the comment above nextval_internal()'s equivalent call. */
if (RelationNeedsWAL(seqrel))
+ {
GetTopTransactionId();
+ if (XLogLogicalInfoActive())
+ GetCurrentTransactionId();
+ }
+
/* ready to change the on-disk (or really, in-buffer) tuple */
START_CRIT_SECTION();
@@ -1172,6 +1306,7 @@ init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel)
if (seqrel->rd_rel->relfilenode != elm->filenode)
{
elm->filenode = seqrel->rd_rel->relfilenode;
+ elm->tablespace = seqrel->rd_rel->reltablespace;
elm->cached = elm->last;
}
@@ -1907,3 +2042,121 @@ seq_mask(char *page, BlockNumber blkno)
mask_unused_space(page);
}
+
+
+static void
+read_sequence_info(Relation seqrel, int64 *last_value, int64 *log_cnt, bool *is_called)
+{
+ Buffer buf;
+ Form_pg_sequence_data seq;
+ HeapTupleData seqdatatuple;
+
+ seq = read_seq_tuple(seqrel, &buf, &seqdatatuple);
+
+ *last_value = seq->last_value;
+ *is_called = seq->is_called;
+ *log_cnt = seq->log_cnt;
+
+ UnlockReleaseBuffer(buf);
+}
+
+/* XXX Do this only for wal_level = logical, probably? */
+void
+AtEOXact_Sequences(bool isCommit)
+{
+ SeqTable entry;
+ HASH_SEQ_STATUS scan;
+
+ if (!seqhashtab)
+ return;
+
+ /* only do this with wal_level=logical */
+ if (!XLogLogicalInfoActive())
+ return;
+
+ /*
+ * XXX Maybe we could enforce having XID here? Or is it too late?
+ */
+ // Assert(GetTopTransactionId() != InvalidTransactionId);
+ // Assert(GetCurrentTransactionId() != InvalidTransactionId);
+
+ hash_seq_init(&scan, seqhashtab);
+
+ while ((entry = (SeqTable) hash_seq_search(&scan)))
+ {
+ Relation rel;
+ RelFileNode rnode;
+ xl_logical_sequence xlrec;
+
+ /* Not a commit, so nothing needs to be durable - don't log anything. */
+ if (!isCommit)
+ entry->need_log = false;
+
+ /*
+ * If the sequence was not touched in the current transaction, don't log
+ * anything. We leave need_log as it is, so that if a future transaction
+ * touches the sequence we'll log it properly.
+ */
+ if (!entry->need_log)
+ continue;
+
+ /* this is a commit, so log the sequence state and reset the flag */
+ entry->need_log = false;
+
+ /*
+ * does the relation still exist?
+ *
+ * XXX We need to make sure another transaction can't jump ahead of
+ * this one, otherwise the ordering of sequence changes could change.
+ * Imagine T1 and T2, where T1 writes sequence state first, but then
+ * T2 does it too and commits first:
+ *
+ * T1: log sequence state
+ * T2: increment sequence
+ * T2: log sequence state
+ * T2: commit
+ * T1: commit
+ *
+ * If we applied the sequence changes in this order, it'd be broken as the
+ * value might go backwards. ShareUpdateExclusive protects against that,
+ * but it probably also restricts commit throughput.
+ *
+ * XXX This might be an issue for deadlocks, too, if two xacts try
+ * to write sequences in different ordering. We may need to sort
+ * the OIDs first, to enforce the same lock ordering.
+ */
+ rel = try_relation_open(entry->relid, ShareUpdateExclusiveLock);
+
+ /* XXX relation might have been dropped, maybe we should have logged
+ * the last change? */
+ if (!rel)
+ continue;
+
+ /* tablespace */
+ if (OidIsValid(entry->tablespace))
+ rnode.spcNode = entry->tablespace;
+ else
+ rnode.spcNode = MyDatabaseTableSpace;
+
+ rnode.dbNode = MyDatabaseId; /* database */
+ rnode.relNode = entry->filenode; /* relation */
+
+ xlrec.node = rnode;
+ xlrec.reloid = entry->relid;
+
+ /* XXX is it good enough to log values we have in cache? seems
+ * wrong and we may need to re-read that. */
+ read_sequence_info(rel, &xlrec.last, &xlrec.log_cnt, &xlrec.is_called);
+
+ XLogBeginInsert();
+ XLogRegisterData((char *) &xlrec, SizeOfLogicalSequence);
+
+ /* allow origin filtering */
+ XLogSetRecordFlags(XLOG_INCLUDE_ORIGIN);
+
+ (void) XLogInsert(RM_LOGICALMSG_ID, XLOG_LOGICAL_SEQUENCE);
+
+ /* hold the sequence lock until the end of the transaction */
+ relation_close(rel, NoLock);
+ }
+}
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 2e8d8afead8..057ab4b6a3f 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -90,6 +90,7 @@ typedef struct SubOpts
} SubOpts;
static List *fetch_table_list(WalReceiverConn *wrconn, List *publications);
+static List *fetch_sequence_list(WalReceiverConn *wrconn, List *publications);
static void check_duplicates_in_publist(List *publist, Datum *datums);
static List *merge_publications(List *oldpublist, List *newpublist, bool addpub, const char *subname);
static void ReportSlotConnectionError(List *rstates, Oid subid, char *slotname, char *err);
@@ -638,9 +639,9 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
{
char *err;
WalReceiverConn *wrconn;
- List *tables;
+ List *relations;
ListCell *lc;
- char table_state;
+ char sync_state;
/* Try to connect to the publisher. */
wrconn = walrcv_connect(conninfo, true, stmt->subname, &err);
@@ -657,14 +658,17 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
* Set sync state based on if we were asked to do data copy or
* not.
*/
- table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
+ sync_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;
/*
- * Get the table list from publisher and build local table status
- * info.
+ * Get the table and sequence list from publisher and build
+ * local relation sync status info.
*/
- tables = fetch_table_list(wrconn, publications);
- foreach(lc, tables)
+ relations = fetch_table_list(wrconn, publications);
+ relations = list_concat(relations,
+ fetch_sequence_list(wrconn, publications));
+
+ foreach(lc, relations)
{
RangeVar *rv = (RangeVar *) lfirst(lc);
Oid relid;
@@ -675,7 +679,7 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
CheckSubscriptionRelkind(get_rel_relkind(relid),
rv->schemaname, rv->relname);
- AddSubscriptionRelState(subid, relid, table_state,
+ AddSubscriptionRelState(subid, relid, sync_state,
InvalidXLogRecPtr);
}
@@ -701,12 +705,12 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
*
* Note that if tables were specified but copy_data is false
* then it is safe to enable two_phase up-front because those
- * tables are already initially in READY state. When the
- * subscription has no tables, we leave the twophase state as
- * PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
+ * relations are already initially in READY state. When the
+ * subscription has no relations, we leave the twophase state
+ * as PENDING, to allow ALTER SUBSCRIPTION ... REFRESH
* PUBLICATION to work.
*/
- if (opts.twophase && !opts.copy_data && tables != NIL)
+ if (opts.twophase && !opts.copy_data && relations != NIL)
twophase_enabled = true;
walrcv_create_slot(wrconn, opts.slot_name, false, twophase_enabled,
@@ -782,8 +786,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
if (validate_publications)
check_publications(wrconn, validate_publications);
- /* Get the table list from publisher. */
+ /* Get the list of relations from publisher. */
pubrel_names = fetch_table_list(wrconn, sub->publications);
+ pubrel_names = list_concat(pubrel_names,
+ fetch_sequence_list(wrconn, sub->publications));
/* Get local table list. */
subrel_states = GetSubscriptionRelations(sub->oid);
@@ -1807,6 +1813,75 @@ fetch_table_list(WalReceiverConn *wrconn, List *publications)
return tablelist;
}
+/*
+ * Get the list of sequences which belong to specified publications on the
+ * publisher connection.
+ */
+static List *
+fetch_sequence_list(WalReceiverConn *wrconn, List *publications)
+{
+ WalRcvExecResult *res;
+ StringInfoData cmd;
+ TupleTableSlot *slot;
+ Oid tableRow[2] = {TEXTOID, TEXTOID};
+ ListCell *lc;
+ bool first;
+ List *tablelist = NIL;
+
+ Assert(list_length(publications) > 0);
+
+ initStringInfo(&cmd);
+ appendStringInfoString(&cmd, "SELECT DISTINCT s.schemaname, s.sequencename\n"
+ " FROM pg_catalog.pg_publication_sequences s\n"
+ " WHERE s.pubname IN (");
+ first = true;
+ foreach(lc, publications)
+ {
+ char *pubname = strVal(lfirst(lc));
+
+ if (first)
+ first = false;
+ else
+ appendStringInfoString(&cmd, ", ");
+
+ appendStringInfoString(&cmd, quote_literal_cstr(pubname));
+ }
+ appendStringInfoChar(&cmd, ')');
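+
+ /*
+ * The constructed query looks like this (for example publications
+ * "p1" and "p2"):
+ *
+ * SELECT DISTINCT s.schemaname, s.sequencename
+ * FROM pg_catalog.pg_publication_sequences s
+ * WHERE s.pubname IN ('p1', 'p2')
+ */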
+
+ res = walrcv_exec(wrconn, cmd.data, 2, tableRow);
+ pfree(cmd.data);
+
+ if (res->status != WALRCV_OK_TUPLES)
+ ereport(ERROR,
+ (errmsg("could not receive list of replicated sequences from the publisher: %s",
+ res->err)));
+
+ /* Process sequences. */
+ slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+ while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+ {
+ char *nspname;
+ char *relname;
+ bool isnull;
+ RangeVar *rv;
+
+ nspname = TextDatumGetCString(slot_getattr(slot, 1, &isnull));
+ Assert(!isnull);
+ relname = TextDatumGetCString(slot_getattr(slot, 2, &isnull));
+ Assert(!isnull);
+
+ rv = makeRangeVar(nspname, relname, -1);
+ tablelist = lappend(tablelist, rv);
+
+ ExecClearTuple(slot);
+ }
+ ExecDropSingleTupleTableSlot(slot);
+
+ walrcv_clear_result(res);
+
+ return tablelist;
+}
+
/*
* This is to report the connection failure while dropping replication slots.
* Here, we report the WARNING for all tablesync slots so that user can drop
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index 90edd0bb97d..4dd545cdd20 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -42,6 +42,7 @@
#include "catalog/pg_inherits.h"
#include "catalog/pg_namespace.h"
#include "catalog/pg_opclass.h"
+#include "catalog/pg_publication_namespace.h"
#include "catalog/pg_statistic_ext.h"
#include "catalog/pg_tablespace.h"
#include "catalog/pg_trigger.h"
@@ -16408,11 +16409,14 @@ AlterTableNamespace(AlterObjectSchemaStmt *stmt, Oid *oldschema)
* Check that setting the relation to a different schema won't result in a
* publication having both a schema and the same schema's table, as this
* is not supported.
+ *
+ * XXX We do this for tables and sequences, but it's better to keep the two
+ * blocks separate, to make the strings easier to translate.
*/
if (stmt->objectType == OBJECT_TABLE)
{
ListCell *lc;
- List *schemaPubids = GetSchemaPublications(nspOid);
+ List *schemaPubids = GetSchemaPublications(nspOid, PUB_OBJTYPE_TABLE);
List *relPubids = GetRelationPublications(RelationGetRelid(rel));
foreach(lc, relPubids)
@@ -16430,6 +16434,27 @@ AlterTableNamespace(AlterObjectSchemaStmt *stmt, Oid *oldschema)
get_publication_name(pubid, false)));
}
}
+ else if (stmt->objectType == OBJECT_SEQUENCE)
+ {
+ ListCell *lc;
+ List *schemaPubids = GetSchemaPublications(nspOid, PUB_OBJTYPE_SEQUENCE);
+ List *relPubids = GetRelationPublications(RelationGetRelid(rel));
+
+ foreach(lc, relPubids)
+ {
+ Oid pubid = lfirst_oid(lc);
+
+ if (list_member_oid(schemaPubids, pubid))
+ ereport(ERROR,
+ errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ errmsg("cannot move sequence \"%s\" to schema \"%s\"",
+ RelationGetRelationName(rel), stmt->newschema),
+ errdetail("The schema \"%s\" and same schema's sequence \"%s\" cannot be part of the same publication \"%s\".",
+ stmt->newschema,
+ RelationGetRelationName(rel),
+ get_publication_name(pubid, false)));
+ }
+ }
/* common checks on switching namespaces */
CheckSetNamespace(oldNspOid, nspOid);
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index 27989bd723d..228e3547012 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -649,7 +649,9 @@ void
CheckSubscriptionRelkind(char relkind, const char *nspname,
const char *relname)
{
- if (relkind != RELKIND_RELATION && relkind != RELKIND_PARTITIONED_TABLE)
+ if (relkind != RELKIND_RELATION &&
+ relkind != RELKIND_PARTITIONED_TABLE &&
+ relkind != RELKIND_SEQUENCE)
ereport(ERROR,
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
errmsg("cannot use relation \"%s.%s\" as logical replication target",
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 1585cf2d58c..46a1943d97a 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -5390,7 +5390,7 @@ _copyCreatePublicationStmt(const CreatePublicationStmt *from)
COPY_STRING_FIELD(pubname);
COPY_NODE_FIELD(options);
COPY_NODE_FIELD(pubobjects);
- COPY_SCALAR_FIELD(for_all_tables);
+ COPY_NODE_FIELD(for_all_objects);
return newnode;
}
@@ -5403,7 +5403,7 @@ _copyAlterPublicationStmt(const AlterPublicationStmt *from)
COPY_STRING_FIELD(pubname);
COPY_NODE_FIELD(options);
COPY_NODE_FIELD(pubobjects);
- COPY_SCALAR_FIELD(for_all_tables);
+ COPY_NODE_FIELD(for_all_objects);
COPY_SCALAR_FIELD(action);
return newnode;
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index caad20e0478..1f765f42c91 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -2688,7 +2688,7 @@ _equalCreatePublicationStmt(const CreatePublicationStmt *a,
COMPARE_STRING_FIELD(pubname);
COMPARE_NODE_FIELD(options);
COMPARE_NODE_FIELD(pubobjects);
- COMPARE_SCALAR_FIELD(for_all_tables);
+ COMPARE_NODE_FIELD(for_all_objects);
return true;
}
@@ -2700,7 +2700,7 @@ _equalAlterPublicationStmt(const AlterPublicationStmt *a,
COMPARE_STRING_FIELD(pubname);
COMPARE_NODE_FIELD(options);
COMPARE_NODE_FIELD(pubobjects);
- COMPARE_SCALAR_FIELD(for_all_tables);
+ COMPARE_NODE_FIELD(for_all_objects);
COMPARE_SCALAR_FIELD(action);
return true;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index c9941d9cb4f..2cc92a89432 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -455,7 +455,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
transform_element_list transform_type_list
TriggerTransitions TriggerReferencing
vacuum_relation_list opt_vacuum_relation_list
- drop_option_list pub_obj_list
+ drop_option_list pub_obj_list pub_obj_type_list
%type <node> opt_routine_body
%type <groupclause> group_clause
@@ -588,6 +588,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
%type <node> var_value zone_value
%type <rolespec> auth_ident RoleSpec opt_granted_by
%type <publicationobjectspec> PublicationObjSpec
+%type <node> pub_obj_type
%type <keyword> unreserved_keyword type_func_name_keyword
%type <keyword> col_name_keyword reserved_keyword
@@ -9862,12 +9863,9 @@ AlterOwnerStmt: ALTER AGGREGATE aggregate_with_argtypes OWNER TO RoleSpec
*
* CREATE PUBLICATION FOR ALL TABLES [WITH options]
*
- * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
- *
- * pub_obj is one of:
+ * CREATE PUBLICATION FOR ALL SEQUENCES [WITH options]
*
- * TABLE table [, ...]
- * ALL TABLES IN SCHEMA schema [, ...]
+ * CREATE PUBLICATION FOR pub_obj [, ...] [WITH options]
*
*****************************************************************************/
@@ -9879,12 +9877,12 @@ CreatePublicationStmt:
n->options = $4;
$$ = (Node *)n;
}
- | CREATE PUBLICATION name FOR ALL TABLES opt_definition
+ | CREATE PUBLICATION name FOR ALL pub_obj_type_list opt_definition
{
CreatePublicationStmt *n = makeNode(CreatePublicationStmt);
n->pubname = $3;
n->options = $7;
- n->for_all_tables = true;
+ n->for_all_objects = $6;
$$ = (Node *)n;
}
| CREATE PUBLICATION name FOR pub_obj_list opt_definition
@@ -9934,6 +9932,26 @@ PublicationObjSpec:
$$->pubobjtype = PUBLICATIONOBJ_TABLES_IN_CUR_SCHEMA;
$$->location = @5;
}
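+ /*
+ * Sequence objects in a publication - for example (names are
+ * illustrative): SEQUENCE seq1, ALL SEQUENCES IN SCHEMA sch1,
+ * ALL SEQUENCES IN SCHEMA CURRENT_SCHEMA.
+ */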
+ | SEQUENCE relation_expr
+ {
+ $$ = makeNode(PublicationObjSpec);
+ $$->pubobjtype = PUBLICATIONOBJ_SEQUENCE;
+ $$->pubtable = makeNode(PublicationTable);
+ $$->pubtable->relation = $2;
+ }
+ | ALL SEQUENCES IN_P SCHEMA ColId
+ {
+ $$ = makeNode(PublicationObjSpec);
+ $$->pubobjtype = PUBLICATIONOBJ_SEQUENCES_IN_SCHEMA;
+ $$->name = $5;
+ $$->location = @5;
+ }
+ | ALL SEQUENCES IN_P SCHEMA CURRENT_SCHEMA
+ {
+ $$ = makeNode(PublicationObjSpec);
+ $$->pubobjtype = PUBLICATIONOBJ_SEQUENCES_IN_CUR_SCHEMA;
+ $$->location = @5;
+ }
| ColId opt_column_list OptWhereClause
{
$$ = makeNode(PublicationObjSpec);
@@ -9995,6 +10013,19 @@ pub_obj_list: PublicationObjSpec
{ $$ = lappend($1, $3); }
;
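+
+/*
+ * Object types for CREATE PUBLICATION ... FOR ALL. The grammar accepts
+ * e.g. "FOR ALL TABLES", "FOR ALL SEQUENCES", or a combination like
+ * "FOR ALL TABLES, SEQUENCES".
+ */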
+pub_obj_type: TABLES
+ { $$ = (Node *) makeString("tables"); }
+ | SEQUENCES
+ { $$ = (Node *) makeString("sequences"); }
+ ;
+
+pub_obj_type_list: pub_obj_type
+ { $$ = list_make1($1); }
+ | pub_obj_type_list ',' pub_obj_type
+ { $$ = lappend($1, $3); }
+ ;
+
+
/*****************************************************************************
*
* ALTER PUBLICATION name SET ( options )
@@ -10005,11 +10036,6 @@ pub_obj_list: PublicationObjSpec
*
* ALTER PUBLICATION name SET pub_obj [, ...]
*
- * pub_obj is one of:
- *
- * TABLE table_name [, ...]
- * ALL TABLES IN SCHEMA schema_name [, ...]
- *
*****************************************************************************/
AlterPublicationStmt:
@@ -18731,7 +18757,8 @@ preprocess_pubobj_list(List *pubobjspec_list, core_yyscan_t yyscanner)
if (pubobj->pubobjtype == PUBLICATIONOBJ_CONTINUATION)
pubobj->pubobjtype = prevobjtype;
- if (pubobj->pubobjtype == PUBLICATIONOBJ_TABLE)
+ if (pubobj->pubobjtype == PUBLICATIONOBJ_TABLE ||
+ pubobj->pubobjtype == PUBLICATIONOBJ_SEQUENCE)
{
/* relation name or pubtable must be set for this type of object */
if (!pubobj->name && !pubobj->pubtable)
@@ -18782,6 +18809,30 @@ preprocess_pubobj_list(List *pubobjspec_list, core_yyscan_t yyscanner)
errmsg("invalid schema name at or near"),
parser_errposition(pubobj->location));
}
+ else if (pubobj->pubobjtype == PUBLICATIONOBJ_SEQUENCES_IN_SCHEMA ||
+ pubobj->pubobjtype == PUBLICATIONOBJ_SEQUENCES_IN_CUR_SCHEMA)
+ {
+ /* WHERE clause is not allowed on a schema object */
+ if (pubobj->pubtable && pubobj->pubtable->whereClause)
+ ereport(ERROR,
+ errcode(ERRCODE_SYNTAX_ERROR),
+ errmsg("WHERE clause not allowed for schema"),
+ parser_errposition(pubobj->location));
+
+ /*
+ * We can distinguish between the different types of schema
+ * objects based on whether name and pubtable are set.
+ */
+ if (pubobj->name)
+ pubobj->pubobjtype = PUBLICATIONOBJ_SEQUENCES_IN_SCHEMA;
+ else if (!pubobj->name && !pubobj->pubtable)
+ pubobj->pubobjtype = PUBLICATIONOBJ_SEQUENCES_IN_CUR_SCHEMA;
+ else
+ ereport(ERROR,
+ errcode(ERRCODE_SYNTAX_ERROR),
+ errmsg("invalid schema name at or near"),
+ parser_errposition(pubobj->location));
+ }
prevobjtype = pubobj->pubobjtype;
}
diff --git a/src/backend/replication/logical/decode.c b/src/backend/replication/logical/decode.c
index 6303647fe0f..cf6464fa57d 100644
--- a/src/backend/replication/logical/decode.c
+++ b/src/backend/replication/logical/decode.c
@@ -42,6 +42,10 @@
#include "replication/reorderbuffer.h"
#include "replication/snapbuild.h"
#include "storage/standby.h"
+#include "commands/sequence.h"
+
+static void DecodeLogicalMessage(LogicalDecodingContext *ctx, XLogRecordBuffer *buf);
+static void DecodeLogicalSequence(LogicalDecodingContext *ctx, XLogRecordBuffer *buf);
/* individual record(group)'s handlers */
static void DecodeInsert(LogicalDecodingContext *ctx, XLogRecordBuffer *buf);
@@ -558,6 +562,27 @@ FilterByOrigin(LogicalDecodingContext *ctx, RepOriginId origin_id)
*/
void
logicalmsg_decode(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
+{
+ XLogReaderState *r = buf->record;
+ uint8 info = XLogRecGetInfo(r) & ~XLR_INFO_MASK;
+
+ switch (info)
+ {
+ case XLOG_LOGICAL_MESSAGE:
+ DecodeLogicalMessage(ctx, buf);
+ break;
+
+ case XLOG_LOGICAL_SEQUENCE:
+ DecodeLogicalSequence(ctx, buf);
+ break;
+
+ default:
+ elog(ERROR, "unexpected RM_LOGICALMSG_ID record type: %u", info);
+ }
+}
+
+static void
+DecodeLogicalMessage(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
{
SnapBuild *builder = ctx->snapshot_builder;
XLogReaderState *r = buf->record;
@@ -1250,3 +1275,67 @@ DecodeTXNNeedSkip(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
(txn_dbid != InvalidOid && txn_dbid != ctx->slot->data.database) ||
ctx->fast_forward || FilterByOrigin(ctx, origin_id));
}
+
+/*
+ * Handle sequence decode
+ *
+ * Decoding sequences is a bit tricky, because while most sequence actions
+ * are non-transactional (not subject to rollback), some need to be handled
+ * as transactional.
+ *
+ * By default, a sequence increment is non-transactional - we must not queue
+ * it in a transaction like the other changes, because the transaction might
+ * get rolled back and we'd discard the increment. The downstream would then
+ * never learn about the increment, which is wrong.
+ *
+ * On the other hand, the sequence may have been created in a transaction. In
+ * that case we *should* queue the change like the other changes in the
+ * transaction, because we don't want to send increments for an unknown
+ * sequence to the plugin - it might get confused about which sequence they
+ * relate to, etc.
+ */
+static void
+DecodeLogicalSequence(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
+{
+ SnapBuild *builder = ctx->snapshot_builder;
+ RelFileNode target_node;
+ XLogReaderState *r = buf->record;
+ TransactionId xid = XLogRecGetXid(r);
+ uint8 info = XLogRecGetInfo(buf->record) & ~XLR_INFO_MASK;
+ xl_logical_sequence *xlrec;
+ RepOriginId origin_id = XLogRecGetOrigin(r);
+
+ /* only decode XLOG_LOGICAL_SEQUENCE records */
+ if (info != XLOG_LOGICAL_SEQUENCE)
+ elog(ERROR, "unexpected RM_LOGICALMSG_ID record type: %u", info);
+
+ ReorderBufferProcessXid(ctx->reorder, XLogRecGetXid(r), buf->origptr);
+
+ /*
+ * If we don't have a snapshot or we are just fast-forwarding, there is no
+ * point in decoding sequence changes.
+ */
+ if (SnapBuildCurrentState(builder) < SNAPBUILD_FULL_SNAPSHOT ||
+ ctx->fast_forward)
+ return;
+
+ /* extract the WAL record data */
+ xlrec = (xl_logical_sequence *) XLogRecGetData(r);
+
+ /* only interested in our database */
+ target_node = xlrec->node;
+ if (target_node.dbNode != ctx->slot->data.database)
+ return;
+
+ /* output plugin doesn't look for this origin, no need to queue */
+ if (FilterByOrigin(ctx, XLogRecGetOrigin(r)))
+ return;
+
+ /* Skip the change if already processed (per the snapshot). */
+ if (!SnapBuildProcessChange(builder, xid, buf->origptr))
+ return;
+
+ /* Queue the increment (or send immediately if not transactional). */
+ ReorderBufferQueueSequence(ctx->reorder, xid, buf->endptr,
+ origin_id, xlrec->reloid, target_node,
+ xlrec->last, xlrec->log_cnt, xlrec->is_called);
+}
diff --git a/src/backend/replication/logical/logical.c b/src/backend/replication/logical/logical.c
index 788769dd738..94b94e4be0c 100644
--- a/src/backend/replication/logical/logical.c
+++ b/src/backend/replication/logical/logical.c
@@ -73,6 +73,9 @@ static void truncate_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
static void message_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
XLogRecPtr message_lsn, bool transactional,
const char *prefix, Size message_size, const char *message);
+static void sequence_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ int64 last_value, int64 log_cnt, bool is_called);
/* streaming callbacks */
static void stream_start_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
@@ -90,6 +93,9 @@ static void stream_change_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn
static void stream_message_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
XLogRecPtr message_lsn, bool transactional,
const char *prefix, Size message_size, const char *message);
+static void stream_sequence_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ int64 last_value, int64 log_cnt, bool is_called);
static void stream_truncate_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
int nrelations, Relation relations[], ReorderBufferChange *change);
@@ -218,6 +224,7 @@ StartupDecodingContext(List *output_plugin_options,
ctx->reorder->apply_truncate = truncate_cb_wrapper;
ctx->reorder->commit = commit_cb_wrapper;
ctx->reorder->message = message_cb_wrapper;
+ ctx->reorder->sequence = sequence_cb_wrapper;
/*
* To support streaming, we require start/stop/abort/commit/change
@@ -234,6 +241,7 @@ StartupDecodingContext(List *output_plugin_options,
(ctx->callbacks.stream_commit_cb != NULL) ||
(ctx->callbacks.stream_change_cb != NULL) ||
(ctx->callbacks.stream_message_cb != NULL) ||
+ (ctx->callbacks.stream_sequence_cb != NULL) ||
(ctx->callbacks.stream_truncate_cb != NULL);
/*
@@ -251,6 +259,7 @@ StartupDecodingContext(List *output_plugin_options,
ctx->reorder->stream_commit = stream_commit_cb_wrapper;
ctx->reorder->stream_change = stream_change_cb_wrapper;
ctx->reorder->stream_message = stream_message_cb_wrapper;
+ ctx->reorder->stream_sequence = stream_sequence_cb_wrapper;
ctx->reorder->stream_truncate = stream_truncate_cb_wrapper;
@@ -1205,6 +1214,42 @@ message_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
error_context_stack = errcallback.previous;
}
+static void
+sequence_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ LogicalDecodingContext *ctx = cache->private_data;
+ LogicalErrorCallbackState state;
+ ErrorContextCallback errcallback;
+
+ Assert(!ctx->fast_forward);
+
+ if (ctx->callbacks.sequence_cb == NULL)
+ return;
+
+ /* Push callback + info on the error context stack */
+ state.ctx = ctx;
+ state.callback_name = "sequence";
+ state.report_location = sequence_lsn;
+ errcallback.callback = output_plugin_error_callback;
+ errcallback.arg = (void *) &state;
+ errcallback.previous = error_context_stack;
+ error_context_stack = &errcallback;
+
+ /* set output state */
+ ctx->accept_writes = true;
+ ctx->write_xid = txn != NULL ? txn->xid : InvalidTransactionId;
+ ctx->write_location = sequence_lsn;
+
+ /* do the actual work: call callback */
+ ctx->callbacks.sequence_cb(ctx, txn, sequence_lsn, rel,
+ last_value, log_cnt, is_called);
+
+ /* Pop the error context stack */
+ error_context_stack = errcallback.previous;
+}
+
static void
stream_start_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
XLogRecPtr first_lsn)
@@ -1510,6 +1555,46 @@ stream_message_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
error_context_stack = errcallback.previous;
}
+static void
+stream_sequence_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn, Relation rel,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ LogicalDecodingContext *ctx = cache->private_data;
+ LogicalErrorCallbackState state;
+ ErrorContextCallback errcallback;
+
+ Assert(!ctx->fast_forward);
+
+ /* We're only supposed to call this when streaming is supported. */
+ Assert(ctx->streaming);
+
+ /* this callback is optional */
+ if (ctx->callbacks.stream_sequence_cb == NULL)
+ return;
+
+ /* Push callback + info on the error context stack */
+ state.ctx = ctx;
+ state.callback_name = "stream_sequence";
+ state.report_location = sequence_lsn;
+ errcallback.callback = output_plugin_error_callback;
+ errcallback.arg = (void *) &state;
+ errcallback.previous = error_context_stack;
+ error_context_stack = &errcallback;
+
+ /* set output state */
+ ctx->accept_writes = true;
+ ctx->write_xid = txn != NULL ? txn->xid : InvalidTransactionId;
+ ctx->write_location = sequence_lsn;
+
+ /* do the actual work: call callback */
+ ctx->callbacks.stream_sequence_cb(ctx, txn, sequence_lsn, rel,
+ last_value, log_cnt, is_called);
+
+ /* Pop the error context stack */
+ error_context_stack = errcallback.previous;
+}
+
static void
stream_truncate_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
int nrelations, Relation relations[],
diff --git a/src/backend/replication/logical/message.c b/src/backend/replication/logical/message.c
index 1c34912610e..69178dc1cfb 100644
--- a/src/backend/replication/logical/message.c
+++ b/src/backend/replication/logical/message.c
@@ -82,7 +82,8 @@ logicalmsg_redo(XLogReaderState *record)
{
uint8 info = XLogRecGetInfo(record) & ~XLR_INFO_MASK;
- if (info != XLOG_LOGICAL_MESSAGE)
+ if (info != XLOG_LOGICAL_MESSAGE &&
+ info != XLOG_LOGICAL_SEQUENCE)
elog(PANIC, "logicalmsg_redo: unknown op code %u", info);
/* This is only interesting for logical decoding, see decode.c. */
diff --git a/src/backend/replication/logical/proto.c b/src/backend/replication/logical/proto.c
index ff8513e2d29..ae078eed048 100644
--- a/src/backend/replication/logical/proto.c
+++ b/src/backend/replication/logical/proto.c
@@ -662,6 +662,54 @@ logicalrep_write_message(StringInfo out, TransactionId xid, XLogRecPtr lsn,
pq_sendbytes(out, message, sz);
}
+/*
+ * Write SEQUENCE to stream
+ */
+void
+logicalrep_write_sequence(StringInfo out, Relation rel, TransactionId xid,
+ XLogRecPtr lsn,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ uint8 flags = 0;
+ char *relname;
+
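+ /*
+ * The message layout is: message type byte, optional xid (only when part
+ * of a streamed transaction), flags (currently unused), the LSN of the
+ * sequence change, the namespace and name of the sequence, and finally
+ * last_value, log_cnt and is_called.
+ */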
+ pq_sendbyte(out, LOGICAL_REP_MSG_SEQUENCE);
+
+ /* transaction ID (if not valid, we're not streaming) */
+ if (TransactionIdIsValid(xid))
+ pq_sendint32(out, xid);
+
+ pq_sendint8(out, flags);
+ pq_sendint64(out, lsn);
+
+ logicalrep_write_namespace(out, RelationGetNamespace(rel));
+ relname = RelationGetRelationName(rel);
+ pq_sendstring(out, relname);
+
+ pq_sendint64(out, last_value);
+ pq_sendint64(out, log_cnt);
+ pq_sendint8(out, is_called);
+}
+
+/*
+ * Read SEQUENCE from the stream.
+ */
+void
+logicalrep_read_sequence(StringInfo in, LogicalRepSequence *seqdata)
+{
+ /* XXX skipping flags and lsn */
+ pq_getmsgint(in, 1);
+ pq_getmsgint64(in);
+
+ /* Read relation name from stream */
+ seqdata->nspname = pstrdup(logicalrep_read_namespace(in));
+ seqdata->seqname = pstrdup(pq_getmsgstring(in));
+
+ seqdata->last_value = pq_getmsgint64(in);
+ seqdata->log_cnt = pq_getmsgint64(in);
+ seqdata->is_called = pq_getmsgint(in, 1);
+}
+
/*
* Write relation description to the output stream.
*/
@@ -1236,6 +1284,8 @@ logicalrep_message_type(LogicalRepMsgType action)
return "STREAM ABORT";
case LOGICAL_REP_MSG_STREAM_PREPARE:
return "STREAM PREPARE";
+ case LOGICAL_REP_MSG_SEQUENCE:
+ return "SEQUENCE";
}
elog(ERROR, "invalid logical replication message type \"%c\"", action);
diff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c
index 5adc016d449..0cc7c8d3bc4 100644
--- a/src/backend/replication/logical/reorderbuffer.c
+++ b/src/backend/replication/logical/reorderbuffer.c
@@ -77,6 +77,8 @@
* a bit more memory to the oldest subtransactions, because it's likely
* they are the source for the next sequence of changes.
*
+ * FIXME
+ *
* -------------------------------------------------------------------------
*/
#include "postgres.h"
@@ -91,6 +93,7 @@
#include "access/xact.h"
#include "access/xlog_internal.h"
#include "catalog/catalog.h"
+#include "commands/sequence.h"
#include "lib/binaryheap.h"
#include "miscadmin.h"
#include "pgstat.h"
@@ -536,6 +539,7 @@ ReorderBufferReturnChange(ReorderBuffer *rb, ReorderBufferChange *change,
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT:
case REORDER_BUFFER_CHANGE_INTERNAL_COMMAND_ID:
case REORDER_BUFFER_CHANGE_INTERNAL_TUPLECID:
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
break;
}
@@ -866,6 +870,45 @@ ReorderBufferQueueMessage(ReorderBuffer *rb, TransactionId xid,
}
}
+/*
+ * A transactional sequence increment is queued to be processed upon commit
+ * and a non-transactional increment gets processed immediately.
+ *
+ * A sequence update may be either transactional or non-transactional. When
+ * the sequence was created in a still-running transaction, treat the update
+ * as transactional and queue it in that transaction. Otherwise treat it as
+ * non-transactional, so that we don't lose the increment in case of a
+ * rollback.
+ */
+void
+ReorderBufferQueueSequence(ReorderBuffer *rb, TransactionId xid,
+ XLogRecPtr lsn, RepOriginId origin_id,
+ Oid reloid, RelFileNode rnode,
+ int64 last, int64 log_cnt, bool is_called)
+{
+ MemoryContext oldcontext;
+ ReorderBufferChange *change;
+
+ /* OK, allocate and queue the change */
+ oldcontext = MemoryContextSwitchTo(rb->context);
+
+ change = ReorderBufferGetChange(rb);
+
+ change->action = REORDER_BUFFER_CHANGE_SEQUENCE;
+ change->origin_id = origin_id;
+
+ memcpy(&change->data.sequence.relnode, &rnode, sizeof(RelFileNode));
+
+ change->data.sequence.reloid = reloid;
+ change->data.sequence.last = last;
+ change->data.sequence.log_cnt = log_cnt;
+ change->data.sequence.is_called = is_called;
+
+ /* add it to the same subxact that created the sequence */
+ ReorderBufferQueueChange(rb, xid, lsn, change, false);
+
+ MemoryContextSwitchTo(oldcontext);
+}
+
/*
* AssertTXNLsnOrder
* Verify LSN ordering of transaction lists in the reorderbuffer
@@ -1957,6 +2000,33 @@ ReorderBufferApplyMessage(ReorderBuffer *rb, ReorderBufferTXN *txn,
change->data.msg.message);
}
+/*
+ * Helper function for ReorderBufferProcessTXN for applying sequences.
+ */
+static inline void
+ReorderBufferApplySequence(ReorderBuffer *rb, ReorderBufferTXN *txn,
+ Relation relation, ReorderBufferChange *change,
+ bool streaming)
+{
+ int64 last_value, log_cnt;
+ bool is_called;
+ /*
+ * FIXME Is it possible that we write sequence state for T1 before T2, but
+ * then end up committing T2 first? If the sequence could be incremented
+ * in between, that might result in data corruption.
+ */
+ last_value = change->data.sequence.last;
+ log_cnt = change->data.sequence.log_cnt;
+ is_called = change->data.sequence.is_called;
+
+ if (streaming)
+ rb->stream_sequence(rb, txn, change->lsn, relation,
+ last_value, log_cnt, is_called);
+ else
+ rb->sequence(rb, txn, change->lsn, relation,
+ last_value, log_cnt, is_called);
+}
+
/*
* Function to store the command id and snapshot at the end of the current
* stream so that we can reuse the same while sending the next stream.
@@ -2399,6 +2469,33 @@ ReorderBufferProcessTXN(ReorderBuffer *rb, ReorderBufferTXN *txn,
case REORDER_BUFFER_CHANGE_INTERNAL_TUPLECID:
elog(ERROR, "tuplecid value in changequeue");
break;
+
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
+ Assert(snapshot_now);
+
+ /*
+ reloid = RelidByRelfilenode(change->data.sequence.relnode.spcNode,
+ change->data.sequence.relnode.relNode);
+
+ if (reloid == InvalidOid)
+ elog(ERROR, "zz could not map filenode \"%s\" to relation OID",
+ relpathperm(change->data.sequence.relnode,
+ MAIN_FORKNUM));
+
+ */
+ relation = RelationIdGetRelation(change->data.sequence.reloid);
+
+ if (!RelationIsValid(relation))
+ elog(ERROR, "could not open relation with OID %u (for filenode \"%s\")",
+ change->data.sequence.reloid,
+ relpathperm(change->data.sequence.relnode,
+ MAIN_FORKNUM));
+
+ if (RelationIsLogicallyLogged(relation))
+ ReorderBufferApplySequence(rb, txn, relation, change, streaming);
+
+ RelationClose(relation);
+ break;
}
}
@@ -3789,6 +3886,7 @@ ReorderBufferSerializeChange(ReorderBuffer *rb, ReorderBufferTXN *txn,
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT:
case REORDER_BUFFER_CHANGE_INTERNAL_COMMAND_ID:
case REORDER_BUFFER_CHANGE_INTERNAL_TUPLECID:
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
/* ReorderBufferChange contains everything important */
break;
}
@@ -4053,6 +4151,7 @@ ReorderBufferChangeSize(ReorderBufferChange *change)
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT:
case REORDER_BUFFER_CHANGE_INTERNAL_COMMAND_ID:
case REORDER_BUFFER_CHANGE_INTERNAL_TUPLECID:
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
/* ReorderBufferChange contains everything important */
break;
}
@@ -4352,6 +4451,7 @@ ReorderBufferRestoreChange(ReorderBuffer *rb, ReorderBufferTXN *txn,
case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT:
case REORDER_BUFFER_CHANGE_INTERNAL_COMMAND_ID:
case REORDER_BUFFER_CHANGE_INTERNAL_TUPLECID:
+ case REORDER_BUFFER_CHANGE_SEQUENCE:
break;
}
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 49ceec3bdc8..f767820cb6c 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -100,6 +100,7 @@
#include "catalog/pg_subscription_rel.h"
#include "catalog/pg_type.h"
#include "commands/copy.h"
+#include "commands/sequence.h"
#include "miscadmin.h"
#include "parser/parse_relation.h"
#include "pgstat.h"
@@ -1136,6 +1137,94 @@ copy_table(Relation rel)
logicalrep_rel_close(relmapentry, NoLock);
}
+/*
+ * Fetch sequence data (current state) from the remote node.
+ */
+static void
+fetch_sequence_data(char *nspname, char *relname,
+ int64 *last_value, int64 *log_cnt, bool *is_called)
+{
+ WalRcvExecResult *res;
+ StringInfoData cmd;
+ TupleTableSlot *slot;
+ Oid tableRow[3] = {INT8OID, INT8OID, BOOLOID};
+
+ initStringInfo(&cmd);
+ appendStringInfo(&cmd, "SELECT last_value, log_cnt, is_called\n"
+ " FROM %s", quote_qualified_identifier(nspname, relname));
+
+ res = walrcv_exec(LogRepWorkerWalRcvConn, cmd.data, 3, tableRow);
+ pfree(cmd.data);
+
+ if (res->status != WALRCV_OK_TUPLES)
+ ereport(ERROR,
+ (errmsg("could not receive list of replicated tables from the publisher: %s",
+ res->err)));
+
+ /* Process the sequence. */
+ slot = MakeSingleTupleTableSlot(res->tupledesc, &TTSOpsMinimalTuple);
+ while (tuplestore_gettupleslot(res->tuplestore, true, false, slot))
+ {
+ bool isnull;
+
+ *last_value = DatumGetInt64(slot_getattr(slot, 1, &isnull));
+ Assert(!isnull);
+
+ *log_cnt = DatumGetInt64(slot_getattr(slot, 2, &isnull));
+ Assert(!isnull);
+
+ *is_called = DatumGetBool(slot_getattr(slot, 3, &isnull));
+ Assert(!isnull);
+
+ ExecClearTuple(slot);
+ }
+ ExecDropSingleTupleTableSlot(slot);
+
+ walrcv_clear_result(res);
+}
+
+/*
+ * Copy existing data of a sequence from publisher.
+ *
+ * Caller is responsible for locking the local relation.
+ */
+static void
+copy_sequence(Relation rel)
+{
+ LogicalRepRelMapEntry *relmapentry;
+ LogicalRepRelation lrel;
+ List *qual = NIL;
+ int64 last_value = 0,
+ log_cnt = 0;
+ bool is_called = false;
+
+ /* Get the publisher relation info. */
+ fetch_remote_table_info(get_namespace_name(RelationGetNamespace(rel)),
+ RelationGetRelationName(rel), &lrel, &qual);
+
+ /* sequences don't have row filters */
+ Assert(!qual);
+
+ /* Put the relation into relmap. */
+ logicalrep_relmap_update(&lrel);
+
+ /* Map the publisher relation to local one. */
+ relmapentry = logicalrep_rel_open(lrel.remoteid, NoLock);
+ Assert(rel == relmapentry->localrel);
+
+ /* Read the current sequence state from the publisher. */
+
+ Assert(lrel.relkind == RELKIND_SEQUENCE);
+
+ fetch_sequence_data(lrel.nspname, lrel.relname, &last_value, &log_cnt, &is_called);
+
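+ /*
+ * Apply the state locally. SetSequence assigns a new relfilenode, so
+ * the copied state is applied transactionally and gets discarded if
+ * this transaction aborts.
+ */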
+ SetSequence(RelationGetRelid(rel), last_value, log_cnt, is_called);
+
+ logicalrep_rel_close(relmapentry, NoLock);
+}
+
/*
* Determine the tablesync slot name.
*
@@ -1397,10 +1486,21 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)
originname)));
}
- /* Now do the initial data copy */
- PushActiveSnapshot(GetTransactionSnapshot());
- copy_table(rel);
- PopActiveSnapshot();
+ /* Do the right action depending on the relation kind. */
+ if (get_rel_relkind(RelationGetRelid(rel)) == RELKIND_SEQUENCE)
+ {
+ /* Now do the initial sequence copy */
+ PushActiveSnapshot(GetTransactionSnapshot());
+ copy_sequence(rel);
+ PopActiveSnapshot();
+ }
+ else
+ {
+ /* Now do the initial data copy */
+ PushActiveSnapshot(GetTransactionSnapshot());
+ copy_table(rel);
+ PopActiveSnapshot();
+ }
res = walrcv_exec(LogRepWorkerWalRcvConn, "COMMIT", 0, NULL);
if (res->status != WALRCV_OK_COMMAND)
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 9181d3e8636..71fe9390f0d 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -143,6 +143,7 @@
#include "catalog/pg_subscription.h"
#include "catalog/pg_subscription_rel.h"
#include "catalog/pg_tablespace.h"
+#include "commands/sequence.h"
#include "commands/tablecmds.h"
#include "commands/tablespace.h"
#include "commands/trigger.h"
@@ -1143,6 +1144,47 @@ apply_handle_origin(StringInfo s)
errmsg_internal("ORIGIN message sent out of order")));
}
+/*
+ * Handle SEQUENCE message.
+ */
+static void
+apply_handle_sequence(StringInfo s)
+{
+ LogicalRepSequence seq;
+ Oid relid;
+
+ if (handle_streamed_transaction(LOGICAL_REP_MSG_SEQUENCE, s))
+ return;
+
+ logicalrep_read_sequence(s, &seq);
+
+ /*
+ * We should be in remote transaction.
+ */
+ Assert(in_remote_transaction);
+
+ /*
+ * Make sure we're in a transaction (needed by SetSequence). For
+ * non-transactional updates we're guaranteed to start a new one,
+ * and we'll commit it at the end.
+ */
+ if (!IsTransactionState())
+ {
+ StartTransactionCommand();
+ maybe_reread_subscription();
+ }
+
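+ /* Find the local sequence by its (schema-qualified) name. */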
+ relid = RangeVarGetRelid(makeRangeVar(seq.nspname,
+ seq.seqname, -1),
+ RowExclusiveLock, false);
+
+ /* lock the sequence in AccessExclusiveLock, as expected by SetSequence */
+ LockRelationOid(relid, AccessExclusiveLock);
+
+ /* apply the sequence change */
+ SetSequence(relid, seq.last_value, seq.log_cnt, seq.is_called);
+}
+
/*
* Handle STREAM START message.
*/
@@ -2511,6 +2553,10 @@ apply_dispatch(StringInfo s)
*/
break;
+ case LOGICAL_REP_MSG_SEQUENCE:
+ apply_handle_sequence(s);
+ return;
+
case LOGICAL_REP_MSG_STREAM_START:
apply_handle_stream_start(s);
break;
diff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c
index fe5accca576..31be8064ebe 100644
--- a/src/backend/replication/pgoutput/pgoutput.c
+++ b/src/backend/replication/pgoutput/pgoutput.c
@@ -15,6 +15,7 @@
#include "access/tupconvert.h"
#include "catalog/partition.h"
#include "catalog/pg_publication.h"
+#include "catalog/pg_publication_namespace.h"
#include "catalog/pg_publication_rel.h"
#include "commands/defrem.h"
#include "executor/executor.h"
@@ -54,6 +55,10 @@ static void pgoutput_message(LogicalDecodingContext *ctx,
ReorderBufferTXN *txn, XLogRecPtr message_lsn,
bool transactional, const char *prefix,
Size sz, const char *message);
+static void pgoutput_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation relation,
+ int64 last_value, int64 log_cnt, bool is_called);
static bool pgoutput_origin_filter(LogicalDecodingContext *ctx,
RepOriginId origin_id);
static void pgoutput_begin_prepare_txn(LogicalDecodingContext *ctx,
@@ -255,6 +260,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->change_cb = pgoutput_change;
cb->truncate_cb = pgoutput_truncate;
cb->message_cb = pgoutput_message;
+ cb->sequence_cb = pgoutput_sequence;
cb->commit_cb = pgoutput_commit_txn;
cb->begin_prepare_cb = pgoutput_begin_prepare_txn;
@@ -271,6 +277,7 @@ _PG_output_plugin_init(OutputPluginCallbacks *cb)
cb->stream_commit_cb = pgoutput_stream_commit;
cb->stream_change_cb = pgoutput_change;
cb->stream_message_cb = pgoutput_message;
+ cb->stream_sequence_cb = pgoutput_sequence;
cb->stream_truncate_cb = pgoutput_truncate;
/* transaction streaming - two-phase commit */
cb->stream_prepare_cb = pgoutput_stream_prepare_txn;
@@ -284,6 +291,7 @@ parse_output_parameters(List *options, PGOutputData *data)
bool publication_names_given = false;
bool binary_option_given = false;
bool messages_option_given = false;
+ bool sequences_option_given = false;
bool streaming_given = false;
bool two_phase_option_given = false;
@@ -291,6 +299,7 @@ parse_output_parameters(List *options, PGOutputData *data)
data->streaming = false;
data->messages = false;
data->two_phase = false;
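+ /* publish sequence changes by default (can be disabled by the "sequences" option below) */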
+ data->sequences = true;
foreach(lc, options)
{
@@ -359,6 +368,16 @@ parse_output_parameters(List *options, PGOutputData *data)
data->messages = defGetBoolean(defel);
}
+ else if (strcmp(defel->defname, "sequences") == 0)
+ {
+ if (sequences_option_given)
+ ereport(ERROR,
+ (errcode(ERRCODE_SYNTAX_ERROR),
+ errmsg("conflicting or redundant options")));
+ sequences_option_given = true;
+
+ data->sequences = defGetBoolean(defel);
+ }
else if (strcmp(defel->defname, "streaming") == 0)
{
if (streaming_given)
@@ -1690,6 +1709,55 @@ pgoutput_message(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
OutputPluginWrite(ctx, true);
}
+static void
+pgoutput_sequence(LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn, XLogRecPtr sequence_lsn,
+ Relation relation,
+ int64 last_value, int64 log_cnt, bool is_called)
+{
+ PGOutputData *data = (PGOutputData *) ctx->output_plugin_private;
+ TransactionId xid = InvalidTransactionId;
+ RelationSyncEntry *relentry;
+ PGOutputTxnData *txndata = (PGOutputTxnData *) txn->output_plugin_private;
+
+ if (!data->sequences)
+ return;
+
+ if (!is_publishable_relation(relation))
+ return;
+
+ /*
+ * Remember the xid for the sequence change in streaming mode. See
+ * pgoutput_change.
+ */
+ if (in_streaming)
+ xid = txn->xid;
+
+ relentry = get_rel_sync_entry(data, relation);
+
+ /*
+ * First check the sequence filter.
+ *
+ * We handle just REORDER_BUFFER_CHANGE_SEQUENCE here.
+ */
+ if (!relentry->pubactions.pubsequence)
+ return;
+
+ /* Send BEGIN if we haven't yet */
+ if (txndata && !txndata->sent_begin_txn)
+ pgoutput_send_begin(ctx, txn);
+
+ OutputPluginPrepareWrite(ctx, true);
+ logicalrep_write_sequence(ctx->out,
+ relation,
+ xid,
+ sequence_lsn,
+ last_value,
+ log_cnt,
+ is_called);
+ OutputPluginWrite(ctx, true);
+}
+
/*
* Currently we always forward.
*/
@@ -1975,7 +2043,8 @@ get_rel_sync_entry(PGOutputData *data, Relation relation)
entry->schema_sent = false;
entry->streamed_txns = NIL;
entry->pubactions.pubinsert = entry->pubactions.pubupdate =
- entry->pubactions.pubdelete = entry->pubactions.pubtruncate = false;
+ entry->pubactions.pubdelete = entry->pubactions.pubtruncate =
+ entry->pubactions.pubsequence = false;
entry->new_slot = NULL;
entry->old_slot = NULL;
memset(entry->exprstate, 0, sizeof(entry->exprstate));
@@ -1990,18 +2059,18 @@ get_rel_sync_entry(PGOutputData *data, Relation relation)
{
Oid schemaId = get_rel_namespace(relid);
List *pubids = GetRelationPublications(relid);
-
+ char relkind = get_rel_relkind(relid);
+ char objectType = pub_get_object_type_for_relkind(relkind);
/*
* We don't acquire a lock on the namespace system table as we build
* the cache entry using a historic snapshot and all the later changes
* are absorbed while decoding WAL.
*/
- List *schemaPubids = GetSchemaPublications(schemaId);
+ List *schemaPubids = GetSchemaPublications(schemaId, objectType);
ListCell *lc;
Oid publish_as_relid = relid;
int publish_ancestor_level = 0;
bool am_partition = get_rel_relispartition(relid);
- char relkind = get_rel_relkind(relid);
List *rel_publications = NIL;
/* Reload publications if needed before use. */
@@ -2033,6 +2102,7 @@ get_rel_sync_entry(PGOutputData *data, Relation relation)
entry->pubactions.pubupdate = false;
entry->pubactions.pubdelete = false;
entry->pubactions.pubtruncate = false;
+ entry->pubactions.pubsequence = false;
/*
* Tuple slots cleanups. (Will be rebuilt later if needed).
@@ -2080,9 +2150,11 @@ get_rel_sync_entry(PGOutputData *data, Relation relation)
/*
* If this is a FOR ALL TABLES publication, pick the partition root
- * and set the ancestor level accordingly.
+ * and set the ancestor level accordingly. If this is a FOR ALL
+ * SEQUENCES publication, we publish it too but we don't need to
+ * pick the partition root etc.
*/
- if (pub->alltables)
+ if (pub->alltables || pub->allsequences)
{
publish = true;
if (pub->pubviaroot && am_partition)
@@ -2146,6 +2218,7 @@ get_rel_sync_entry(PGOutputData *data, Relation relation)
entry->pubactions.pubupdate |= pub->pubactions.pubupdate;
entry->pubactions.pubdelete |= pub->pubactions.pubdelete;
entry->pubactions.pubtruncate |= pub->pubactions.pubtruncate;
+ entry->pubactions.pubsequence |= pub->pubactions.pubsequence;
/*
* We want to publish the changes as the top-most ancestor
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index 43f14c233d6..1f29670a131 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -56,6 +56,7 @@
#include "catalog/pg_opclass.h"
#include "catalog/pg_proc.h"
#include "catalog/pg_publication.h"
+#include "catalog/pg_publication_namespace.h"
#include "catalog/pg_rewrite.h"
#include "catalog/pg_shseclabel.h"
#include "catalog/pg_statistic_ext.h"
@@ -5567,6 +5568,8 @@ RelationBuildPublicationDesc(Relation relation, PublicationDesc *pubdesc)
Oid schemaid;
List *ancestors = NIL;
Oid relid = RelationGetRelid(relation);
+ char relkind = relation->rd_rel->relkind;
+ char objType;
/*
* If not publishable, it publishes no actions. (pgoutput_change() will
@@ -5597,8 +5600,15 @@ RelationBuildPublicationDesc(Relation relation, PublicationDesc *pubdesc)
/* Fetch the publication membership info. */
puboids = GetRelationPublications(relid);
schemaid = RelationGetNamespace(relation);
- puboids = list_concat_unique_oid(puboids, GetSchemaPublications(schemaid));
+ objType = pub_get_object_type_for_relkind(relkind);
+ puboids = list_concat_unique_oid(puboids,
+ GetSchemaPublications(schemaid, objType));
+
+ /*
+ * If this is a partition (and thus a table), look up all its ancestors and
+ * track their publications too.
+ */
if (relation->rd_rel->relispartition)
{
/* Add publications that the ancestors are in too. */
@@ -5610,12 +5620,23 @@ RelationBuildPublicationDesc(Relation relation, PublicationDesc *pubdesc)
puboids = list_concat_unique_oid(puboids,
GetRelationPublications(ancestor));
+
+ /* include all publications publishing schema of all ancestors */
schemaid = get_rel_namespace(ancestor);
puboids = list_concat_unique_oid(puboids,
- GetSchemaPublications(schemaid));
+ GetSchemaPublications(schemaid,
+ PUB_OBJTYPE_TABLE));
}
}
- puboids = list_concat_unique_oid(puboids, GetAllTablesPublications());
+
+ /*
+ * Consider also FOR ALL TABLES and FOR ALL SEQUENCES publications,
+ * depending on the relkind of the relation.
+ */
+ if (relation->rd_rel->relkind == RELKIND_SEQUENCE)
+ puboids = list_concat_unique_oid(puboids, GetAllSequencesPublications());
+ else
+ puboids = list_concat_unique_oid(puboids, GetAllTablesPublications());
foreach(lc, puboids)
{
@@ -5634,6 +5655,7 @@ RelationBuildPublicationDesc(Relation relation, PublicationDesc *pubdesc)
pubdesc->pubactions.pubupdate |= pubform->pubupdate;
pubdesc->pubactions.pubdelete |= pubform->pubdelete;
pubdesc->pubactions.pubtruncate |= pubform->pubtruncate;
+ pubdesc->pubactions.pubsequence |= pubform->pubsequence;
/*
* Check if all columns referenced in the filter expression are part of
diff --git a/src/backend/utils/cache/syscache.c b/src/backend/utils/cache/syscache.c
index 1912b121463..8d265f2d23c 100644
--- a/src/backend/utils/cache/syscache.c
+++ b/src/backend/utils/cache/syscache.c
@@ -653,12 +653,12 @@ static const struct cachedesc cacheinfo[] = {
64
},
{PublicationNamespaceRelationId, /* PUBLICATIONNAMESPACEMAP */
- PublicationNamespacePnnspidPnpubidIndexId,
- 2,
+ PublicationNamespacePnnspidPnpubidPntypeIndexId,
+ 3,
{
Anum_pg_publication_namespace_pnnspid,
Anum_pg_publication_namespace_pnpubid,
- 0,
+ Anum_pg_publication_namespace_pntype,
0
},
64
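
To illustrate why the cache key grows from two to three columns: with pntype
in the key, the same schema can be a member of a publication once for tables
and once for sequences, so lookups have to pass all three keys. A minimal
sketch of such a membership check (the function name and call site are mine,
not taken from the patch):

#include "postgres.h"
#include "catalog/pg_publication_namespace.h"
#include "utils/syscache.h"

/*
 * Illustrative only: check whether a schema is already part of a
 * publication for the given object type (tables vs. sequences).
 */
static bool
schema_already_in_publication(Oid schemaid, Oid pubid, char objectType)
{
	return SearchSysCacheExists3(PUBLICATIONNAMESPACEMAP,
								 ObjectIdGetDatum(schemaid),
								 ObjectIdGetDatum(pubid),
								 CharGetDatum(objectType));
}
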
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 196f6d23a3e..3c2201a725f 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -3814,10 +3814,12 @@ getPublications(Archive *fout, int *numPublications)
int i_pubname;
int i_pubowner;
int i_puballtables;
+ int i_puballsequences;
int i_pubinsert;
int i_pubupdate;
int i_pubdelete;
int i_pubtruncate;
+ int i_pubsequence;
int i_pubviaroot;
int i,
ntups;
@@ -3833,23 +3835,29 @@ getPublications(Archive *fout, int *numPublications)
resetPQExpBuffer(query);
/* Get the publications. */
- if (fout->remoteVersion >= 130000)
+ if (fout->remoteVersion >= 150000)
+ appendPQExpBuffer(query,
+ "SELECT p.tableoid, p.oid, p.pubname, "
+ "p.pubowner, "
+ "p.puballtables, p.puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubsequence, p.pubviaroot "
+ "FROM pg_publication p");
+ else if (fout->remoteVersion >= 130000)
appendPQExpBuffer(query,
"SELECT p.tableoid, p.oid, p.pubname, "
"p.pubowner, "
- "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
+ "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubsequence, p.pubviaroot "
"FROM pg_publication p");
else if (fout->remoteVersion >= 110000)
appendPQExpBuffer(query,
"SELECT p.tableoid, p.oid, p.pubname, "
"p.pubowner, "
- "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
+ "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubsequence, false AS pubviaroot "
"FROM pg_publication p");
else
appendPQExpBuffer(query,
"SELECT p.tableoid, p.oid, p.pubname, "
"p.pubowner, "
- "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
+ "p.puballtables, false AS puballsequences, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubsequence, false AS pubviaroot "
"FROM pg_publication p");
res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
@@ -3861,10 +3869,12 @@ getPublications(Archive *fout, int *numPublications)
i_pubname = PQfnumber(res, "pubname");
i_pubowner = PQfnumber(res, "pubowner");
i_puballtables = PQfnumber(res, "puballtables");
+ i_puballsequences = PQfnumber(res, "puballsequences");
i_pubinsert = PQfnumber(res, "pubinsert");
i_pubupdate = PQfnumber(res, "pubupdate");
i_pubdelete = PQfnumber(res, "pubdelete");
i_pubtruncate = PQfnumber(res, "pubtruncate");
+ i_pubsequence = PQfnumber(res, "pubsequence");
i_pubviaroot = PQfnumber(res, "pubviaroot");
pubinfo = pg_malloc(ntups * sizeof(PublicationInfo));
@@ -3880,6 +3890,8 @@ getPublications(Archive *fout, int *numPublications)
pubinfo[i].rolname = getRoleName(PQgetvalue(res, i, i_pubowner));
pubinfo[i].puballtables =
(strcmp(PQgetvalue(res, i, i_puballtables), "t") == 0);
+ pubinfo[i].puballsequences =
+ (strcmp(PQgetvalue(res, i, i_puballsequences), "t") == 0);
pubinfo[i].pubinsert =
(strcmp(PQgetvalue(res, i, i_pubinsert), "t") == 0);
pubinfo[i].pubupdate =
@@ -3888,6 +3900,8 @@ getPublications(Archive *fout, int *numPublications)
(strcmp(PQgetvalue(res, i, i_pubdelete), "t") == 0);
pubinfo[i].pubtruncate =
(strcmp(PQgetvalue(res, i, i_pubtruncate), "t") == 0);
+ pubinfo[i].pubsequence =
+ (strcmp(PQgetvalue(res, i, i_pubsequence), "t") == 0);
pubinfo[i].pubviaroot =
(strcmp(PQgetvalue(res, i, i_pubviaroot), "t") == 0);
@@ -3933,6 +3947,9 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
if (pubinfo->puballtables)
appendPQExpBufferStr(query, " FOR ALL TABLES");
+ if (pubinfo->puballsequences)
+ appendPQExpBufferStr(query, " FOR ALL SEQUENCES");
+
appendPQExpBufferStr(query, " WITH (publish = '");
if (pubinfo->pubinsert)
{
@@ -3967,6 +3984,15 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
first = false;
}
+ if (pubinfo->pubsequence)
+ {
+ if (!first)
+ appendPQExpBufferStr(query, ", ");
+
+ appendPQExpBufferStr(query, "sequence");
+ first = false;
+ }
+
appendPQExpBufferStr(query, "'");
if (pubinfo->pubviaroot)
@@ -4013,6 +4039,7 @@ getPublicationNamespaces(Archive *fout)
int i_oid;
int i_pnpubid;
int i_pnnspid;
+ int i_pntype;
int i,
j,
ntups;
@@ -4024,7 +4051,7 @@ getPublicationNamespaces(Archive *fout)
/* Collect all publication membership info. */
appendPQExpBufferStr(query,
- "SELECT tableoid, oid, pnpubid, pnnspid "
+ "SELECT tableoid, oid, pnpubid, pnnspid, pntype "
"FROM pg_catalog.pg_publication_namespace");
res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
@@ -4034,6 +4061,7 @@ getPublicationNamespaces(Archive *fout)
i_oid = PQfnumber(res, "oid");
i_pnpubid = PQfnumber(res, "pnpubid");
i_pnnspid = PQfnumber(res, "pnnspid");
+ i_pntype = PQfnumber(res, "pntype");
/* this allocation may be more than we need */
pubsinfo = pg_malloc(ntups * sizeof(PublicationSchemaInfo));
@@ -4043,6 +4071,7 @@ getPublicationNamespaces(Archive *fout)
{
Oid pnpubid = atooid(PQgetvalue(res, i, i_pnpubid));
Oid pnnspid = atooid(PQgetvalue(res, i, i_pnnspid));
+ char pntype = PQgetvalue(res, i, i_pntype)[0];
PublicationInfo *pubinfo;
NamespaceInfo *nspinfo;
@@ -4074,6 +4103,7 @@ getPublicationNamespaces(Archive *fout)
pubsinfo[j].dobj.name = nspinfo->dobj.name;
pubsinfo[j].publication = pubinfo;
pubsinfo[j].pubschema = nspinfo;
+ pubsinfo[j].pubtype = pntype;
/* Decide whether we want to dump it */
selectDumpablePublicationObject(&(pubsinfo[j].dobj), fout);
@@ -4239,7 +4269,11 @@ dumpPublicationNamespace(Archive *fout, const PublicationSchemaInfo *pubsinfo)
query = createPQExpBuffer();
appendPQExpBuffer(query, "ALTER PUBLICATION %s ", fmtId(pubinfo->dobj.name));
- appendPQExpBuffer(query, "ADD ALL TABLES IN SCHEMA %s;\n", fmtId(schemainfo->dobj.name));
+
+ if (pubsinfo->pubtype == 't')
+ appendPQExpBuffer(query, "ADD ALL TABLES IN SCHEMA %s;\n", fmtId(schemainfo->dobj.name));
+ else
+ appendPQExpBuffer(query, "ADD ALL SEQUENCES IN SCHEMA %s;\n", fmtId(schemainfo->dobj.name));
/*
* There is no point in creating drop query as the drop is done by schema
@@ -4272,6 +4306,7 @@ dumpPublicationTable(Archive *fout, const PublicationRelInfo *pubrinfo)
TableInfo *tbinfo = pubrinfo->pubtable;
PQExpBuffer query;
char *tag;
+ char *description;
/* Do nothing in data-only dump */
if (dopt->dataOnly)
@@ -4281,8 +4316,19 @@ dumpPublicationTable(Archive *fout, const PublicationRelInfo *pubrinfo)
query = createPQExpBuffer();
- appendPQExpBuffer(query, "ALTER PUBLICATION %s ADD TABLE ONLY",
- fmtId(pubinfo->dobj.name));
+ if (tbinfo->relkind == RELKIND_SEQUENCE)
+ {
+ appendPQExpBuffer(query, "ALTER PUBLICATION %s ADD SEQUENCE",
+ fmtId(pubinfo->dobj.name));
+ description = "PUBLICATION SEQUENCE";
+ }
+ else
+ {
+ appendPQExpBuffer(query, "ALTER PUBLICATION %s ADD TABLE ONLY",
+ fmtId(pubinfo->dobj.name));
+ description = "PUBLICATION TABLE";
+ }
+
appendPQExpBuffer(query, " %s",
fmtQualifiedDumpable(tbinfo));
@@ -4311,7 +4357,7 @@ dumpPublicationTable(Archive *fout, const PublicationRelInfo *pubrinfo)
ARCHIVE_OPTS(.tag = tag,
.namespace = tbinfo->dobj.namespace->dobj.name,
.owner = pubinfo->rolname,
- .description = "PUBLICATION TABLE",
+ .description = description,
.section = SECTION_POST_DATA,
.createStmt = query->data));
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 1d21c2906f1..688093c55e0 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -615,10 +615,12 @@ typedef struct _PublicationInfo
DumpableObject dobj;
const char *rolname;
bool puballtables;
+ bool puballsequences;
bool pubinsert;
bool pubupdate;
bool pubdelete;
bool pubtruncate;
+ bool pubsequence;
bool pubviaroot;
} PublicationInfo;
@@ -644,6 +646,7 @@ typedef struct _PublicationSchemaInfo
DumpableObject dobj;
PublicationInfo *publication;
NamespaceInfo *pubschema;
+ char pubtype;
} PublicationSchemaInfo;
/*
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index c65c92bfb0e..75b754a4202 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -2420,7 +2420,7 @@ my %tests = (
create_order => 50,
create_sql => 'CREATE PUBLICATION pub1;',
regexp => qr/^
- \QCREATE PUBLICATION pub1 WITH (publish = 'insert, update, delete, truncate');\E
+ \QCREATE PUBLICATION pub1 WITH (publish = 'insert, update, delete, truncate, sequence');\E
/xm,
like => { %full_runs, section_post_data => 1, },
},
@@ -2440,16 +2440,27 @@ my %tests = (
create_order => 50,
create_sql => 'CREATE PUBLICATION pub3;',
regexp => qr/^
- \QCREATE PUBLICATION pub3 WITH (publish = 'insert, update, delete, truncate');\E
+ \QCREATE PUBLICATION pub3 WITH (publish = 'insert, update, delete, truncate, sequence');\E
/xm,
like => { %full_runs, section_post_data => 1, },
},
'CREATE PUBLICATION pub4' => {
create_order => 50,
- create_sql => 'CREATE PUBLICATION pub4;',
+ create_sql => 'CREATE PUBLICATION pub4
+ FOR ALL SEQUENCES
+ WITH (publish = \'\');',
regexp => qr/^
- \QCREATE PUBLICATION pub4 WITH (publish = 'insert, update, delete, truncate');\E
+ \QCREATE PUBLICATION pub4 FOR ALL SEQUENCES WITH (publish = '');\E
+ /xm,
+ like => { %full_runs, section_post_data => 1, },
+ },
+
+ 'CREATE PUBLICATION pub5' => {
+ create_order => 50,
+ create_sql => 'CREATE PUBLICATION pub5;',
+ regexp => qr/^
+ \QCREATE PUBLICATION pub5 WITH (publish = 'insert, update, delete, truncate, sequence');\E
/xm,
like => { %full_runs, section_post_data => 1, },
},
@@ -2558,6 +2569,27 @@ my %tests = (
unlike => { exclude_dump_test_schema => 1, },
},
+ 'ALTER PUBLICATION pub3 ADD ALL SEQUENCES IN SCHEMA dump_test' => {
+ create_order => 51,
+ create_sql =>
+ 'ALTER PUBLICATION pub3 ADD ALL SEQUENCES IN SCHEMA dump_test;',
+ regexp => qr/^
+ \QALTER PUBLICATION pub3 ADD ALL SEQUENCES IN SCHEMA dump_test;\E
+ /xm,
+ like => { %full_runs, section_post_data => 1, },
+ unlike => { exclude_dump_test_schema => 1, },
+ },
+
+ 'ALTER PUBLICATION pub3 ADD ALL SEQUENCES IN SCHEMA public' => {
+ create_order => 52,
+ create_sql =>
+ 'ALTER PUBLICATION pub3 ADD ALL SEQUENCES IN SCHEMA public;',
+ regexp => qr/^
+ \QALTER PUBLICATION pub3 ADD ALL SEQUENCES IN SCHEMA public;\E
+ /xm,
+ like => { %full_runs, section_post_data => 1, },
+ },
+
'CREATE SCHEMA public' => {
regexp => qr/^CREATE SCHEMA public;/m,
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index 3f1b3802c22..73bbbe2eb40 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1633,28 +1633,19 @@ describeOneTableDetails(const char *schemaname,
if (tableinfo.relkind == RELKIND_SEQUENCE)
{
PGresult *result = NULL;
- printQueryOpt myopt = pset.popt;
- char *footers[2] = {NULL, NULL};
if (pset.sversion >= 100000)
{
printfPQExpBuffer(&buf,
- "SELECT pg_catalog.format_type(seqtypid, NULL) AS \"%s\",\n"
- " seqstart AS \"%s\",\n"
- " seqmin AS \"%s\",\n"
- " seqmax AS \"%s\",\n"
- " seqincrement AS \"%s\",\n"
- " CASE WHEN seqcycle THEN '%s' ELSE '%s' END AS \"%s\",\n"
- " seqcache AS \"%s\"\n",
- gettext_noop("Type"),
- gettext_noop("Start"),
- gettext_noop("Minimum"),
- gettext_noop("Maximum"),
- gettext_noop("Increment"),
+ "SELECT pg_catalog.format_type(seqtypid, NULL),\n"
+ " seqstart,\n"
+ " seqmin,\n"
+ " seqmax,\n"
+ " seqincrement,\n"
+ " CASE WHEN seqcycle THEN '%s' ELSE '%s' END,\n"
+ " seqcache\n",
gettext_noop("yes"),
- gettext_noop("no"),
- gettext_noop("Cycles?"),
- gettext_noop("Cache"));
+ gettext_noop("no"));
appendPQExpBuffer(&buf,
"FROM pg_catalog.pg_sequence\n"
"WHERE seqrelid = '%s';",
@@ -1663,22 +1654,15 @@ describeOneTableDetails(const char *schemaname,
else
{
printfPQExpBuffer(&buf,
- "SELECT 'bigint' AS \"%s\",\n"
- " start_value AS \"%s\",\n"
- " min_value AS \"%s\",\n"
- " max_value AS \"%s\",\n"
- " increment_by AS \"%s\",\n"
- " CASE WHEN is_cycled THEN '%s' ELSE '%s' END AS \"%s\",\n"
- " cache_value AS \"%s\"\n",
- gettext_noop("Type"),
- gettext_noop("Start"),
- gettext_noop("Minimum"),
- gettext_noop("Maximum"),
- gettext_noop("Increment"),
+ "SELECT 'bigint',\n"
+ " start_value,\n"
+ " min_value,\n"
+ " max_value,\n"
+ " increment_by,\n"
+ " CASE WHEN is_cycled THEN '%s' ELSE '%s' END,\n"
+ " cache_value\n",
gettext_noop("yes"),
- gettext_noop("no"),
- gettext_noop("Cycles?"),
- gettext_noop("Cache"));
+ gettext_noop("no"));
appendPQExpBuffer(&buf, "FROM %s", fmtId(schemaname));
/* must be separate because fmtId isn't reentrant */
appendPQExpBuffer(&buf, ".%s;", fmtId(relationname));
@@ -1688,6 +1672,57 @@ describeOneTableDetails(const char *schemaname,
if (!res)
goto error_return;
+ numrows = PQntuples(res);
+
+ /*
+ * XXX reset to use expanded output for sequences (maybe we should keep
+ * this disabled, just like for tables?)
+ */
+ myopt.expanded = pset.popt.topt.expanded;
+
+ if (tableinfo.relpersistence == 'u')
+ printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
+ schemaname, relationname);
+ else
+ printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
+ schemaname, relationname);
+
+ /* format the title before printTableInit stores the pointer to it */
+ printTableInit(&cont, &myopt, title.data, 7, numrows);
+ printTableInitialized = true;
+
+ printTableAddHeader(&cont, gettext_noop("Type"), true, 'l');
+ printTableAddHeader(&cont, gettext_noop("Start"), true, 'r');
+ printTableAddHeader(&cont, gettext_noop("Minimum"), true, 'r');
+ printTableAddHeader(&cont, gettext_noop("Maximum"), true, 'r');
+ printTableAddHeader(&cont, gettext_noop("Increment"), true, 'r');
+ printTableAddHeader(&cont, gettext_noop("Cycles?"), true, 'l');
+ printTableAddHeader(&cont, gettext_noop("Cache"), true, 'r');
+
+ /* Generate table cells to be printed */
+ for (i = 0; i < numrows; i++)
+ {
+ /* Type */
+ printTableAddCell(&cont, PQgetvalue(res, i, 0), false, false);
+
+ /* Start */
+ printTableAddCell(&cont, PQgetvalue(res, i, 1), false, false);
+
+ /* Minimum */
+ printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
+
+ /* Maximum */
+ printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
+
+ /* Increment */
+ printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
+
+ /* Cycles? */
+ printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
+
+ /* Cache */
+ printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+ }
+
+ /* Footer information about a sequence */
+
/* Get the column that owns this sequence */
printfPQExpBuffer(&buf, "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
"\n pg_catalog.quote_ident(relname) || '.' ||"
@@ -1719,33 +1754,63 @@ describeOneTableDetails(const char *schemaname,
switch (PQgetvalue(result, 0, 1)[0])
{
case 'a':
- footers[0] = psprintf(_("Owned by: %s"),
- PQgetvalue(result, 0, 0));
+ printTableAddFooter(&cont,
+ psprintf(_("Owned by: %s"),
+ PQgetvalue(result, 0, 0)));
break;
case 'i':
- footers[0] = psprintf(_("Sequence for identity column: %s"),
- PQgetvalue(result, 0, 0));
+ printTableAddFooter(&cont,
+ psprintf(_("Sequence for identity column: %s"),
+ PQgetvalue(result, 0, 0)));
break;
}
}
PQclear(result);
- if (tableinfo.relpersistence == 'u')
- printfPQExpBuffer(&title, _("Unlogged sequence \"%s.%s\""),
- schemaname, relationname);
- else
- printfPQExpBuffer(&title, _("Sequence \"%s.%s\""),
- schemaname, relationname);
+ /* print any publications */
+ if (pset.sversion >= 150000)
+ {
+ int tuples = 0;
- myopt.footers = footers;
- myopt.topt.default_footer = false;
- myopt.title = title.data;
- myopt.translate_header = true;
+ printfPQExpBuffer(&buf,
+ "SELECT pubname\n"
+ "FROM pg_catalog.pg_publication p\n"
+ " JOIN pg_catalog.pg_publication_namespace pn ON p.oid = pn.pnpubid\n"
+ " JOIN pg_catalog.pg_class pc ON pc.relnamespace = pn.pnnspid\n"
+ "WHERE pc.oid ='%s' and pn.pntype = 's' and pg_catalog.pg_relation_is_publishable('%s')\n"
+ "UNION\n"
+ "SELECT pubname\n"
+ "FROM pg_catalog.pg_publication p\n"
+ " JOIN pg_catalog.pg_publication_rel pr ON p.oid = pr.prpubid\n"
+ "WHERE pr.prrelid = '%s'\n"
+ "UNION\n"
+ "SELECT pubname\n"
+ "FROM pg_catalog.pg_publication p\n"
+ "WHERE p.puballsequences AND pg_catalog.pg_relation_is_publishable('%s')\n"
+ "ORDER BY 1;",
+ oid, oid, oid, oid);
- printQuery(res, &myopt, pset.queryFout, false, pset.logfile);
+ result = PSQLexec(buf.data);
+ if (!result)
+ goto error_return;
+ else
+ tuples = PQntuples(result);
+
+ if (tuples > 0)
+ printTableAddFooter(&cont, _("Publications:"));
+
+ /* Might be an empty set - that's ok */
+ for (i = 0; i < tuples; i++)
+ {
+ printfPQExpBuffer(&buf, " \"%s\"",
+ PQgetvalue(result, i, 0));
+
+ printTableAddFooter(&cont, buf.data);
+ }
+ PQclear(result);
+ }
- if (footers[0])
- free(footers[0]);
+ printTable(&cont, pset.queryFout, false, pset.logfile);
retval = true;
goto error_return; /* not an error, just return early */
@@ -1972,6 +2037,11 @@ describeOneTableDetails(const char *schemaname,
for (i = 0; i < cols; i++)
printTableAddHeader(&cont, headers[i], true, 'l');
+ res = PSQLexec(buf.data);
+ if (!res)
+ goto error_return;
+ numrows = PQntuples(res);
+
/* Generate table cells to be printed */
for (i = 0; i < numrows; i++)
{
@@ -2898,7 +2968,7 @@ describeOneTableDetails(const char *schemaname,
"FROM pg_catalog.pg_publication p\n"
" JOIN pg_catalog.pg_publication_namespace pn ON p.oid = pn.pnpubid\n"
" JOIN pg_catalog.pg_class pc ON pc.relnamespace = pn.pnnspid\n"
- "WHERE pc.oid ='%s' and pg_catalog.pg_relation_is_publishable('%s')\n"
+ "WHERE pc.oid ='%s' and pn.pntype = 't' and pg_catalog.pg_relation_is_publishable('%s')\n"
"UNION\n"
"SELECT pubname\n"
" , pg_get_expr(pr.prqual, c.oid)\n"
@@ -4802,7 +4872,7 @@ listSchemas(const char *pattern, bool verbose, bool showSystem)
int i;
printfPQExpBuffer(&buf,
- "SELECT pubname \n"
+ "SELECT pubname, (CASE WHEN pntype = 't' THEN 'tables' ELSE 'sequences' END) AS pubtype\n"
"FROM pg_catalog.pg_publication p\n"
" JOIN pg_catalog.pg_publication_namespace pn ON p.oid = pn.pnpubid\n"
" JOIN pg_catalog.pg_namespace n ON n.oid = pn.pnnspid \n"
@@ -4831,8 +4901,9 @@ listSchemas(const char *pattern, bool verbose, bool showSystem)
/* Might be an empty set - that's ok */
for (i = 0; i < pub_schema_tuples; i++)
{
- printfPQExpBuffer(&buf, " \"%s\"",
- PQgetvalue(result, i, 0));
+ printfPQExpBuffer(&buf, " \"%s\" (%s)",
+ PQgetvalue(result, i, 0),
+ PQgetvalue(result, i, 1));
footers[i + 1] = pg_strdup(buf.data);
}
@@ -5837,7 +5908,7 @@ listPublications(const char *pattern)
PQExpBufferData buf;
PGresult *res;
printQueryOpt myopt = pset.popt;
- static const bool translate_columns[] = {false, false, false, false, false, false, false, false};
+ static const bool translate_columns[] = {false, false, false, false, false, false, false, false, false, false};
if (pset.sversion < 100000)
{
@@ -5851,23 +5922,45 @@ listPublications(const char *pattern)
initPQExpBuffer(&buf);
- printfPQExpBuffer(&buf,
- "SELECT pubname AS \"%s\",\n"
- " pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
- " puballtables AS \"%s\",\n"
- " pubinsert AS \"%s\",\n"
- " pubupdate AS \"%s\",\n"
- " pubdelete AS \"%s\"",
- gettext_noop("Name"),
- gettext_noop("Owner"),
- gettext_noop("All tables"),
- gettext_noop("Inserts"),
- gettext_noop("Updates"),
- gettext_noop("Deletes"));
+ if (pset.sversion >= 150000)
+ printfPQExpBuffer(&buf,
+ "SELECT pubname AS \"%s\",\n"
+ " pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
+ " puballtables AS \"%s\",\n"
+ " puballsequences AS \"%s\",\n"
+ " pubinsert AS \"%s\",\n"
+ " pubupdate AS \"%s\",\n"
+ " pubdelete AS \"%s\"",
+ gettext_noop("Name"),
+ gettext_noop("Owner"),
+ gettext_noop("All tables"),
+ gettext_noop("All sequences"),
+ gettext_noop("Inserts"),
+ gettext_noop("Updates"),
+ gettext_noop("Deletes"));
+ else
+ printfPQExpBuffer(&buf,
+ "SELECT pubname AS \"%s\",\n"
+ " pg_catalog.pg_get_userbyid(pubowner) AS \"%s\",\n"
+ " puballtables AS \"%s\",\n"
+ " pubinsert AS \"%s\",\n"
+ " pubupdate AS \"%s\",\n"
+ " pubdelete AS \"%s\"",
+ gettext_noop("Name"),
+ gettext_noop("Owner"),
+ gettext_noop("All tables"),
+ gettext_noop("Inserts"),
+ gettext_noop("Updates"),
+ gettext_noop("Deletes"));
+
if (pset.sversion >= 110000)
appendPQExpBuffer(&buf,
",\n pubtruncate AS \"%s\"",
gettext_noop("Truncates"));
+ if (pset.sversion >= 150000)
+ appendPQExpBuffer(&buf,
+ ",\n pubsequence AS \"%s\"",
+ gettext_noop("Sequences"));
if (pset.sversion >= 130000)
appendPQExpBuffer(&buf,
",\n pubviaroot AS \"%s\"",
@@ -5957,6 +6050,7 @@ describePublications(const char *pattern)
PGresult *res;
bool has_pubtruncate;
bool has_pubviaroot;
+ bool has_pubsequence;
PQExpBufferData title;
printTableContent cont;
@@ -5973,6 +6067,7 @@ describePublications(const char *pattern)
has_pubtruncate = (pset.sversion >= 110000);
has_pubviaroot = (pset.sversion >= 130000);
+ has_pubsequence = (pset.sversion >= 150000);
initPQExpBuffer(&buf);
@@ -5980,12 +6075,17 @@ describePublications(const char *pattern)
"SELECT oid, pubname,\n"
" pg_catalog.pg_get_userbyid(pubowner) AS owner,\n"
" puballtables, pubinsert, pubupdate, pubdelete");
+
if (has_pubtruncate)
appendPQExpBufferStr(&buf,
", pubtruncate");
if (has_pubviaroot)
appendPQExpBufferStr(&buf,
", pubviaroot");
+ if (has_pubsequence)
+ appendPQExpBufferStr(&buf,
+ ", puballsequences, pubsequence");
+
appendPQExpBufferStr(&buf,
"\nFROM pg_catalog.pg_publication\n");
@@ -6026,6 +6126,7 @@ describePublications(const char *pattern)
char *pubid = PQgetvalue(res, i, 0);
char *pubname = PQgetvalue(res, i, 1);
bool puballtables = strcmp(PQgetvalue(res, i, 3), "t") == 0;
+ bool puballsequences = has_pubsequence &&
+ strcmp(PQgetvalue(res, i, 9), "t") == 0;
printTableOpt myopt = pset.popt.topt;
if (has_pubtruncate)
@@ -6033,29 +6134,43 @@ describePublications(const char *pattern)
if (has_pubviaroot)
ncols++;
+ /* sequence support adds two extra columns (puballsequences, pubsequence) */
+ if (has_pubsequence)
+ ncols += 2;
+
initPQExpBuffer(&title);
printfPQExpBuffer(&title, _("Publication %s"), pubname);
printTableInit(&cont, &myopt, title.data, ncols, nrows);
printTableAddHeader(&cont, gettext_noop("Owner"), true, align);
printTableAddHeader(&cont, gettext_noop("All tables"), true, align);
+ if (has_pubsequence)
+ printTableAddHeader(&cont, gettext_noop("All sequences"), true, align);
printTableAddHeader(&cont, gettext_noop("Inserts"), true, align);
printTableAddHeader(&cont, gettext_noop("Updates"), true, align);
printTableAddHeader(&cont, gettext_noop("Deletes"), true, align);
if (has_pubtruncate)
printTableAddHeader(&cont, gettext_noop("Truncates"), true, align);
+ if (has_pubsequence)
+ printTableAddHeader(&cont, gettext_noop("Sequences"), true, align);
if (has_pubviaroot)
printTableAddHeader(&cont, gettext_noop("Via root"), true, align);
- printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
- printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
- printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false);
- printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false);
- printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false);
+ printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false); /* owner */
+ printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false); /* all tables */
+
+ if (has_pubsequence)
+ printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false); /* all sequences */
+
+ printTableAddCell(&cont, PQgetvalue(res, i, 4), false, false); /* insert */
+ printTableAddCell(&cont, PQgetvalue(res, i, 5), false, false); /* update */
+ printTableAddCell(&cont, PQgetvalue(res, i, 6), false, false); /* delete */
if (has_pubtruncate)
- printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
+ printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false); /* truncate */
+ if (has_pubsequence)
+ printTableAddCell(&cont, PQgetvalue(res, i, 10), false, false); /* sequence */
if (has_pubviaroot)
- printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
+ printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false); /* via root */
if (!puballtables)
{
@@ -6086,6 +6201,7 @@ describePublications(const char *pattern)
"WHERE c.relnamespace = n.oid\n"
" AND c.oid = pr.prrelid\n"
" AND pr.prpubid = '%s'\n"
+ " AND c.relkind != 'S'\n" /* exclude sequences */
"ORDER BY 1,2", pubid);
if (!addFooterToPublicationDesc(&buf, "Tables:", false, &cont))
goto error_return;
@@ -6097,7 +6213,7 @@ describePublications(const char *pattern)
"SELECT n.nspname\n"
"FROM pg_catalog.pg_namespace n\n"
" JOIN pg_catalog.pg_publication_namespace pn ON n.oid = pn.pnnspid\n"
- "WHERE pn.pnpubid = '%s'\n"
+ "WHERE pn.pnpubid = '%s' AND pn.pntype = 't'\n"
"ORDER BY 1", pubid);
if (!addFooterToPublicationDesc(&buf, "Tables from schemas:",
true, &cont))
@@ -6105,6 +6221,37 @@ describePublications(const char *pattern)
}
}
+ if (!puballsequences)
+ {
+ /* Get the sequences for the specified publication */
+ printfPQExpBuffer(&buf,
+ "SELECT n.nspname, c.relname, NULL, NULL\n"
+ "FROM pg_catalog.pg_class c,\n"
+ " pg_catalog.pg_namespace n,\n"
+ " pg_catalog.pg_publication_rel pr\n"
+ "WHERE c.relnamespace = n.oid\n"
+ " AND c.oid = pr.prrelid\n"
+ " AND pr.prpubid = '%s'\n"
+ " AND c.relkind = 'S'\n" /* only sequences */
+ "ORDER BY 1,2", pubid);
+ if (!addFooterToPublicationDesc(&buf, "Sequences:", false, &cont))
+ goto error_return;
+
+ if (pset.sversion >= 150000)
+ {
+ /* Get the schemas for the specified publication */
+ printfPQExpBuffer(&buf,
+ "SELECT n.nspname\n"
+ "FROM pg_catalog.pg_namespace n\n"
+ " JOIN pg_catalog.pg_publication_namespace pn ON n.oid = pn.pnnspid\n"
+ "WHERE pn.pnpubid = '%s' AND pn.pntype = 's'\n"
+ "ORDER BY 1", pubid);
+ if (!addFooterToPublicationDesc(&buf, "Sequences from schemas:",
+ true, &cont))
+ goto error_return;
+ }
+ }
+
printTable(&cont, pset.queryFout, false, pset.logfile);
printTableCleanup(&cont);
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index a2df39d0c1b..025d3f71a11 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1827,11 +1827,15 @@ psql_completion(const char *text, int start, int end)
COMPLETE_WITH("ADD", "DROP", "OWNER TO", "RENAME TO", "SET");
/* ALTER PUBLICATION <name> ADD */
else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD"))
- COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE");
+ COMPLETE_WITH("ALL TABLES IN SCHEMA", "ALL SEQUENCES IN SCHEMA", "TABLE", "SEQUENCE");
else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD|SET", "TABLE") ||
(HeadMatches("ALTER", "PUBLICATION", MatchAny, "ADD|SET", "TABLE") &&
ends_with(prev_wd, ',')))
COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_tables);
+ else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD|SET", "SEQUENCE") ||
+ (HeadMatches("ALTER", "PUBLICATION", MatchAny, "ADD|SET", "SEQUENCE") &&
+ ends_with(prev_wd, ',')))
+ COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_sequences);
/*
* "ALTER PUBLICATION <name> SET TABLE <name> WHERE (" - complete with
* table attributes
@@ -1850,11 +1854,11 @@ psql_completion(const char *text, int start, int end)
COMPLETE_WITH(",");
/* ALTER PUBLICATION <name> DROP */
else if (Matches("ALTER", "PUBLICATION", MatchAny, "DROP"))
- COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE");
+ COMPLETE_WITH("ALL TABLES IN SCHEMA", "ALL SEQUENCES IN SCHEMA", "TABLE", "SEQUENCE");
/* ALTER PUBLICATION <name> SET */
else if (Matches("ALTER", "PUBLICATION", MatchAny, "SET"))
- COMPLETE_WITH("(", "ALL TABLES IN SCHEMA", "TABLE");
- else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD|DROP|SET", "ALL", "TABLES", "IN", "SCHEMA"))
+ COMPLETE_WITH("(", "ALL TABLES IN SCHEMA", "ALL SEQUENCES IN SCHEMA", "TABLE", "SEQUENCE");
+ else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD|DROP|SET", "ALL", "TABLES|SEQUENCES", "IN", "SCHEMA"))
COMPLETE_WITH_QUERY_PLUS(Query_for_list_of_schemas
" AND nspname NOT LIKE E'pg\\\\_%%'",
"CURRENT_SCHEMA");
@@ -2984,21 +2988,27 @@ psql_completion(const char *text, int start, int end)
/* CREATE PUBLICATION */
else if (Matches("CREATE", "PUBLICATION", MatchAny))
- COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR ALL TABLES IN SCHEMA", "WITH (");
+ COMPLETE_WITH("FOR TABLE", "FOR ALL TABLES", "FOR ALL TABLES IN SCHEMA",
+ "FOR SEQUENCE", "FOR ALL SEQUENCES", "FOR ALL SEQUENCES IN SCHEMA",
+ "WITH (");
else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR"))
- COMPLETE_WITH("TABLE", "ALL TABLES", "ALL TABLES IN SCHEMA");
+ COMPLETE_WITH("TABLE", "ALL TABLES", "ALL TABLES IN SCHEMA",
+ "SEQUENCE", "ALL SEQUENCES", "ALL SEQUENCES IN SCHEMA");
else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL"))
- COMPLETE_WITH("TABLES", "TABLES IN SCHEMA");
- else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES"))
+ COMPLETE_WITH("TABLES", "TABLES IN SCHEMA", "SEQUENCES", "SEQUENCES IN SCHEMA");
+ else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES|SEQUENCES"))
COMPLETE_WITH("IN SCHEMA", "WITH (");
- else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLE", MatchAny) && !ends_with(prev_wd, ','))
+ else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLE|SEQUENCE", MatchAny) && !ends_with(prev_wd, ','))
COMPLETE_WITH("WHERE (", "WITH (");
/* Complete "CREATE PUBLICATION <name> FOR TABLE" with "<table>, ..." */
else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "TABLE"))
COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_tables);
+ /* Complete "CREATE PUBLICATION <name> FOR SEQUENCE" with "<sequence>, ..." */
+ else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "SEQUENCE"))
+ COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_sequences);
/*
- * "CREATE PUBLICATION <name> FOR TABLE <name> WHERE (" - complete with
+ * "CREATE PUBLICATION <name> FOR TABLE|SEQUENCE <name> WHERE (" - complete with
* table attributes
*/
else if (HeadMatches("CREATE", "PUBLICATION", MatchAny) && TailMatches("WHERE"))
@@ -3009,14 +3019,14 @@ psql_completion(const char *text, int start, int end)
COMPLETE_WITH(" WITH (");
/*
- * Complete "CREATE PUBLICATION <name> FOR ALL TABLES IN SCHEMA <schema>,
+ * Complete "CREATE PUBLICATION <name> FOR ALL TABLES|SEQUENCES IN SCHEMA <schema>,
* ..."
*/
- else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES", "IN", "SCHEMA"))
+ else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES|SEQUENCES", "IN", "SCHEMA"))
COMPLETE_WITH_QUERY_PLUS(Query_for_list_of_schemas
" AND nspname NOT LIKE E'pg\\\\_%%'",
"CURRENT_SCHEMA");
- else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES", "IN", "SCHEMA", MatchAny) && (!ends_with(prev_wd, ',')))
+ else if (Matches("CREATE", "PUBLICATION", MatchAny, "FOR", "ALL", "TABLES|SEQUENCES", "IN", "SCHEMA", MatchAny) && (!ends_with(prev_wd, ',')))
COMPLETE_WITH("WITH (");
/* Complete "CREATE PUBLICATION <name> [...] WITH" */
else if (HeadMatches("CREATE", "PUBLICATION") && TailMatches("WITH", "("))
diff --git a/src/include/catalog/catversion.h b/src/include/catalog/catversion.h
index cec70caae1f..67f3d8526cd 100644
--- a/src/include/catalog/catversion.h
+++ b/src/include/catalog/catversion.h
@@ -53,6 +53,6 @@
*/
/* yyyymmddN */
-#define CATALOG_VERSION_NO 202204075
+#define CATALOG_VERSION_NO 202204074
#endif
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 52f56cf5b1b..61876c4e808 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -11658,6 +11658,11 @@
provolatile => 's', prorettype => 'oid', proargtypes => 'text',
proallargtypes => '{text,oid}', proargmodes => '{i,o}',
proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_tables' },
+{ oid => '8000', descr => 'get OIDs of sequences in a publication',
+ proname => 'pg_get_publication_sequences', prorows => '1000', proretset => 't',
+ provolatile => 's', prorettype => 'oid', proargtypes => 'text',
+ proallargtypes => '{text,oid}', proargmodes => '{i,o}',
+ proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_sequences' },
{ oid => '6121',
descr => 'returns whether a relation can be part of a publication',
proname => 'pg_relation_is_publishable', provolatile => 's',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index 29b18566657..186d8ea74b6 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -40,6 +40,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
*/
bool puballtables;
+ /*
+ * indicates that this is a special publication which should encompass
+ * all sequences in the database (except for unlogged and temporary ones)
+ */
+ bool puballsequences;
+
/* true if inserts are published */
bool pubinsert;
@@ -52,6 +58,9 @@ CATALOG(pg_publication,6104,PublicationRelationId)
/* true if truncates are published */
bool pubtruncate;
+ /* true if sequences are published */
+ bool pubsequence;
+
/* true if partition changes are published using root schema */
bool pubviaroot;
} FormData_pg_publication;
@@ -72,6 +81,7 @@ typedef struct PublicationActions
bool pubupdate;
bool pubdelete;
bool pubtruncate;
+ bool pubsequence;
} PublicationActions;
typedef struct PublicationDesc
@@ -99,6 +109,7 @@ typedef struct Publication
Oid oid;
char *name;
bool alltables;
+ bool allsequences;
bool pubviaroot;
PublicationActions pubactions;
} Publication;
@@ -130,14 +141,16 @@ typedef enum PublicationPartOpt
PUBLICATION_PART_ALL,
} PublicationPartOpt;
-extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
+extern List *GetPublicationRelations(Oid pubid, char objectType,
+ PublicationPartOpt pub_partopt);
extern List *GetAllTablesPublications(void);
extern List *GetAllTablesPublicationRelations(bool pubviaroot);
-extern List *GetPublicationSchemas(Oid pubid);
-extern List *GetSchemaPublications(Oid schemaid);
-extern List *GetSchemaPublicationRelations(Oid schemaid,
+extern void GetActionsInPublication(Oid pubid, PublicationActions *actions);
+extern List *GetPublicationSchemas(Oid pubid, char objectType);
+extern List *GetSchemaPublications(Oid schemaid, char objectType);
+extern List *GetSchemaPublicationRelations(Oid schemaid, char objectType,
PublicationPartOpt pub_partopt);
-extern List *GetAllSchemaPublicationRelations(Oid puboid,
+extern List *GetAllSchemaPublicationRelations(Oid puboid, char objectType,
PublicationPartOpt pub_partopt);
extern List *GetPubPartitionOptionRelations(List *result,
PublicationPartOpt pub_partopt,
@@ -145,11 +158,15 @@ extern List *GetPubPartitionOptionRelations(List *result,
extern Oid GetTopMostAncestorInPublication(Oid puboid, List *ancestors,
int *ancestor_level);
+extern List *GetAllSequencesPublications(void);
+extern List *GetAllSequencesPublicationRelations(void);
+
extern bool is_publishable_relation(Relation rel);
extern bool is_schema_publication(Oid pubid);
extern ObjectAddress publication_add_relation(Oid pubid, PublicationRelInfo *pri,
bool if_not_exists);
extern ObjectAddress publication_add_schema(Oid pubid, Oid schemaid,
+ char objectType,
bool if_not_exists);
extern Bitmapset *pub_collist_to_bitmapset(Bitmapset *columns, Datum pubcols,
diff --git a/src/include/catalog/pg_publication_namespace.h b/src/include/catalog/pg_publication_namespace.h
index e4306da02e7..7340a1ec646 100644
--- a/src/include/catalog/pg_publication_namespace.h
+++ b/src/include/catalog/pg_publication_namespace.h
@@ -32,6 +32,7 @@ CATALOG(pg_publication_namespace,8901,PublicationNamespaceRelationId)
Oid oid; /* oid */
Oid pnpubid BKI_LOOKUP(pg_publication); /* Oid of the publication */
Oid pnnspid BKI_LOOKUP(pg_namespace); /* Oid of the schema */
+ char pntype; /* object type to include */
} FormData_pg_publication_namespace;
/* ----------------
@@ -42,6 +43,13 @@ CATALOG(pg_publication_namespace,8901,PublicationNamespaceRelationId)
typedef FormData_pg_publication_namespace *Form_pg_publication_namespace;
DECLARE_UNIQUE_INDEX_PKEY(pg_publication_namespace_oid_index, 8902, PublicationNamespaceObjectIndexId, on pg_publication_namespace using btree(oid oid_ops));
-DECLARE_UNIQUE_INDEX(pg_publication_namespace_pnnspid_pnpubid_index, 8903, PublicationNamespacePnnspidPnpubidIndexId, on pg_publication_namespace using btree(pnnspid oid_ops, pnpubid oid_ops));
+DECLARE_UNIQUE_INDEX(pg_publication_namespace_pnnspid_pnpubid_pntype_index, 8903, PublicationNamespacePnnspidPnpubidPntypeIndexId, on pg_publication_namespace using btree(pnnspid oid_ops, pnpubid oid_ops, pntype char_ops));
+
+/* object type to include from a schema, maps to relkind */
+#define PUB_OBJTYPE_TABLE 't' /* table (regular or partitioned) */
+#define PUB_OBJTYPE_SEQUENCE 's' /* sequence object */
+#define PUB_OBJTYPE_UNSUPPORTED 'u' /* used for non-replicated types */
+
+extern char pub_get_object_type_for_relkind(char relkind);
#endif /* PG_PUBLICATION_NAMESPACE_H */
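
pub_get_object_type_for_relkind() is only declared above; its definition is
elsewhere in the patch. Given the PUB_OBJTYPE_* values, the mapping it
implements is presumably something along these lines (a sketch, not the
actual function body from the patch):

#include "catalog/pg_class.h"
#include "catalog/pg_publication_namespace.h"

char
pub_get_object_type_for_relkind(char relkind)
{
	switch (relkind)
	{
		case RELKIND_RELATION:
		case RELKIND_PARTITIONED_TABLE:
			return PUB_OBJTYPE_TABLE;
		case RELKIND_SEQUENCE:
			return PUB_OBJTYPE_SEQUENCE;
		default:
			return PUB_OBJTYPE_UNSUPPORTED;
	}
}
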
diff --git a/src/include/commands/sequence.h b/src/include/commands/sequence.h
index 9da23008101..57807874d76 100644
--- a/src/include/commands/sequence.h
+++ b/src/include/commands/sequence.h
@@ -60,8 +60,11 @@ extern ObjectAddress AlterSequence(ParseState *pstate, AlterSeqStmt *stmt);
extern void SequenceChangePersistence(Oid relid, char newrelpersistence);
extern void DeleteSequenceTuple(Oid relid);
extern void ResetSequence(Oid seq_relid);
+extern void SetSequence(Oid seq_relid, int64 last_value, int64 log_cnt, bool is_called);
extern void ResetSequenceCaches(void);
+extern void AtEOXact_Sequences(bool isCommit);
+
extern void seq_redo(XLogReaderState *rptr);
extern void seq_desc(StringInfo buf, XLogReaderState *rptr);
extern const char *seq_identify(uint8 info);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 1bdeb9eb3d3..8998d345605 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -4013,6 +4013,10 @@ typedef enum PublicationObjSpecType
PUBLICATIONOBJ_TABLES_IN_SCHEMA, /* All tables in schema */
PUBLICATIONOBJ_TABLES_IN_CUR_SCHEMA, /* All tables in first element of
* search_path */
+ PUBLICATIONOBJ_SEQUENCE, /* Sequence type */
+ PUBLICATIONOBJ_SEQUENCES_IN_SCHEMA, /* Sequences in schema type */
+ PUBLICATIONOBJ_SEQUENCES_IN_CUR_SCHEMA, /* Get the first element of
+ * search_path */
PUBLICATIONOBJ_CONTINUATION /* Continuation of previous type */
} PublicationObjSpecType;
@@ -4031,7 +4035,7 @@ typedef struct CreatePublicationStmt
char *pubname; /* Name of the publication */
List *options; /* List of DefElem nodes */
List *pubobjects; /* Optional list of publication objects */
- bool for_all_tables; /* Special publication for all tables in db */
+ List *for_all_objects; /* Special publication for all objects in db */
} CreatePublicationStmt;
typedef enum AlterPublicationAction
@@ -4054,7 +4058,7 @@ typedef struct AlterPublicationStmt
* objects.
*/
List *pubobjects; /* Optional list of publication objects */
- bool for_all_tables; /* Special publication for all tables in db */
+ List *for_all_objects; /* Special publication for all objects in db */
AlterPublicationAction action; /* What action to perform with the given
* objects */
} AlterPublicationStmt;
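
The for_all_tables flag becoming a for_all_objects list means one CREATE or
ALTER PUBLICATION statement can be "for all" tables, sequences, or both. A
hedged sketch of how command code might consume the list, assuming the
grammar stores String nodes "tables" / "sequences" (I have not verified that
against the gram.y changes):

#include "postgres.h"
#include "nodes/pg_list.h"
#include "nodes/value.h"

/* Illustrative only: derive the two "for all" flags from the list. */
static void
decide_for_all_objects(List *for_all_objects,
					   bool *for_all_tables, bool *for_all_sequences)
{
	ListCell   *lc;

	*for_all_tables = false;
	*for_all_sequences = false;

	foreach(lc, for_all_objects)
	{
		char	   *val = strVal(lfirst(lc));

		if (strcmp(val, "tables") == 0)
			*for_all_tables = true;
		else if (strcmp(val, "sequences") == 0)
			*for_all_sequences = true;
	}
}
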
diff --git a/src/include/replication/decode.h b/src/include/replication/decode.h
index a33c2a718a7..8e07bb7409a 100644
--- a/src/include/replication/decode.h
+++ b/src/include/replication/decode.h
@@ -27,6 +27,7 @@ extern void heap2_decode(LogicalDecodingContext *ctx, XLogRecordBuffer *buf);
extern void xact_decode(LogicalDecodingContext *ctx, XLogRecordBuffer *buf);
extern void standby_decode(LogicalDecodingContext *ctx, XLogRecordBuffer *buf);
extern void logicalmsg_decode(LogicalDecodingContext *ctx, XLogRecordBuffer *buf);
+extern void sequence_decode(LogicalDecodingContext *ctx, XLogRecordBuffer *buf);
extern void LogicalDecodingProcessRecord(LogicalDecodingContext *ctx,
XLogReaderState *record);
diff --git a/src/include/replication/logicalproto.h b/src/include/replication/logicalproto.h
index a771ab8ff33..a466db394b8 100644
--- a/src/include/replication/logicalproto.h
+++ b/src/include/replication/logicalproto.h
@@ -61,6 +61,7 @@ typedef enum LogicalRepMsgType
LOGICAL_REP_MSG_RELATION = 'R',
LOGICAL_REP_MSG_TYPE = 'Y',
LOGICAL_REP_MSG_MESSAGE = 'M',
+ LOGICAL_REP_MSG_SEQUENCE = 'Q',
LOGICAL_REP_MSG_BEGIN_PREPARE = 'b',
LOGICAL_REP_MSG_PREPARE = 'P',
LOGICAL_REP_MSG_COMMIT_PREPARED = 'K',
@@ -118,6 +119,17 @@ typedef struct LogicalRepTyp
char *typname; /* name of the remote type */
} LogicalRepTyp;
+/* Sequence info */
+typedef struct LogicalRepSequence
+{
+ Oid remoteid; /* unique id of the remote sequence */
+ char *nspname; /* schema name of remote sequence */
+ char *seqname; /* name of the remote sequence */
+ int64 last_value;
+ int64 log_cnt;
+ bool is_called;
+} LogicalRepSequence;
+
/* Transaction info */
typedef struct LogicalRepBeginData
{
@@ -230,6 +242,11 @@ extern List *logicalrep_read_truncate(StringInfo in,
bool *cascade, bool *restart_seqs);
extern void logicalrep_write_message(StringInfo out, TransactionId xid, XLogRecPtr lsn,
bool transactional, const char *prefix, Size sz, const char *message);
+extern void logicalrep_write_sequence(StringInfo out, Relation rel,
+ TransactionId xid, XLogRecPtr lsn,
+ int64 last_value, int64 log_cnt,
+ bool is_called);
+extern void logicalrep_read_sequence(StringInfo in, LogicalRepSequence *seqdata);
extern void logicalrep_write_rel(StringInfo out, TransactionId xid,
Relation rel, Bitmapset *columns);
extern LogicalRepRelation *logicalrep_read_rel(StringInfo in);
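
On the apply side, the subscriber presumably reads the new 'Q' message with
logicalrep_read_sequence() and applies it using the SetSequence() function
declared earlier in commands/sequence.h. A rough sketch of that path (the
handler name, locking, and error handling are my assumptions, not the
patch's actual worker code):

#include "postgres.h"
#include "catalog/namespace.h"
#include "commands/sequence.h"
#include "nodes/makefuncs.h"
#include "replication/logicalproto.h"
#include "storage/lockdefs.h"

static void
apply_sequence_sketch(StringInfo s)
{
	LogicalRepSequence seq;
	Oid			relid;

	logicalrep_read_sequence(s, &seq);

	/* resolve the sequence by its schema-qualified name */
	relid = RangeVarGetRelid(makeRangeVar(seq.nspname, seq.seqname, -1),
							 RowExclusiveLock, false);

	/* overwrite the local sequence state with the decoded values */
	SetSequence(relid, seq.last_value, seq.log_cnt, seq.is_called);
}
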
diff --git a/src/include/replication/message.h b/src/include/replication/message.h
index 7d7785292f1..a084acc06ab 100644
--- a/src/include/replication/message.h
+++ b/src/include/replication/message.h
@@ -27,13 +27,28 @@ typedef struct xl_logical_message
char message[FLEXIBLE_ARRAY_MEMBER];
} xl_logical_message;
+/*
+ * Generic logical decoding sequence WAL record.
+ */
+typedef struct xl_logical_sequence
+{
+ RelFileNode node;
+ Oid reloid;
+ int64 last; /* last value emitted for sequence */
+ int64 log_cnt; /* number of values pre-logged (cached) for sequence */
+ bool is_called; /* sequence's is_called flag */
+} xl_logical_sequence;
+
#define SizeOfLogicalMessage (offsetof(xl_logical_message, message))
+#define SizeOfLogicalSequence (sizeof(xl_logical_sequence))
extern XLogRecPtr LogLogicalMessage(const char *prefix, const char *message,
size_t size, bool transactional);
/* RMGR API*/
#define XLOG_LOGICAL_MESSAGE 0x00
+#define XLOG_LOGICAL_SEQUENCE 0x10
+
void logicalmsg_redo(XLogReaderState *record);
void logicalmsg_desc(StringInfo buf, XLogReaderState *record);
const char *logicalmsg_identify(uint8 info);
diff --git a/src/include/replication/output_plugin.h b/src/include/replication/output_plugin.h
index 539dc8e6974..abc05644516 100644
--- a/src/include/replication/output_plugin.h
+++ b/src/include/replication/output_plugin.h
@@ -88,6 +88,17 @@ typedef void (*LogicalDecodeMessageCB) (struct LogicalDecodingContext *ctx,
Size message_size,
const char *message);
+/*
+ * Called for generic logical decoding sequence updates.
+ */
+typedef void (*LogicalDecodeSequenceCB) (struct LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ int64 last_value,
+ int64 log_cnt,
+ bool is_called);
+
/*
* Filter changes by origin.
*/
@@ -199,6 +210,18 @@ typedef void (*LogicalDecodeStreamMessageCB) (struct LogicalDecodingContext *ctx
Size message_size,
const char *message);
+/*
+ * Called for streaming generic logical decoding sequence updates from
+ * in-progress transactions.
+ */
+typedef void (*LogicalDecodeStreamSequenceCB) (struct LogicalDecodingContext *ctx,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ int64 last_value,
+ int64 log_cnt,
+ bool is_called);
+
/*
* Callback for streaming truncates from in-progress transactions.
*/
@@ -219,6 +242,7 @@ typedef struct OutputPluginCallbacks
LogicalDecodeTruncateCB truncate_cb;
LogicalDecodeCommitCB commit_cb;
LogicalDecodeMessageCB message_cb;
+ LogicalDecodeSequenceCB sequence_cb;
LogicalDecodeFilterByOriginCB filter_by_origin_cb;
LogicalDecodeShutdownCB shutdown_cb;
@@ -237,6 +261,7 @@ typedef struct OutputPluginCallbacks
LogicalDecodeStreamCommitCB stream_commit_cb;
LogicalDecodeStreamChangeCB stream_change_cb;
LogicalDecodeStreamMessageCB stream_message_cb;
+ LogicalDecodeStreamSequenceCB stream_sequence_cb;
LogicalDecodeStreamTruncateCB stream_truncate_cb;
} OutputPluginCallbacks;
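
For output plugin authors, the new sequence_cb slot works like message_cb: a
plugin that wants sequence increments just fills it in from
_PG_output_plugin_init(). A minimal sketch of such a callback (illustrative;
the patch's test_decoding changes are not shown in this excerpt):

#include "postgres.h"
#include "lib/stringinfo.h"
#include "replication/logical.h"
#include "replication/output_plugin.h"
#include "utils/rel.h"

static void
my_sequence_cb(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
			   XLogRecPtr sequence_lsn, Relation rel,
			   int64 last_value, int64 log_cnt, bool is_called)
{
	OutputPluginPrepareWrite(ctx, true);
	appendStringInfo(ctx->out,
					 "sequence %s: last_value %lld log_cnt %lld is_called %d",
					 RelationGetRelationName(rel),
					 (long long) last_value, (long long) log_cnt,
					 (int) is_called);
	OutputPluginWrite(ctx, true);
}

/* in _PG_output_plugin_init(): cb->sequence_cb = my_sequence_cb; */
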
diff --git a/src/include/replication/pgoutput.h b/src/include/replication/pgoutput.h
index eafedd610a5..f4e9f35d09d 100644
--- a/src/include/replication/pgoutput.h
+++ b/src/include/replication/pgoutput.h
@@ -29,6 +29,7 @@ typedef struct PGOutputData
bool streaming;
bool messages;
bool two_phase;
+ bool sequences;
} PGOutputData;
#endif /* PGOUTPUT_H */
diff --git a/src/include/replication/reorderbuffer.h b/src/include/replication/reorderbuffer.h
index f12e75d69be..260630c2ba5 100644
--- a/src/include/replication/reorderbuffer.h
+++ b/src/include/replication/reorderbuffer.h
@@ -64,7 +64,8 @@ typedef enum ReorderBufferChangeType
REORDER_BUFFER_CHANGE_INTERNAL_SPEC_INSERT,
REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM,
REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT,
- REORDER_BUFFER_CHANGE_TRUNCATE
+ REORDER_BUFFER_CHANGE_TRUNCATE,
+ REORDER_BUFFER_CHANGE_SEQUENCE
} ReorderBufferChangeType;
/* forward declaration */
@@ -158,6 +159,16 @@ typedef struct ReorderBufferChange
uint32 ninvalidations; /* Number of messages */
SharedInvalidationMessage *invalidations; /* invalidation message */
} inval;
+
+ /* Context data for Sequence changes */
+ struct
+ {
+ RelFileNode relnode;
+ Oid reloid;
+ int64 last;
+ int64 log_cnt;
+ bool is_called;
+ } sequence;
} data;
/*
@@ -430,6 +441,14 @@ typedef void (*ReorderBufferMessageCB) (ReorderBuffer *rb,
const char *prefix, Size sz,
const char *message);
+/* sequence callback signature */
+typedef void (*ReorderBufferSequenceCB) (ReorderBuffer *rb,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ int64 last_value, int64 log_cnt,
+ bool is_called);
+
/* begin prepare callback signature */
typedef void (*ReorderBufferBeginPrepareCB) (ReorderBuffer *rb,
ReorderBufferTXN *txn);
@@ -496,6 +515,14 @@ typedef void (*ReorderBufferStreamMessageCB) (
const char *prefix, Size sz,
const char *message);
+/* stream sequence callback signature */
+typedef void (*ReorderBufferStreamSequenceCB) (ReorderBuffer *rb,
+ ReorderBufferTXN *txn,
+ XLogRecPtr sequence_lsn,
+ Relation rel,
+ int64 last_value, int64 log_cnt,
+ bool is_called);
+
/* stream truncate callback signature */
typedef void (*ReorderBufferStreamTruncateCB) (
ReorderBuffer *rb,
@@ -511,6 +538,12 @@ struct ReorderBuffer
*/
HTAB *by_txn;
+ /*
+ * relfilenode => XID lookup table for sequences created in a transaction
+ * (also includes altered sequences, which get a new relfilenode)
+ */
+ HTAB *sequences;
+
/*
* Transactions that could be a toplevel xact, ordered by LSN of the first
* record bearing that xid.
@@ -541,6 +574,7 @@ struct ReorderBuffer
ReorderBufferApplyTruncateCB apply_truncate;
ReorderBufferCommitCB commit;
ReorderBufferMessageCB message;
+ ReorderBufferSequenceCB sequence;
/*
* Callbacks to be called when streaming a transaction at prepare time.
@@ -560,6 +594,7 @@ struct ReorderBuffer
ReorderBufferStreamCommitCB stream_commit;
ReorderBufferStreamChangeCB stream_change;
ReorderBufferStreamMessageCB stream_message;
+ ReorderBufferStreamSequenceCB stream_sequence;
ReorderBufferStreamTruncateCB stream_truncate;
/*
@@ -635,6 +670,10 @@ void ReorderBufferQueueChange(ReorderBuffer *, TransactionId,
void ReorderBufferQueueMessage(ReorderBuffer *, TransactionId, Snapshot snapshot, XLogRecPtr lsn,
bool transactional, const char *prefix,
Size message_size, const char *message);
+void ReorderBufferQueueSequence(ReorderBuffer *rb, TransactionId xid,
+ XLogRecPtr lsn, RepOriginId origin_id,
+ Oid reloid, RelFileNode rnode,
+ int64 last, int64 log_cnt, bool is_called);
void ReorderBufferCommit(ReorderBuffer *, TransactionId,
XLogRecPtr commit_lsn, XLogRecPtr end_lsn,
TimestampTz commit_time, RepOriginId origin_id, XLogRecPtr origin_lsn);
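
To tie the pieces together: the decode side presumably takes the
xl_logical_sequence record (message.h above) and turns it into a
REORDER_BUFFER_CHANGE_SEQUENCE via ReorderBufferQueueSequence(). A sketch of
that wiring, matching the declaration above (the patch's actual
sequence_decode() is not shown in this excerpt, so treat this as a guess):

#include "postgres.h"
#include "replication/decode.h"
#include "replication/logical.h"
#include "replication/message.h"
#include "replication/reorderbuffer.h"

static void
decode_sequence_sketch(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
{
	XLogReaderState *r = buf->record;
	xl_logical_sequence *xlrec = (xl_logical_sequence *) XLogRecGetData(r);

	ReorderBufferQueueSequence(ctx->reorder,
							   XLogRecGetXid(r),
							   buf->origptr,
							   XLogRecGetOrigin(r),
							   xlrec->reloid,
							   xlrec->node,
							   xlrec->last,
							   xlrec->log_cnt,
							   xlrec->is_called);
}
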
diff --git a/src/test/regress/expected/object_address.out b/src/test/regress/expected/object_address.out
index 4117fc27c9a..c95d44b3db9 100644
--- a/src/test/regress/expected/object_address.out
+++ b/src/test/regress/expected/object_address.out
@@ -46,6 +46,7 @@ CREATE TRANSFORM FOR int LANGUAGE SQL (
SET client_min_messages = 'ERROR';
CREATE PUBLICATION addr_pub FOR TABLE addr_nsp.gentable;
CREATE PUBLICATION addr_pub_schema FOR ALL TABLES IN SCHEMA addr_nsp;
+CREATE PUBLICATION addr_pub_schema2 FOR ALL SEQUENCES IN SCHEMA addr_nsp;
RESET client_min_messages;
CREATE SUBSCRIPTION regress_addr_sub CONNECTION '' PUBLICATION bar WITH (connect = false, slot_name = NONE);
WARNING: tables were not subscribed, you will have to run ALTER SUBSCRIPTION ... REFRESH PUBLICATION to subscribe the tables
@@ -428,7 +429,8 @@ WITH objects (type, name, args) AS (VALUES
('transform', '{int}', '{sql}'),
('access method', '{btree}', '{}'),
('publication', '{addr_pub}', '{}'),
- ('publication namespace', '{addr_nsp}', '{addr_pub_schema}'),
+ ('publication namespace', '{addr_nsp}', '{addr_pub_schema,t}'),
+ ('publication namespace', '{addr_nsp}', '{addr_pub_schema2,s}'),
('publication relation', '{addr_nsp, gentable}', '{addr_pub}'),
('subscription', '{regress_addr_sub}', '{}'),
('statistics object', '{addr_nsp, gentable_stat}', '{}')
@@ -492,8 +494,9 @@ SELECT (pg_identify_object(addr1.classid, addr1.objid, addr1.objsubid)).*,
subscription | | regress_addr_sub | regress_addr_sub | t
publication | | addr_pub | addr_pub | t
publication relation | | | addr_nsp.gentable in publication addr_pub | t
- publication namespace | | | addr_nsp in publication addr_pub_schema | t
-(50 rows)
+ publication namespace | | | addr_nsp in publication addr_pub_schema type t | t
+ publication namespace | | | addr_nsp in publication addr_pub_schema2 type s | t
+(51 rows)
---
--- Cleanup resources
@@ -506,6 +509,7 @@ drop cascades to server integer
drop cascades to user mapping for regress_addr_user on server integer
DROP PUBLICATION addr_pub;
DROP PUBLICATION addr_pub_schema;
+DROP PUBLICATION addr_pub_schema2;
DROP SUBSCRIPTION regress_addr_sub;
DROP SCHEMA addr_nsp CASCADE;
NOTICE: drop cascades to 14 other objects
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 4d24d772bde..0308e40ba6c 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -30,20 +30,20 @@ ERROR: conflicting or redundant options
LINE 1: ...ub_xxx WITH (publish_via_partition_root = 'true', publish_vi...
^
\dRp
- List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f | t | f | f | f | f
- testpub_default | regress_publication_user | f | f | t | f | f | f
+ List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f | f | t | f | f | f | f | f
+ testpub_default | regress_publication_user | f | f | f | t | f | f | f | f
(2 rows)
-ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
+ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete, sequence');
\dRp
- List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpib_ins_trunct | regress_publication_user | f | t | f | f | f | f
- testpub_default | regress_publication_user | f | t | t | t | f | f
+ List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ testpib_ins_trunct | regress_publication_user | f | f | t | f | f | f | f | f
+ testpub_default | regress_publication_user | f | f | t | t | t | f | t | f
(2 rows)
--- adding tables
@@ -61,6 +61,9 @@ CREATE TABLE testpub_tbl2 (id serial primary key, data text);
ALTER PUBLICATION testpub_foralltables ADD TABLE testpub_tbl2;
ERROR: publication "testpub_foralltables" is defined as FOR ALL TABLES
DETAIL: Tables cannot be added to or dropped from FOR ALL TABLES publications.
+-- fail - can't add a table using ADD SEQUENCE command
+ALTER PUBLICATION testpub_foralltables ADD SEQUENCE testpub_tbl2;
+ERROR: object type does not match type expected by command
-- fail - can't drop from all tables publication
ALTER PUBLICATION testpub_foralltables DROP TABLE testpub_tbl2;
ERROR: publication "testpub_foralltables" is defined as FOR ALL TABLES
@@ -87,10 +90,10 @@ RESET client_min_messages;
-- should be able to add schema to 'FOR TABLE' publication
ALTER PUBLICATION testpub_fortable ADD ALL TABLES IN SCHEMA pub_test;
\dRp+ testpub_fortable
- Publication testpub_fortable
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_fortable
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"public.testpub_tbl1"
Tables from schemas:
@@ -99,20 +102,20 @@ Tables from schemas:
-- should be able to drop schema from 'FOR TABLE' publication
ALTER PUBLICATION testpub_fortable DROP ALL TABLES IN SCHEMA pub_test;
\dRp+ testpub_fortable
- Publication testpub_fortable
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_fortable
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"public.testpub_tbl1"
-- should be able to set schema to 'FOR TABLE' publication
ALTER PUBLICATION testpub_fortable SET ALL TABLES IN SCHEMA pub_test;
\dRp+ testpub_fortable
- Publication testpub_fortable
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_fortable
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test"
@@ -134,10 +137,10 @@ ERROR: relation "testpub_nopk" is not part of the publication
-- should be able to set table to schema publication
ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
\dRp+ testpub_forschema
- Publication testpub_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"pub_test.testpub_nopk"
@@ -159,10 +162,10 @@ Publications:
"testpub_foralltables"
\dRp+ testpub_foralltables
- Publication testpub_foralltables
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | t | t | t | f | f | f
+ Publication testpub_foralltables
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | t | f | t | t | f | f | f | f
(1 row)
DROP TABLE testpub_tbl2;
@@ -174,24 +177,545 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
RESET client_min_messages;
\dRp+ testpub3
- Publication testpub3
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub3
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"public.testpub_tbl3"
"public.testpub_tbl3a"
\dRp+ testpub4
- Publication testpub4
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub4
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"public.testpub_tbl3"
DROP TABLE testpub_tbl3, testpub_tbl3a;
DROP PUBLICATION testpub3, testpub4;
+--- adding sequences
+CREATE SEQUENCE testpub_seq0;
+CREATE SEQUENCE pub_test.testpub_seq1;
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_forallsequences FOR ALL SEQUENCES WITH (publish = 'sequence');
+RESET client_min_messages;
+ALTER PUBLICATION testpub_forallsequences SET (publish = 'insert, sequence');
+CREATE SEQUENCE testpub_seq2;
+-- fail - can't add to for all sequences publication
+ALTER PUBLICATION testpub_forallsequences ADD SEQUENCE testpub_seq2;
+ERROR: publication "testpub_forallsequences" is defined as FOR ALL SEQUENCES
+DETAIL: Sequences cannot be added to or dropped from FOR ALL SEQUENCES publications.
+-- fail - can't drop from all sequences publication
+ALTER PUBLICATION testpub_forallsequences DROP SEQUENCE testpub_seq2;
+ERROR: publication "testpub_forallsequences" is defined as FOR ALL SEQUENCES
+DETAIL: Sequences cannot be added to or dropped from FOR ALL SEQUENCES publications.
+-- fail - can't set sequence to for all sequences publication
+ALTER PUBLICATION testpub_forallsequences SET SEQUENCE pub_test.testpub_seq1;
+ERROR: publication "testpub_forallsequences" is defined as FOR ALL SEQUENCES
+DETAIL: Sequences cannot be added to or dropped from FOR ALL SEQUENCES publications.
+-- fail - can't add schema to 'FOR ALL SEQUENCES' publication
+ALTER PUBLICATION testpub_forallsequences ADD ALL SEQUENCES IN SCHEMA pub_test;
+ERROR: publication "testpub_forallsequences" is defined as FOR ALL SEQUENCES
+DETAIL: Sequences from schema cannot be added to, dropped from, or set on FOR ALL SEQUENCES publications.
+-- fail - can't drop schema from 'FOR ALL SEQUENCES' publication
+ALTER PUBLICATION testpub_forallsequences DROP ALL SEQUENCES IN SCHEMA pub_test;
+ERROR: publication "testpub_forallsequences" is defined as FOR ALL SEQUENCES
+DETAIL: Sequences from schema cannot be added to, dropped from, or set on FOR ALL SEQUENCES publications.
+-- fail - can't set schema to 'FOR ALL SEQUENCES' publication
+ALTER PUBLICATION testpub_forallsequences SET ALL SEQUENCES IN SCHEMA pub_test;
+ERROR: publication "testpub_forallsequences" is defined as FOR ALL SEQUENCES
+DETAIL: Sequences from schema cannot be added to, dropped from, or set on FOR ALL SEQUENCES publications.
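+-- 'FOR SEQUENCE' publication listing an individual sequence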
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_forsequence FOR SEQUENCE testpub_seq0;
+RESET client_min_messages;
+-- should be able to add schema to 'FOR SEQUENCE' publication
+ALTER PUBLICATION testpub_forsequence ADD ALL SEQUENCES IN SCHEMA pub_test;
+\dRp+ testpub_forsequence
+ Publication testpub_forsequence
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Sequences:
+ "public.testpub_seq0"
+Sequences from schemas:
+ "pub_test"
+
+-- fail - can't add sequence from the schema we already added
+ALTER PUBLICATION testpub_forsequence ADD SEQUENCE pub_test.testpub_seq1;
+ERROR: cannot add relation "pub_test.testpub_seq1" to publication
+DETAIL: Sequence's schema "pub_test" is already part of the publication or part of the specified schema list.
+-- fail - can't add sequence using ADD TABLE command
+ALTER PUBLICATION testpub_forsequence ADD TABLE pub_test.testpub_seq1;
+ERROR: object type does not match type expected by command
+-- should be able to drop schema from 'FOR SEQUENCE' publication
+ALTER PUBLICATION testpub_forsequence DROP ALL SEQUENCES IN SCHEMA pub_test;
+\dRp+ testpub_forsequence
+ Publication testpub_forsequence
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Sequences:
+ "public.testpub_seq0"
+
+-- should be able to set schema to 'FOR SEQUENCE' publication
+ALTER PUBLICATION testpub_forsequence SET ALL SEQUENCES IN SCHEMA pub_test;
+\dRp+ testpub_forsequence
+ Publication testpub_forsequence
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Sequences from schemas:
+ "pub_test"
+
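+-- 'FOR ALL SEQUENCES IN SCHEMA' publication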
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_forschema FOR ALL SEQUENCES IN SCHEMA pub_test;
+RESET client_min_messages;
+-- fail - can't create publication with schema and sequence of the same schema
+CREATE PUBLICATION testpub_for_seq_schema FOR ALL SEQUENCES IN SCHEMA pub_test, SEQUENCE pub_test.testpub_seq1;
+ERROR: cannot add relation "pub_test.testpub_seq1" to publication
+DETAIL: Sequence's schema "pub_test" is already part of the publication or part of the specified schema list.
+-- fail - can't add a sequence of the same schema to the schema publication
+ALTER PUBLICATION testpub_forschema ADD SEQUENCE pub_test.testpub_seq1;
+ERROR: cannot add relation "pub_test.testpub_seq1" to publication
+DETAIL: Sequence's schema "pub_test" is already part of the publication or part of the specified schema list.
+-- fail - can't drop a sequence from the schema publication which isn't in the
+-- publication
+ALTER PUBLICATION testpub_forschema DROP SEQUENCE pub_test.testpub_seq1;
+ERROR: relation "testpub_seq1" is not part of the publication
+-- should be able to set sequence to schema publication
+ALTER PUBLICATION testpub_forschema SET SEQUENCE pub_test.testpub_seq1;
+\dRp+ testpub_forschema
+ Publication testpub_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Sequences:
+ "pub_test.testpub_seq1"
+
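+-- puballsequences is set for the FOR ALL SEQUENCES publication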
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'testpub_forallsequences';
+ pubname | puballtables | puballsequences
+-------------------------+--------------+-----------------
+ testpub_forallsequences | f | t
+(1 row)
+
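+-- \d+ lists the publications the sequence is a member of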
+\d+ pub_test.testpub_seq1
+ Sequence "pub_test.testpub_seq1"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+Publications:
+ "testpub_forallsequences"
+ "testpub_forschema"
+ "testpub_forsequence"
+
+\dRp+ testpub_forallsequences
+ Publication testpub_forallsequences
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | t | t | f | f | f | t | f
+(1 row)
+
+DROP SEQUENCE testpub_seq0, pub_test.testpub_seq1, testpub_seq2;
+DROP PUBLICATION testpub_forallsequences, testpub_forsequence, testpub_forschema;
+-- test a publication with multiple sequences added at once
+CREATE SEQUENCE testpub_seq1;
+CREATE SEQUENCE testpub_seq2;
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_multi FOR SEQUENCE testpub_seq1, testpub_seq2;
+RESET client_min_messages;
+\dRp+ testpub_multi
+ Publication testpub_multi
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Sequences:
+ "public.testpub_seq1"
+ "public.testpub_seq2"
+
+DROP PUBLICATION testpub_multi;
+DROP SEQUENCE testpub_seq1;
+DROP SEQUENCE testpub_seq2;
+-- Publication mixing tables and sequences
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_mix;
+RESET client_min_messages;
+CREATE SEQUENCE testpub_seq1;
+CREATE SEQUENCE pub_test.testpub_seq2;
+ALTER PUBLICATION testpub_mix ADD SEQUENCE testpub_seq1, TABLE testpub_tbl1;
+\dRp+ testpub_mix
+ Publication testpub_mix
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Tables:
+ "public.testpub_tbl1"
+Sequences:
+ "public.testpub_seq1"
+
+ALTER PUBLICATION testpub_mix ADD ALL SEQUENCES IN SCHEMA pub_test, ALL TABLES IN SCHEMA pub_test;
+\dRp+ testpub_mix
+ Publication testpub_mix
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Tables:
+ "public.testpub_tbl1"
+Tables from schemas:
+ "pub_test"
+Sequences:
+ "public.testpub_seq1"
+Sequences from schemas:
+ "pub_test"
+
+ALTER PUBLICATION testpub_mix DROP ALL SEQUENCES IN SCHEMA pub_test;
+\dRp+ testpub_mix
+ Publication testpub_mix
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Tables:
+ "public.testpub_tbl1"
+Tables from schemas:
+ "pub_test"
+Sequences:
+ "public.testpub_seq1"
+
+ALTER PUBLICATION testpub_mix DROP ALL TABLES IN SCHEMA pub_test;
+\dRp+ testpub_mix
+ Publication testpub_mix
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Tables:
+ "public.testpub_tbl1"
+Sequences:
+ "public.testpub_seq1"
+
+DROP PUBLICATION testpub_mix;
+DROP SEQUENCE testpub_seq1;
+DROP SEQUENCE pub_test.testpub_seq2;
+-- make sure we replicate only the correct relation type
+CREATE SCHEMA pub_test1;
+CREATE SEQUENCE pub_test1.test_seq1;
+CREATE TABLE pub_test1.test_tbl1 (a int primary key, b int);
+CREATE SCHEMA pub_test2;
+CREATE SEQUENCE pub_test2.test_seq2;
+CREATE TABLE pub_test2.test_tbl2 (a int primary key, b int);
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_schemas;
+RESET client_min_messages;
+-- add tables from one schema, sequences from the other
+ALTER PUBLICATION testpub_schemas ADD ALL TABLES IN SCHEMA pub_test2;
+ALTER PUBLICATION testpub_schemas ADD ALL SEQUENCES IN SCHEMA pub_test1;
+\dRp+ testpub_schemas
+ Publication testpub_schemas
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Tables from schemas:
+ "pub_test2"
+Sequences from schemas:
+ "pub_test1"
+
+\dn+ pub_test1
+ List of schemas
+ Name | Owner | Access privileges | Description
+-----------+--------------------------+-------------------+-------------
+ pub_test1 | regress_publication_user | |
+Publications:
+ "testpub_schemas" (sequences)
+
+\dn+ pub_test2
+ List of schemas
+ Name | Owner | Access privileges | Description
+-----------+--------------------------+-------------------+-------------
+ pub_test2 | regress_publication_user | |
+Publications:
+ "testpub_schemas" (tables)
+
+\d+ pub_test1.test_seq1;
+ Sequence "pub_test1.test_seq1"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test1.test_tbl1;
+ Table "pub_test1.test_tbl1"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a | integer | | not null | | plain | |
+ b | integer | | | | plain | |
+Indexes:
+ "test_tbl1_pkey" PRIMARY KEY, btree (a)
+
+\d+ pub_test2.test_seq2;
+ Sequence "pub_test2.test_seq2"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+
+\d+ pub_test2.test_tbl2;
+ Table "pub_test2.test_tbl2"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a | integer | | not null | | plain | |
+ b | integer | | | | plain | |
+Indexes:
+ "test_tbl2_pkey" PRIMARY KEY, btree (a)
+Publications:
+ "testpub_schemas"
+
+-- add the other object type from each schema
+ALTER PUBLICATION testpub_schemas ADD ALL TABLES IN SCHEMA pub_test1;
+ALTER PUBLICATION testpub_schemas ADD ALL SEQUENCES IN SCHEMA pub_test2;
+\dRp+ testpub_schemas
+ Publication testpub_schemas
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Tables from schemas:
+ "pub_test1"
+ "pub_test2"
+Sequences from schemas:
+ "pub_test1"
+ "pub_test2"
+
+\dn+ pub_test1
+ List of schemas
+ Name | Owner | Access privileges | Description
+-----------+--------------------------+-------------------+-------------
+ pub_test1 | regress_publication_user | |
+Publications:
+ "testpub_schemas" (sequences)
+ "testpub_schemas" (tables)
+
+\dn+ pub_test2
+ List of schemas
+ Name | Owner | Access privileges | Description
+-----------+--------------------------+-------------------+-------------
+ pub_test2 | regress_publication_user | |
+Publications:
+ "testpub_schemas" (tables)
+ "testpub_schemas" (sequences)
+
+\d+ pub_test1.test_seq1;
+ Sequence "pub_test1.test_seq1"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test1.test_tbl1;
+ Table "pub_test1.test_tbl1"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a | integer | | not null | | plain | |
+ b | integer | | | | plain | |
+Indexes:
+ "test_tbl1_pkey" PRIMARY KEY, btree (a)
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test2.test_seq2;
+ Sequence "pub_test2.test_seq2"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test2.test_tbl2;
+ Table "pub_test2.test_tbl2"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a | integer | | not null | | plain | |
+ b | integer | | | | plain | |
+Indexes:
+ "test_tbl2_pkey" PRIMARY KEY, btree (a)
+Publications:
+ "testpub_schemas"
+
+-- now drop the object type added first
+ALTER PUBLICATION testpub_schemas DROP ALL TABLES IN SCHEMA pub_test2;
+ALTER PUBLICATION testpub_schemas DROP ALL SEQUENCES IN SCHEMA pub_test1;
+\dRp+ testpub_schemas
+ Publication testpub_schemas
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Tables from schemas:
+ "pub_test1"
+Sequences from schemas:
+ "pub_test2"
+
+\dn+ pub_test1
+ List of schemas
+ Name | Owner | Access privileges | Description
+-----------+--------------------------+-------------------+-------------
+ pub_test1 | regress_publication_user | |
+Publications:
+ "testpub_schemas" (tables)
+
+\dn+ pub_test2
+ List of schemas
+ Name | Owner | Access privileges | Description
+-----------+--------------------------+-------------------+-------------
+ pub_test2 | regress_publication_user | |
+Publications:
+ "testpub_schemas" (sequences)
+
+\d+ pub_test1.test_seq1;
+ Sequence "pub_test1.test_seq1"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+
+\d+ pub_test1.test_tbl1;
+ Table "pub_test1.test_tbl1"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a | integer | | not null | | plain | |
+ b | integer | | | | plain | |
+Indexes:
+ "test_tbl1_pkey" PRIMARY KEY, btree (a)
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test2.test_seq2;
+ Sequence "pub_test2.test_seq2"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test2.test_tbl2;
+ Table "pub_test2.test_tbl2"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a | integer | | not null | | plain | |
+ b | integer | | | | plain | |
+Indexes:
+ "test_tbl2_pkey" PRIMARY KEY, btree (a)
+
+-- should fail (publication contains the whole schema)
+ALTER PUBLICATION testpub_schemas ADD TABLE pub_test1.test_tbl1;
+ERROR: cannot add relation "pub_test1.test_tbl1" to publication
+DETAIL: Table's schema "pub_test1" is already part of the publication or part of the specified schema list.
+ALTER PUBLICATION testpub_schemas ADD SEQUENCE pub_test2.test_seq2;
+ERROR: cannot add relation "pub_test2.test_seq2" to publication
+DETAIL: Sequence's schema "pub_test2" is already part of the publication or part of the specified schema list.
+-- should work (different schema)
+ALTER PUBLICATION testpub_schemas ADD TABLE pub_test2.test_tbl2;
+ALTER PUBLICATION testpub_schemas ADD SEQUENCE pub_test1.test_seq1;
+\dRp+ testpub_schemas
+ Publication testpub_schemas
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Tables:
+ "pub_test2.test_tbl2"
+Tables from schemas:
+ "pub_test1"
+Sequences:
+ "pub_test1.test_seq1"
+Sequences from schemas:
+ "pub_test2"
+
+\d+ pub_test1.test_seq1;
+ Sequence "pub_test1.test_seq1"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test1.test_tbl1;
+ Table "pub_test1.test_tbl1"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a | integer | | not null | | plain | |
+ b | integer | | | | plain | |
+Indexes:
+ "test_tbl1_pkey" PRIMARY KEY, btree (a)
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test2.test_seq2;
+ Sequence "pub_test2.test_seq2"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test2.test_tbl2;
+ Table "pub_test2.test_tbl2"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a | integer | | not null | | plain | |
+ b | integer | | | | plain | |
+Indexes:
+ "test_tbl2_pkey" PRIMARY KEY, btree (a)
+Publications:
+ "testpub_schemas"
+
+-- now drop the explicitly added objects again
+ALTER PUBLICATION testpub_schemas DROP TABLE pub_test2.test_tbl2;
+ALTER PUBLICATION testpub_schemas DROP SEQUENCE pub_test1.test_seq1;
+\dRp+ testpub_schemas
+ Publication testpub_schemas
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
+Tables from schemas:
+ "pub_test1"
+Sequences from schemas:
+ "pub_test2"
+
+\d+ pub_test1.test_seq1;
+ Sequence "pub_test1.test_seq1"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+
+\d+ pub_test1.test_tbl1;
+ Table "pub_test1.test_tbl1"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a | integer | | not null | | plain | |
+ b | integer | | | | plain | |
+Indexes:
+ "test_tbl1_pkey" PRIMARY KEY, btree (a)
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test2.test_seq2;
+ Sequence "pub_test2.test_seq2"
+ Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
+--------+-------+---------+---------------------+-----------+---------+-------
+ bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
+Publications:
+ "testpub_schemas"
+
+\d+ pub_test2.test_tbl2;
+ Table "pub_test2.test_tbl2"
+ Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
+--------+---------+-----------+----------+---------+---------+--------------+-------------
+ a | integer | | not null | | plain | |
+ b | integer | | | | plain | |
+Indexes:
+ "test_tbl2_pkey" PRIMARY KEY, btree (a)
+
+DROP PUBLICATION testpub_schemas;
+DROP TABLE pub_test1.test_tbl1, pub_test2.test_tbl2;
+DROP SEQUENCE pub_test1.test_seq1, pub_test2.test_seq2;
+DROP SCHEMA pub_test1, pub_test2;
-- Tests for partitioned tables
SET client_min_messages = 'ERROR';
CREATE PUBLICATION testpub_forparted;
@@ -207,10 +731,10 @@ UPDATE testpub_parted1 SET a = 1;
-- only parent is listed as being in publication, not the partition
ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
\dRp+ testpub_forparted
- Publication testpub_forparted
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_forparted
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"public.testpub_parted"
@@ -223,10 +747,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
UPDATE testpub_parted1 SET a = 1;
ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
\dRp+ testpub_forparted
- Publication testpub_forparted
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | t
+ Publication testpub_forparted
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | t
Tables:
"public.testpub_parted"
@@ -255,10 +779,10 @@ SET client_min_messages = 'ERROR';
CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
RESET client_min_messages;
\dRp+ testpub5
- Publication testpub5
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | f | f | f | f
+ Publication testpub5
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | f | f | f | f | f
Tables:
"public.testpub_rf_tbl1"
"public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -271,10 +795,10 @@ Tables:
ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
\dRp+ testpub5
- Publication testpub5
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | f | f | f | f
+ Publication testpub5
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | f | f | f | f | f
Tables:
"public.testpub_rf_tbl1"
"public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -290,10 +814,10 @@ Publications:
ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
\dRp+ testpub5
- Publication testpub5
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | f | f | f | f
+ Publication testpub5
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | f | f | f | f | f
Tables:
"public.testpub_rf_tbl1"
"public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -301,10 +825,10 @@ Tables:
-- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
\dRp+ testpub5
- Publication testpub5
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | f | f | f | f
+ Publication testpub5
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | f | f | f | f | f
Tables:
"public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
@@ -337,10 +861,10 @@ SET client_min_messages = 'ERROR';
CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
RESET client_min_messages;
\dRp+ testpub_syntax1
- Publication testpub_syntax1
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | f | f | f | f
+ Publication testpub_syntax1
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | f | f | f | f | f
Tables:
"public.testpub_rf_tbl1"
"public.testpub_rf_tbl3" WHERE (e < 999)
@@ -350,10 +874,10 @@ SET client_min_messages = 'ERROR';
CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
RESET client_min_messages;
\dRp+ testpub_syntax2
- Publication testpub_syntax2
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | f | f | f | f
+ Publication testpub_syntax2
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | f | f | f | f | f
Tables:
"public.testpub_rf_tbl1"
"testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -672,10 +1196,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
RESET client_min_messages;
ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a); -- ok
\dRp+ testpub_table_ins
- Publication testpub_table_ins
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | f | f | t | f
+ Publication testpub_table_ins
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | f | f | t | f | f
Tables:
"public.testpub_tbl5" (a)
@@ -817,10 +1341,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
\dRp+ testpub_both_filters
- Publication testpub_both_filters
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_both_filters
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
@@ -1021,10 +1545,10 @@ ERROR: relation "testpub_tbl1" is already member of publication "testpub_fortbl
CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
ERROR: publication "testpub_fortbl" already exists
\dRp+ testpub_fortbl
- Publication testpub_fortbl
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_fortbl
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"pub_test.testpub_nopk"
"public.testpub_tbl1"
@@ -1062,10 +1586,10 @@ Publications:
"testpub_fortbl"
\dRp+ testpub_default
- Publication testpub_default
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | f | f
+ Publication testpub_default
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | f | t | f
Tables:
"pub_test.testpub_nopk"
"public.testpub_tbl1"
@@ -1143,10 +1667,10 @@ REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
DROP TABLE testpub_parted;
DROP TABLE testpub_tbl1;
\dRp+ testpub_default
- Publication testpub_default
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | f | f
+ Publication testpub_default
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | f | t | f
(1 row)
-- fail - must be owner of publication
@@ -1156,20 +1680,20 @@ ERROR: must be owner of publication testpub_default
RESET ROLE;
ALTER PUBLICATION testpub_default RENAME TO testpub_foo;
\dRp testpub_foo
- List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
--------------+--------------------------+------------+---------+---------+---------+-----------+----------
- testpub_foo | regress_publication_user | f | t | t | t | f | f
+ List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+-------------+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ testpub_foo | regress_publication_user | f | f | t | t | t | f | t | f
(1 row)
-- rename back to keep the rest simple
ALTER PUBLICATION testpub_foo RENAME TO testpub_default;
ALTER PUBLICATION testpub_default OWNER TO regress_publication_user2;
\dRp testpub_default
- List of publications
- Name | Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
------------------+---------------------------+------------+---------+---------+---------+-----------+----------
- testpub_default | regress_publication_user2 | f | t | t | t | f | f
+ List of publications
+ Name | Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+-----------------+---------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ testpub_default | regress_publication_user2 | f | f | t | t | t | f | t | f
(1 row)
-- adding schemas and tables
@@ -1185,19 +1709,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
SET client_min_messages = 'ERROR';
CREATE PUBLICATION testpub1_forschema FOR ALL TABLES IN SCHEMA pub_test1;
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
CREATE PUBLICATION testpub2_forschema FOR ALL TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
\dRp+ testpub2_forschema
- Publication testpub2_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub2_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
"pub_test2"
@@ -1211,44 +1735,44 @@ CREATE PUBLICATION testpub6_forschema FOR ALL TABLES IN SCHEMA "CURRENT_SCHEMA",
CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
RESET client_min_messages;
\dRp+ testpub3_forschema
- Publication testpub3_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub3_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"public"
\dRp+ testpub4_forschema
- Publication testpub4_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub4_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"CURRENT_SCHEMA"
\dRp+ testpub5_forschema
- Publication testpub5_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub5_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"CURRENT_SCHEMA"
"public"
\dRp+ testpub6_forschema
- Publication testpub6_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub6_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"CURRENT_SCHEMA"
"public"
\dRp+ testpub_fortable
- Publication testpub_fortable
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_fortable
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"CURRENT_SCHEMA.CURRENT_SCHEMA"
@@ -1282,10 +1806,10 @@ ERROR: schema "testpub_view" does not exist
-- dropping the schema should reflect the change in publication
DROP SCHEMA pub_test3;
\dRp+ testpub2_forschema
- Publication testpub2_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub2_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
"pub_test2"
@@ -1293,20 +1817,20 @@ Tables from schemas:
-- renaming the schema should reflect the change in publication
ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
\dRp+ testpub2_forschema
- Publication testpub2_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub2_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1_renamed"
"pub_test2"
ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
\dRp+ testpub2_forschema
- Publication testpub2_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub2_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
"pub_test2"
@@ -1314,10 +1838,10 @@ Tables from schemas:
-- alter publication add schema
ALTER PUBLICATION testpub1_forschema ADD ALL TABLES IN SCHEMA pub_test2;
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
"pub_test2"
@@ -1326,10 +1850,10 @@ Tables from schemas:
ALTER PUBLICATION testpub1_forschema ADD ALL TABLES IN SCHEMA non_existent_schema;
ERROR: schema "non_existent_schema" does not exist
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
"pub_test2"
@@ -1338,10 +1862,10 @@ Tables from schemas:
ALTER PUBLICATION testpub1_forschema ADD ALL TABLES IN SCHEMA pub_test1;
ERROR: schema "pub_test1" is already member of publication "testpub1_forschema"
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
"pub_test2"
@@ -1349,10 +1873,10 @@ Tables from schemas:
-- alter publication drop schema
ALTER PUBLICATION testpub1_forschema DROP ALL TABLES IN SCHEMA pub_test2;
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
@@ -1360,10 +1884,10 @@ Tables from schemas:
ALTER PUBLICATION testpub1_forschema DROP ALL TABLES IN SCHEMA pub_test2;
ERROR: tables from schema "pub_test2" are not part of the publication
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
@@ -1371,29 +1895,29 @@ Tables from schemas:
ALTER PUBLICATION testpub1_forschema DROP ALL TABLES IN SCHEMA non_existent_schema;
ERROR: schema "non_existent_schema" does not exist
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
-- drop all schemas
ALTER PUBLICATION testpub1_forschema DROP ALL TABLES IN SCHEMA pub_test1;
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
(1 row)
-- alter publication set multiple schema
ALTER PUBLICATION testpub1_forschema SET ALL TABLES IN SCHEMA pub_test1, pub_test2;
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
"pub_test2"
@@ -1402,10 +1926,10 @@ Tables from schemas:
ALTER PUBLICATION testpub1_forschema SET ALL TABLES IN SCHEMA non_existent_schema;
ERROR: schema "non_existent_schema" does not exist
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
"pub_test2"
@@ -1414,10 +1938,10 @@ Tables from schemas:
-- removing the duplicate schemas
ALTER PUBLICATION testpub1_forschema SET ALL TABLES IN SCHEMA pub_test1, pub_test1;
\dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
@@ -1496,18 +2020,18 @@ SET client_min_messages = 'ERROR';
CREATE PUBLICATION testpub3_forschema;
RESET client_min_messages;
\dRp+ testpub3_forschema
- Publication testpub3_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub3_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
(1 row)
ALTER PUBLICATION testpub3_forschema SET ALL TABLES IN SCHEMA pub_test1;
\dRp+ testpub3_forschema
- Publication testpub3_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub3_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables from schemas:
"pub_test1"
@@ -1517,20 +2041,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR ALL TABLES IN SCHEMA pub_test1
CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, ALL TABLES IN SCHEMA pub_test1;
RESET client_min_messages;
\dRp+ testpub_forschema_fortable
- Publication testpub_forschema_fortable
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_forschema_fortable
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"pub_test2.tbl1"
Tables from schemas:
"pub_test1"
\dRp+ testpub_fortable_forschema
- Publication testpub_fortable_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_fortable_forschema
+ Owner | All tables | All sequences | Inserts | Updates | Deletes | Truncates | Sequences | Via root
+--------------------------+------------+---------------+---------+---------+---------+-----------+-----------+----------
+ regress_publication_user | f | f | t | t | t | t | t | f
Tables:
"pub_test2.tbl1"
Tables from schemas:
@@ -1574,40 +2098,85 @@ CREATE SCHEMA sch1;
CREATE SCHEMA sch2;
CREATE TABLE sch1.tbl1 (a int) PARTITION BY RANGE(a);
CREATE TABLE sch2.tbl1_part1 PARTITION OF sch1.tbl1 FOR VALUES FROM (1) to (10);
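+-- sequences to test sequence publications alongside partitioned tables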
+CREATE SEQUENCE sch1.seq1;
+CREATE SEQUENCE sch2.seq2;
-- Schema publication that does not include the schema that has the parent table
CREATE PUBLICATION pub FOR ALL TABLES IN SCHEMA sch2 WITH (PUBLISH_VIA_PARTITION_ROOT=1);
+ALTER PUBLICATION pub ADD ALL SEQUENCES IN SCHEMA sch2;
SELECT * FROM pg_publication_tables;
pubname | schemaname | tablename
---------+------------+------------
pub | sch2 | tbl1_part1
(1 row)
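+-- pg_publication_sequences lists the sequences published by each publication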
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch2 | seq2
+(1 row)
+
DROP PUBLICATION pub;
-- Table publication that does not include the parent table
CREATE PUBLICATION pub FOR TABLE sch2.tbl1_part1 WITH (PUBLISH_VIA_PARTITION_ROOT=1);
+ALTER PUBLICATION pub ADD SEQUENCE sch2.seq2;
SELECT * FROM pg_publication_tables;
pubname | schemaname | tablename
---------+------------+------------
pub | sch2 | tbl1_part1
(1 row)
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch2 | seq2
+(1 row)
+
-- Table publication that includes both the parent table and the child table
ALTER PUBLICATION pub ADD TABLE sch1.tbl1;
+ALTER PUBLICATION pub ADD SEQUENCE sch1.seq1;
SELECT * FROM pg_publication_tables;
pubname | schemaname | tablename
---------+------------+-----------
pub | sch1 | tbl1
(1 row)
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch1 | seq1
+ pub | sch2 | seq2
+(2 rows)
+
DROP PUBLICATION pub;
-- Schema publication that does not include the schema that has the parent table
CREATE PUBLICATION pub FOR ALL TABLES IN SCHEMA sch2 WITH (PUBLISH_VIA_PARTITION_ROOT=0);
+ALTER PUBLICATION pub ADD SEQUENCE sch1.seq1;
SELECT * FROM pg_publication_tables;
pubname | schemaname | tablename
---------+------------+------------
pub | sch2 | tbl1_part1
(1 row)
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch1 | seq1
+(1 row)
+
+DROP PUBLICATION pub;
+-- Sequence publication
+CREATE PUBLICATION pub FOR SEQUENCE sch2.seq2;
+SELECT * FROM pg_publication_tables;
+ pubname | schemaname | tablename
+---------+------------+-----------
+(0 rows)
+
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch2 | seq2
+(1 row)
+
DROP PUBLICATION pub;
-- Table publication that does not include the parent table
CREATE PUBLICATION pub FOR TABLE sch2.tbl1_part1 WITH (PUBLISH_VIA_PARTITION_ROOT=0);
@@ -1617,14 +2186,26 @@ SELECT * FROM pg_publication_tables;
pub | sch2 | tbl1_part1
(1 row)
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+(0 rows)
+
-- Table publication that includes both the parent table and the child table
ALTER PUBLICATION pub ADD TABLE sch1.tbl1;
+ALTER PUBLICATION pub ADD ALL SEQUENCES IN SCHEMA sch2;
SELECT * FROM pg_publication_tables;
pubname | schemaname | tablename
---------+------------+------------
pub | sch2 | tbl1_part1
(1 row)
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch2 | seq2
+(1 row)
+
DROP PUBLICATION pub;
DROP TABLE sch2.tbl1_part1;
DROP TABLE sch1.tbl1;
@@ -1640,9 +2221,81 @@ SELECT * FROM pg_publication_tables;
pub | sch1 | tbl1
(1 row)
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+(0 rows)
+
+DROP PUBLICATION pub;
+-- Sequence publication
+CREATE PUBLICATION pub FOR SEQUENCE sch2.seq2;
+SELECT * FROM pg_publication_tables;
+ pubname | schemaname | tablename
+---------+------------+-----------
+(0 rows)
+
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch2 | seq2
+(1 row)
+
+DROP PUBLICATION pub;
+-- Schema publication
+CREATE PUBLICATION pub FOR ALL SEQUENCES IN SCHEMA sch2;
+SELECT * FROM pg_publication_tables;
+ pubname | schemaname | tablename
+---------+------------+-----------
+(0 rows)
+
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch2 | seq2
+(1 row)
+
+ALTER PUBLICATION pub ADD SEQUENCE sch1.seq1;
+SELECT * FROM pg_publication_tables;
+ pubname | schemaname | tablename
+---------+------------+-----------
+(0 rows)
+
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch1 | seq1
+ pub | sch2 | seq2
+(2 rows)
+
+ALTER PUBLICATION pub DROP SEQUENCE sch1.seq1;
+SELECT * FROM pg_publication_tables;
+ pubname | schemaname | tablename
+---------+------------+-----------
+(0 rows)
+
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch2 | seq2
+(1 row)
+
+ALTER PUBLICATION pub ADD ALL SEQUENCES IN SCHEMA sch1;
+SELECT * FROM pg_publication_tables;
+ pubname | schemaname | tablename
+---------+------------+-----------
+(0 rows)
+
+SELECT * FROM pg_publication_sequences;
+ pubname | schemaname | sequencename
+---------+------------+--------------
+ pub | sch1 | seq1
+ pub | sch2 | seq2
+(2 rows)
+
RESET client_min_messages;
DROP PUBLICATION pub;
DROP TABLE sch1.tbl1;
+DROP SEQUENCE sch1.seq1, sch2.seq2;
DROP SCHEMA sch1 cascade;
DROP SCHEMA sch2 cascade;
RESET SESSION AUTHORIZATION;
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index f29375c2a90..db652ea8d89 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1435,6 +1435,14 @@ pg_prepared_xacts| SELECT p.transaction,
FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid)
LEFT JOIN pg_authid u ON ((p.ownerid = u.oid)))
LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
+pg_publication_sequences| SELECT p.pubname,
+ n.nspname AS schemaname,
+ c.relname AS sequencename
+ FROM pg_publication p,
+ LATERAL pg_get_publication_sequences((p.pubname)::text) gps(relid),
+ (pg_class c
+ JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
+ WHERE (c.oid = gps.relid);
pg_publication_tables| SELECT p.pubname,
n.nspname AS schemaname,
c.relname AS tablename
diff --git a/src/test/regress/sql/object_address.sql b/src/test/regress/sql/object_address.sql
index acd0468a9d9..9d8323468d6 100644
--- a/src/test/regress/sql/object_address.sql
+++ b/src/test/regress/sql/object_address.sql
@@ -49,6 +49,7 @@ CREATE TRANSFORM FOR int LANGUAGE SQL (
SET client_min_messages = 'ERROR';
CREATE PUBLICATION addr_pub FOR TABLE addr_nsp.gentable;
CREATE PUBLICATION addr_pub_schema FOR ALL TABLES IN SCHEMA addr_nsp;
+CREATE PUBLICATION addr_pub_schema2 FOR ALL SEQUENCES IN SCHEMA addr_nsp;
RESET client_min_messages;
CREATE SUBSCRIPTION regress_addr_sub CONNECTION '' PUBLICATION bar WITH (connect = false, slot_name = NONE);
CREATE STATISTICS addr_nsp.gentable_stat ON a, b FROM addr_nsp.gentable;
@@ -198,7 +199,8 @@ WITH objects (type, name, args) AS (VALUES
('transform', '{int}', '{sql}'),
('access method', '{btree}', '{}'),
('publication', '{addr_pub}', '{}'),
- ('publication namespace', '{addr_nsp}', '{addr_pub_schema}'),
+ ('publication namespace', '{addr_nsp}', '{addr_pub_schema,t}'),
+ ('publication namespace', '{addr_nsp}', '{addr_pub_schema2,s}'),
('publication relation', '{addr_nsp, gentable}', '{addr_pub}'),
('subscription', '{regress_addr_sub}', '{}'),
('statistics object', '{addr_nsp, gentable_stat}', '{}')
@@ -218,6 +220,7 @@ SELECT (pg_identify_object(addr1.classid, addr1.objid, addr1.objsubid)).*,
DROP FOREIGN DATA WRAPPER addr_fdw CASCADE;
DROP PUBLICATION addr_pub;
DROP PUBLICATION addr_pub_schema;
+DROP PUBLICATION addr_pub_schema2;
DROP SUBSCRIPTION regress_addr_sub;
DROP SCHEMA addr_nsp CASCADE;
diff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql
index 8539110025e..96b02947fa2 100644
--- a/src/test/regress/sql/publication.sql
+++ b/src/test/regress/sql/publication.sql
@@ -27,7 +27,7 @@ CREATE PUBLICATION testpub_xxx WITH (publish_via_partition_root = 'true', publis
\dRp
-ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete');
+ALTER PUBLICATION testpub_default SET (publish = 'insert, update, delete, sequence');
\dRp
@@ -46,6 +46,8 @@ ALTER PUBLICATION testpub_foralltables SET (publish = 'insert, update');
CREATE TABLE testpub_tbl2 (id serial primary key, data text);
-- fail - can't add to for all tables publication
ALTER PUBLICATION testpub_foralltables ADD TABLE testpub_tbl2;
+-- fail - can't add a table using ADD SEQUENCE command
+ALTER PUBLICATION testpub_foralltables ADD SEQUENCE testpub_tbl2;
-- fail - can't drop from all tables publication
ALTER PUBLICATION testpub_foralltables DROP TABLE testpub_tbl2;
-- fail - can't add to for all tables publication
@@ -104,6 +106,199 @@ RESET client_min_messages;
DROP TABLE testpub_tbl3, testpub_tbl3a;
DROP PUBLICATION testpub3, testpub4;
+--- adding sequences
+CREATE SEQUENCE testpub_seq0;
+CREATE SEQUENCE pub_test.testpub_seq1;
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_forallsequences FOR ALL SEQUENCES WITH (publish = 'sequence');
+RESET client_min_messages;
+ALTER PUBLICATION testpub_forallsequences SET (publish = 'insert, sequence');
+
+CREATE SEQUENCE testpub_seq2;
+-- fail - can't add to for all sequences publication
+ALTER PUBLICATION testpub_forallsequences ADD SEQUENCE testpub_seq2;
+-- fail - can't drop from all sequences publication
+ALTER PUBLICATION testpub_forallsequences DROP SEQUENCE testpub_seq2;
+-- fail - can't add to for all sequences publication
+ALTER PUBLICATION testpub_forallsequences SET SEQUENCE pub_test.testpub_seq1;
+
+-- fail - can't add schema to 'FOR ALL SEQUENCES' publication
+ALTER PUBLICATION testpub_forallsequences ADD ALL SEQUENCES IN SCHEMA pub_test;
+-- fail - can't drop schema from 'FOR ALL SEQUENCES' publication
+ALTER PUBLICATION testpub_forallsequences DROP ALL SEQUENCES IN SCHEMA pub_test;
+-- fail - can't set schema to 'FOR ALL SEQUENCES' publication
+ALTER PUBLICATION testpub_forallsequences SET ALL SEQUENCES IN SCHEMA pub_test;
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_forsequence FOR SEQUENCE testpub_seq0;
+RESET client_min_messages;
+-- should be able to add schema to 'FOR SEQUENCE' publication
+ALTER PUBLICATION testpub_forsequence ADD ALL SEQUENCES IN SCHEMA pub_test;
+\dRp+ testpub_forsequence
+-- fail - can't add sequence from the schema we already added
+ALTER PUBLICATION testpub_forsequence ADD SEQUENCE pub_test.testpub_seq1;
+-- fail - can't add sequence using ADD TABLE command
+ALTER PUBLICATION testpub_forsequence ADD TABLE pub_test.testpub_seq1;
+-- should be able to drop schema from 'FOR SEQUENCE' publication
+ALTER PUBLICATION testpub_forsequence DROP ALL SEQUENCES IN SCHEMA pub_test;
+\dRp+ testpub_forsequence
+-- should be able to set schema to 'FOR SEQUENCE' publication
+ALTER PUBLICATION testpub_forsequence SET ALL SEQUENCES IN SCHEMA pub_test;
+\dRp+ testpub_forsequence
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_forschema FOR ALL SEQUENCES IN SCHEMA pub_test;
+RESET client_min_messages;
+-- fail - can't create publication with schema and sequence of the same schema
+CREATE PUBLICATION testpub_for_seq_schema FOR ALL SEQUENCES IN SCHEMA pub_test, SEQUENCE pub_test.testpub_seq1;
+-- fail - can't add a sequence of the same schema to the schema publication
+ALTER PUBLICATION testpub_forschema ADD SEQUENCE pub_test.testpub_seq1;
+-- fail - can't drop a sequence from the schema publication which isn't in the
+-- publication
+ALTER PUBLICATION testpub_forschema DROP SEQUENCE pub_test.testpub_seq1;
+-- should be able to set sequence to schema publication
+ALTER PUBLICATION testpub_forschema SET SEQUENCE pub_test.testpub_seq1;
+\dRp+ testpub_forschema
+
+SELECT pubname, puballtables, puballsequences FROM pg_publication WHERE pubname = 'testpub_forallsequences';
+\d+ pub_test.testpub_seq1
+\dRp+ testpub_forallsequences
+DROP SEQUENCE testpub_seq0, pub_test.testpub_seq1, testpub_seq2;
+DROP PUBLICATION testpub_forallsequences, testpub_forsequence, testpub_forschema;
+
+
+-- publication testing multiple sequences at the same time
+CREATE SEQUENCE testpub_seq1;
+CREATE SEQUENCE testpub_seq2;
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_multi FOR SEQUENCE testpub_seq1, testpub_seq2;
+RESET client_min_messages;
+
+\dRp+ testpub_multi
+
+DROP PUBLICATION testpub_multi;
+DROP SEQUENCE testpub_seq1;
+DROP SEQUENCE testpub_seq2;
+
+
+-- Publication mixing tables and sequences
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_mix;
+RESET client_min_messages;
+
+CREATE SEQUENCE testpub_seq1;
+CREATE SEQUENCE pub_test.testpub_seq2;
+
+ALTER PUBLICATION testpub_mix ADD SEQUENCE testpub_seq1, TABLE testpub_tbl1;
+\dRp+ testpub_mix
+
+ALTER PUBLICATION testpub_mix ADD ALL SEQUENCES IN SCHEMA pub_test, ALL TABLES IN SCHEMA pub_test;
+\dRp+ testpub_mix
+
+ALTER PUBLICATION testpub_mix DROP ALL SEQUENCES IN SCHEMA pub_test;
+\dRp+ testpub_mix
+
+ALTER PUBLICATION testpub_mix DROP ALL TABLES IN SCHEMA pub_test;
+\dRp+ testpub_mix
+
+DROP PUBLICATION testpub_mix;
+DROP SEQUENCE testpub_seq1;
+DROP SEQUENCE pub_test.testpub_seq2;
+
+
+-- make sure we replicate only the correct relation type
+CREATE SCHEMA pub_test1;
+CREATE SEQUENCE pub_test1.test_seq1;
+CREATE TABLE pub_test1.test_tbl1 (a int primary key, b int);
+
+CREATE SCHEMA pub_test2;
+CREATE SEQUENCE pub_test2.test_seq2;
+CREATE TABLE pub_test2.test_tbl2 (a int primary key, b int);
+
+SET client_min_messages = 'ERROR';
+CREATE PUBLICATION testpub_schemas;
+RESET client_min_messages;
+
+-- add tables from one schema, sequences from the other
+ALTER PUBLICATION testpub_schemas ADD ALL TABLES IN SCHEMA pub_test2;
+ALTER PUBLICATION testpub_schemas ADD ALL SEQUENCES IN SCHEMA pub_test1;
+
+\dRp+ testpub_schemas
+
+\dn+ pub_test1
+\dn+ pub_test2
+
+\d+ pub_test1.test_seq1;
+\d+ pub_test1.test_tbl1;
+
+\d+ pub_test2.test_seq2;
+\d+ pub_test2.test_tbl2;
+
+-- add the other object type from each schema
+ALTER PUBLICATION testpub_schemas ADD ALL TABLES IN SCHEMA pub_test1;
+ALTER PUBLICATION testpub_schemas ADD ALL SEQUENCES IN SCHEMA pub_test2;
+
+\dRp+ testpub_schemas
+
+\dn+ pub_test1
+\dn+ pub_test2
+
+\d+ pub_test1.test_seq1;
+\d+ pub_test1.test_tbl1;
+
+\d+ pub_test2.test_seq2;
+\d+ pub_test2.test_tbl2;
+
+-- now drop the object type added first
+ALTER PUBLICATION testpub_schemas DROP ALL TABLES IN SCHEMA pub_test2;
+ALTER PUBLICATION testpub_schemas DROP ALL SEQUENCES IN SCHEMA pub_test1;
+
+\dRp+ testpub_schemas
+
+\dn+ pub_test1
+\dn+ pub_test2
+
+\d+ pub_test1.test_seq1;
+\d+ pub_test1.test_tbl1;
+
+\d+ pub_test2.test_seq2;
+\d+ pub_test2.test_tbl2;
+
+-- should fail (publication contains the whole schema)
+ALTER PUBLICATION testpub_schemas ADD TABLE pub_test1.test_tbl1;
+ALTER PUBLICATION testpub_schemas ADD SEQUENCE pub_test2.test_seq2;
+
+-- should work (different schema)
+ALTER PUBLICATION testpub_schemas ADD TABLE pub_test2.test_tbl2;
+ALTER PUBLICATION testpub_schemas ADD SEQUENCE pub_test1.test_seq1;
+
+\dRp+ testpub_schemas
+
+\d+ pub_test1.test_seq1;
+\d+ pub_test1.test_tbl1;
+
+\d+ pub_test2.test_seq2;
+\d+ pub_test2.test_tbl2;
+
+-- now drop the explicitly added objects again
+ALTER PUBLICATION testpub_schemas DROP TABLE pub_test2.test_tbl2;
+ALTER PUBLICATION testpub_schemas DROP SEQUENCE pub_test1.test_seq1;
+
+\dRp+ testpub_schemas
+
+\d+ pub_test1.test_seq1;
+\d+ pub_test1.test_tbl1;
+
+\d+ pub_test2.test_seq2;
+\d+ pub_test2.test_tbl2;
+
+DROP PUBLICATION testpub_schemas;
+DROP TABLE pub_test1.test_tbl1, pub_test2.test_tbl2;
+DROP SEQUENCE pub_test1.test_seq1, pub_test2.test_seq2;
+DROP SCHEMA pub_test1, pub_test2;
+
-- Tests for partitioned tables
SET client_min_messages = 'ERROR';
CREATE PUBLICATION testpub_forparted;
@@ -1004,32 +1199,51 @@ CREATE SCHEMA sch1;
CREATE SCHEMA sch2;
CREATE TABLE sch1.tbl1 (a int) PARTITION BY RANGE(a);
CREATE TABLE sch2.tbl1_part1 PARTITION OF sch1.tbl1 FOR VALUES FROM (1) to (10);
+CREATE SEQUENCE sch1.seq1;
+CREATE SEQUENCE sch2.seq2;
-- Schema publication that does not include the schema that has the parent table
CREATE PUBLICATION pub FOR ALL TABLES IN SCHEMA sch2 WITH (PUBLISH_VIA_PARTITION_ROOT=1);
+ALTER PUBLICATION pub ADD ALL SEQUENCES IN SCHEMA sch2;
SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
DROP PUBLICATION pub;
-- Table publication that does not include the parent table
CREATE PUBLICATION pub FOR TABLE sch2.tbl1_part1 WITH (PUBLISH_VIA_PARTITION_ROOT=1);
+ALTER PUBLICATION pub ADD SEQUENCE sch2.seq2;
SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
-- Table publication that includes both the parent table and the child table
ALTER PUBLICATION pub ADD TABLE sch1.tbl1;
+ALTER PUBLICATION pub ADD SEQUENCE sch1.seq1;
SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
DROP PUBLICATION pub;
-- Schema publication that does not include the schema that has the parent table
CREATE PUBLICATION pub FOR ALL TABLES IN SCHEMA sch2 WITH (PUBLISH_VIA_PARTITION_ROOT=0);
+ALTER PUBLICATION pub ADD SEQUENCE sch1.seq1;
+SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
+
+DROP PUBLICATION pub;
+-- Sequence publication
+CREATE PUBLICATION pub FOR SEQUENCE sch2.seq2;
SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
DROP PUBLICATION pub;
-- Table publication that does not include the parent table
CREATE PUBLICATION pub FOR TABLE sch2.tbl1_part1 WITH (PUBLISH_VIA_PARTITION_ROOT=0);
SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
-- Table publication that includes both the parent table and the child table
ALTER PUBLICATION pub ADD TABLE sch1.tbl1;
+ALTER PUBLICATION pub ADD ALL SEQUENCES IN SCHEMA sch2;
SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
DROP PUBLICATION pub;
DROP TABLE sch2.tbl1_part1;
@@ -1042,10 +1256,36 @@ CREATE TABLE sch1.tbl1_part3 (a int) PARTITION BY RANGE(a);
ALTER TABLE sch1.tbl1 ATTACH PARTITION sch1.tbl1_part3 FOR VALUES FROM (20) to (30);
CREATE PUBLICATION pub FOR ALL TABLES IN SCHEMA sch1 WITH (PUBLISH_VIA_PARTITION_ROOT=1);
SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
+
+DROP PUBLICATION pub;
+-- Sequence publication
+CREATE PUBLICATION pub FOR SEQUENCE sch2.seq2;
+SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
+
+DROP PUBLICATION pub;
+-- Schema publication
+CREATE PUBLICATION pub FOR ALL SEQUENCES IN SCHEMA sch2;
+SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
+
+ALTER PUBLICATION pub ADD SEQUENCE sch1.seq1;
+SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
+
+ALTER PUBLICATION pub DROP SEQUENCE sch1.seq1;
+SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
+
+ALTER PUBLICATION pub ADD ALL SEQUENCES IN SCHEMA sch1;
+SELECT * FROM pg_publication_tables;
+SELECT * FROM pg_publication_sequences;
RESET client_min_messages;
DROP PUBLICATION pub;
DROP TABLE sch1.tbl1;
+DROP SEQUENCE sch1.seq1, sch2.seq2;
DROP SCHEMA sch1 cascade;
DROP SCHEMA sch2 cascade;
diff --git a/src/test/subscription/t/030_sequences.pl b/src/test/subscription/t/030_sequences.pl
new file mode 100644
index 00000000000..9ae3c03d7d1
--- /dev/null
+++ b/src/test/subscription/t/030_sequences.pl
@@ -0,0 +1,202 @@
+
+# Copyright (c) 2021, PostgreSQL Global Development Group
+
+# This tests that sequences are replicated correctly by logical replication
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Initialize publisher node
+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->start;
+
+# Create subscriber node
+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+# Create some preexisting content on publisher
+my $ddl = qq(
+ CREATE TABLE seq_test (v BIGINT);
+ CREATE SEQUENCE s;
+);
+
+# Setup structure on the publisher
+$node_publisher->safe_psql('postgres', $ddl);
+
+# Create the same structure on the subscriber, and an extra sequence that
+# we'll create on the publisher later
+$ddl = qq(
+ CREATE TABLE seq_test (v BIGINT);
+ CREATE SEQUENCE s;
+ CREATE SEQUENCE s2;
+);
+
+$node_subscriber->safe_psql('postgres', $ddl);
+
+# Setup logical replication
+my $publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+$node_publisher->safe_psql('postgres',
+ "CREATE PUBLICATION seq_pub");
+
+$node_publisher->safe_psql('postgres',
+ "ALTER PUBLICATION seq_pub ADD SEQUENCE s");
+
+$node_subscriber->safe_psql('postgres',
+ "CREATE SUBSCRIPTION seq_sub CONNECTION '$publisher_connstr' PUBLICATION seq_pub"
+);
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Wait for initial sync to finish as well
+my $synced_query =
+ "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('s', 'r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+ or die "Timed out while waiting for subscriber to synchronize data";
+
+# Insert initial test data
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ -- generate a number of values using the sequence
+ INSERT INTO seq_test SELECT nextval('s') FROM generate_series(1,100);
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+my $result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s;
+));
+
+is( $result, '132|0|t',
+ 'initial test data replicated');
+
+
+# advance the sequence in a rolled-back transaction - the rollback
+# does not wait for the replication, so we could see any intermediate state;
+# do something else after the test to ensure we wait for everything
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ INSERT INTO seq_test SELECT nextval('s') FROM generate_series(1,100);
+ ROLLBACK;
+ INSERT INTO seq_test VALUES (-1);
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s;
+));
+
+is( $result, '231|0|t',
+ 'advance sequence in rolled-back transaction');
+
+
+# create a new sequence and roll it back - should not be replicated, due to
+# the transactional behavior
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ CREATE SEQUENCE s2;
+ ALTER PUBLICATION seq_pub ADD SEQUENCE s2;
+ INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '1|0|f',
+ 'create new sequence and roll it back');
+
+
+# create a new sequence, advance it in a rolled-back transaction, but commit
+# the create - the advance should be replicated nevertheless
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ CREATE SEQUENCE s2;
+ ALTER PUBLICATION seq_pub ADD SEQUENCE s2;
+ SAVEPOINT sp1;
+ INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK TO sp1;
+ COMMIT;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Wait for sync of the second sequence we just added to finish
+$synced_query =
+ "SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('s', 'r');";
+$node_subscriber->poll_query_until('postgres', $synced_query)
+ or die "Timed out while waiting for subscriber to synchronize data";
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '132|0|t',
+ 'create sequence, advance it in rolled-back transaction, but commit the create');
+
+
+# advance the new sequence in a transaction, and roll it back - the rollback
+# does not wait for the replication, so we could see any intermediate state;
+# do something else after the test to ensure we wait for everything
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK;
+ INSERT INTO seq_test VALUES (-1);
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '231|0|t',
+ 'advance the new sequence in a transaction and roll it back');
+
+
+# advance the sequence in a subtransaction - the subtransaction gets rolled
+# back, but commit the main one - the changes should still be replicated
+$node_publisher->safe_psql(
+ 'postgres', qq(
+ BEGIN;
+ SAVEPOINT s1;
+ INSERT INTO seq_test SELECT nextval('s2') FROM generate_series(1,100);
+ ROLLBACK TO s1;
+ COMMIT;
+));
+
+$node_publisher->wait_for_catchup('seq_sub');
+
+# Check the data on subscriber
+$result = $node_subscriber->safe_psql(
+ 'postgres', qq(
+ SELECT * FROM s2;
+));
+
+is( $result, '330|0|t',
+ 'advance sequence in a subtransaction');
+
+
+done_testing();
Some typos I found before the patch was reverted.
diff --git a/doc/src/sgml/logicaldecoding.sgml b/doc/src/sgml/logicaldecoding.sgml
index a6ea6ff3fcf..d4bd8d41c4b 100644
--- a/doc/src/sgml/logicaldecoding.sgml
+++ b/doc/src/sgml/logicaldecoding.sgml
@@ -834,9 +834,8 @@ typedef void (*LogicalDecodeSequenceCB) (struct LogicalDecodingContext *ctx,
non-transactional increments, the transaction may be either NULL or not
NULL, depending on if the transaction already has an XID assigned.
The <parameter>sequence_lsn</parameter> has the WAL location of the
- sequence update. <parameter>transactional</parameter> says if the
- sequence has to be replayed as part of the transaction or directly.
-
+ sequence update. <parameter>transactional</parameter> indicates whether
+ the sequence has to be replayed as part of the transaction or directly.
The <parameter>last_value</parameter>, <parameter>log_cnt</parameter> and
<parameter>is_called</parameter> parameters describe the sequence change.
</para>
diff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c
index 60866431db3..b71122cce5d 100644
--- a/src/backend/replication/logical/reorderbuffer.c
+++ b/src/backend/replication/logical/reorderbuffer.c
@@ -927,7 +927,7 @@ ReorderBufferQueueMessage(ReorderBuffer *rb, TransactionId xid,
* Treat the sequence increment as transactional?
*
* The hash table tracks all sequences created in in-progress transactions,
- * so we simply do a lookup (the sequence is identified by relfilende). If
+ * so we simply do a lookup (the sequence is identified by relfilenode). If
* we find a match, the increment should be handled as transactional.
*/
bool
@@ -2255,7 +2255,7 @@ ReorderBufferApplySequence(ReorderBuffer *rb, ReorderBufferTXN *txn,
tuple = &change->data.sequence.tuple->tuple;
seq = (Form_pg_sequence_data) GETSTRUCT(tuple);
- /* Only ever called from ReorderBufferApplySequence, so transational. */
+ /* Only ever called from ReorderBufferApplySequence, so transactional. */
if (streaming)
rb->stream_sequence(rb, txn, change->lsn, relation, true,
seq->last_value, seq->log_cnt, seq->is_called);
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index a15ce9edb13..8193bfe6515 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -5601,7 +5601,7 @@ RelationBuildPublicationDesc(Relation relation, PublicationDesc *pubdesc)
GetSchemaPublications(schemaid, objType));
/*
- * If this is a partion (and thus a table), lookup all ancestors and track
+ * If this is a partition (and thus a table), lookup all ancestors and track
* all publications them too.
*/
if (relation->rd_rel->relispartition)
But of course, if we expect/require to have a perfect snapshot for that
exact position in the transaction, this won't work. IMO the whole idea
that we can have non-transactional bits in naturally transactional
decoding seems a bit suspicious (at least in hindsight).
No matter what we do for sequences, though, this still affects logical
messages too. Not sure what to do there :-(
Hi, I spent some time trying to understand this problem while I was
evaluating its impact on the DDL replication in [1]. I think for DDL
we could always remove the non-transactional bits, since DDL will
probably always be processed transactionally.
I attempted to solve the problem for messages. Here is a potential
solution: keep track of the LSN of the last decoded/acked
non-transactional message/operation and use it to check whether a
non-transactional message record should be skipped during decoding.
To do that I added new fields
ReplicationSlotPersistentData.non_xact_op_at,
XLogReaderState.NonXactOpRecPtr and
SnapBuild.start_decoding_nonxactop_at.
This is the end LSN of the last non-transactional message/operation
decoded/acked. I verified this approach solves the issue of missing
decoding of non-transactional messages under concurrency, i.e. before
the builder state reaches SNAPBUILD_CONSISTENT. Once the builder state
reaches SNAPBUILD_CONSISTENT, the new field
ReplicationSlotPersistentData.non_xact_op_at can be set to
ReplicationSlotPersistentData.confirmed_flush.
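To make the approach concrete, here is a condensed sketch of the core pieces;
the authoritative version is the attached patch below, and remember_nonxact_op()
is only a hypothetical name for the bookkeeping the patch adds to the message
callback wrappers:

    /* Should a non-transactional record ending at 'ptr' be skipped? */
    bool
    SnapBuildNonXactOpNeedsSkip(SnapBuild *builder, XLogRecPtr ptr)
    {
        return ptr < builder->start_decoding_nonxactop_at;
    }

    /*
     * After the output plugin has consumed a non-transactional message,
     * remember its LSN so a later decoding pass can skip it.
     */
    static void
    remember_nonxact_op(LogicalDecodingContext *ctx, XLogRecPtr message_lsn)
    {
        SnapBuildSetNonXactOpAt(ctx->snapshot_builder, message_lsn);
        ctx->reader->NonXactOpRecPtr = message_lsn;
    }

The reader's NonXactOpRecPtr is then handed to LogicalConfirmReceivedLocation()
together with the confirmed flush LSN, so the slot persists it in
ReplicationSlotPersistentData.non_xact_op_at.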
Similar to the sequence issue, here is the test case for logical messages:
Test concurrent execution in three sessions, arranged so that
pg_logical_emit_message in session-2 happens before we reach a consistent
point while the commit happens after the consistent point:
Session-2:
Begin;
SELECT pg_current_xact_id();
Session-1:
SELECT 'init' FROM pg_create_logical_replication_slot('test_slot',
'test_decoding', false, true);
Session-3:
Begin;
SELECT pg_current_xact_id();
Session-2:
Commit;
Begin;
SELECT pg_logical_emit_message(true, 'test_decoding', 'msg1');
SELECT pg_logical_emit_message(false, 'test_decoding', 'msg2');
Session-3:
Commit;
Session-1: (at this point, the session will crash without the fix)
SELECT data FROM pg_logical_slot_get_changes('test_slot', NULL, NULL,
'force-binary', '0', 'skip-empty-xacts', '1');
data
---------------------------------------------------------------------
message: transactional: 0 prefix: test_decoding, sz: 4 content:msg1
Session-2:
Commit;
Session-1:
SELECT data FROM pg_logical_slot_get_changes('test_slot', NULL,
NULL, 'force-binary', '0', 'skip-empty-xacts', '1');
data
---------------------------------------------------------------------
message: transactional: 1 prefix: test_decoding, sz: 4 content:msg2
I also tried the same approach on sequences (on a commit before the
revert of sequence replication) and it seems to be working but
I think it needs further testing.
Patch 0001-Intorduce-new-field-ReplicationSlotPersistentData.no.patch
applies on master and contains the fix for logical messages.
[1]: /messages/by-id/CAAD30U+pVmfKwUKy8cbZOnUXyguJ-uBNejwD75Kyo=OjdQGJ9g@mail.gmail.com
Thoughts?
With Regards,
Zheng
Attachments:
0001-Intorduce-new-field-ReplicationSlotPersistentData.no.patch (application/octet-stream)
From 34621aef44b38af447c670f94da9e703bdd495fa Mon Sep 17 00:00:00 2001
From: "Zheng (Zane) Li" <zhelli@amazon.com>
Date: Fri, 20 May 2022 19:55:50 +0000
Subject: [PATCH] Intorduce new field
ReplicationSlotPersistentData.non_xact_op_at, XLogReaderState.NonXactOpRecPtr
and SnapBuild.start_decoding_nonxactop_at. This is the end LSN of the last
non-transactional message/operation decoded. We introduce this new field in
an attempt to solve the issue of missing decoding of non-transactional
messages/operations under concurrency, i.e. before the build state reaches
SNAPBUILD_CONSISTENT. Once the build state reaches SNAPBUILD_CONSISTENT,
non_xact_op_at can be set to ReplicationSlotPersistentData.confirmed_flush.
---
src/backend/replication/logical/decode.c | 18 ++++++++++---
src/backend/replication/logical/logical.c | 25 +++++++++++++++---
.../replication/logical/logicalfuncs.c | 2 +-
src/backend/replication/logical/snapbuild.c | 26 +++++++++++++++++--
src/backend/replication/slotfuncs.c | 2 +-
src/backend/replication/walsender.c | 2 +-
src/include/access/xlogreader.h | 3 +++
src/include/replication/logical.h | 2 +-
src/include/replication/slot.h | 9 +++++++
src/include/replication/snapbuild.h | 5 +++-
10 files changed, 80 insertions(+), 14 deletions(-)
diff --git a/src/backend/replication/logical/decode.c b/src/backend/replication/logical/decode.c
index aa2427ba73..a0b700a662 100644
--- a/src/backend/replication/logical/decode.c
+++ b/src/backend/replication/logical/decode.c
@@ -589,12 +589,24 @@ logicalmsg_decode(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
if (message->transactional &&
!SnapBuildProcessChange(builder, xid, buf->origptr))
return;
+
else if (!message->transactional &&
- (SnapBuildCurrentState(builder) != SNAPBUILD_CONSISTENT ||
- SnapBuildXactNeedsSkip(builder, buf->origptr)))
+ ((SnapBuildCurrentState(builder) < SNAPBUILD_FULL_SNAPSHOT) ||
+ /* Can't process non-transactional msg immediately if consumer functions are not provided */
+ ctx->prepare_write == NULL ||
+ SnapBuildNonXactOpNeedsSkip(builder, buf->origptr)))
return;
- snapshot = SnapBuildGetOrBuildSnapshot(builder, xid);
+ /*
+ * Build or get snapshot for non-transactional msg for immediate processing.
+ * Transactional msg will be queued and processed upon decoding of the commit
+ * record.
+ */
+ if (!message->transactional)
+ {
+ snapshot = SnapBuildGetOrBuildSnapshot(builder, xid);
+ }
+
ReorderBufferQueueMessage(ctx->reorder, xid, snapshot, buf->endptr,
message->transactional,
message->message, /* first part of message is
diff --git a/src/backend/replication/logical/logical.c b/src/backend/replication/logical/logical.c
index 625a7f4273..302a61efb8 100644
--- a/src/backend/replication/logical/logical.c
+++ b/src/backend/replication/logical/logical.c
@@ -208,7 +208,8 @@ StartupDecodingContext(List *output_plugin_options,
ctx->reorder = ReorderBufferAllocate();
ctx->snapshot_builder =
AllocateSnapshotBuilder(ctx->reorder, xmin_horizon, start_lsn,
- need_full_snapshot, slot->data.two_phase_at);
+ need_full_snapshot, slot->data.two_phase_at,
+ slot->data.non_xact_op_at);
ctx->reorder->private_data = ctx;
@@ -1216,6 +1217,12 @@ message_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
ctx->callbacks.message_cb(ctx, txn, message_lsn, transactional, prefix,
message_size, message);
+ if (!transactional)
+ {
+ SnapBuildSetNonXactOpAt(ctx->snapshot_builder, message_lsn);
+ ctx->reader->NonXactOpRecPtr = message_lsn;
+ }
+
/* Pop the error context stack */
error_context_stack = errcallback.previous;
}
@@ -1531,6 +1538,12 @@ stream_message_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,
ctx->callbacks.stream_message_cb(ctx, txn, message_lsn, transactional, prefix,
message_size, message);
+ if (!transactional)
+ {
+ SnapBuildSetNonXactOpAt(ctx->snapshot_builder, message_lsn);
+ ctx->reader->NonXactOpRecPtr = message_lsn;
+ }
+
/* Pop the error context stack */
error_context_stack = errcallback.previous;
}
@@ -1648,7 +1661,7 @@ LogicalIncreaseXminForSlot(XLogRecPtr current_lsn, TransactionId xmin)
/* candidate already valid with the current flush position, apply */
if (updated_xmin)
- LogicalConfirmReceivedLocation(slot->data.confirmed_flush);
+ LogicalConfirmReceivedLocation(slot->data.confirmed_flush, InvalidXLogRecPtr);
}
/*
@@ -1726,14 +1739,14 @@ LogicalIncreaseRestartDecodingForSlot(XLogRecPtr current_lsn, XLogRecPtr restart
/* candidates are already valid with the current flush position, apply */
if (updated_lsn)
- LogicalConfirmReceivedLocation(slot->data.confirmed_flush);
+ LogicalConfirmReceivedLocation(slot->data.confirmed_flush, InvalidXLogRecPtr);
}
/*
* Handle a consumer's confirmation having received all changes up to lsn.
*/
void
-LogicalConfirmReceivedLocation(XLogRecPtr lsn)
+LogicalConfirmReceivedLocation(XLogRecPtr lsn, XLogRecPtr lsn_nonxact)
{
Assert(lsn != InvalidXLogRecPtr);
@@ -1747,6 +1760,8 @@ LogicalConfirmReceivedLocation(XLogRecPtr lsn)
SpinLockAcquire(&MyReplicationSlot->mutex);
MyReplicationSlot->data.confirmed_flush = lsn;
+ if (lsn_nonxact != InvalidXLogRecPtr)
+ MyReplicationSlot->data.non_xact_op_at = lsn_nonxact;
/* if we're past the location required for bumping xmin, do so */
if (MyReplicationSlot->candidate_xmin_lsn != InvalidXLogRecPtr &&
@@ -1812,6 +1827,8 @@ LogicalConfirmReceivedLocation(XLogRecPtr lsn)
{
SpinLockAcquire(&MyReplicationSlot->mutex);
MyReplicationSlot->data.confirmed_flush = lsn;
+ if (lsn_nonxact != InvalidXLogRecPtr)
+ MyReplicationSlot->data.non_xact_op_at = lsn_nonxact;
SpinLockRelease(&MyReplicationSlot->mutex);
}
}
diff --git a/src/backend/replication/logical/logicalfuncs.c b/src/backend/replication/logical/logicalfuncs.c
index 6058d36e0d..5d579218d8 100644
--- a/src/backend/replication/logical/logicalfuncs.c
+++ b/src/backend/replication/logical/logicalfuncs.c
@@ -294,7 +294,7 @@ pg_logical_slot_get_changes_guts(FunctionCallInfo fcinfo, bool confirm, bool bin
*/
if (ctx->reader->EndRecPtr != InvalidXLogRecPtr && confirm)
{
- LogicalConfirmReceivedLocation(ctx->reader->EndRecPtr);
+ LogicalConfirmReceivedLocation(ctx->reader->EndRecPtr, ctx->reader->NonXactOpRecPtr);
/*
* If only the confirmed_flush_lsn has changed the slot won't get
diff --git a/src/backend/replication/logical/snapbuild.c b/src/backend/replication/logical/snapbuild.c
index 1119a12db9..d5d378836a 100644
--- a/src/backend/replication/logical/snapbuild.c
+++ b/src/backend/replication/logical/snapbuild.c
@@ -175,6 +175,8 @@ struct SnapBuild
*/
XLogRecPtr two_phase_at;
+ XLogRecPtr start_decoding_nonxactop_at;
+
/*
* Don't start decoding WAL until the "xl_running_xacts" information
* indicates there are no running xids with an xid smaller than this.
@@ -281,7 +283,8 @@ AllocateSnapshotBuilder(ReorderBuffer *reorder,
TransactionId xmin_horizon,
XLogRecPtr start_lsn,
bool need_full_snapshot,
- XLogRecPtr two_phase_at)
+ XLogRecPtr two_phase_at,
+ XLogRecPtr non_xact_op_at)
{
MemoryContext context;
MemoryContext oldcontext;
@@ -310,6 +313,7 @@ AllocateSnapshotBuilder(ReorderBuffer *reorder,
builder->start_decoding_at = start_lsn;
builder->building_full_snapshot = need_full_snapshot;
builder->two_phase_at = two_phase_at;
+ builder->start_decoding_nonxactop_at = non_xact_op_at+1;
MemoryContextSwitchTo(oldcontext);
@@ -387,6 +391,15 @@ SnapBuildSetTwoPhaseAt(SnapBuild *builder, XLogRecPtr ptr)
builder->two_phase_at = ptr;
}
+/*
+ * Set the LSN up to which non-transactional ops have been decoded.
+ */
+void
+SnapBuildSetNonXactOpAt(SnapBuild *builder, XLogRecPtr ptr)
+{
+ builder->start_decoding_nonxactop_at = ptr;
+}
+
/*
* Should the contents of transaction ending at 'ptr' be decoded?
*/
@@ -396,6 +409,15 @@ SnapBuildXactNeedsSkip(SnapBuild *builder, XLogRecPtr ptr)
return ptr < builder->start_decoding_at;
}
+/*
+ * Should the contents of a non-transactional op ending at 'ptr' be decoded?
+ */
+bool
+SnapBuildNonXactOpNeedsSkip(SnapBuild *builder, XLogRecPtr ptr)
+{
+ return ptr < builder->start_decoding_nonxactop_at;
+}
+
/*
* Increase refcount of a snapshot.
*
@@ -661,7 +683,7 @@ SnapBuildExportSnapshot(SnapBuild *builder)
Snapshot
SnapBuildGetOrBuildSnapshot(SnapBuild *builder, TransactionId xid)
{
- Assert(builder->state == SNAPBUILD_CONSISTENT);
+ Assert(builder->state >= SNAPBUILD_FULL_SNAPSHOT);
/* only build a new snapshot if we don't have a prebuilt one */
if (builder->snapshot == NULL)
diff --git a/src/backend/replication/slotfuncs.c b/src/backend/replication/slotfuncs.c
index ca945994ef..3bbbe74058 100644
--- a/src/backend/replication/slotfuncs.c
+++ b/src/backend/replication/slotfuncs.c
@@ -530,7 +530,7 @@ pg_logical_replication_slot_advance(XLogRecPtr moveto)
if (ctx->reader->EndRecPtr != InvalidXLogRecPtr)
{
- LogicalConfirmReceivedLocation(moveto);
+ LogicalConfirmReceivedLocation(moveto, moveto);
/*
* If only the confirmed_flush LSN has changed the slot won't get
diff --git a/src/backend/replication/walsender.c b/src/backend/replication/walsender.c
index e42671722a..b1373e52fd 100644
--- a/src/backend/replication/walsender.c
+++ b/src/backend/replication/walsender.c
@@ -2170,7 +2170,7 @@ ProcessStandbyReplyMessage(void)
if (MyReplicationSlot && flushPtr != InvalidXLogRecPtr)
{
if (SlotIsLogical(MyReplicationSlot))
- LogicalConfirmReceivedLocation(flushPtr);
+ LogicalConfirmReceivedLocation(flushPtr, flushPtr);
else
PhysicalConfirmReceivedLocation(flushPtr);
}
diff --git a/src/include/access/xlogreader.h b/src/include/access/xlogreader.h
index e73ea4a840..b197a6a354 100644
--- a/src/include/access/xlogreader.h
+++ b/src/include/access/xlogreader.h
@@ -206,6 +206,9 @@ struct XLogReaderState
XLogRecPtr ReadRecPtr; /* start of last record read */
XLogRecPtr EndRecPtr; /* end+1 of last record read */
+ /* end of last non-transactional op record read */
+ XLogRecPtr NonXactOpRecPtr;
+
/*
* Set at the end of recovery: the start point of a partial record at the
* end of WAL (InvalidXLogRecPtr if there wasn't one), and the start
diff --git a/src/include/replication/logical.h b/src/include/replication/logical.h
index edadacd589..c9714b51f0 100644
--- a/src/include/replication/logical.h
+++ b/src/include/replication/logical.h
@@ -136,7 +136,7 @@ extern void FreeDecodingContext(LogicalDecodingContext *ctx);
extern void LogicalIncreaseXminForSlot(XLogRecPtr lsn, TransactionId xmin);
extern void LogicalIncreaseRestartDecodingForSlot(XLogRecPtr current_lsn,
XLogRecPtr restart_lsn);
-extern void LogicalConfirmReceivedLocation(XLogRecPtr lsn);
+extern void LogicalConfirmReceivedLocation(XLogRecPtr lsn, XLogRecPtr lsn_nonxact);
extern bool filter_prepare_cb_wrapper(LogicalDecodingContext *ctx,
TransactionId xid, const char *gid);
diff --git a/src/include/replication/slot.h b/src/include/replication/slot.h
index 8c9f3321d5..941c6f6da3 100644
--- a/src/include/replication/slot.h
+++ b/src/include/replication/slot.h
@@ -83,6 +83,15 @@ typedef struct ReplicationSlotPersistentData
*/
XLogRecPtr confirmed_flush;
+ /*
+ * Oldest LSN that the client has acked receipt for a non-transactional
+ * operation such as a non-transactional message. This is introduced
+ * to solve missing non-transactional decoding when creating a logical
+ * replication slot under concurrency. In the normal situation
+ * (SNAPBUILD_CONSISTENT) we can set non_xact_op_at = confirmed_flush.
+ */
+ XLogRecPtr non_xact_op_at;
+
/*
* LSN at which we enabled two_phase commit for this slot or LSN at which
* we found a consistent point at the time of slot creation.
diff --git a/src/include/replication/snapbuild.h b/src/include/replication/snapbuild.h
index d179251aad..7b055dba7c 100644
--- a/src/include/replication/snapbuild.h
+++ b/src/include/replication/snapbuild.h
@@ -62,7 +62,8 @@ extern void CheckPointSnapBuild(void);
extern SnapBuild *AllocateSnapshotBuilder(struct ReorderBuffer *cache,
TransactionId xmin_horizon, XLogRecPtr start_lsn,
bool need_full_snapshot,
- XLogRecPtr two_phase_at);
+ XLogRecPtr two_phase_at,
+ XLogRecPtr non_xact_op_at);
extern void FreeSnapshotBuilder(SnapBuild *cache);
extern void SnapBuildSnapDecRefcount(Snapshot snap);
@@ -77,8 +78,10 @@ extern Snapshot SnapBuildGetOrBuildSnapshot(SnapBuild *builder,
TransactionId xid);
extern bool SnapBuildXactNeedsSkip(SnapBuild *snapstate, XLogRecPtr ptr);
+extern bool SnapBuildNonXactOpNeedsSkip(SnapBuild *builder, XLogRecPtr ptr);
extern XLogRecPtr SnapBuildGetTwoPhaseAt(SnapBuild *builder);
extern void SnapBuildSetTwoPhaseAt(SnapBuild *builder, XLogRecPtr ptr);
+extern void SnapBuildSetNonXactOpAt(SnapBuild *builder, XLogRecPtr ptr);
extern void SnapBuildCommitTxn(SnapBuild *builder, XLogRecPtr lsn,
TransactionId xid, int nsubxacts,
--
2.32.0
On Thu, Apr 07, 2022 at 08:34:50PM +0200, Tomas Vondra wrote:
I've pushed a revert of all the commits related to this - decoding of
sequences and test_decoding / built-in replication changes.
Two July buildfarm runs failed with PANIC during standby promotion:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tern&dt=2022-07-19%2004%3A13%3A18
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tern&dt=2022-07-31%2011%3A33%3A13
The attached patch hacks things so an ordinary x86_64 GNU/Linux machine
reproduces this consistently. "git bisect" then traced the regression to the
above revert commit (2c7ea57e56ca5f668c32d4266e0a3e45b455bef5). The pg_ctl
test suite passes under this hack in all supported branches, and it passed on
v15 until that revert. Would you investigate?
The buildfarm animal uses keep_error_builds. From kept data directories, I
deduced these events:
- After the base backup, auto-analyze ran on the primary and wrote WAL.
- Standby streamed and wrote up to 0/301FFF.
- Standby received the promote signal. Terminated streaming. WAL page at 0/302000 remained all-zeros.
- Somehow, end-of-recovery became a PANIC.
Key portions from buildfarm logs:
=== good run standby2 log
2022-07-21 22:55:16.860 UTC [25034912:5] LOG: received promote request
2022-07-21 22:55:16.878 UTC [26804682:2] FATAL: terminating walreceiver process due to administrator command
2022-07-21 22:55:16.878 UTC [25034912:6] LOG: invalid record length at 0/3000060: wanted 24, got 0
2022-07-21 22:55:16.878 UTC [25034912:7] LOG: redo done at 0/3000028 system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.42 s
2022-07-21 22:55:16.878 UTC [25034912:8] LOG: selected new timeline ID: 2
2022-07-21 22:55:17.004 UTC [25034912:9] LOG: archive recovery complete
2022-07-21 22:55:17.005 UTC [23724044:1] LOG: checkpoint starting: force
2022-07-21 22:55:17.008 UTC [14549364:4] LOG: database system is ready to accept connections
2022-07-21 22:55:17.093 UTC [23724044:2] LOG: checkpoint complete: wrote 3 buffers (2.3%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.019 s, sync=0.001 s, total=0.089 s; sync files=0, longest=0.000 s, average=0.000 s; distance=16384 kB, estimate=16384 kB
2022-07-21 22:55:17.143 UTC [27394418:1] [unknown] LOG: connection received: host=[local]
2022-07-21 22:55:17.144 UTC [27394418:2] [unknown] LOG: connection authorized: user=nm database=postgres application_name=003_promote.pl
2022-07-21 22:55:17.147 UTC [27394418:3] 003_promote.pl LOG: statement: SELECT pg_is_in_recovery()
2022-07-21 22:55:17.148 UTC [27394418:4] 003_promote.pl LOG: disconnection: session time: 0:00:00.005 user=nm database=postgres host=[local]
2022-07-21 22:55:58.301 UTC [14549364:5] LOG: received immediate shutdown request
2022-07-21 22:55:58.337 UTC [14549364:6] LOG: database system is shut down
=== failed run standby2 log, with my annotations
2022-07-19 05:28:22.136 UTC [7340406:5] LOG: received promote request
2022-07-19 05:28:22.163 UTC [8519860:2] FATAL: terminating walreceiver process due to administrator command
2022-07-19 05:28:22.166 UTC [7340406:6] LOG: invalid magic number 0000 in log segment 000000010000000000000003, offset 131072
New compared to the good run. XLOG_PAGE_MAGIC didn't match. This implies the WAL ended at a WAL page boundary.
2022-07-19 05:28:22.166 UTC [7340406:7] LOG: redo done at 0/301F168 system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.18 s
2022-07-19 05:28:22.166 UTC [7340406:8] LOG: last completed transaction was at log time 2022-07-19 05:28:13.956716+00
New compared to the good run. The good run had no transactions to replay. The bad run replayed records from an auto-analyze.
2022-07-19 05:28:22.166 UTC [7340406:9] PANIC: invalid record length at 0/301F168: wanted 24, got 0
More WAL overall in bad run, due to auto-analyze. End of recovery wrongly considered a PANIC.
2022-07-19 05:28:22.583 UTC [8388800:4] LOG: startup process (PID 7340406) was terminated by signal 6: IOT/Abort trap
2022-07-19 05:28:22.584 UTC [8388800:5] LOG: terminating any other active server processes
2022-07-19 05:28:22.587 UTC [8388800:6] LOG: shutting down due to startup process failure
2022-07-19 05:28:22.627 UTC [8388800:7] LOG: database system is shut down
Let me know if I've left out details you want; I may be able to dig more out
of the buildfarm artifacts.
Attachments:
promote-invalid-magic-panic-v0.patch (text/plain)
Author: Noah Misch <noah@leadboat.com>
Commit: Noah Misch <noah@leadboat.com>
diff --git a/src/backend/replication/walreceiver.c b/src/backend/replication/walreceiver.c
index 8604fd4..6232237 100644
--- a/src/backend/replication/walreceiver.c
+++ b/src/backend/replication/walreceiver.c
@@ -885,6 +885,12 @@ XLogWalRcvWrite(char *buf, Size nbytes, XLogRecPtr recptr, TimeLineID tli)
{
int startoff;
int byteswritten;
+ bool spin_after = false;
+ /* force writing to end at a certain WAL page boundary */
+ if (recptr + nbytes > 0x03022000) {
+ nbytes = 0x03022000 - recptr;
+ spin_after = true;
+ }
Assert(tli != 0);
@@ -943,6 +949,8 @@ XLogWalRcvWrite(char *buf, Size nbytes, XLogRecPtr recptr, TimeLineID tli)
LogstreamResult.Write = recptr;
}
+ while (spin_after) /* we forced writing to end; wait for termination */
+ ProcessWalRcvInterrupts();
/* Update shared-memory status */
pg_atomic_write_u64(&WalRcv->writtenUpto, LogstreamResult.Write);
diff --git a/src/bin/pg_ctl/t/003_promote.pl b/src/bin/pg_ctl/t/003_promote.pl
index 84d28f4..57ae317 100644
--- a/src/bin/pg_ctl/t/003_promote.pl
+++ b/src/bin/pg_ctl/t/003_promote.pl
@@ -29,6 +29,7 @@ command_fails_like(
[ 'pg_ctl', '-D', $node_primary->data_dir, 'promote' ],
qr/not in standby mode/,
'pg_ctl promote of primary instance fails');
+if (1) { ok(1) for 1 .. 3; } else { # skip tests this way to avoid merge conflicts
my $node_standby = PostgreSQL::Test::Cluster->new('standby');
$node_primary->backup('my_backup');
@@ -46,12 +47,17 @@ ok( $node_standby->poll_query_until(
'postgres', 'SELECT NOT pg_is_in_recovery()'),
'promoted standby is not in recovery');
+}
+my $node_standby;
+$node_primary->backup('my_backup');
# same again with default wait option
$node_standby = PostgreSQL::Test::Cluster->new('standby2');
$node_standby->init_from_backup($node_primary, 'my_backup',
has_streaming => 1);
$node_standby->start;
+$node_primary->safe_psql('postgres', 'ANALYZE');
+
is($node_standby->safe_psql('postgres', 'SELECT pg_is_in_recovery()'),
't', 'standby is in recovery');
On 8/7/22 02:36, Noah Misch wrote:
On Thu, Apr 07, 2022 at 08:34:50PM +0200, Tomas Vondra wrote:
I've pushed a revert of all the commits related to this - decoding of
sequences and test_decoding / built-in replication changes.
Two July buildfarm runs failed with PANIC during standby promotion:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tern&dt=2022-07-19%2004%3A13%3A18
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tern&dt=2022-07-31%2011%3A33%3A13
The attached patch hacks things so an ordinary x86_64 GNU/Linux machine
reproduces this consistently. "git bisect" then traced the regression to the
above revert commit (2c7ea57e56ca5f668c32d4266e0a3e45b455bef5). The pg_ctl
test suite passes under this hack in all supported branches, and it passed on
v15 until that revert. Would you investigate?
The buildfarm animal uses keep_error_builds. From kept data directories, I
deduced these events:
- After the base backup, auto-analyze ran on the primary and wrote WAL.
- Standby streamed and wrote up to 0/301FFF.
- Standby received the promote signal. Terminated streaming. WAL page at 0/302000 remained all-zeros.
- Somehow, end-of-recovery became a PANIC.
I think it'd be really bizarre if this was due to the revert, as that
simply undoes minor WAL changes (and none of this should affect what
happens at WAL page boundary etc.). It just restores WAL as it was
before 0da92dc, nothing particularly complicated. I did go through all
of the changes again and I haven't spotted anything particularly
suspicious, but I'll give it another try tomorrow.
However, I did try bisecting this using the attached patch, and that
does not suggest the issue is in the revert commit. It actually fails
all the way back to 5dc0418fab2, and it starts working on 9553b4115f1.
...
6392f2a0968 Try to silence "-Wmissing-braces" complaints in ...
=> 5dc0418fab2 Prefetch data referenced by the WAL, take II.
9553b4115f1 Fix warning introduced in 5c279a6d350.
...
This is merely 10 commits before the revert, and it seems way more
related to WAL. Also, adding this to the two nodes in 003_standby.pl
makes the issue go away, it seems:
$node_standby->append_conf('postgresql.conf',
qq(recovery_prefetch = off));
I'd bet it's about WAL prefetching, not the revert, and the bisect was a
bit incorrect, because the commits are close and the failures happen to
be rare. (Presumably you first did the bisect and then wrote the patch
that reproduces this, right?)
Adding Thomas Munro to the thread, he's the WAL prefetching expert ;-)
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Sun, Aug 07, 2022 at 03:18:52PM +0200, Tomas Vondra wrote:
On 8/7/22 02:36, Noah Misch wrote:
On Thu, Apr 07, 2022 at 08:34:50PM +0200, Tomas Vondra wrote:
I've pushed a revert of all the commits related to this - decoding of
sequences and test_decoding / built-in replication changes.
Two July buildfarm runs failed with PANIC during standby promotion:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tern&dt=2022-07-19%2004%3A13%3A18
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tern&dt=2022-07-31%2011%3A33%3A13
The attached patch hacks things so an ordinary x86_64 GNU/Linux machine
reproduces this consistently. "git bisect" then traced the regression to the
above revert commit (2c7ea57e56ca5f668c32d4266e0a3e45b455bef5). The pg_ctl
test suite passes under this hack in all supported branches, and it passed on
v15 until that revert. Would you investigate?
The buildfarm animal uses keep_error_builds. From kept data directories, I
deduced these events:
- After the base backup, auto-analyze ran on the primary and wrote WAL.
- Standby streamed and wrote up to 0/301FFF.
- Standby received the promote signal. Terminated streaming. WAL page at 0/302000 remained all-zeros.
- Somehow, end-of-recovery became a PANIC.
I think it'd be really bizarre if this was due to the revert, as that
simply undoes minor WAL changes (and none of this should affect what
happens at WAL page boundary etc.). It just restores WAL as it was
before 0da92dc, nothing particularly complicated. I did go through all
of the changes again and I haven't spotted anything particularly
suspicious, but I'll give it another try tomorrow.
However, I did try bisecting this using the attached patch, and that
does not suggest the issue is in the revert commit. It actually fails
all the way back to 5dc0418fab2, and it starts working on 9553b4115f1.
...
6392f2a0968 Try to silence "-Wmissing-braces" complaints in ...
=> 5dc0418fab2 Prefetch data referenced by the WAL, take II.
9553b4115f1 Fix warning introduced in 5c279a6d350.
...
This is merely 10 commits before the revert, and it seems way more
related to WAL. Also, adding this to the two nodes in 003_standby.pl
makes the issue go away, it seems:
$node_standby->append_conf('postgresql.conf',
	qq(recovery_prefetch = off));
I'd bet it's about WAL prefetching, not the revert, and the bisect was a
bit incorrect, because the commits are close and the failures happen to
be rare. (Presumably you first did the bisect and then wrote the patch
that reproduces this, right?)
No. I wrote the patch, then used the patch to drive the bisect. With ten
iterations, commit 2c7ea57 passes 0/10, while 2c7ea57^ passes 10/10. I've now
tried recovery_prefetch=off. With that, the test passes 10/10 at 2c7ea57.
Given your observation of a failure at 5dc0418fab2, I agree with your
conclusion. Whatever the role of 2c7ea57 in exposing the failure on my
machine, a root cause in WAL prefetching looks more likely.
Adding Thomas Munro to the thread, he's the WAL prefetching expert ;-)
On Mon, Aug 8, 2022 at 7:12 AM Noah Misch <noah@leadboat.com> wrote:
On Sun, Aug 07, 2022 at 03:18:52PM +0200, Tomas Vondra wrote:
I'd bet it's about WAL prefetching, not the revert, and the bisect was a
bit incorrect, because the commits are close and the failures happen to
be rare. (Presumably you first did the bisect and then wrote the patch
that reproduces this, right?)
No. I wrote the patch, then used the patch to drive the bisect. With ten
iterations, commit 2c7ea57 passes 0/10, while 2c7ea57^ passes 10/10. I've now
tried recovery_prefetch=off. With that, the test passes 10/10 at 2c7ea57.
Given your observation of a failure at 5dc0418fab2, I agree with your
conclusion. Whatever the role of 2c7ea57 in exposing the failure on my
machine, a root cause in WAL prefetching looks more likely.
Adding Thomas Munro to the thread, he's the WAL prefetching expert ;-)
Thanks for the repro patch and bisection work. Looking...
On Mon, Aug 8, 2022 at 9:09 AM Thomas Munro <thomas.munro@gmail.com> wrote:
Thanks for the repro patch and bisection work. Looking...
I don't have the complete explanation yet, but it's something like
this. We hit the following branch in xlogrecovery.c...
    if (StandbyMode &&
        !XLogReaderValidatePageHeader(xlogreader,
                                      targetPagePtr, readBuf))
    {
        /*
         * Emit this error right now then retry this page immediately.
         * Use errmsg_internal() because the message was already
         * translated.
         */
        if (xlogreader->errormsg_buf[0])
            ereport(emode_for_corrupt_record(emode, xlogreader->EndRecPtr),
                    (errmsg_internal("%s", xlogreader->errormsg_buf)));

        /* reset any error XLogReaderValidatePageHeader() might have set */
        xlogreader->errormsg_buf[0] = '\0';

        goto next_record_is_invalid;
    }
... but, even though there was a (suppressed) error, nothing
invalidates the reader's page buffer. Normally,
XLogReaderValidatePageHeader() failure or any other kind of error
encountered by xlogreader.c's decoding logic would do that, but here
the read_page callback is directly calling the header validation.
Without prefetching, that doesn't seem to matter, but reading ahead
can cause us to have the problem page in our buffer at the wrong time,
and then not re-read it when we should. Or something like that.
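To make that failure mode concrete, here is a rough standalone sketch of
the hazard. This is not the actual xlogreader.c/xlogrecovery.c code; the
fake_reader type and the helper names are invented purely for illustration.

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    #define PAGE_SIZE 16

    typedef struct
    {
        long    cached_pageno;          /* which page the buffer holds, -1 = none */
        char    buf[PAGE_SIZE + 1];     /* cached page contents */
    } fake_reader;

    /* Pretend storage whose contents differ between read attempts. */
    static const char *
    storage_read(long pageno, int attempt)
    {
        (void) pageno;
        return (attempt == 0) ? "OLD-GARBAGE-PAGE" : "NEW-VALID-PAGE!!";
    }

    /* Header check performed by the "read_page callback" itself, as in the report. */
    static bool
    header_is_valid(const char *page)
    {
        return strncmp(page, "NEW", 3) == 0;
    }

    static const char *
    read_page(fake_reader *r, long pageno, int attempt)
    {
        /* Fast path: buffer already holds this page, no re-read. */
        if (r->cached_pageno == pageno)
            return r->buf;

        memcpy(r->buf, storage_read(pageno, attempt), PAGE_SIZE);
        r->buf[PAGE_SIZE] = '\0';
        r->cached_pageno = pageno;
        return r->buf;
    }

    int
    main(void)
    {
        fake_reader r = {.cached_pageno = -1};

        /* First attempt: read-ahead caches a bad page, the caller validates it. */
        const char *page = read_page(&r, 42, 0);

        if (!header_is_valid(page))
        {
            /* The error is noticed but suppressed, and we retry... */
            fprintf(stderr, "header validation failed, retrying\n");

            /*
             * ...and this is the hazard: nothing resets r.cached_pageno, so
             * the retry below is served from the stale buffer instead of
             * re-reading from storage.
             */
            /* r.cached_pageno = -1;       <- the missing invalidation */
        }

        page = read_page(&r, 42, 1);
        printf("retry sees: %s\n", page);  /* prints OLD-GARBAGE-PAGE */
        return 0;
    }

The commented-out line marks the missing invalidation; the patches discussed
below close the same gap in the real reader, either by invalidating when an
error is reported or by never trusting the buffer until a read succeeds.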
The attached patch that simply moves the cache invalidation into
report_invalid_record(), so that it's reached by the above code and
everything else that reports an error, seems to fix the problem in
src/bin/pg_ctl/t/003_promote.pl with Noah's spanner-in-the-works patch
applied, and passes check-world without it. I need to look at this
some more, though, and figure out if it's the right fix.
Attachments:
0001-WIP-Fix-XLogPageRead-cache-invalidation.patch (text/x-patch)
From 1f4fc659441a805a50b77f4e7810f9a25b5e62ae Mon Sep 17 00:00:00 2001
From: Thomas Munro <thomas.munro@gmail.com>
Date: Mon, 8 Aug 2022 14:09:32 +1200
Subject: [PATCH] WIP Fix XLogPageRead() cache invalidation.
When ReadPageInternal() reports an error, including due to failure of
XLogReaderValidatePageHeader(), the reader calls
XLogReaderInvalReadState() to invalidate the contents of its buffer,
forcing a re-read next time.
This did not work if a header validation error was detected by the
read_page callback itself. XLogPageRead() calls
XLogReaderValidatePageHeader() directly, and if that failed, there was no
cache invalidation.
Fix, by invalidating the cache directly in report_invalid_record(), so
that XLogReaderValidatePageHeader()'s failure invalidates the cache even
when called directly by xlogrecovery.c.
XXX More research/testing needed
diff --git a/src/backend/access/transam/xlogreader.c b/src/backend/access/transam/xlogreader.c
index 06e91547dd..e6431722e7 100644
--- a/src/backend/access/transam/xlogreader.c
+++ b/src/backend/access/transam/xlogreader.c
@@ -81,6 +81,12 @@ report_invalid_record(XLogReaderState *state, const char *fmt,...)
va_end(args);
state->errormsg_deferred = true;
+
+ /*
+ * Invalidate the read state. We might read from a different source after
+ * failure.
+ */
+ XLogReaderInvalReadState(state);
}
/*
@@ -1066,11 +1072,6 @@ ReadPageInternal(XLogReaderState *state, XLogRecPtr pageptr, int reqLen)
return readLen;
err:
- if (state->errormsg_buf[0] != '\0')
- {
- state->errormsg_deferred = true;
- XLogReaderInvalReadState(state);
- }
return XLREAD_FAIL;
}
--
2.37.1
At Mon, 8 Aug 2022 18:15:46 +1200, Thomas Munro <thomas.munro@gmail.com> wrote in
On Mon, Aug 8, 2022 at 9:09 AM Thomas Munro <thomas.munro@gmail.com> wrote:
Thanks for the repro patch and bisection work. Looking...
The attached patch that simply moves the cache invalidation into
report_invalid_record(), so that it's reached by the above code and
everything else that reports an error, seems to fix the problem in
src/bin/pg_ctl/t/003_promote.pl with Noah's spanner-in-the-works patch
applied, and passes check-world without it. I need to look at this
some more, though, and figure out if it's the right fix.
If WaitForWALToBecomeAvailable returned due to promotion, ReadPageInternal
misses the chance to invalidate the reader state. That state is not an
error while in StandbyMode.
In the repro case, XLogPageRead returns XLREAD_WOULDBLOCK after the
first failure. This situation (of course) was not considered when
that code was introduced. If that function is going to return with
XLREAD_WOULDBLOCK while lastSourceFailed, it should be turned into
XLREAD_FAIL. So, the following also works.
diff --git a/src/backend/access/transam/xlogrecovery.c b/src/backend/access/transam/xlogrecovery.c
index 21088e78f6..9f242fe656 100644
--- a/src/backend/access/transam/xlogrecovery.c
+++ b/src/backend/access/transam/xlogrecovery.c
@@ -3220,7 +3220,9 @@ retry:
xlogreader->nonblocking))
{
case XLREAD_WOULDBLOCK:
- return XLREAD_WOULDBLOCK;
+ if (!lastSourceFailed)
+ return XLREAD_WOULDBLOCK;
+ /* Fall through. */
case XLREAD_FAIL:
if (readFile >= 0)
close(readFile);
regards.
--
Kyotaro Horiguchi
NTT Open Source Software Center
At Mon, 08 Aug 2022 17:33:22 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in
If WaitForWALToBecomeAvailable returned due to promotion, ReadPageInternal
misses the chance to invalidate the reader state. That state is not an
error while in StandbyMode.
Mmm... Maybe I wanted to say: (Still I'm not sure the rewrite works..)

If WaitForWALToBecomeAvailable returned due to promotion, ReadPageInternal
would miss the chance to invalidate the reader state. When XLogPageRead
is called in blocking mode while in StandbyMode (that is, the
traditional condition), the function continues retrying until it
succeeds, or returns XLREAD_FAIL if promotion is triggered. In other
words, it was not supposed to return non-failure while the header
validation is failing in standby mode. But while in nonblocking
mode, the function can return non-failure with lastSourceFailed =
true, which seems wrong.
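For what it's worth, the intended contract could be sketched roughly like
this. The names (sketch_result, sketch_page_read) are invented; this is just
a toy model of the blocking/nonblocking distinction described above, not the
real XLogPageRead() code.

    #include <stdbool.h>
    #include <stdio.h>

    typedef enum
    {
        SKETCH_SUCCESS,         /* page read and validated */
        SKETCH_FAIL,            /* give up; e.g. promotion was requested */
        SKETCH_WOULDBLOCK       /* read-ahead only: no data available yet */
    } sketch_result;

    /*
     * Blocking standby mode keeps retrying after a source failure and only
     * ever leaves via SUCCESS or FAIL.  The questionable case is nonblocking
     * mode returning WOULDBLOCK while the source has already failed; treating
     * that as FAIL keeps the two modes consistent.
     */
    static sketch_result
    sketch_page_read(bool nonblocking, bool source_failed, bool promote_requested)
    {
        for (;;)
        {
            if (promote_requested)
                return SKETCH_FAIL;

            if (source_failed)
            {
                if (nonblocking)
                    return SKETCH_FAIL;     /* not WOULDBLOCK */

                /* blocking mode: pick another source and retry */
                source_failed = false;
                continue;
            }

            return SKETCH_SUCCESS;
        }
    }

    int
    main(void)
    {
        /* Nonblocking read with a failed source: expect FAIL (1), not WOULDBLOCK. */
        printf("%d\n", (int) sketch_page_read(true, true, false));
        return 0;
    }

The diff above achieves much the same thing by falling through from
XLREAD_WOULDBLOCK to XLREAD_FAIL when lastSourceFailed is set.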
regards.
--
Kyotaro Horiguchi
NTT Open Source Software Center
On Mon, Aug 08, 2022 at 06:15:46PM +1200, Thomas Munro wrote:
The attached patch that simply moves the cache invalidation into
report_invalid_record(), so that it's reached by the above code and
everything else that reports an error, seems to fix the problem in
src/bin/pg_ctl/t/003_promote.pl with Noah's spanner-in-the-works patch
applied, and passes check-world without it. I need to look at this
some more, though, and figure out if it's the right fix.
Thomas, where are you on this open item? A potential PANIC at
promotion is bad. One possible exit path would be to switch the
default of recovery_prefetch, though that's a kind of last-resort
option seen from here.
--
Michael
On Tue, Aug 23, 2022 at 4:21 PM Michael Paquier <michael@paquier.xyz> wrote:
Thomas, where are you on this open item? A potential PANIC at
promotion is bad. One possible exit path would be to switch the
default of recovery_prefetch, though that's a kind of last-resort
option seen from here.
I will get a fix committed this week -- I need to study
Horiguchi-san's analysis...
On Tue, Aug 23, 2022 at 12:33 AM Thomas Munro <thomas.munro@gmail.com> wrote:
I will get a fix committed this week -- I need to study
Horiguchi-san's analysis...
Hi!
I haven't been paying attention to this thread, but my attention was
just drawn to it, and I'm wondering if the issue you're trying to
track down here is actually the same as what I reported yesterday
here:
/messages/by-id/CA+TgmoY0Lri=fCueg7m_2R_bSspUb1F8OFycEGaHNJw_EUW-=Q@mail.gmail.com
--
Robert Haas
EDB: http://www.enterprisedb.com
On Wed, Aug 24, 2022 at 3:04 AM Robert Haas <robertmhaas@gmail.com> wrote:
I haven't been paying attention to this thread, but my attention was
just drawn to it, and I'm wondering if the issue you're trying to
track down here is actually the same as what I reported yesterday
here:

/messages/by-id/CA+TgmoY0Lri=fCueg7m_2R_bSspUb1F8OFycEGaHNJw_EUW-=Q@mail.gmail.com
Summarising a chat we had about this: Different bug, similar
ingredients. Robert describes a screw-up in what is written, but here
we're talking about a cache invalidation bug while reading.
On Mon, Aug 8, 2022 at 8:56 PM Kyotaro Horiguchi
<horikyota.ntt@gmail.com> wrote:
At Mon, 08 Aug 2022 17:33:22 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in
If WaitForWALToBecomeAvailable returned due to promotion, ReadPageInternal
misses the chance to invalidate the reader state. That state is not an
error while in StandbyMode.

Mmm... Maybe I wanted to say: (Still I'm not sure the rewrite works..)

If WaitForWALToBecomeAvailable returned due to promotion, ReadPageInternal
would miss the chance to invalidate the reader state. When XLogPageRead
is called in blocking mode while in StandbyMode (that is, the
traditional condition), the function continues retrying until it
succeeds, or returns XLREAD_FAIL if promotion is triggered. In other
words, it was not supposed to return non-failure while the header
validation is failing in standby mode. But while in nonblocking
mode, the function can return non-failure with lastSourceFailed =
true, which seems wrong.
New ideas:
0001: Instead of figuring out when to invalidate the cache, let's
just invalidate it before every read attempt. It is only marked valid
after success (ie state->readLen > 0). No need to worry about error
cases.
0002: While here, I don't like xlogrecovery.c clobbering
xlogreader.c's internal error state, so I think we should have a
function for that with a documented purpose. It was also a little
inconsistent that it didn't clear a flag (but not buggy AFAICS; kinda
wondering if I should just get rid of that flag, but that's for
another day).
0003: Thinking about your comments above made me realise that I don't
really want XLogPageRead() to be internally retrying for obscure
failures while reading ahead. I think I prefer to give up on
prefetching as soon as anything tricky happens, and deal with
complexities once recovery catches up to that point. I am still
thinking about this point.
Here's the patch set I'm testing.
Attachments:
0001-Fix-cache-invalidation-in-rare-recovery_prefetch-cas.patch (text/x-patch)
From 9f547088e54fcf279c74fbd03e4afaeffec69ea3 Mon Sep 17 00:00:00 2001
From: Thomas Munro <thomas.munro@gmail.com>
Date: Mon, 29 Aug 2022 19:56:27 +1200
Subject: [PATCH 1/3] Fix cache invalidation in rare recovery_prefetch case.
Since commit 0668719801 we've had an extra page validation and retry
loop inside XLogPageRead() to handle a rare special case. The new
recovery prefetch code from commit 5dc0418f could lead to a PANIC in
this case, because the WAL page cached internally by xlogreader.c was
not invalidated when it should have been.
Make tracking of the cached page's status more robust by invalidating it
just before we attempt any read. Because it is only marked valid when a
read succeeds, now we don't have to worry about invalidation in error
code paths.
The removed line that was setting errormsg_deferred was redundant
because it's already set by report_invalid_record().
Back-patch to 15.
Reported-by: Noah Misch <noah@leadboat.com>
Reviewed-by:
Discussion: https://postgr.es/m/20220807003627.GA4168930%40rfd.leadboat.com
---
src/backend/access/transam/xlogreader.c | 12 +++++++-----
1 file changed, 7 insertions(+), 5 deletions(-)
diff --git a/src/backend/access/transam/xlogreader.c b/src/backend/access/transam/xlogreader.c
index f17e80948d..e883ade607 100644
--- a/src/backend/access/transam/xlogreader.c
+++ b/src/backend/access/transam/xlogreader.c
@@ -987,6 +987,13 @@ ReadPageInternal(XLogReaderState *state, XLogRecPtr pageptr, int reqLen)
targetPageOff == state->segoff && reqLen <= state->readLen)
return state->readLen;
+ /*
+ * Invalidate contents of internal buffer before read attempt. Just set
+ * the length to 0, rather than a full XLogReaderInvalReadState(), so we
+ * don't forget the segment we last read.
+ */
+ state->readLen = 0;
+
/*
* Data is not in our buffer.
*
@@ -1067,11 +1074,6 @@ ReadPageInternal(XLogReaderState *state, XLogRecPtr pageptr, int reqLen)
return readLen;
err:
- if (state->errormsg_buf[0] != '\0')
- {
- state->errormsg_deferred = true;
- XLogReaderInvalReadState(state);
- }
return XLREAD_FAIL;
}
--
2.30.2
0002-Improve-xlogrecovery.c-xlogreader.c-boundary.patch (text/x-patch)
From 72c55dfec6729e0f26a9fbf3ff184d3b4308a025 Mon Sep 17 00:00:00 2001
From: Thomas Munro <thomas.munro@gmail.com>
Date: Mon, 29 Aug 2022 21:08:25 +1200
Subject: [PATCH 2/3] Improve xlogrecovery.c/xlogreader.c boundary.
Create a function XLogReaderResetError() to clobber the error message
inside an XLogReaderState, instead of doing it directly from
xlogrecovery.c. This is older code from commit 0668719801, but commit
5dc0418f probably should have tidied this up because it leaves the
internal state a bit out of sync (not a live bug, but a little
confusing).
---
src/backend/access/transam/xlogreader.c | 10 ++++++++++
src/backend/access/transam/xlogrecovery.c | 2 +-
src/include/access/xlogreader.h | 3 +++
3 files changed, 14 insertions(+), 1 deletion(-)
diff --git a/src/backend/access/transam/xlogreader.c b/src/backend/access/transam/xlogreader.c
index e883ade607..c100e18333 100644
--- a/src/backend/access/transam/xlogreader.c
+++ b/src/backend/access/transam/xlogreader.c
@@ -1325,6 +1325,16 @@ XLogReaderValidatePageHeader(XLogReaderState *state, XLogRecPtr recptr,
return true;
}
+/*
+ * Forget about an error produced by XLogReaderValidatePageHeader().
+ */
+void
+XLogReaderResetError(XLogReaderState *state)
+{
+ state->errormsg_buf[0] = '\0';
+ state->errormsg_deferred = false;
+}
+
/*
* Find the first record with an lsn >= RecPtr.
*
diff --git a/src/backend/access/transam/xlogrecovery.c b/src/backend/access/transam/xlogrecovery.c
index a59a0e826b..422322cf5a 100644
--- a/src/backend/access/transam/xlogrecovery.c
+++ b/src/backend/access/transam/xlogrecovery.c
@@ -3335,7 +3335,7 @@ retry:
(errmsg_internal("%s", xlogreader->errormsg_buf)));
/* reset any error XLogReaderValidatePageHeader() might have set */
- xlogreader->errormsg_buf[0] = '\0';
+ XLogReaderResetError(xlogreader);
goto next_record_is_invalid;
}
diff --git a/src/include/access/xlogreader.h b/src/include/access/xlogreader.h
index 87ff00feb7..97b6f00114 100644
--- a/src/include/access/xlogreader.h
+++ b/src/include/access/xlogreader.h
@@ -373,6 +373,9 @@ extern DecodedXLogRecord *XLogReadAhead(XLogReaderState *state,
extern bool XLogReaderValidatePageHeader(XLogReaderState *state,
XLogRecPtr recptr, char *phdr);
+/* Forget error produced by XLogReaderValidatePageHeader(). */
+extern void XLogReaderResetError(XLogReaderState *state);
+
/*
* Error information from WALRead that both backend and frontend caller can
* process. Currently only errors from pread can be reported.
--
2.30.2
0003-Don-t-retry-inside-XLogPageRead-if-reading-ahead.patch (text/x-patch)
From bb8a6d52ca91d463993c3e01a48fc776297b0aeb Mon Sep 17 00:00:00 2001
From: Thomas Munro <thomas.munro@gmail.com>
Date: Mon, 29 Aug 2022 21:14:25 +1200
Subject: [PATCH 3/3] Don't retry inside XLogPageRead() if reading ahead.
Give up immediately if we have a short read or a page header
invalidation failure inside XLogPageRead(). Once recovery catches up to
this point, it can deal with it. This doesn't fix any known live bug,
but it fits with the general philosophy of giving up fast when
prefetching runs into trouble.
---
src/backend/access/transam/xlogrecovery.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/src/backend/access/transam/xlogrecovery.c b/src/backend/access/transam/xlogrecovery.c
index 422322cf5a..76ce9704ca 100644
--- a/src/backend/access/transam/xlogrecovery.c
+++ b/src/backend/access/transam/xlogrecovery.c
@@ -3342,6 +3342,14 @@ retry:
return readLen;
next_record_is_invalid:
+ /*
+ * If we're reading ahead, give up fast. Standby retries and error
+ * reporting will be handled by a later read when recovery catches up to
+ * this point.
+ */
+ if (xlogreader->nonblocking)
+ return XLREAD_WOULDBLOCK;
+
lastSourceFailed = true;
if (readFile >= 0)
--
2.30.2
On 8/29/22 12:21, Thomas Munro wrote:
New ideas:
0001: Instead of figuring out when to invalidate the cache, let's
just invalidate it before every read attempt. It is only marked valid
after success (ie state->readLen > 0). No need to worry about error
cases.

Maybe I misunderstand how all this works, but won't this have a really
bad performance impact? If not, why do we need the cache at all?
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Tue, Aug 30, 2022 at 6:04 AM Tomas Vondra
<tomas.vondra@enterprisedb.com> wrote:
On 8/29/22 12:21, Thomas Munro wrote:
0001: Instead of figuring out when to invalidate the cache, let's
just invalidate it before every read attempt. It is only marked valid
after success (ie state->readLen > 0). No need to worry about error
cases.

Maybe I misunderstand how all this works, but won't this have a really
bad performance impact? If not, why do we need the cache at all?
It's a bit confusing because there are several levels of "read". The
cache remains valid as long as the caller of ReadPageInternal() keeps
asking for data that is in range (see early return after comment "/*
check whether we have all the requested data already */"). As soon as
the caller asks for something not in range, this patch marks the cache
invalid before calling the page_read() callback (= XLogPageRead()).
It is only marked valid again after that succeeds. Here's a new
version with no code change, just a better commit message to try to
explain that more clearly.
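As a toy illustration of that caching behaviour (invented names, not the
real ReadPageInternal()/XLogPageRead() code): requests that fall inside the
cached range never reach the page_read callback at all, so invalidating just
before each actual read attempt costs nothing on the hot path.

    #include <stdio.h>
    #include <string.h>

    #define SKETCH_PAGE_SIZE 8192

    typedef struct
    {
        long    pageno;             /* page currently held, -1 = none */
        int     read_len;           /* 0 = cache invalid, >0 = bytes valid */
        char    buf[SKETCH_PAGE_SIZE];
    } sketch_reader;

    static int  page_read_calls;    /* counts the "expensive" reads */

    /* Stand-in for the page_read() callback. */
    static int
    fetch_page(long pageno, char *dst)
    {
        page_read_calls++;
        memset(dst, (int) (pageno & 0xff), SKETCH_PAGE_SIZE);
        return SKETCH_PAGE_SIZE;
    }

    /*
     * Requests inside the cached range are served without calling the
     * callback; anything else invalidates the cache first and marks it
     * valid again only after a successful read (mirroring the idea of the
     * patch, not its actual code).
     */
    static int
    sketch_read_page_internal(sketch_reader *r, long pageno, int reqLen)
    {
        /* Cache hit: the common case, no callback at all. */
        if (r->pageno == pageno && reqLen <= r->read_len)
            return r->read_len;

        r->read_len = 0;            /* invalidate before any read attempt */

        if (fetch_page(pageno, r->buf) != SKETCH_PAGE_SIZE)
            return -1;              /* cache stays invalid on failure */

        r->pageno = pageno;
        r->read_len = SKETCH_PAGE_SIZE;     /* valid again only after success */
        return r->read_len;
    }

    int
    main(void)
    {
        sketch_reader r = {.pageno = -1, .read_len = 0};

        /* Many requests against the same cached page cost a single callback. */
        for (int i = 0; i < 100; i++)
            sketch_read_page_internal(&r, 7, 512);
        sketch_read_page_internal(&r, 8, 512);  /* new page: one more read */

        printf("page_read() calls: %d\n", page_read_calls);    /* prints 2 */
        return 0;
    }

Running this prints "page_read() calls: 2": one read per distinct page,
regardless of how many times the cached range is requested.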
Attachments:
v2-0001-Fix-cache-invalidation-in-rare-recovery_prefetch-.patch (text/x-patch)
From d02d851ab515d4a1f883f657bc43a9cae600a5d8 Mon Sep 17 00:00:00 2001
From: Thomas Munro <thomas.munro@gmail.com>
Date: Mon, 29 Aug 2022 19:56:27 +1200
Subject: [PATCH v2 1/3] Fix cache invalidation in rare recovery_prefetch
cases.
XLogPageRead() can retry internally in rare cases after a read has
succeeded, overwriting the page buffer. Namely, short reads, and page
validation failures while in standby mode (see commit 0668719801). Due
to an oversight in commit 3f1ce973, these cases could leave stale data
in the internal cache of xlogreader.c without marking it invalid. The
main defense against stale cached data was in the error handling path of
ReadPageInternal(), but that wasn't quite enough for errors handled
internally by XLogPageRead()'s retry loop.
Instead of trying to invalidate the cache in the relevant failure cases,
ReadPageInternal() now invalidates it before even calling the page_read
callback (ie XLogPageRead()). It is only marked valid by setting
state->readLen etc after page_read() succeeds. It remains valid until
the caller of ReadPageInternal() eventually asks for something out of
range and we have to go back to page_read() for more.
The removed line that was setting errormsg_deferred was redundant
because it's already set by report_invalid_record().
Back-patch to 15.
Reported-by: Noah Misch <noah@leadboat.com>
Reviewed-by:
Discussion: https://postgr.es/m/20220807003627.GA4168930%40rfd.leadboat.com
---
src/backend/access/transam/xlogreader.c | 12 +++++++-----
1 file changed, 7 insertions(+), 5 deletions(-)
diff --git a/src/backend/access/transam/xlogreader.c b/src/backend/access/transam/xlogreader.c
index f17e80948d..e883ade607 100644
--- a/src/backend/access/transam/xlogreader.c
+++ b/src/backend/access/transam/xlogreader.c
@@ -987,6 +987,13 @@ ReadPageInternal(XLogReaderState *state, XLogRecPtr pageptr, int reqLen)
targetPageOff == state->segoff && reqLen <= state->readLen)
return state->readLen;
+ /*
+ * Invalidate contents of internal buffer before read attempt. Just set
+ * the length to 0, rather than a full XLogReaderInvalReadState(), so we
+ * don't forget the segment we last read.
+ */
+ state->readLen = 0;
+
/*
* Data is not in our buffer.
*
@@ -1067,11 +1074,6 @@ ReadPageInternal(XLogReaderState *state, XLogRecPtr pageptr, int reqLen)
return readLen;
err:
- if (state->errormsg_buf[0] != '\0')
- {
- state->errormsg_deferred = true;
- XLogReaderInvalReadState(state);
- }
return XLREAD_FAIL;
}
--
2.30.2
v2-0002-Improve-xlogrecovery.c-xlogreader.c-boundary.patch (text/x-patch)
From 6439f5ed5b9cb6524c307f06a43c9c54f51d3b86 Mon Sep 17 00:00:00 2001
From: Thomas Munro <thomas.munro@gmail.com>
Date: Mon, 29 Aug 2022 21:08:25 +1200
Subject: [PATCH v2 2/3] Improve xlogrecovery.c/xlogreader.c boundary.
Create a function XLogReaderResetError() to clobber the error message
inside an XLogReaderState, instead of doing it directly from
xlogrecovery.c. This is older code from commit 0668719801, but commit
5dc0418f probably should have tidied this up because it leaves the
internal state a bit out of sync (not a live bug, but a little
confusing).
---
src/backend/access/transam/xlogreader.c | 10 ++++++++++
src/backend/access/transam/xlogrecovery.c | 2 +-
src/include/access/xlogreader.h | 3 +++
3 files changed, 14 insertions(+), 1 deletion(-)
diff --git a/src/backend/access/transam/xlogreader.c b/src/backend/access/transam/xlogreader.c
index e883ade607..c100e18333 100644
--- a/src/backend/access/transam/xlogreader.c
+++ b/src/backend/access/transam/xlogreader.c
@@ -1325,6 +1325,16 @@ XLogReaderValidatePageHeader(XLogReaderState *state, XLogRecPtr recptr,
return true;
}
+/*
+ * Forget about an error produced by XLogReaderValidatePageHeader().
+ */
+void
+XLogReaderResetError(XLogReaderState *state)
+{
+ state->errormsg_buf[0] = '\0';
+ state->errormsg_deferred = false;
+}
+
/*
* Find the first record with an lsn >= RecPtr.
*
diff --git a/src/backend/access/transam/xlogrecovery.c b/src/backend/access/transam/xlogrecovery.c
index a59a0e826b..422322cf5a 100644
--- a/src/backend/access/transam/xlogrecovery.c
+++ b/src/backend/access/transam/xlogrecovery.c
@@ -3335,7 +3335,7 @@ retry:
(errmsg_internal("%s", xlogreader->errormsg_buf)));
/* reset any error XLogReaderValidatePageHeader() might have set */
- xlogreader->errormsg_buf[0] = '\0';
+ XLogReaderResetError(xlogreader);
goto next_record_is_invalid;
}
diff --git a/src/include/access/xlogreader.h b/src/include/access/xlogreader.h
index 87ff00feb7..97b6f00114 100644
--- a/src/include/access/xlogreader.h
+++ b/src/include/access/xlogreader.h
@@ -373,6 +373,9 @@ extern DecodedXLogRecord *XLogReadAhead(XLogReaderState *state,
extern bool XLogReaderValidatePageHeader(XLogReaderState *state,
XLogRecPtr recptr, char *phdr);
+/* Forget error produced by XLogReaderValidatePageHeader(). */
+extern void XLogReaderResetError(XLogReaderState *state);
+
/*
* Error information from WALRead that both backend and frontend caller can
* process. Currently only errors from pread can be reported.
--
2.30.2
v2-0003-Don-t-retry-inside-XLogPageRead-if-reading-ahead.patch (text/x-patch)
From 394d2b08a896a7e3094ffb052adaa11486b7b14f Mon Sep 17 00:00:00 2001
From: Thomas Munro <thomas.munro@gmail.com>
Date: Mon, 29 Aug 2022 21:14:25 +1200
Subject: [PATCH v2 3/3] Don't retry inside XLogPageRead() if reading ahead.
Give up immediately if we have a short read or a page header
invalidation failure inside XLogPageRead(). Once recovery catches up to
this point, it can deal with it. This doesn't fix any known live bug,
but it fits with the general philosophy of giving up fast when
prefetching runs into trouble.
---
src/backend/access/transam/xlogrecovery.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/src/backend/access/transam/xlogrecovery.c b/src/backend/access/transam/xlogrecovery.c
index 422322cf5a..76ce9704ca 100644
--- a/src/backend/access/transam/xlogrecovery.c
+++ b/src/backend/access/transam/xlogrecovery.c
@@ -3342,6 +3342,14 @@ retry:
return readLen;
next_record_is_invalid:
+ /*
+ * If we're reading ahead, give up fast. Standby retries and error
+ * reporting will be handled by a later read when recovery catches up to
+ * this point.
+ */
+ if (xlogreader->nonblocking)
+ return XLREAD_WOULDBLOCK;
+
lastSourceFailed = true;
if (readFile >= 0)
--
2.30.2